Meanwhile, just as FutureMark was introducing its new benchmark, graphics heavyweight NVIDIA initiated a public relations campaign aimed at undermining 3DMark03’s credibility as a benchmark and discouraging use of the test in the enthusiast press. NVIDIA’s first move was to mail out a whitepaper outlining its criticisms of 3DMark03. NVIDIA asked members of the press not to redistribute this document, only to paraphrase or offer excerpts. The document registered some specific complaints about 3DMark03’s methodology, but its primary thrust was an overall critique of FutureMark’s approach to 3DMark03 and of synthetic benchmarks in general.
The impact of NVIDIA’s PR push was immediate and impressive. A number of web sites published articles raising questions about 3DMark03, in some cases unfortunately repeating NVIDIA’s claims about the test without attribution and without critical evaluation of those claims. Since that time, a number of players, including FutureMark themselves, have weighed in with responses to NVIDIA’s criticisms.
During the past couple of weeks, I’ve talked with representatives of NVIDIA, FutureMark, and ATI about this controversy in an attempt to better understand the issues involved. Also, over the past few days, some intriguing new details about the architecture of NVIDIA’s new GeForce FX chip have come to light, and those revelations may help explain why NVIDIA has objected so strenuously to 3DMark03’s design. I’ll try to cover what’s happened and why it matters. Let’s start with some background on FutureMark, NVIDIA, and the creation of 3DMark03.
FutureMark, NVIDIA, and the genesis of a conflict
FutureMark is a small company based in Finland whose business depends on two primary sources of income: sales of the “Pro” versions of its benchmarks to end users and sales of memberships in its beta programs to independent hardware vendors (IHVs) like AMD, Intel, ATI, and NVIDIA. The beta program has several membership tiers, with pricing tied to level of participation. Broad participation in the beta program has been key to FutureMark’s success. The beta program member list on FutureMark’s website reads like a Who’s Who of PC performance hardware. Tier-one participants include ATI, AMD, Intel, and Microsoft. Other members include graphics players like Matrox, S3 Graphics, SiS, Imagination Tech, and Trident, plus PC OEMs like Dell and Gateway.
The months-long process of developing a new revision of 3DMark involves input and feedback from beta program partners about a series of design documents, alpha builds, and beta builds of the benchmark. As I understand it, NVIDIA had been a top-tier FutureMark beta program member during the development of 3DMark03 until the first of December, when NVIDIA’s membership renewal came due. At that time, NVIDIA elected not to renew its membership. 3DMark03 was in the beta stage of development at this point, and was essentially feature-complete.
By all accounts, NVIDIA’s decision not to renew its membership was triggered by its dissatisfaction with the 3DMark03 product and with FutureMark’s responses to NVIDIA’s input on 3DMark03’s composition. Clearly the two parties had substantive disagreements over how 3DMark03 should be built. The questions now are, what were those disagreements, and who was right?
Was NVIDIA miffed because 3DMark03 wouldn’t give its new GeForce FX chip a fair shake? Or because the test would disadvantage NVIDIA’s current products in the GeForce4 line? Early benchmark results from 3DMark03 aren’t as instructive as one might expect. HardOCP tested the GeForce FX against the Radeon 9700 Pro, and the results were mixed. In the first round of tests, the Radeon 9700 Pro won handily. A second set of tests with updated drivers from NVIDIA, however, showed the GeForce FX taking a narrow lead in the overall game score.
Our own testing with NVIDIA’s current generation of 3D chips, the GeForce4 line, didn’t look too good for NVIDIA, either.
But such things are to be expected when one’s competitor is a technology generation ahead, especially in a benchmark that purports to be forward-looking. Besides, NVIDIA told me straight up its complaints aren’t about 3DMark03’s performance on its GF4 cards.
NVIDIA was kind enough to allow me time to speak at length with two key employees, Tony Tamasi, the company’s General Manager of Desktop Graphics Processors, and Mark Daly, Director of Technical Marketing, who manages the teams responsible for benchmarking and making NVIDIA’s graphics technology demos. Daly and Tamasi were very helpful in stating NVIDIA’s case against 3DMark03 and very patient in answering my (sometimes-boneheaded) questions. They were also both very consistently “on message,” sticking to the company line on 3DMark03 like George Bush sticks to a Karl Rove script on the campaign trail. I mention this fact because it’s so very, well, remarkable coming from techie types talking tech.
NVIDIA’s problems with 3DMark03 seem to encompass nearly everything about the benchmark. That is, the company sees very little good in the test as it exists now. However, NVIDIA’s complaints generally fall into two categories: general, overarching criticisms and specific, technical critiques. NVIDIA’s big-picture complaints can be summed up in two points:
- 3DMark03 is a bad benchmark: This is a big point with lots of little sub-points, but the complaints all fall easily under this banner. NVIDIA’s key contention is that 3DMark03 isn’t representative of actual games. Near as I can tell, that means not now, nor ever in the future, although there is some ambiguity on this point. NVIDIA’s specific technical criticisms seem to bounce between talking about the present and talking about the future without much discernible pattern. NVIDIA suggests synthetic benchmarks are not a useful component of a graphics performance test suite, and recommends testing only with “actual games.”
- Wasted resources: Optimizing for 3DMark03, says NVIDIA, pulls critical software engineering resources away from other tasks. Because 3DMark03 isn’t representative of actual games, optimizations for 3DMark are in no way beneficial for actual games. What’s more, online reviewers and editors who choose to use 3DMark in their performance evaluations create an irresistible pressure for NVIDIA to keep wasting resources optimizing code paths never used by real applications.
These larger complaints only make sense if NVIDIA’s more targeted technical criticisms of 3DMark03 hold up. I won’t cover all of the technical complaints in exacting detail, but in truth, NVIDIA’s whitepaper essentially makes four main complaints about 3DMark’s four game tests. A weighted average of these four tests alone determines the “overall” 3DMark score most users like to compare between systems.
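To make the scoring mechanics concrete, here is a minimal sketch of how a weighted game-test average could produce a single number. FutureMark’s actual weights and scaling aren’t published in this article, so the values and frame rates below are hypothetical placeholders, for illustration only.

```python
# A minimal sketch of a weighted game-test average. FutureMark's actual
# weights are not given in this article; these values are placeholders.
WEIGHTS = {"gt1": 0.25, "gt2": 0.25, "gt3": 0.25, "gt4": 0.25}

def overall_score(fps_by_test, scale=100.0):
    """Weighted sum of per-test frame rates, scaled into a single '3DMark' figure."""
    return scale * sum(WEIGHTS[test] * fps for test, fps in fps_by_test.items())

# Illustrative frame rates, not measured results:
print(overall_score({"gt1": 120.0, "gt2": 25.0, "gt3": 22.0, "gt4": 20.0}))
```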
- Not enough multitexturing in game test 1: The first game test is a WWII-era air battle scene supposedly representative of legacy DirectX 7-class games, and much of what’s on screen at any given time is simply sky or ground. These elements are made up of very few polygons, and only one texture is applied to the skybox and ground surfaces. As a result, NVIDIA claims, game test 1 is largely a test of single-textured (or pixel) fill rate, which isn’t representative of current or future games. Furthermore, NVIDIA argues, 3DMark2001 was more “forward looking” than this test, because it employed multitexturing in its three DX7-class game tests.
- The stencil shadow volumes implementation: Game tests 2 and 3 use the same basic rendering paths, and they both use stencil shadow volumes to create a realistic shadowing effect. However, NVIDIA’s whitepaper claims 3DMark03’s rendering method is “bizarre” because it requires objects to be skinned many times in the vertex shader for each frame rendered:
3DMark03 uses an approach that adds six times the number of vertices required for the extrusion. In our five light example, this is the equivalent of skinning each object 36 times! No game would ever do this. This approach creates such a serious bottleneck in the vertex portion of the graphics pipeline that the remainder of the graphics engine (texturing, pixel programs, raster operations, etc.) never gets an opportunity to stretch its legs.
The paper suggests caching the results of the vertex skinning operation between passes would be more efficient, and more like John Carmack’s implementation in Doom III.
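For what it’s worth, the whitepaper’s arithmetic is easy to reconstruct. Here is my back-of-the-envelope reading of it, a sketch assuming one depth pass plus one pass per light, each skinning six times the vertices needed for the extrusion; this is my reconstruction, not code from either party.

```python
# My reconstruction of the whitepaper's arithmetic, not NVIDIA's code:
# each shadow-volume pass skins the object with 6x the needed vertices,
# and a frame takes one depth pass plus one pass per light.
def equivalent_skinnings(lights, vertex_multiplier=6):
    passes = lights + 1                  # depth pass + one pass per light
    return passes * vertex_multiplier

print(equivalent_skinnings(5))  # 36, matching the five-light example above
print(equivalent_skinnings(2))  # 18, with FutureMark's claimed two-light maximum
```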
- Too much pixel shader 1.4: Game tests 2, 3, and 4 all use pixel shader programs based on the pixel shader 1.4 specification from DirectX 8.1. In the case of game tests 2 and 3, NVIDIA argues, pixel shader 1.4 is inappropriate because PS 1.4 “is virtually non-existent in DX8 games.” Furthermore, if 1.4 pixel shaders aren’t available, the benchmark falls back to pixel shader 1.1, rather than 1.3, to render the scenes.
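The fallback NVIDIA describes is simple to sketch. The logic below is an illustration of the complaint, not FutureMark’s actual code: hardware that lacks PS 1.4 drops straight to the PS 1.1 path, so PS 1.3 parts like the GeForce4 Ti get no benefit from their extra capabilities.

```python
# An illustration of the fallback NVIDIA describes; not FutureMark's code.
def pick_pixel_shader(max_ps_version):
    """Select a shader path from the highest pixel shader version supported."""
    if max_ps_version >= (1, 4):
        return "ps_1_4"  # single-pass per-pixel lighting path
    return "ps_1_1"      # multi-pass fallback; PS 1.3 hardware lands here too

print(pick_pixel_shader((2, 0)))  # Radeon 9700 / GeForce FX -> 'ps_1_4'
print(pick_pixel_shader((1, 3)))  # GeForce4 Ti -> 'ps_1_1'
```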
- Not enough DirectX 9: The fourth game test, dubbed “Mother Nature,” doesn’t use enough of DX9’s new features. Only two of the nine pixel shaders use the new PS 2.0 spec; the other seven use PS 1.4. Thus, “the amount of DX9 represented in the 3DMark03 score is negligible. It’s not a DX9 benchmark.”
While presenting these concerns, Tamasi acknowledged the difficulty of FutureMark’s task in constructing a good benchmark, and he expressed a deep skepticism about the feasibility of ever building a good forward-looking synthetic test representative of future games. He pointed out the difficulty of FutureMark’s business model, as well. NVIDIA seemed to be concerned that this one company could have so much power in determining the industry’s performance metrics. He pointed to the example of the SPEC committee as a possible alternative to FutureMark’s approach.
Tamasi also stressed the need for developers to include performance tests in their games, and said NVIDIA’s developer relations team has long been encouraging just that and offering resources to help make it possible.
I believe that’s a fair summation of NVIDIA’s complaints about 3DMark03. These issues were, according to NVIDIA, problematic enough to prompt the company to remove itself from FutureMark’s beta program and begin discouraging use of the benchmark created by its former partner.
Three days after 3DMark03’s release, FutureMark published its response to NVIDIA’s criticisms (as reported by the enthusiast press). This paper restates the case for synthetic benchmarks generally as a part of overall 3D performance evaluations, and it addresses some of NVIDIA’s specific complaints. Let’s leave the general arguments about benchmarking aside for now and look at FutureMark’s response to the specific tech issues.
- Not enough multitexturing in game test 1: FutureMark contends game test 1 is typical of current games in using a single texture for a skybox, and lists several games as examples: Crimson Skies, IL-2 Sturmovik, and Star Trek: Bridge Commander. The paper also shows signs of a past conflict with NVIDIA over this issue:
As this issue was brought up already during 3DMark03 development, we did a test by adding a second texture layer to the skybox. The performance difference stayed within the error margin (3%), and in our opinion the additional layer did not significantly add to the visual quality of the test. Thus, there were no game development or technical reasons for implementing a multitextured skybox.
Obviously, FutureMark and NVIDIA had indeed been at odds over this issue.
- The stencil shadow volumes implementation: FutureMark takes on NVIDIA’s whitepaper directly here, arguing that the efficiency of vertex shader skinning justifies its approach. What’s more, NVIDIA’s example doesn’t quite fit FutureMark’s implementation, as explained in sparkling Finnish English:
Since each light is performance-wise expensive, game developers have level designs optimized so that as few lights as possible are used concurrently on one character. Following this practice, 3DMark03 sometimes uses as many as two lights that reach a character concurrently, not five as mentioned in some instances.
…instances like, perhaps, NVIDIA’s whitepaper? Hmm.
To back up its claims, FutureMark suggests running 3DMark03 at different resolutions to see whether game tests 2 and 3 are bottlenecked by vertex shader performance. “If the benchmark was vertex shader limited, you would get the same score on all runs, since the amount of vertex shader work remains the same despite the resolution change.”
That’s easy enough. Let’s have a look.
Indeed, the game test results scale with fill rate, suggesting vertex shaders are not a primary performance limiter here, especially in the case of the DirectX 9-class GPUs. This fact may not completely justify FutureMark’s stencil shadow volumes implementation, but it certainly shoots down some claims made in NVIDIA’s whitepaper.
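FutureMark’s diagnostic can be formalized in a few lines. The sketch below, with illustrative rather than measured numbers, simply checks whether scores stay flat as the resolution rises.

```python
# FutureMark's diagnostic in code form: flat scores across resolutions
# imply a vertex shader bottleneck; falling scores imply a fill-rate one.
# These frame rates are illustrative, not measured results.
scores = {(640, 480): 41.0, (1024, 768): 24.0, (1600, 1200): 11.0}

def looks_vertex_bound(fps_by_resolution, tolerance=0.05):
    fps = list(fps_by_resolution.values())
    spread = (max(fps) - min(fps)) / max(fps)
    return spread < tolerance  # nearly flat => vertex shader limited

print(looks_vertex_bound(scores))  # False: these results scale with fill rate
```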
- Too much pixel shader 1.4: Because pixel shader 1.4 is a standard forged by ATI and Microsoft to accommodate ATI’s R200-series chips, we looked at 3DMark03’s use of pixel shader 1.4 with some skepticism. After all, other GPUs like the Matrox Parhelia and SiS Xabre support PS 1.3, but of the non-DX9 chips, only ATI hardware supports PS 1.4. Rather than refer to FutureMark’s whitepaper, let me offer our question for FutureMark’s Tero Sarkkinen and his direct response:
TR: Why did you use pixel shader 1.4 with a fallback to 1.1 instead of 1.3? Doesn’t this choice unfairly disadvantage NVIDIA cards and other non-ATI GPUs?
Tero Sarkkinen: Firstly, when we design a benchmark, we do not care which manufacturer happens to have what type of hardware out there. We follow DirectX standard and what game developers are doing. Pixel shader 1.4 is NOT an ATI specific technology, it is technology that belongs in the DirectX standard.
Fallback to 1.3 (instead of fallback to 1.1) would not have changed the performance at all. We tried it. There is very very little change from 1.1 to 1.2 to 1.3, the real change comes from 1.3 to 1.4. The 1.4 pixel shader only needs a single rendering path for each light (and the depth pass, which is similar to how Doom3 works). Note that 1.3 pixel shaders only add a few instructions to 1.1 pixel shaders. However, 1.4 pixel shaders allow 6 texture stages, compared to 4 in 1.1 (or in 1.3) pixel shaders. 1.4 shaders further allow each texture to be read twice.
That’s FutureMark’s story. We’ll explore the issue of pixel shader versions in more depth below.
- Not enough DirectX 9: FutureMark contends game test 4 uses an appropriate mix of pixel shader types. “Because each shader model is a superset of the prior shader models, this will be very efficient on all DirectX 9 hardware.” Also, the scene’s most striking elements, the water, sky, and leaves, use 2.0 shaders.
Furthermore, FutureMark claims the test’s workload is appropriate for DX9-class hardware, with an average of 780,000 polys and “well over 100MB of graphics content” per frame. The paper states with confidence that “there will be a clear correlation between 3DMark03 and game benchmark results” once 3D games start using pixel and vertex shaders more thoroughly.
FutureMark defends the usefulness of its benchmark and claims the test’s impartiality is key. The implication is clear: sometimes it’s not easy being a benchmark house that produces unbiased products.
Evaluating the claims
When I began work on this article, I intended to offer my own attempt at an evaluation of the competing claims of FutureMark and NVIDIA. Now, Dave at Beyond3D has already offered an extensive evaluation with more detail than I was prepared to offer, so I will have to embarrass myself otherwise. I won’t attempt to match all of his analysis, but I will try to offer my thoughts on the four basic tech issues I’ve identified among NVIDIA’s complaints.
- Not enough multitexturing in game test 1: Complaints about game test 1 are a large part of NVIDIA’s case against 3DMark03. I’m compelled by NVIDIA’s argument that FutureMark concentrated here on making a nice demo rather than a test representative of current games (and this first game test is indeed meant to represent current games). The percentage of pixels onscreen that are part of a skybox is probably a bit excessive. And anyone who’s seen the 3DMark03 demo can see how FutureMark’s developers could have taken a liking to the game test 1 models and scene layout. This part of the 3DMark03 demo mode is really, really cool.
However, FutureMark’s gaffe doesn’t seem too severe. Many current games are fill-rate bound, and many are bound by single-textured fill rate. Truly extensive multitexturing isn’t as prevalent in current games as one might expect. Take, for instance, the poor Matrox Parhelia with its quad texture units per pipe and massive theoretical texel fill rate; in most of today’s games, two to three of those units are doomed to sit idle. This is one reason why ATI’s 8-pipe design for the R300 (Radeon 9700) chip makes so much sense.
I’d prefer to have seen more complex geometry in this test to give it a little bit better balance. But to claim it’s not representative of current games isn’t entirely fair.
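The Parhelia example above reduces to simple utilization arithmetic, sketched here under the assumption of four texture units per pipeline:

```python
# Texture unit utilization per pipe for the Parhelia example above,
# assuming four texture units per pipeline.
def tmu_utilization(textures_per_pass, tmus_per_pipe=4):
    return min(textures_per_pass, tmus_per_pipe) / tmus_per_pipe

for n in (1, 2, 4):
    print(f"{n} texture(s) per pass: {tmu_utilization(n):.0%} of texture units busy")
# 1 -> 25% (three units idle), 2 -> 50%, 4 -> 100%
```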
- The stencil shadow volumes implementation: We’ve already looked at FutureMark’s response to NVIDIA’s claim here, and we’ve seen benchmark results showing the test isn’t bound entirely by vertex shader performance. I’ll leave the fight over the best methods of vertex skinning to graphics developers, but this one seems like a victory for FutureMark.
- Too much pixel shader 1.4: This is a tough one, because it’s an old fight (PS 1.1/1.3 vs. PS 1.4) between NVIDIA and ATI, yet it’s also a very current fight about the immediate future (the next 6 to 12 months or so) of 3D games.
The primary reason pixel shader 1.4 has proven so useful to FutureMark is its ability to deliver per-pixel lighting effects in a single pass. As ATI pointed out to me in our conversation about 3DMark03, John Carmack’s now-famous .plan file update on the subject describes PS 1.4’s ability here:
The fragment level processing is clearly way better on the 8500 than on the Nvidia products, including the latest GF4. You have six individual textures, but you can access the textures twice, giving up to eleven possible texture accesses in a single pass, and the dependent texture operation is much more sensible. This wound up being a perfect fit for Doom, because the standard path could be implemented with six unique textures, but required one texture (a normalization cube map) to be accessed twice. The vast majority of Doom light / surface interaction rendering will be a single pass on the 8500, in contrast to two or three passes, depending on the number of color components in a light, for GF3/GF4.
So PS 1.4 allows for single-pass rendering with per-pixel lighting, while pixel shader 1.1 and 1.3 require multiple passes to achieve the same effect. FutureMark apparently found the same thing in developing 3DMark03. I should note that no one has credibly claimed pixel shader 1.3 would reduce the number of rendering passes required versus PS 1.1 in 3DMark’s game tests.
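The pass-count arithmetic is worth spelling out. This is my summary of the .plan excerpt above, not Carmack’s code: one depth pass, then one pass per light on PS 1.4 hardware, versus two or three passes per light on PS 1.1/1.3 hardware.

```python
# Pass counts per frame, following the .plan excerpt above (my summary):
# one depth pass, then one pass per light on PS 1.4, or two to three
# passes per light on PS 1.1/1.3 hardware.
def passes_per_frame(lights, ps_version, legacy_passes_per_light=3):
    per_light = 1 if ps_version >= (1, 4) else legacy_passes_per_light
    return 1 + lights * per_light

print(passes_per_frame(2, (1, 4)))  # 3 passes on PS 1.4 hardware
print(passes_per_frame(2, (1, 1)))  # 7 passes on the PS 1.1 fallback path
```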
However, it’s quite possible the introduction of tools like high-level shading languages and all the advanced features of DirectX 9-class hardware could cause a rather sharp break between DX8-class games and really out-there DX9-only games with gobs of complex pixel shaders of the 2.0 variety. In this case, PS 1.4 would never see widespread use.
I tend to think we will see an earnest transition period in which a mix of 1.1, 1.4, and 2.0 pixel shaders along the lines of those used in 3DMark03 will be common practice, with different rendering paths used for different types of hardware. FutureMark had to do some guessing here, and they haven’t yet been proven wrong.
- Not enough DirectX 9: The simple reality is that FutureMark could only go so far in making a “full DX9” test. DirectX 9 is a young API, and the tools are just now coming together. In light of this fact, 3DMark03’s Mother Nature test seems like a decent first crack at a DX9 scene, and the procedural shaders in the PS 2.0 feature test are the kinds of complex shader programs one would hope to see. I asked FutureMark a couple of questions about the 2.0 shaders used in 3DMark03. The answers are worth repeating here.
TR: How many instructions long are the pixel shader programs in the Mother Nature and PS 2.0 tests?
FutureMark: We haven’t published the actual shader code, but I can reveal that the ps2.0 test’s procedural texture generation shaders are about as much as you can fit into a 2.0 pixel shader.
TR: Did FutureMark use Microsoft’s High Level Shading Language or a similar tool in developing any of 3DMark03’s tests?
FutureMark: All shaders are written in the assembly like shader language. This is because HLSL produces a pretty optimized shader code, but you can optimize even further manually.
I have to admit, I’d rather see more and better shader programs written in HLSL, compiled at runtime, and running onscreen concurrently on the various cards. However, those are respectable answers for a first-generation DX9 benchmark. 3DMark03 isn’t the end-all, be-all DX9 test, but it seems like a reasonable start. FutureMark’s point about game test 4’s workloads being designed for DX9 class hardware is persuasive, as well.
Revelations about GeForce FX
In the course of all the hubbub over 3DMark03, some intriguing revelations about the GeForce FX chip have surfaced. In part because of 3DMark03’s own fill rate tests, some folks have raised questions about the architecture of the FX. NVIDIA has essentially led the world to believe the GeForce FX has 8 pixel pipelines like the Radeon 9700 by claiming the FX can deliver 8 pixels per clock. However, now that the first few cards have trickled out to developers and select press, folks are finding that the chip performs more like a 4-pipeline design with two texture units per pipe. That’s a significant difference, because the chip’s pixel fill rate is apparently half what we originally understood it to be.
Let me explain quickly with a trusty chip chart showing the before and after scenarios.
| | Core clock (MHz) | Pixel pipelines | Peak pixel fill rate (Mpixels/s) | Texture units per pixel pipeline | Textures per clock | Peak texel fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s) |
|---|---|---|---|---|---|---|---|---|---|
| Radeon 9700 Pro | 325 | 8 | 2600 | 1 | 8 | 2600 | 620 | 256 | 19.8 |
| BEFORE: GeForce FX 5800 Ultra | 500 | 8 | 4000 | 1 | 8 | 4000 | 1000 | 128 | 16.0 |
| AFTER: GeForce FX 5800 Ultra | 500 | 4 | 2000 | 2 | 8 | 4000 | 1000 | 128 | 16.0 |
As you can see, the pixel-pushing power of the FX in cases where only one texture is being applied per polygon is lower than the Radeon 9700 Pro’s, essentially erasing the FX’s clock speed advantage. Only in cases where multiple textures are being applied per polygon does the FX outrun the 9700 Pro.
This revelation goes a long way toward explaining why the GeForce FX isn’t much faster than the Radeon 9700 Pro in scenarios where, based on specs, many of us expected the FX to be faster.
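The chart’s fill rate columns follow directly from the clock and pipeline counts; here is a quick sketch of the arithmetic:

```python
# The arithmetic behind the chart: pixel fill rate is core clock times
# pipelines; texel fill rate also multiplies by texture units per pipe.
def fill_rates(clock_mhz, pipes, tmus_per_pipe):
    return clock_mhz * pipes, clock_mhz * pipes * tmus_per_pipe

print(fill_rates(325, 8, 1))  # Radeon 9700 Pro:              (2600, 2600)
print(fill_rates(500, 8, 1))  # GeForce FX 5800 Ultra before: (4000, 4000)
print(fill_rates(500, 4, 2))  # GeForce FX 5800 Ultra after:  (2000, 4000)
```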
Now, some caveats. While the GeForce FX appears to perform like a 4×2 design, the reality seems a little more complex. We don’t know the exact layout of the GeForce FX’s internals, because NVIDIA has elected not to make that information public yet. When we asked NVIDIA about the exact pipeline configuration of the FX, we received this reply:
GeForce FX 5800 and 5800 Ultra run at 8 pixels per clock for all of the following:
a) z rendering
b) stencil operations
c) texture operations
d) shader operations
For advanced applications (such as Doom3) *most* of the time is spent in these modes because of the advanced shadowing techniques that use shadow buffers, stencil testing and next-generation shaders that are longer and therefore make the apps “shading-bound” rather than “color fill-rate” bound.
Only color+Z rendering is done at 4 pixels per clock, all other modes (z, stencil, texture, shading) run at 8 pixels per clock.
The more advanced the application, the less percentage of total rendering is color, because more time is spent texturing, shading and doing advanced shadowing/lighting.
So the FX can only deliver 4 pixels per clock in more traditional rendering scenarios, but it’s able to do 8 pixels per clock in some cases. Based on all the evidence, the GeForce FX is apparently a very complicated design, in some ways less conventional than ATI’s R300. The chip seems to have many functional units capable of interacting together flexibly, in programmable ways. We don’t know exactly how flexible or limited the FX is, and it’s possible we may not know exactly for a good, long time. We do know that NVIDIA talked a lot before the introduction of the FX about how compiler optimizations would play a very important role in determining the performance of future GPUs, and about how new performance metrics would be needed to evaluate such chips.
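Taking NVIDIA’s reply at face value, the FX’s effective pixel rate depends on the workload mix. Here is a sketch of that blending, with an entirely hypothetical mix:

```python
# What NVIDIA's reply implies for effective throughput: color+Z work runs
# at 4 pixels per clock, while z-only, stencil, texture, and shader work
# runs at 8. The workload mix here is hypothetical.
CLOCK_MHZ = 500  # GeForce FX 5800 Ultra

def effective_mpixels(color_z_share):
    """Blend the two per-clock rates by the fraction of work that is color+Z."""
    per_clock = color_z_share * 4 + (1.0 - color_z_share) * 8
    return CLOCK_MHZ * per_clock

print(effective_mpixels(1.0))   # 2000 Mpixels/s: traditional color rendering
print(effective_mpixels(0.25))  # 3500 Mpixels/s: a shadow- and shader-heavy frame
```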
This radical design may have a great many technical advantages over the R300, but it appears to have some tangible disadvantages, too. First and foremost among them: the FX acts in many cases as a 4×2-pipe design like the GeForce4 Ti.
3DMark03 and the real GeForce FX
One of the most puzzling questions I’ve been asking myself about this whole controversy is: Why is NVIDIA upset over the content of 3DMark03? Yes, I understand the criticisms the company has offered to the press in its whitepaper, but those haven’t seemed entirely worthy of the fuss. Now that we understand a little bit more about the exact capabilities of the GeForce FX, however, the reasons behind NVIDIA’s complaints come into sharper focus.
For starters, the complaints about game test 1’s skybox full of single-textured pixels seem much more relevant. The GeForce FX 5800 Ultra’s 600 Mpixels/s disadvantage in pixel fill rate versus the Radeon 9700 Pro doesn’t bode well for NVIDIA here. Too much emphasis on single-textured fill rate could make this test, which comprises 26% of the 3DMark03 overall game score, a source of endless trouble for NV30-derived architectures.
Similarly, and this is pure, wild speculation here, complaints about FutureMark’s use of pixel shader 1.4 could be related to this fill-rate limitation. PS 1.4 shaders can deliver per-pixel lighting in a single pass, which PS 1.1/1.3 shaders cannot. However, NVIDIA’s Mark Daly mentioned to me the company’s attempts to get FutureMark to use “simpler techniques” to achieve “a visually similar result.” One way to achieve such a result would be the technique NVIDIA advocates in its 3DMark03 whitepaper: the use of precomputed lightmaps, which would achieve similar results by laying down an additional, fixed texture in each rendering pass. Of course, precomputed lightmaps need not use pixel shaders at all, but they might help the FX’s showing.
Like I said, that’s pure speculation. NVIDIA may well have had a different shader-based technique in mind. My point here is simply to emphasize that we have, until very recently, not known about the FX’s four-pixels-per-clock limitation, and we still don’t know very much about how the GeForce FX really works. Our understanding of the conflict between FutureMark and NVIDIA may sharpen as we learn more about the FX’s true strengths and weaknesses.
The future of the 3DMark03 controversy
I should say here that I mean no insult by not entirely taking NVIDIA’s complaints at face value. To the contrary, the fight over 3DMark03 may well be, in a sense, a proxy fight over the direction developers will take in writing upcoming games. NVIDIA’s claim that 3DMark03 doesn’t represent actual (future) games may jibe quite well with what NVIDIA’s developer relations team is currently recommending to game developers. The trouble is, the GeForce FX’s radical design may require some serious mindset adjustments among developers, and such things typically take some time to sink in.
Of course, this fight is primarily over 3DMark03’s widespread acceptance as a successor to 3DMark2001, and the issue is far from settled. FutureMark has several key constituencies to win over, including end users, members of the media, and its beta program members.
I can’t comment much on the status of its beta program members other than to say that NVIDIA is a very big loss. Losing the graphics market sales leader will hurt the credibility of FutureMark’s graphics test, without a doubt. The fact NVIDIA’s primary rival, ATI, remains a first-tier member of the beta program will raise questions about undue influence as long as the situation persists. However, no other beta members have, to my knowledge, broken ranks yet. NVIDIA may be seen as the primary problem here, which could actually enhance FutureMark’s credibility, if other beta members see FutureMark as standing up to a bully.
Hardware reviewers are a more complicated case. NVIDIA’s initial PR push was successful, in part, because of the nature of the task at hand: taking apart a synthetic benchmark and criticizing the design choices the authors made in building it. By nature, the deconstructive task is easier than the constructive one. And we are a skeptical lot. I believe many hardware testers (and certainly their readers) carry in them an innate insecurity about the veracity of their own methods, which is, of course, in some ways key to avoiding pitfalls in performance testing. We often find the ground shifting under us as IHVs exert influence on makers of popular applications and benchmarks, which makes us skittish. These dynamics helped NVIDIA’s criticism fall on fertile ground.
As for me and my house, we will keep an eye on this controversy, but barring any unforeseen changes, we will use 3DMark03 as we have used 3DMark2001. That is, we will continue to offer 3DMark results as part of a wide range of tests. We’ll continue to present 3DMark results in more detail than most publications, so readers can see the scores behind the score, and we’ll offer context wherever we can. We will also keep looking for new and better benchmarks, especially those from new games and game engines. And I can’t wait for better DX9 pixel shader tests.
As for end users, well, I’ll let you all decide.