NVIDIA, Futuremark come together


— 7:00 PM on June 2, 2003

In a shock reconciliation after a nasty public conflict, Futuremark and NVIDIA have issued a joint statement. It's a long one, but I'm going to give it all to you so you can see what it's all about:

Futuremark Statement

For the first time in 6 months, as a result of Futuremark's White Paper on May 23rd, 2003, Futuremark and NVIDIA have had detailed discussions regarding NVIDIA GPUs and Futuremark's 3DMark03 benchmark.

Futuremark now has a deeper understanding of the situation and NVIDIA's optimization strategy. In the light of this, Futuremark now states that NVIDIA's driver design is an application specific optimization and not a cheat.

The world of 3D Graphics has changed dramatically with the latest generation of highly programmable GPUs. Much like the world of CPUs, each GPU has a different architecture and a unique optimal code path. For example, Futuremark's PCMark2002 has different CPU test compilations for AMD's AthlonXP and Intel's Pentium4 CPUs.

3DMark03 is designed as an un-optimized DirectX test and it provides performance comparisons accordingly. It does not contain manufacturer specific optimized code paths. Because all modifications that change the workload in 3DMark03 are forbidden, we were obliged to update the product to eliminate the effect of optimizations identified in different drivers so that 3DMark03 continued to produce comparable results.

However, recent developments in the graphics industry and game development suggest that a different approach for game performance benchmarking might be needed, where manufacturer-specific code path optimization is directly in the code source. Futuremark will consider whether this approach is needed in its future benchmarks.

NVIDIA Statement

NVIDIA works closely with developers to optimize games for GeForceFX. These optimizations (including shader optimizations) are the result of the co-development process. This is the approach NVIDIA would have preferred also for 3DMark03.

Joint NVIDIA-Futuremark Statement

Both NVIDIA and Futuremark want to define clear rules with the industry about how benchmarks should be developed and how they should be used. We believe that common rules will prevent these types of unfortunate situations moving forward.

After you awake from fainting, let me offer a few tidbits of instant punditry here which may help put this statement into context for you. First, this agreement does not, from what I read, have anything to say about NVIDIA's use of custom clipping planes in 3DMark03, nor does it address the failure to clear the back buffer between frames. Those optimizations, which only work when an application's viewpoint or camera is "on rails," probably aren't covered here.
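To illustrate the "on rails" point, here's a tiny, purely hypothetical sketch in Python. The plane, coordinates, and function names are all made up, and this is not how a driver is actually structured; it just shows why a hard-coded clip plane is only safe when the camera never leaves its scripted path:

```python
# A minimal, entirely hypothetical sketch (not NVIDIA driver code) of why a
# hard-coded clip plane only pays off when the camera stays on rails. A plane
# is stored as (a, b, c, d); points with a*x + b*y + c*z + d < 0 get culled.

def is_culled(point, plane):
    """Return True if the point lies on the discarded side of the plane."""
    x, y, z = point
    a, b, c, d = plane
    return a * x + b * y + c * z + d < 0

# Plane tuned to one frame of a scripted benchmark flythrough: on that exact
# camera path, nothing visible ever crosses it, so culling against it is free speed.
scripted_frame_plane = (0.0, 0.0, 1.0, -5.0)  # hypothetical: discard everything with z < 5

on_rails_object = (2.0, 1.0, 8.0)   # what the scripted camera actually sees
free_look_object = (2.0, 1.0, 3.0)  # what a user-controlled camera might see

print(is_culled(on_rails_object, scripted_frame_plane))   # False: still rendered
print(is_culled(free_look_object, scripted_frame_plane))  # True: wrongly discarded
```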

However, the most relevant issue is a big one, and it's one Futuremark had not been addressing adequately: the issue of mathematical precision in pixel shaders and in pixel shader optimizations. ATI's R3x0 chips have only one level of pixel shader precision, 24-bit floating point per color channel. NVIDIA's FX architectures, by contrast, appear to use two different types of pixel shader units (integer-only FX12 units with 12 bits of color precision per channel, plus full floating-point units) and offer three different precision modes: 12-bit integer, 16-bit floating point, and 32-bit floating point. We don't have much info from NVIDIA, but multiple external analyses and tests we have seen (see this thread at B3D, for one) suggest the NV30 architecture, for example, has more 12-bit integer pixel shader units than it does FP units. Not only that, but NV3x performance drops significantly when shaders use more than two FP registers. NVIDIA's driver seems to be intercepting shaders that request higher precision and performing the calculations at lower precision, possibly including 12-bit integer, in order to boost performance.
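To make the precision question concrete, here's a toy calculation, again not taken from any driver, showing how a specular-style power term drifts when its input is held at FP32, FP16, or an FX12-like fixed-point value. I'm guessing at FX12's exact layout (signed, 10 fractional bits over roughly the [-2, 2) range), so take the numbers as illustrative only:

```python
# A toy numerical sketch, not anything from an actual driver: what happens to a
# specular-power term when its input is held at FP32, FP16, or an FX12-style
# fixed-point value. FX12 is assumed here to be signed fixed point with 10
# fractional bits over roughly [-2, 2); treat that layout as an educated guess.
import numpy as np

def quantize_fx12(x):
    """Quantize to a 12-bit signed fixed-point grid (10 fractional bits)."""
    return float(np.clip(np.round(x * 1024.0) / 1024.0, -2.0, 2047.0 / 1024.0))

n_dot_h = 0.9973   # dot(N, H) for a tight specular highlight
shininess = 64     # specular exponent

fp32_result = np.float32(n_dot_h) ** np.float32(shininess)
fp16_result = np.float16(n_dot_h) ** np.float16(shininess)
fx12_result = quantize_fx12(n_dot_h) ** shininess

print(fp32_result, fp16_result, fx12_result)
# Quantizing just the input already shifts the highlight's intensity; in real
# hardware every intermediate result is quantized too, so the drift compounds.
```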

This issue complicates the benchmarking task significantly, because it becomes very difficult to do an apples-to-apples comparison between ATI and NVIDIA chips. 3DMark03 didn't account for this reality adequately, as NVIDIA argued, because game developers would very likely offer optimized code paths for different architectures, taking the NV3x chips' architectural precision-performance tradeoffs into account.
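For what it's worth, a manufacturer-specific code path doesn't have to be exotic; a game engine could simply key a shader variant off the GPU's PCI vendor ID, much like Futuremark describes for PCMark2002's CPU tests. The profile names below are mine, not from any real engine or from 3DMark:

```python
# Hypothetical sketch of the "manufacturer-specific code path" idea; the
# profile names and mapping are illustrative, not from any real engine.
# PCI vendor IDs: 0x10DE is NVIDIA, 0x1002 is ATI.
SHADER_PATHS = {
    0x10DE: "ps_2_x_partial_precision",  # NV3x-friendly: lean on FP16/FX12 where it's safe
    0x1002: "ps_2_0_full_precision",     # R3x0: single FP24 precision, plain PS 2.0
}

def pick_shader_path(pci_vendor_id, fallback="ps_2_0_full_precision"):
    """Choose a shader build for the detected GPU, falling back to generic DX9."""
    return SHADER_PATHS.get(pci_vendor_id, fallback)

print(pick_shader_path(0x10DE))  # GeForce FX board -> precision-tuned path
print(pick_shader_path(0xFFFF))  # unrecognized vendor -> generic path
```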

That is my take, anyhow. It appears Futuremark was swayed by NVIDIA's arguments, or perhaps by the arguments of beta program partners like Dell. What remains unresolved: what level of precision for shaders will be deemed acceptable by developers, testers, and consumers? And will the 800-pound gorilla in the corner, DX9 custodian Microsoft, finally weigh in publicly on this issue?

If NVIDIA's decision to convert shaders to lower precision levels, and specifically to 12-bit integer, becomes an acceptable method of handling DX9-class pixel shaders, gamers' consensus on the ATI R3x0 chips versus the NV3x lineup may become roughly this: GeForce FX faster, Radeon prettier.
