Recently, Dr. Kirk shared his views with us on a range of topics related to the current PC graphics landscape. Our questions (in yellow) and his answers follow.
Will we be seeing any more technology advances using software and our current GeForce3 cards, or are they hardware-limited? We're thinking of the introduction of 3D textures with this last Titanium release.
It is always our goal to provide continuing value and improved performance to our customers with new driver releases. I can't say what's coming, but you can always expect that each release will expose just a little bit more. This is further leveraged by our Universal Driver Architecture strategy. All of our drivers are compatible with all of our hardware, both backwards and forwards. This means that we can very easily expose more and more functionality over time, in a compatible way. I can confirm that there are more hardware features in the GeForce3 that have not yet been exposed in software. Stay tuned!
Considering NVIDIA is way ahead of the game developers in terms of games on the shelf, wouldn't it be advantageous for buyers to hold out on the NV20 as long as possible?
I believe that the combination of GeForce3, GeForce3 Ti500, and now the mainstream GeForce3 Ti200 creates a dynamite "virtual console" platform on the PC. So many gamers and enthusiasts will have graphics processors from the GeForce3 family that this becomes an excellent platform for game development. I expect to see a lot of games that offer exclusive or special support for GeForce3 features.
We have had some success overclocking the reference Ti 500 cards, which is somewhat surprising. We have also seen card specs for "Ti 550" cards from Gainward. Does the GeForce3 chip have some headroom for scaling even higher in clock speed? If so, what do you think can be squeezed out of the current version of the architecture?
We choose our manufacturing and production clock rates to strike a balance between performance (as much as possible :) ) and stability. It is important to us that customers who buy our products get a consistent, high-quality experience. Because of how conservative we are, there is ample headroom for overclocking. While we do not advocate this behavior, it is certainly an opportunity for those who want to push the bleeding edge!
We have discovered that there is more than one way to support higher order surfaces and anti-aliasing. Is there more than one technique to do shadow buffers, or is it an established technique? Also can you tell us what shadow buffers are really going to deliver to a person playing a game?
There are many techniques for doing shadows. GeForce3 supports not only shadow buffers, but also stencil shadows. Each technique has different benefits and limitations. Even given the choice of shadow buffers, there are multiple ways to achieve the goal. What shadow buffers provide is, simply, shadows. Games and rendered scenes look a lot more realistic when the characters and environments have realistic shadows. The shadow buffer implementation on GeForce3 provides high-quality, smooth-edged soft shadows and objects that can cast shadows on themselves. No other hardware provides equivalent quality.
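For readers who want to see the mechanics, here is a minimal C++ sketch of the two-pass shadow-buffer test Kirk describes: render depth from the light's point of view, then compare each shaded point's light-space depth against the stored value. The structure and names (ShadowBuffer, inShadow, the bias constant) are our own illustration, not NVIDIA's implementation.

#include <vector>
#include <cstddef>

// Pass 1 output: the nearest depth to the light, stored per texel.
struct ShadowBuffer {
    std::size_t width, height;
    std::vector<float> depth; // depth map rendered from the light's view

    float sample(std::size_t u, std::size_t v) const {
        return depth[v * width + u];
    }
};

// Pass 2: project the surface point into light space and compare.
// If something nearer to the light was recorded at that texel, the
// point is occluded and therefore in shadow. The small bias keeps
// surfaces from shadowing themselves due to depth precision ("acne").
bool inShadow(const ShadowBuffer& sb,
              std::size_t u, std::size_t v, // the point's texel in light space
              float pointDepth,             // the point's depth from the light
              float bias = 0.005f)
{
    return pointDepth - bias > sb.sample(u, v);
}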
There has been lots of talk about Direct3D pixel shader versions and DirectX 8.1. We also keep hearing about NVIDIA OpenGL Shader Extensions in benchmarks like DroneZ, 3DMark 2001, and GLMark. What are they, and how do they compare to standard shading operations like DirectX shader operations?
NVIDIA's approach has always been to be API-agnostic. By that, I mean that the hardware supports every feature equally well in both OpenGL and Direct3D. In some cases, there are features that our hardware supports that are not exposed in Direct3D, but that should be remedied over time. Usually, the hardware's full capability set is exposed under OpenGL, so sometimes OpenGL is ahead for a time. I keep hearing that these benchmarks are specifically written for our hardware, and this is just nonsense. It's almost certainly true that these benchmarks were developed ON our hardware; at the time, GeForce3 was the only DX8 hardware available to developers, and it's still all that most developers have. It's clear that a benchmark developed on a particular piece of hardware will run on that hardware!
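As a concrete illustration of how those extensions surface to developers, here is a minimal C++ sketch (assuming an active OpenGL context) that checks the driver's extension string for GL_NV_vertex_program and GL_NV_texture_shader, the NVIDIA extensions that expose GeForce3's programmable units under OpenGL:

#include <GL/gl.h>
#include <cstring>
#include <cstdio>

// Returns true if the named extension appears in the driver's
// extension string. Requires a current OpenGL rendering context.
// (A simple substring check; production code should match whole
// space-delimited tokens to avoid false positives.)
bool hasExtension(const char* name) {
    const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return all != nullptr && std::strstr(all, name) != nullptr;
}

int main() {
    std::printf("GL_NV_vertex_program: %s\n",
                hasExtension("GL_NV_vertex_program") ? "yes" : "no");
    std::printf("GL_NV_texture_shader: %s\n",
                hasExtension("GL_NV_texture_shader") ? "yes" : "no");
}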
On the DX8 front, shouldn't any DX8 instruction run equally well on any DX8 specified graphics card?
In theory, yes. In practice, no. The DX8 vertex shader instruction set is identical to the hardware operations on the GeForce3. This happened because Microsoft licensed the technology from NVIDIA. Other implementations probably only approximate the instruction set; they don't actually implement it fully.
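To make that concrete, here is an illustrative C++ sketch of what one DX8 vertex shader instruction, dp4 (a four-component dot product), computes, and how the canonical vs.1.1 position transform is built from four of them. The register names in the comments (v0, c0..c3, oPos) follow DX8 shader-model conventions; the C++ itself is only a model of the arithmetic, not any vendor's hardware.

#include <array>

using Vec4 = std::array<float, 4>;

// dp4 dest, src0, src1  ->  dest = src0 . src1 (4-component dot product)
float dp4(const Vec4& a, const Vec4& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

// The canonical vs.1.1 position transform: four dp4s of the input
// position v0 against the rows of the world-view-projection matrix,
// stored in constant registers c0..c3:
//   dp4 oPos.x, v0, c0
//   dp4 oPos.y, v0, c1
//   dp4 oPos.z, v0, c2
//   dp4 oPos.w, v0, c3
Vec4 transformPosition(const Vec4& v0, const std::array<Vec4, 4>& c) {
    return { dp4(v0, c[0]), dp4(v0, c[1]), dp4(v0, c[2]), dp4(v0, c[3]) };
}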