
Popping the hood on Nvidia's Turing architecture


Taking the first steps into a ray-traced future

It's Turing Day at TR. We've been hearing about the innovations inside Nvidia's Turing GPUs for weeks, and now we can tell you a bit more about what's inside them. Turing implements a host of new technologies that promise to reshape the PC gaming experience for many years to come. While much of the discussion around Turing has concerned the company's hardware acceleration of real-time ray-tracing, the tensor cores on board Turing GPUs could have even more wide-ranging effects on the way we game—to say nothing of the truckload of other changes under Turing's hood that promise better performance and greater flexibility for gaming than ever before.


A die shot of the TU102 GPU. Source: Nvidia

On top of the architectural details that we can discuss this morning, Nvidia sent over both GeForce RTX 2080 and RTX 2080 Ti cards for us to play with. As of this writing, those cards are on a FedEx truck and headed for the TR labs. Nvidia has hopped on the "unboxing embargo" bandwagon, meaning we can show you what ships in those boxes later today. Performance numbers will have to wait, though. First, Nvidia is pulling back the curtain on the Turing architecture and the first implementations thereof. Let's discuss some of the magic inside.

Despite Nvidia's description of ray-tracing as the holy grail of computer graphics during its introduction of the Turing architecture, these graphics cards do not replace rasterization with ray-tracing. Rasterization, the process of mapping 3D geometry onto a 2D plane, is the way real-time graphics have been produced for decades; ray-tracing instead casts rays through a 2D plane into a 3D scene to directly model the behavior of light. Tracing rays for every pixel of a scene in real time remains prohibitively expensive, computationally speaking.
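To make that definition concrete, here's a minimal, unoptimized sketch of ray casting in plain Python (a toy for illustration, not anything Turing actually runs): one ray per pixel is fired from a camera through an image plane at a single sphere, and each pixel is marked by whether its ray hits.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest sphere hit along a unit-length
    ray direction, or None on a miss."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c  # a = 1 because direction is unit length
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width=4, height=4):
    """Cast one ray per pixel from a camera at the origin through an image
    plane at z = 1 toward a sphere of radius 2 centered at (0, 0, 5)."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map the pixel center to [-1, 1] on the image plane.
            x = (i + 0.5) / width * 2.0 - 1.0
            y = (j + 0.5) / height * 2.0 - 1.0
            norm = math.sqrt(x * x + y * y + 1.0)
            direction = (x / norm, y / norm, 1.0 / norm)
            row += "#" if ray_sphere_hit((0, 0, 0), direction, (0, 0, 5), 2.0) else "."
        rows.append(row)
    return rows

print("\n".join(render()))
```

With these numbers, the 4x4 render yields a crude silhouette of the sphere: the four center pixels hit, the border pixels miss. Scaling this brute-force loop to millions of pixels, many rays per pixel, and scenes of millions of triangles is exactly the cost problem that keeps full real-time ray-tracing out of reach.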


Some potential roles for ray-tracing and rasterization in hybrid rendering. Slide: Colin Barré-Brisebois, EA SEED

Instead, the company wants to continue using rasterization for the things it's good at and add certain ray-traced effects where those techniques would produce better visual fidelity—a technique it refers to as hybrid rendering. Nvidia says rasterization is a much faster way of determining object visibility than ray-tracing, for example, so ray-tracing only needs to enter the picture for techniques where fidelity or realism is important yet difficult to achieve via rasterization, like reflections, refractions, shadows, and ambient occlusion. Nvidia notes that the traditional rasterization pipeline and the new ray-tracing pipeline can operate "simultaneously and cooperatively" in its Turing architecture.
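As a toy example of that division of labor, the sketch below (plain Python, not any actual Nvidia or DXR API) assumes the rasterizer has already produced the visible surface point for a pixel and uses a ray only to answer the shadow question: does anything block the path from that point to the light?

```python
import math

def shadow_ray_occluded(point, light, center, radius):
    """Cast a shadow ray from a surface point toward a light source and
    report whether a blocking sphere intersects the segment in between."""
    seg = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(s * s for s in seg))
    d = [s / dist for s in seg]  # unit direction from surface point to light
    # Solve |point + t*d - center|^2 = radius^2 for t (quadratic with a = 1).
    oc = [p - c for p, c in zip(point, center)]
    b = 2.0 * sum(di * o for di, o in zip(d, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False  # the shadow ray misses the blocker entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < dist  # occluded only if the hit lies before the light

# These surface points stand in for positions a rasterizer would have
# written to a G-buffer. A unit sphere at (0, 0, 2) shadows the first
# point from a light at (0, 0, 10) but not the second.
print(shadow_ray_occluded((0, 0, 0), (0, 0, 10), (0, 0, 2), 1.0))  # True
print(shadow_ray_occluded((5, 0, 0), (5, 0, 10), (0, 0, 2), 1.0))  # False
```

The appeal of the hybrid approach is visible even here: the expensive part of the frame (deciding what's visible at each pixel) stays on the fast rasterization path, while rays are spent only on queries rasterization handles poorly, like point-to-light occlusion.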


Logical representations of the pipelines for ray-tracing and rasterization. Source: Nvidia

The software groundwork for this technique was laid earlier this year when Microsoft revealed the DirectX Raytracing API, or DXR, for DirectX 12. DXR provides access to some of the basic building blocks for ray-tracing alongside existing graphics-programming techniques, including a method of representing the 3D scene that can be traversed by the graphics card, a way to dispatch ray-tracing work to the graphics card, a series of shaders for handling the interactions of rays with the 3D scene, and a new pipeline state object for tracking what's going on across ray-tracing workloads.


The RTX platform. Source: Nvidia

Microsoft notes that DXR code can run in software on any DirectX 12-compatible graphics card as a fallback, since it behaves as a compute-like workload. That fallback won't be a practical way of achieving real-time ray-traced performance, though. To make DXR code practical for real-time rendering, Nvidia is implementing an entire platform it calls RTX that will let DXR code run efficiently on its hardware. In turn, GeForce RTX cards are the first hardware designed to serve as the foundation for real-time ray-traced effects with DXR and RTX.