Radeon RX Vega primitive shaders will need API support


Some controversy has arisen today regarding the state of primitive-shader support on Radeon RX Vega graphics cards, stemming from a post on the 3DCenter forums. That poster (whom the AMD subreddit identifies as Marc Sauter, an editor at German site Golem.de) asserts that AMD has dropped its efforts to include an alternate driver path for primitive shaders on those cards. Some have questioned the validity of this single source, so I wanted to back up that assertion with my own breakout-session experience from the company's pre-CES tech day. At least in the session I attended, AMD gave us a surprise update on the status of various Vega architectural features after I asked about the ongoing confusion surrounding primitive-shader support in the latest Radeon drivers. The company may have offered that same briefing to other members of the press in other sessions, as well.


Here's what we learned during that brief presentation. First, AMD re-asserted that the Draw Stream Binning Rasterizer (DSBR), an essentially tile-based approach to rendering that lets the GPU shade pixels more efficiently by shading only those triangles visible in the final image, has been enabled and functioning in the Radeon RX Vega drivers since those cards' launch. The DSBR "just happens," and developers don't need to do anything to take advantage of its performance benefits. The High Bandwidth Cache Controller, or HBCC, has also been available for users to toy with from (or near) the start of RX Vega's tenure on the desktop. AMD notes that most applications simply don't need more than the 8 GB of VRAM on board RX Vega cards, so this feature has a limited impact on performance in today's titles. Finally, games are already taking advantage of the lower-precision arithmetic available through Vega cards' Rapid Packed Math support, and future titles like Far Cry 5 will also support this feature.
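
For the curious, here's a rough, CPU-side sketch of the binning idea behind the DSBR: triangles are first sorted into screen-space tiles, and depth is resolved within each tile before any pixel shading happens, so fully occluded work is skipped. This is purely illustrative, not AMD's hardware design; the 32x32 tile size, the structs, and the nearest-triangle-wins simplification are all my own assumptions.

```
// Illustrative CPU-side sketch of the tile-binning idea behind a
// draw-stream binning rasterizer. Not AMD's implementation; the tile
// size and the nearest-triangle simplification are assumptions.
#include <cstdio>
#include <vector>
#include <algorithm>

struct Tri { float minX, minY, maxX, maxY; float depth; int id; };

constexpr int kScreenW = 256, kScreenH = 256, kTile = 32;
constexpr int kTilesX = kScreenW / kTile, kTilesY = kScreenH / kTile;

int main() {
    // Two overlapping triangles: id 1 sits in front of id 0.
    std::vector<Tri> tris = {
        {10, 10, 120, 120, 0.8f, 0},
        {40, 40, 150, 150, 0.2f, 1},
    };

    // Phase 1: bin each triangle into every screen tile its bounding
    // box touches (the "batching" half of the DSBR idea).
    std::vector<std::vector<const Tri*>> bins(kTilesX * kTilesY);
    for (const Tri& t : tris) {
        int tx0 = std::max(0, int(t.minX) / kTile);
        int ty0 = std::max(0, int(t.minY) / kTile);
        int tx1 = std::min(kTilesX - 1, int(t.maxX) / kTile);
        int ty1 = std::min(kTilesY - 1, int(t.maxY) / kTile);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * kTilesX + tx].push_back(&t);
    }

    // Phase 2: within each tile, resolve depth before shading, so a
    // triangle hidden in that tile never reaches the pixel shader.
    // (Real hardware tracks per-pixel coverage; picking the nearest
    // triangle per tile is a deliberate simplification here.)
    for (int ty = 0; ty < kTilesY; ++ty) {
        for (int tx = 0; tx < kTilesX; ++tx) {
            auto& bin = bins[ty * kTilesX + tx];
            if (bin.empty()) continue;
            const Tri* front = *std::min_element(
                bin.begin(), bin.end(),
                [](const Tri* a, const Tri* b) { return a->depth < b->depth; });
            printf("tile (%d,%d): %zu candidate(s), shading tri %d only\n",
                   tx, ty, bin.size(), front->id);
        }
    }
}
```

In the tiles where the two triangles overlap, only the nearer one gets shaded; that avoided work is where the efficiency gain comes from.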

The issue sparking the most controversy today is the status of the Next-Generation Geometry Engine, better known as "primitive shaders" in enthusiast shorthand. AMD emphasized that the Next-Generation Geometry path has several components, not just the more flexible programming model exposed through primitive shaders. One of those is the Intelligent Workgroup Distributor, a feature that ensures the GPU avoids performance-harming operations like context rolls (or context switches) and enjoys maximum occupancy of its execution units. AMD confirmed that the IWD is enabled, and that any performance benefits from that feature are already being demonstrated by Vega cards today.

Primitive shaders have remained a point of interest for Vega GPUs because of the potential performance increases that AMD's own materials have promised. The Vega architectural whitepaper notes that Vega's next-generation geometry path can process more than 17 primitives per clock, compared to the four primitives per clock that the Vega 10 GPU can process using the traditional Direct3D rendering path. That whitepaper figure came from internal AMD tests with a pre-release driver, and justifiably or not, the common expectation among enthusiasts has been that primitive shader support is a driver feature waiting to be enabled.
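
For a sense of scale, here's the back-of-the-envelope math on those figures. The ~1.5 GHz clock below is an assumed round number in the neighborhood of RX Vega boost clocks, not an AMD specification:

```
// Peak geometry-throughput arithmetic from the whitepaper figures.
// The 1.5 GHz clock is an illustrative assumption.
#include <cstdio>

int main() {
    const double clock_ghz = 1.5;            // assumed engine clock
    const double legacy_prims_per_clk = 4.0; // traditional D3D path
    const double ngg_prims_per_clk = 17.0;   // whitepaper NGG figure
    printf("legacy path: %.1f Gprims/s\n", clock_ghz * legacy_prims_per_clk);
    printf("NGG path:    %.1f Gprims/s\n", clock_ghz * ngg_prims_per_clk);
}
```

That works out to roughly 6 billion primitives per second on the traditional path versus over 25 billion on the next-generation path, which explains why enthusiasts have watched this feature so closely.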

In the "justifiably" column, other GPU reviewers have confirmed in conversations with AMD that a manual primitive shader API was in the works, and that the graphics driver could have had a method to invoke the next-generation geometry path automatically. AMD employees have also confirmed that the graphics driver would handle invocation of the Next-Generation Geometry path automatically on Twitter.

At its pre-CES tech day, AMD turned this expectation on its ear a bit. The company explained that instead of being an "it just works" affair, full support for primitive shaders will require explicit inclusion in future versions of APIs like Direct3D 12 and Vulkan. Unlike some Vega features, AMD says that "primitive shader support isn't something we can just turn on overnight and it happens," noting instead that it'll need to work with the development community to bring this feature to future API versions. Game developers would then presumably need to take advantage of the feature when programming new games in order to enjoy the full performance benefits of the Next-Generation Geometry path. 

The company notes that it can work with developers to help them employ rendering techniques similar to primitive shaders today, however, and it cited Wolfenstein II: The New Colossus' GPU culling graphics setting as an operation similar in principle to that of primitive shaders. When this feature is enabled on AMD graphics cards, the game uses Vega's copious shader array to accelerate geometry processing. We didn't perform any directed testing of this feature when we last benchmarked Wolfenstein II, but ComputerBase has done so, and that site saw negligible performance differences with the setting enabled versus disabled. We may need to revisit this feature in the near future.
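
For reference, the general shape of such a culling pass is to reject backfacing and off-screen triangles and compact the index list before the rasterizer ever sees them. The sketch below is a CPU stand-in for what a compute shader would do on the GPU; it isn't Wolfenstein II's actual code, and the vertex data and helper names are made up for illustration.

```
// CPU reference for the kind of triangle culling a compute pre-pass
// can perform before rasterization: backface and trivial off-screen
// rejection, followed by index-buffer compaction. A sketch of the
// general technique only.
#include <cstdio>
#include <vector>

struct V { float x, y, z; };  // NDC-style vertex, w assumed to be 1

static bool backfacing(const V& a, const V& b, const V& c) {
    // Signed screen-space area; <= 0 means backfacing or degenerate.
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x) <= 0.0f;
}

static bool offscreen(const V& a, const V& b, const V& c) {
    // Trivial reject: all three vertices beyond the same NDC bound.
    auto all = [&](auto pred) { return pred(a) && pred(b) && pred(c); };
    return all([](const V& v) { return v.x < -1; }) ||
           all([](const V& v) { return v.x >  1; }) ||
           all([](const V& v) { return v.y < -1; }) ||
           all([](const V& v) { return v.y >  1; });
}

int main() {
    std::vector<V> verts = {
        {-0.5f, -0.5f, 0}, {0.5f, -0.5f, 0}, {0, 0.5f, 0},        // on-screen
        {-2.0f, -2.0f, 0}, {-3.0f, -2.0f, 0}, {-2.5f, -3.0f, 0},  // off-screen
    };
    // Three triangles: front-facing, back-facing (reversed winding),
    // and entirely off-screen.
    std::vector<unsigned> indices = {0, 1, 2, 2, 1, 0, 3, 4, 5};

    std::vector<unsigned> survivors;  // compacted index buffer
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        const V &a = verts[indices[i]], &b = verts[indices[i + 1]],
                &c = verts[indices[i + 2]];
        if (backfacing(a, b, c) || offscreen(a, b, c)) continue;
        survivors.insert(survivors.end(),
                         {indices[i], indices[i + 1], indices[i + 2]});
    }
    printf("%zu of %zu triangles survive culling\n",
           survivors.size() / 3, indices.size() / 3);
}
```

The payoff is that the fixed-function geometry front end only processes the survivors, which is why a wide shader array can help even before any new API support arrives.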

Ultimately, it seems as though Radeon RX Vega performance will continue to improve gradually over time instead of undergoing a step-function improvement from architectural features still waiting to be enabled. The greater challenge facing would-be RX Vega owners is getting their hands on those cards in the first place. Stock of those products continues to be thin or nonexistent at major e-tailers, thanks at least in part to cryptocurrency demand, and until the industry-wide shortages of graphics cards ease, the status of one architectural feature would seem to be more of a curiosity than a make-or-break line item for RX Vega cards.
