
Fable Legends DirectX 12 performance revealed

A peek at the future of games and graphics
— 8:00 AM on September 24, 2015

One of the more exciting features built into Windows 10 is DirectX 12, a new programming interface that promises to modernize the way games talk to graphics chips.

Prior versions of DirectX—and specifically its graphics-focused component, known as Direct3D—are used by the vast majority of today's PC games, but they're not necessarily a good fit for how modern GPUs really work. These older APIs tend to impose more overhead than necessary on the graphics driver and CPU, and they're not always terribly effective at keeping the GPU fed with work. Both of these problems tend to sap performance. Thus, DirectX has often been cited as the culprit when console games make a poor transition to the PC platform in spite of the PC's massive advantage in raw power.

Although, honestly, you can't blame an API for something like the Arkham Knight mess. Console ports have other sorts of problems, too.

Anyhow, by offering game developers more direct, lower-level access to the graphics processor, DirectX 12 promises to unlock new levels of performance in PC gaming. This new API also exposes a number of novel hardware features not accessible in older versions of Direct3D, opening up the possibility of new techniques that provide richer visuals than previously feasible in real-time rendering.

So yeah, there's plenty to be excited about.

DirectX 12 is Microsoft's baby, and it's not just a PC standard. Developers will also use it on the Xbox One, giving them a unified means of addressing two major gaming platforms at once.

That's why there's perhaps no better showcase for DX12 than Fable Legends, the upcoming game from Lionhead Studios. Game genres have gotten wonderfully and joyously scrambled in recent years, but I think I'd describe Legends as a free-to-play online RPG with MOBA and FPS elements. Stick that in yer pipe and smoke it. Legends will be exclusive to the Xbox One and Windows 10, and it will take advantage of DX12 on the PC as long as a DirectX 12-capable graphics card is present.

In order to demonstrate the potential of DX12, Microsoft has cooked up a benchmark based on a pre-release version of Fable Legends. We've taken it for a spin on a small armada of the latest graphics cards, and we have some interesting results to share.

This Fable Legends benchmark looks absolutely gorgeous, thanks in part to the DirectX 12 API and Unreal Engine 4. The artwork is stylized in a not-exactly-photorealistic fashion, but the demo features a tremendously complex set of environments. The video above utterly fails to do it justice, thanks both to YouTube's compression and a dreaded 30-FPS cap on my video capture tool. The animation looks much smoother coming directly from a decent GPU.

To my eye, the Legends benchmark represents a new high-water mark in PC game visuals for this reason: a near-complete absence of the shimmer, crawling, and sparkle caused by high-frequency noise—both on object edges and inside of objects. (Again, you'd probably have to see it in person to appreciate it.) This sheer solidity makes Legends feel more like an offline-rendered scene than a real-time PC game. As I understand it, much of the credit for this effect belongs to the temporal anti-aliasing built into Unreal Engine 4. This AA method evidently offers quality similar to full-on supersampling with less of a performance hit. Here's hoping more games make use of it in the future.

DX12 is a relatively new creation, and Fable Legends has clearly been in development for quite some time. The final game will work with DirectX 11 as well as DX12, and it was almost surely developed with the older API and its requirements in mind. The question, then, is: how exactly does Legends take advantage of DirectX 12? Here's Microsoft's statement on the matter.

Lionhead Studios has made several additions to the engine to implement advanced visual effects, and has made use of several new DirectX 12 features, such as Async Compute, manual Resource Barrier tracking, and explicit memory management to help the game achieve the best possible performance.

That's not a huge number of features to use, given everything DX12 offers. Still, the memory management and resource tracking capabilities get at the heart of what this lower-level API is supposed to offer. The game gets to manage video memory itself, rather than relying on the GPU driver to shuffle resources around.
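To give a sense of what manual resource barrier tracking looks like in practice, here's a minimal D3D12 fragment. It's a generic sketch of the API, not Lionhead's actual code; the function name and variables are hypothetical stand-ins. Under DX11, the driver inferred these state transitions behind the scenes. Under DX12, the engine records each barrier itself, at exactly the point it's needed.

```cpp
#include <d3d12.h>

// Transition a render target so a later pass can sample it as a texture.
// In DX12 the engine, not the driver, is responsible for recording this
// state change on the command list. (Hypothetical helper for illustration.)
void TransitionForSampling(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* renderTarget)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = renderTarget;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}
```

Doing this by hand is more work for the developer, but it spares the driver from tracking resource state conservatively on the game's behalf, which is one of the main sources of the CPU overhead DX12 aims to eliminate.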

Asynchronous compute shaders, meanwhile, have been getting a lot of play in certain pockets of the 'net since the first DX12 benchmark, built around Oxide Games' Ashes of the Singularity, was released. This feature lets the GPU execute compute kernels (small GPU programs) alongside its graphics work instead of serializing the two, and that overlap could leave room for more complex effects in each frame.

Early tests have shown that the scheduling hardware in AMD's graphics chips tends to handle async compute much more gracefully than Nvidia's chips do. That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware—it's just not enabled yet. We'll have to see how well async compute works on newer GeForces once Nvidia turns on its hardware support.
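For the curious, the mechanism behind async compute in DX12 is simply a second command queue. The sketch below shows the generic API pattern, not any shipping engine's code; the fence-synchronization comments describe hypothetical variables.

```cpp
#include <d3d12.h>

// Create a dedicated compute queue alongside the usual graphics queue.
// Work submitted here can, on hardware with capable schedulers, overlap
// with rendering rather than waiting its turn behind graphics commands.
ID3D12CommandQueue* CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    ID3D12CommandQueue* queue = nullptr;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Where the two queues share resources, a fence keeps them ordered
// (computeQueue, graphicsQueue, fence, and value are placeholders):
//   computeQueue->Signal(fence, value);  // compute marks its work done
//   graphicsQueue->Wait(fence, value);   // graphics waits before consuming
```

Whether that overlap actually yields a speedup depends on how gracefully the GPU's scheduler juggles the two queues, which is exactly where AMD's and Nvidia's current chips appear to differ.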

For now, well, I suppose we're about to see how the latest graphics cards handle Fable Legends. Let's take a look.

Our testing methods
The graphics cards we used for testing are listed below. Please note that many of them are not stock-clocked reference cards but actual consumer products with faster clock speeds. For example, the GeForce GTX 980 Ti we tested is the Asus Strix model that won our recent roundup. Similarly, the Radeon R9 Fury and 390X cards are also Asus Strix cards with tweaked clock frequencies. We prefer to test with consumer products when possible rather than reference parts, since those are what folks are more likely to buy and use.

The Asus Strix Radeon R9 390X

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor        Core i7-5960X
Motherboard      Gigabyte X99-UD5 WiFi
Chipset          Intel X99
Memory size      16GB (4 DIMMs)
Memory type      Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings   15-15-15-36 2T
Hard drive       Kingston SSDNow 310 960GB SATA
Power supply     Corsair AX850
OS               Windows 10 Pro

                       Driver revision          GPU base      GPU boost     Memory       Memory
                                                clock (MHz)   clock (MHz)   clock (MHz)  size (MB)
Sapphire Nitro R7 370  Catalyst 15.201 beta     -             985           1400         4096
MSI Radeon R9 285      Catalyst 15.201 beta     -             973           1375         2048
XFX Radeon R9 390      Catalyst 15.201 beta     -             1015          1500         4096
Asus Strix R9 390X     Catalyst 15.201 150922a  -             1070          1500         8192
Radeon R9 Nano         Catalyst 15.201 150922a  -             1000          500          4096
Asus Strix R9 Fury     Catalyst 15.201 150922a  -             1000          500          4096
Radeon R9 Fury X       Catalyst 15.201 150922a  -             1050          500          4096
Gigabyte GTX 950       GeForce 355.82           1203          1405          1750         2048
MSI GeForce GTX 960    GeForce 355.82           1216          1279          1753         2048
MSI GeForce GTX 970    GeForce 355.82           1114          1253          1753         4096
Gigabyte GTX 980       GeForce 355.82           1228          1329          1753         4096
Asus Strix GTX 980 Ti  GeForce 355.82           1216          1317          1800         6144

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.