
CPU core and thread scaling
I'm afraid I haven't had time to pit the various integrated graphics solutions against one another in this Fable Legends test, but I was able to take a quick look at how the two fastest graphics chips scale up when paired with different CPU configs. Since the new graphics APIs like DirectX 12 are largely about reducing CPU overhead, that seemed like the thing to do.

For this little science project, I used the fancy firmware on the Gigabyte X99 boards in my test rigs to enable different numbers of CPU cores on their Core i7-5960X processors. I also selectively disabled Hyper-Threading. The end result was a series of tests ranging from a single-core CPU config with a single thread (1C/1T) through to the full-on 5960X with eight cores and 16 threads (8C/16T).

Interesting. The sweet spot with the Radeon looks to be the four-core, four-thread config, while the GeForce prefers the 6C/6T config. Perhaps Nvidia's drivers use more threads internally. The performance with both cards suffers a little with eight cores enabled, and it drops even more when Hyper-Threading is turned on.

Why? Part of the answer is probably pretty straightforward: this application doesn't appear to make very good use of more than four to six threads. Given that fact, the 5960X probably benefits from the power savings of having the extra cores gated off. If turning off those cores saves power, the CPU can likely spend more time running at higher clock speeds via Turbo Boost.

I'm not sure what to make of the slowdown with Hyper-Threading enabled. Simultaneous multi-threading does require two threads to share a core's execution resources, which can dampen per-thread performance. However, if the operating system scheduler is doing its job well, two threads should only land on the same physical core when every other core is already occupied—at least, I expect that's how it should work on a desktop CPU. Hmmm.
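The SMT effect described above can be poked at directly on any multi-core machine. Here's a minimal sketch of the idea—my own illustration, not part of the benchmark used in this article—that times a CPU-bound workload at different worker counts. The `spin` and `throughput` helpers are made up for this example. If per-task throughput stops scaling once the worker count exceeds the physical core count, that's the resource sharing at work.

```python
# Sketch (assumed setup, not the article's methodology): measure how task
# throughput scales as worker processes pile onto the available cores.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def spin(n=500_000):
    # Simple integer-heavy loop to keep a core's ALUs busy.
    total = 0
    for i in range(n):
        total += i * i
    return total

def throughput(workers, tasks=8):
    # Run `tasks` copies of spin() across `workers` processes and
    # report tasks completed per second.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(spin, [500_000] * tasks))
    return tasks / (time.perf_counter() - start)

if __name__ == "__main__":
    logical = os.cpu_count() or 1
    for w in (1, 2, logical):
        print(f"{w:2d} workers: {throughput(w):.2f} tasks/s")
```

On a chip with SMT, the jump from "one worker per physical core" to "one worker per logical core" typically buys far less than the earlier doublings did, since the extra logical cores only fill pipeline bubbles rather than adding real execution units.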

The curves flatten out a bit when we raise the resolution and image quality settings because GPU speed constraints come into play, but the trends don't change much. In this case, the Fury X doesn't benefit from more than two CPU cores.

Perhaps we can examine CPU scaling with a lower-end CPU at some point.

So now what?
We've now taken a look at one more piece of the DirectX 12 puzzle, and frankly, the performance results don't look a ton different than what we've seen in current games.

The GeForce cards perform well generally, in spite of this game's apparent use of asynchronous compute shaders. Cards based on AMD's Hawaii chips look relatively strong here, too, and they kind of embarrass the Fiji-based R9 Fury offerings by getting a little too close for comfort, even in 4K. One would hope for a stronger showing from the Fury and Fury X in this case.

But, you know, it's just one benchmark based on an unreleased game, so it's nothing to get too worked up about one way or another. I do wish we could have tested DX12 versus DX11, but the application Microsoft provided only works in DX12. We'll have to grab a copy of Fable Legends once the game is ready for public consumption and try some side-by-side comparisons.

