CPU core and thread scaling
I'm afraid I haven't had time to pit the various integrated graphics solutions against one another in this Fable Legends test, but I was able to take a quick look at how the two fastest graphics chips scale up when paired with different CPU configs. Since the new graphics APIs like DirectX 12 are largely about reducing CPU overhead, that seemed like the thing to do.

For this little science project, I used the fancy firmware on the Gigabyte X99 boards in my test rigs to enable different numbers of CPU cores on their Core i7-5960X processors. I also selectively disabled Hyper-Threading. The end result was a series of tests ranging from a single-core CPU config with a single thread (1C/1T) through to the full-on 5960X with eight cores and 16 threads (8C/16T).
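
If you want to double-check this sort of config at home, here's a quick sketch (not something from our test harness) that reports the active core and thread counts from software. It assumes Python with the third-party psutil package installed:

    import psutil

    # Physical cores vs. logical CPUs (hardware threads) visible to the OS.
    cores = psutil.cpu_count(logical=False)
    threads = psutil.cpu_count(logical=True)

    print(f"{cores}C/{threads}T")
    print("Hyper-Threading is", "on" if threads > cores else "off")

Run this after each firmware change, and the output should track the 1C/1T-through-8C/16T configs described above.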

Interesting. The sweet spot with the Radeon looks to be the four-core, four-thread config, while the GeForce prefers the 6C/6T config. Perhaps Nvidia's drivers use more threads internally. The performance with both cards suffers a little with eight cores enabled, and it drops even more when Hyper-Threading is turned on.

Why? Part of the answer is probably pretty straightforward: this application doesn't appear to make very good use of more than four to six threads. Given that fact, the 5960X probably benefits from having its unused cores gated off. The power those idle cores would otherwise draw counts against the chip's thermal budget, so shutting them down should let the CPU spend more time running at higher clock speeds via Turbo Boost.
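
One way to test that theory would be to watch reported clock speeds while the benchmark runs. Here's a rough sketch of the idea, again assuming Python and psutil; how faithfully the current frequency is reported varies by platform:

    import time
    import psutil

    # Sample the reported CPU clock speed once a second while a workload runs.
    # If the Turbo Boost theory holds, a 4C/4T config should sustain higher
    # clocks than the full 8C/16T config under the same load.
    for _ in range(10):
        freq = psutil.cpu_freq()
        print(f"current: {freq.current:.0f} MHz (max: {freq.max:.0f} MHz)")
        time.sleep(1)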

I'm not sure what to make of the slowdown with Hyper-Threading enabled. Simultaneous multi-threading on a CPU core does require some resource sharing, which can dampen per-thread performance. However, if the operating system scheduler is doing its job well, then multiple threads should only be scheduled on a CPU core when other cores are already occupied—at least, I expect that's how it should work on a desktop CPU. Hmmm.
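
For what it's worth, an application that doesn't trust the scheduler can take matters into its own hands by pinning one worker thread to each physical core, so no two workers share a core's SMT siblings. Here's a Linux-only sketch of that placement logic in Python; the sysfs path is Linux-specific, and CPython's GIL means this illustrates the idea rather than delivering a real speedup:

    import os
    import threading

    def siblings(cpu):
        # Logical CPUs that share a physical core with `cpu`, per sysfs.
        path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
        with open(path) as f:
            parts = f.read().strip().split(",")
        cpus = set()
        for p in parts:
            if "-" in p:
                lo, hi = p.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(p))
        return frozenset(cpus)

    # Pick one logical CPU from each physical core.
    chosen, seen = [], set()
    for cpu in sorted(os.sched_getaffinity(0)):
        group = siblings(cpu)
        if group not in seen:
            seen.add(group)
            chosen.append(cpu)

    def worker(cpu):
        # With pid 0, sched_setaffinity applies to the calling thread,
        # so each worker lands on its own physical core.
        os.sched_setaffinity(0, {cpu})
        # ... per-thread work would go here ...

    threads = [threading.Thread(target=worker, args=(c,)) for c in chosen]
    for t in threads:
        t.start()
    for t in threads:
        t.join()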

The curves flatten out a bit when we raise the resolution and image quality settings because GPU speed constraints come into play, but the trends don't change much. In this case, the Fury X doesn't benefit from more than two CPU cores.

Perhaps we can examine CPU scaling with a lower-end CPU at some point.

So now what?
We've now taken a look at one more piece of the DirectX 12 puzzle, and frankly, the performance results don't look a ton different than what we've seen in current games.

The GeForce cards generally perform well, in spite of this game's apparent use of asynchronous compute shaders. Cards based on AMD's Hawaii chips look relatively strong here, too, and they kind of embarrass the Fiji-based R9 Fury offerings by getting a little too close for comfort, even at 4K. One would hope for a stronger showing from the Fury and Fury X in this case.

But, you know, it's just one benchmark based on an unreleased game, so it's nothing to get too worked up about one way or another. I do wish we could have tested DX12 versus DX11, but the application Microsoft provided only works in DX12. We'll have to grab a copy of Fable Legends once the game is ready for public consumption and try some side-by-side comparisons.
