
CPU core and thread scaling
I'm afraid I haven't had time to pit the various integrated graphics solutions against one another in this Fable Legends test, but I was able to take a quick look at how the two fastest graphics chips scale up when paired with different CPU configs. Since the new graphics APIs like DirectX 12 are largely about reducing CPU overhead, that seemed like the thing to do.

For this little science project, I used the fancy firmware on the Gigabyte X99 boards in my test rigs to enable different numbers of CPU cores on their Core i7-5960X processors. I also selectively disabled Hyper-Threading. The end result was a series of tests ranging from a single-core CPU config with a single thread (1C/1T) through to the full-on 5960X with eight cores and 16 threads (8C/16T).
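For anyone looking to replicate this sort of test, it's worth confirming that each firmware change actually took effect before benchmarking. Here's a minimal sketch of such a sanity check, not part of my actual test process, that assumes the third-party psutil Python package is installed and simply reports how many cores and hardware threads the OS sees:

```python
# Illustrative sanity check (assumes the psutil package is available):
# report how many physical cores and hardware threads the OS detects
# after changing the core count or Hyper-Threading setting in firmware.
import psutil

cores = psutil.cpu_count(logical=False)    # physical cores currently enabled
threads = psutil.cpu_count(logical=True)   # hardware threads (2x cores with HT on)
print(f"Detected config: {cores}C/{threads}T")
```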

Interesting. The sweet spot with the Radeon looks to be the four-core, four-thread config, while the GeForce prefers the 6C/6T config. Perhaps Nvidia's drivers use more threads internally. The performance with both cards suffers a little with eight cores enabled, and it drops even more when Hyper-Threading is turned on.

Why? Part of the answer is probably pretty straightforward: this application doesn't appear to make very good use of more than four to six threads. Given that fact, the 5960X probably benefits from having the unused cores gated off; if that saves power, the chip can likely spend more time running at higher clock speeds via Turbo Boost.

I'm not sure what to make of the slowdown with Hyper-Threading enabled. Simultaneous multi-threading on a CPU core does require some resource sharing, which can dampen per-thread performance. However, if the operating system scheduler is doing its job well, then multiple threads should only be scheduled on a CPU core when other cores are already occupied—at least, I expect that's how it should work on a desktop CPU. Hmmm.
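One way to isolate the SMT effect would be to pin the benchmark process to one logical CPU per physical core and compare against the default scheduling. Below is a rough sketch of that idea, again using the psutil Python package and assuming HT siblings are enumerated adjacently (0/1, 2/3, and so on), which isn't guaranteed on every system; it's purely hypothetical, not something we did for this article.

```python
# Rough sketch (hypothetical): restrict a process to one logical CPU per
# physical core so Hyper-Threading sibling contention can be ruled in or
# out versus the default scheduler behavior.
# Assumes SMT siblings are enumerated adjacently (0/1, 2/3, ...); verify
# the actual CPU topology before trusting this mapping.
import psutil

logical = psutil.cpu_count(logical=True)    # e.g. 16 on the 8C/16T 5960X
physical = psutil.cpu_count(logical=False)  # e.g. 8

# Keep only the first logical CPU of each sibling pair.
one_per_core = list(range(0, logical, logical // physical))

proc = psutil.Process()                     # in practice, the game's PID instead
proc.cpu_affinity(one_per_core)
print("Affinity set to:", proc.cpu_affinity())
```

If performance with Hyper-Threading on and affinity restricted this way matched the HT-off numbers, that would point the finger at thread placement rather than at resource sharing inside the cores.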

The curves flatten out a bit when we raise the resolution and image quality settings because GPU speed constraints come into play, but the trends don't change much. In this case, the Fury X doesn't benefit from more than two CPU cores.

Perhaps we can examine CPU scaling with a lower-end CPU at some point.

So now what?
We've now taken a look at one more piece of the DirectX 12 puzzle, and frankly, the performance results don't look a ton different than what we've seen in current games.

The GeForce cards perform well generally, in spite of this game's apparent use of asynchronous compute shaders. Cards based on AMD's Hawaii chips look relatively strong here, too, and they kind of embarrass the Fiji-based R9 Fury offerings by getting a little too close for comfort, even in 4K. One would hope for a stronger showing from the Fury and Fury X in this case.

But, you know, it's just one benchmark based on an unreleased game, so it's nothing to get too worked up about one way or another. I do wish we could have tested DX12 versus DX11, but the application Microsoft provided only works in DX12. We'll have to grab a copy of Fable Legends once the game is ready for public consumption and try some side-by-side comparisons.
