Questions about 24 lanes of PCIe 4.0 versus 48 lanes of PCIe 3.0 aside, Cascade Lake-X's big claim to fame versus the 3900X is quad-channel memory support. And I've got a question pertaining to that: while nobody can speak for Cascade Lake-X yet since the hardware isn't out, is there an actual, meaningful advantage to running nosebleed DDR4 speeds on Skylake-X? Everything quantitative I've read indicates that pushing past the nominal DDR4-2666 memory controller spec just results in the IMC running hotter with almost no meaningful performance benefit. If I'm wrong, I'd love to hear about it. Thank you!
It depends on the workload. In games (most of them, anyway) it makes a difference. In many (not all) workstation tasks, it makes no real difference. I have some Samsung B-die Dominators (dual-rank), 4x16GB (64GB total).
If you want me to run a specific free test, I can do that.
It's something that is incredibly important for some workloads and absolutely irrelevant for others. In some workstation loads - e.g. CFD - you simply cannot get enough memory bandwidth.
A typical rule of thumb, for a non-trivial simulation, is that you need 1 memory channel per 3 threads to avoid memory-bandwidth saturation.
So, on a 4-channel memory controller, anything above 12 cores (SMT off) starts to see greatly diminished scaling. Anything above 16 cores will likely see degrading performance!
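That rule of thumb is easy to sketch in a couple of lines. This is just my own illustration of the ~3-threads-per-channel heuristic above (the function name and the constant are mine, not from any tool):

```python
# Rough rule of thumb from CFD-style workloads:
# ~3 heavy threads per memory channel before bandwidth saturates.
THREADS_PER_CHANNEL = 3

def scaling_ceiling(channels: int) -> int:
    """Core count beyond which scaling drops off sharply."""
    return channels * THREADS_PER_CHANNEL

print(scaling_ceiling(2))  # dual-channel (mainstream) -> 6
print(scaling_ceiling(4))  # quad-channel (Skylake-X)  -> 12
```

So a quad-channel platform at JEDEC speeds tops out around 12 well-fed cores for this kind of load, which is exactly the cliff described above.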
Now, that approximate rule of thumb is based on workstations operating within JEDEC spec. So if you can increase bandwidth by clocking DDR4 higher, you can change that around a bit.
For instance, if you are on Skylake-X with memory rated at DDR4-2400 but can run it at DDR4-4000, you have just upped bandwidth by ~67%. So instead of 3 cores per channel, you are nearing 5 cores per channel.
Or Zen+: at DDR4-2666, dual-channel bandwidth is ~43 GB/s. At 3733, that is ~60 GB/s - shifting toward ~4 cores per channel.
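For anyone who wants to sanity-check those numbers, theoretical peak DDR4 bandwidth is just transfer rate x 8 bytes per transfer x channel count. A quick sketch (my own helper, nothing official):

```python
# Theoretical peak DDR4 bandwidth.
# Each channel moves 8 bytes (64 bits) per transfer, so:
#   MT/s * 8 B * channels = MB/s; divide by 1000 for GB/s.
def ddr4_bandwidth_gbs(mts: int, channels: int) -> float:
    return mts * 8 * channels / 1000

print(ddr4_bandwidth_gbs(2666, 2))  # dual-channel 2666 -> 42.656 GB/s
print(ddr4_bandwidth_gbs(3733, 2))  # dual-channel 3733 -> 59.728 GB/s
print(ddr4_bandwidth_gbs(2666, 4))  # quad-channel 2666 -> 85.312 GB/s
```

Which also shows why quad-channel at plain JEDEC 2666 already beats dual-channel at any realistic overclock.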