chuckula wrote:The most interesting part is that this isn't Intel's first mesh. That distinction goes to Knights Landing, where up to 72 cores are connected together with a mesh. Since KNL already hits 72 cores, it looks like there is room to scale this interconnect.
Waco wrote:The more we advance, the more nuanced process layout and memory access have to be.
That said, it probably won't make much difference for consumers. Intel could easily "tile" 4+ cores into a node without much engineering if they've already done 2. They generally aren't so shortsighted as to design a new structure without thinking about the next few iterations.
Losergamer04 wrote:Any smart gerbils have an idea on how this compares to AMD's Infinity Fabric?
Mr Bill wrote:Waco wrote:Intel could easily "tile" 4+ cores into a node without much engineering if they've already done 2.
Working with HBM coupling to CPU and GPU has probably given AMD a boost in experience with respect to Infinity Fabric and memory controllers. My guess is that, looming on the horizon, both Intel and AMD are converging on the same weave-based paradigm.
UberGerbil wrote:Waco wrote:Intel could easily "tile" 4+ cores into a node without much engineering if they've already done 2.
Mr Bill wrote:Working with HBM coupling to CPU and GPU has probably given AMD a boost in experience with respect to Infinity Fabric and memory controllers. My guess is that, looming on the horizon, both Intel and AMD are converging on the same weave-based paradigm.
If Intel puts four CPUs on a ring, and then weaves the rings together in a mesh... would the result be chainmail?
Vhalidictes wrote:UberGerbil wrote:If Intel puts four CPUs on a ring, and then weaves the rings together in a mesh... would the result be chainmail?
I had to look that up! It's pretty interesting. Only if you're using the International pattern (European 4-in-1, if you want to get technical).
Mr Bill wrote:I wonder if the interposer used in HBM can be used to provide mesh or Infinity Fabric links between modules in a package.
the wrote:Mr Bill wrote:I wonder if the interposer used in HBM can be used to provide mesh or Infinity Fabric links between modules in a package.
Yes. Raja of AMD has dropped hints that that is the direction he's going for GPUs. The same principles apply to scaling the number of CPU cores in a design.
Intel also has their EMIB technology to link multiple dies together. The Stratix 10 part is currently shipping using it.
the wrote:Raven Ridge will offer a CPU+GPU combo in AM4 packaging. It is due later this year.
Vhalidictes wrote:I know that APUs are and will continue to be a thing, it's just that having the GPU in its own socket would help with power delivery and heat dissipation, and possibly PCIe lane tracing as well.
Flying Fox wrote:Vhalidictes wrote:I know that APUs are and will continue to be a thing, it's just that having the GPU in its own socket would help with power delivery and heat dissipation, and possibly PCIe lane tracing as well.
For a weaker APU-style GPU, the separate packaging will create issues with PCIe lane tracing and additional wires on the motherboard. For stronger/bigger discrete-type GPUs, I'm not sure about power delivery and heat dissipation: how do you channel 200W+ over the motherboard traces without frying the board itself? Not to mention you need another tower-style heatsink to carry the heat from the socket, unless you are doing watercooling.
Vhalidictes wrote:Flying Fox wrote:Vhalidictes wrote:I know that APUs are and will continue to be a thing, it's just that having the GPU in its own socket would help with power delivery and heat dissipation, and possibly PCIe lane tracing as well.
For a weaker APU-style GPU, the separate packaging will create issues with PCIe lane tracing and additional wires on the motherboard. For stronger/bigger discrete-type GPUs, I'm not sure about power delivery and heat dissipation: how do you channel 200W+ over the motherboard traces without frying the board itself? Not to mention you need another tower-style heatsink to carry the heat from the socket, unless you are doing watercooling.
Those are excellent points, but CPU sockets have already solved those problems as there are most definitely CPUs that sustain that level of heat and power. The Bulldozer-based FX processors come to mind.
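For scale, here's a rough sketch of the power-delivery arithmetic both posts are gesturing at. The 1.2 V core voltage is an illustrative assumption, not a figure from the thread:

```python
# Rough current draw for a 200 W socketed chip. Power crosses the board
# at 12 V; the VRMs next to the socket convert it to the low-voltage,
# high-current core rail over a distance of millimeters.
power_w = 200.0
vcore_v = 1.2               # assumed core voltage; actual rails vary by chip

print(f"{power_w / 12.0:.1f} A at 12 V across the board")      # ~16.7 A
print(f"{power_w / vcore_v:.0f} A at {vcore_v} V at the socket")  # ~167 A
```

The board traces only ever carry the tame 12 V current; the high-current conversion happens in the VRMs right beside the socket, which is why a GPU socket wouldn't be fundamentally different from a CPU socket on this point.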
NTMBK wrote:chuckula wrote:The most interesting part is that this isn't Intel's first mesh. That distinction goes to Knights Landing, where up to 72 cores are connected together with a mesh. Since KNL already hits 72 cores, it looks like there is room to scale this interconnect.
KNL has cores organised in "tiles" of 2 cores, meaning it only has 36 nodes on the mesh.
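For a rough sense of why the mesh is worth the trouble, here's a back-of-the-envelope sketch comparing average hop counts on a bidirectional ring versus a 6x6 mesh at KNL's 36 stops. It assumes idealized unit-cost hops, so it ignores the real per-hop latency differences between the two fabrics:

```python
# Average hop count between nodes: 36-stop ring vs. 6x6 2D mesh
# (KNL: 72 cores / 2 cores per tile = 36 mesh nodes).
from itertools import product

N = 36

# Bidirectional ring: shortest way around, for every distinct ordered pair.
ring_hops = [min(abs(i - j), N - abs(i - j))
             for i in range(N) for j in range(N) if i != j]

# 6x6 mesh with XY routing: Manhattan distance between distinct nodes.
nodes = list(product(range(6), repeat=2))
mesh_hops = [abs(ax - bx) + abs(ay - by)
             for (ax, ay) in nodes for (bx, by) in nodes
             if (ax, ay) != (bx, by)]

print(f"ring avg hops: {sum(ring_hops) / len(ring_hops):.2f}")  # ~9.26
print(f"mesh avg hops: {sum(mesh_hops) / len(mesh_hops):.2f}")  # 4.00
```

The ring's average distance grows linearly with node count while the mesh's grows roughly with its square root, which is the whole argument for switching.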
DragonDaddyBear wrote:Any smart gerbils have an idea on how this compares to AMD's Infinity Fabric?
the wrote:There have been murmurs of socketed GPUs for a while. NVIDIA's P100 and V100 cards using the NVLink mezzanine connector are probably the closest thing to a socketed GPU today. The connector is similar to what several other high-end chips, like Itanium or IBM mainframe books, used in the past.
Until recently, it really hasn't made much sense to do a socketed GPU: the bandwidth wasn't there. Epyc's socket has a 512-bit-wide memory interface; with DDR4-3200, that's enough raw bandwidth to satisfy a midrange GPU now. This also doesn't factor in the use of HBM, which would fit into the socket itself. When AMD decides to shatter their CPU and GPU dies into fully interchangeable interposer blocks, swapping CPU cores for GPU blocks would be trivial while the IO remains consistent.
Vhalidictes wrote:the wrote:There have been murmurs of socketed GPUs for a while. NVIDIA's P100 and V100 cards using the NVLink mezzanine connector are probably the closest thing to a socketed GPU today. The connector is similar to what several other high-end chips, like Itanium or IBM mainframe books, used in the past.
Until recently, it really hasn't made much sense to do a socketed GPU: the bandwidth wasn't there. Epyc's socket has a 512-bit-wide memory interface; with DDR4-3200, that's enough raw bandwidth to satisfy a midrange GPU now. This also doesn't factor in the use of HBM, which would fit into the socket itself. When AMD decides to shatter their CPU and GPU dies into fully interchangeable interposer blocks, swapping CPU cores for GPU blocks would be trivial while the IO remains consistent.
Yes, I was thinking that a modern APU-socketed GPU would simply use HBM, although a flexible socket would have access to DIMM slots as well.
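As a quick sanity check on that bandwidth claim, taking the 512-bit width and DDR4-3200 transfer rate at face value from the post above:

```python
# Peak bandwidth of a 512-bit (8-channel) DDR4 interface at 3200 MT/s,
# using the figures from the post rather than any official spec.
bus_bits = 512
transfers_per_s = 3200e6            # DDR4-3200: 3200 million transfers/s
peak_bytes_s = (bus_bits / 8) * transfers_per_s
print(f"peak: {peak_bytes_s / 1e9:.1f} GB/s")  # 204.8 GB/s
```

Roughly 205 GB/s of raw bandwidth is indeed in the neighborhood of a midrange GPU of the era; for comparison, an RX 580's 256-bit GDDR5 delivers 256 GB/s.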
Mr Bill wrote:Pretty good server article over at AnandTech: intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade
Kougar wrote:Mr Bill wrote:Pretty good server article over at AnandTech: intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade
Promising results for EPYC too. AMD is offering more cores at lower prices and still winning a lot of the benches.
DancinJack wrote:Kougar wrote:Mr Bill wrote:Pretty good server article over at AnandTech: intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade
Promising results for EPYC too. AMD is offering more cores at lower prices and still winning a lot of the benches.
Yeah, unfortunately for AMD though, the server space isn't quite as forgiving as the desktop space. (I couldn't think of a better word than forgiving.) I doubt AMD will see a ton of market share this year for Epyc (I still hate the name), but over the next few years we should see a good uptick for them. Intel obviously still has the big advantage of being the incumbent in a lot of cases, but AMD has done a really good job with those chips. I hope they can get the platform to a place as stable as Intel's (for the most part) quickly.