Bauxite wrote:More likely it's because 16GB unbuffered DDR4 modules came out after the CPU launch; the CPU has enough hardware address bits to support them.
For a well-confirmed earlier example: socket 1366 desktop models list a maximum of 24GB (6x4GB) but are widely known to work with 48GB (6x8GB) of unbuffered DDR3. As a counterpoint, 16GB unbuffered DDR3 modules are known not to be supported by 115x CPUs, though those modules were a very late addition near the end of life of the standard.
Wirko wrote:46-bit address space is certainly not the limit. It amounts to 64 terabytes. Maybe terawords. Or, expressed in a more sinister way, tebibytes.
just brew it! wrote:If Windows sees it and the system boots and runs clean maybe it is just a matter of Intel not validating operation with 128GB. In which case you may be OK.
the wrote:Wirko wrote:46-bit address space is certainly not the limit. It amounts to 64 terabytes. Maybe terawords. Or, expressed in a more sinister way, tebibytes.
That is indeed the current limit for x86 systems (64 terabytes). SGI (now part of HPE) makes some systems capable of reaching that capacity in a single NUMA system. Those large systems also hit the limit of how many CPU cores can be in a system.
Skylake-EP is extending both of those limits when it ships later this year.
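For reference, the 64-terabyte figure falls straight out of the 46-bit physical address width; a quick sketch of the arithmetic:

```python
# 46-bit physical address space: each address selects one byte.
addressable_bytes = 2 ** 46

# Convert to binary units (tebibytes, as Wirko puts it).
tib = addressable_bytes / 2 ** 40
print(f"2^46 bytes = {tib:.0f} TiB")  # 64 TiB
```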
Takeshi7 wrote:If it were me doing professional engineering work with 128GB of RAM, I'd definitely get ECC RAM, which requires a Xeon. I know you said you need overclocking, but the peace of mind of knowing that my final results are accurate and didn't have bits flipped during the calculation would be worth it to me. The more RAM you have, the higher the probability that one of those bits gets flipped erroneously.
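To put rough numbers on that scaling, here's a back-of-the-envelope sketch. The FIT rate is an illustrative assumption only; published DRAM soft-error rates vary by orders of magnitude across studies, processes, and altitude:

```python
# Back-of-the-envelope soft-error estimate. The FIT rate below is a
# hypothetical placeholder, not a measured figure for any real DIMM.
fit_per_mbit = 25          # assumed failures per 10^9 device-hours, per Mbit
ram_gb = 128
mbits = ram_gb * 1024 * 8  # total capacity in megabits

errors_per_hour = fit_per_mbit * mbits / 1e9
hours_per_month = 30 * 24
print(f"Expected bit flips per month: {errors_per_hour * hours_per_month:.2f}")
```

The key takeaway is that the expectation is linear in capacity: doubling the RAM doubles the expected flip count, which is exactly Takeshi7's point.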
Bauxite wrote:I'd get a Xeon too (and did, w/ 8x32GB registered ECC, same $/GB as 16 sticks at the time) because the Venn diagram of "engineering", "simulation", "loves big pools of RAM" and "heavily threaded" is a strong overlap.
Not sure what you're paying for a 5960X, but at its usual store price you can easily find a 16/18/20/22-core Xeon on fleabay of various provenance. My production 22-core was the same price as the stupidly overpriced 10-core i7. If you carefully select ES/QS chips they can be a lot cheaper, and conveniently ASRock is the best choice for non-production CPUs as well.
Running NAMD with GPU acceleration can increase performance by a factor of 8-10 over CPU alone! This is enough to let moderate-sized MD simulations run in a reasonable amount of time on a single-node workstation.
sophisticles wrote:1) Take it as gospel that Intel knows exactly how much RAM the processors they make are capable of using, even if the limit is an artificial one that Intel introduced as a way of nudging potential buyers toward a Xeon over an i7.
sophisticles wrote:2) See if the application you normally use is GPU-accelerated (I'm willing to bet money that it is)
just brew it! wrote:I would think that if there was a GPU accelerated version, OP would've already looked into that angle. He knows what software he's using, and would presumably be aware of its capabilities. OP also said: "VFX software is a pretty mixed bag of multi-threaded and single-threaded processes". Single-threaded isn't going to benefit at all from GPU acceleration.
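jbi's point can be quantified with Amdahl's law: whatever fraction of the pipeline is single-threaded caps the overall speedup, no matter how fast the accelerated part gets. A sketch, where the 8x factor echoes the NAMD figure mentioned above and the 40% serial fraction is purely an illustrative assumption:

```python
def amdahl_speedup(serial_fraction, parallel_speedup):
    """Overall speedup when only the parallelizable part is accelerated."""
    return 1 / (serial_fraction + (1 - serial_fraction) / parallel_speedup)

# Illustrative numbers: 40% of the workflow is single-threaded, and the
# GPU accelerates the remaining 60% by an 8x factor.
print(f"Overall: {amdahl_speedup(0.4, 8):.2f}x")  # far below the 8x GPU figure
```

Even with an infinitely fast GPU portion, the overall speedup in this example could never exceed 1/0.4 = 2.5x.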
GrimDanfango wrote:Well, as an update to all this - I put it all together, and everything works great!
System with everything running stock worked perfectly. The x99 Taichi already seemed to have the latest bios, and the 128GB RAM worked out of the box.
I tried loading the XMP profile for 3000MHz, 1.35V, 125 BCLK, and 14-14-14-34 timings... and well, it didn't get far, as expected.
I've since settled on a conservative middle-ground - 2666MHz, 1.2V, 100 BCLK, and the same 14-14-14-34 timings, and it seems to be rock-solid so far.
The 5960X handles being run at 4.2GHz on 1.15V core, which seems like a respectable enough middle-ground overclock. Not sure I particularly want to push it any further... it kicks out more heat than my case can successfully exhaust.
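For anyone following along, those clocks are just BCLK times the ratio; a quick sanity check (the 42x CPU multiplier is inferred from the stated 4.2GHz result, and the memory half-clock reflects DDR transferring twice per memory clock):

```python
bclk_mhz = 100   # base clock from the settings above
cpu_ratio = 42   # inferred: 100 MHz x 42 = 4.2 GHz

core_ghz = bclk_mhz * cpu_ratio / 1000
print(f"Core: {core_ghz:.1f} GHz")  # 4.2 GHz

# "2666MHz" RAM is really 2666 MT/s: DDR transfers twice per clock,
# so the actual memory clock is half the rated transfer rate.
ddr_mt_s = 2666
mem_clock_mhz = ddr_mt_s / 2
print(f"Memory clock: {mem_clock_mhz:.0f} MHz")  # 1333 MHz
```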