Optane DC Persistent Memory puts 3D Xpoint goodness in DIMM slots

Intel and Micron's 3D Xpoint non-volatile memory has proven a unique bridge between NAND and DRAM in the storage hierarchy. Today, Intel is bringing the persistent, low-latency characteristics of 3D Xpoint to DIMM slots with a product called Optane DC Persistent Memory. Intel says Optane DC Persistent Memory sticks will be available in capacities up to 512 GB per module, versus about 128 GB in today's largest DIMMs using DDR4 SDRAM. With Optane DIMMs, Intel claims its servers will be able to employ up to 3 TB of non-volatile yet lightning-fast memory per socket for crunching the enormous data sets businesses have gathered for analysis.

As a result, the company claims servers with Optane DC PM inside should be able to keep data-hungry analytics applications fed without the trip out to the PCIe bus and remote storage that still has to be negotiated even with Optane DC SSDs. Intel says the high capacity and potentially high performance of these DIMMs will be ideal for “cost-effective, large-capacity in-memory database solutions” while also allowing servers to stay up longer and recover more quickly after shutdowns or blackouts. Optane DC PM sticks can also secure persistent data using encryption acceleration built into the hardware itself.
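
To make the "no trip out to the PCIe bus" idea concrete from a software point of view, here is a minimal sketch of direct load/store access to persistent memory on Linux. It assumes a DAX-capable filesystem and a kernel and glibc recent enough to expose MAP_SYNC; the file path is hypothetical, and the snippet illustrates the general programming model rather than anything Intel has published for these modules.

/* Hedged sketch: map a file on a DAX-mounted filesystem so that stores go
 * straight to persistent memory (App Direct-style access). Assumes Linux
 * with MAP_SYNC support; /mnt/pmem/log.bin is a hypothetical path. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/pmem/log.bin";   /* hypothetical DAX-backed file */
    size_t len = 4096;

    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    /* MAP_SYNC + MAP_SHARED_VALIDATE request a mapping whose dirty lines can
     * be made durable with CPU cache flushes alone, no page-cache writeback. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy((char *)p, "record survives a restart");

    /* Portable persist: msync() also works where MAP_SYNC is unavailable;
     * tuned code would use CLWB/SFENCE or a library such as PMDK instead. */
    if (msync(p, len, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(p, len);
    close(fd);
    return 0;
}

The point is that the "write" is just a store into mapped memory; durability comes from flushing CPU caches (or an msync fallback) rather than from a block-device I/O issued down the PCIe bus.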

Intel loves to cite the Aerospike NoSQL database as a performance reference point for Optane products, and in the case of Optane DIMMs, the blue team says an Aerospike server can take seconds to restart versus the minutes required of a traditional system with bus-connected storage and traditional DRAM. Intel also notes that a Redis server running on Optane DIMMs can host more instances of that service at the same SLA compared to a system populated with nothing but DRAM.

Intel says Optane DC Persistent Memory sticks are sampling today and will be shipping to “select customers” later this year. Broad availability for the technology is slated to begin in 2019. The company believes software developers will want to familiarize themselves with the characteristics of Optane DIMMs ahead of time, so it's offering remote access to test hardware hosted in its data centers to those who want to take the tech for a spin with their workloads.

Comments closed
    • Amiga500+
    • 1 year ago

    I see Intel are manoeuvring to prop up the Xeon line when EPYC goes 7nm and they are still struggling to get to 10nm.

    *I know the processes aren’t as easy to equate as 7 >> 10 >> 12 >> 14 – but no one can disagree that at 7nm AMD will be closer to Intel in process node than ever before, if not ahead. Furthermore, in Zen, they have a great baseline architecture to develop into Zen2 and take an outright performance and performance-per-watt lead by significant margins.

      • chuckula
      • 1 year ago

      Yes, the same idiots at Intel who actually thought 10nm was going to be on the market in 2016 were the EXACT SAME PEOPLE who developed Optane for the sole purpose of pushing Xeons that they knew were never going to be on 10nm.

        • Generic
        • 1 year ago

        I know, right!? I couldn’t possibly count the times I’ve brought heretofore theoretical technologies to market to cover up my mistakes in real time!

        • Anonymous Coward
        • 1 year ago

        Only the paranoid survive.

        • Amiga500+
        • 1 year ago

        Even for someone as intellectually challenged as yourself, your (lack of) reading comprehension is embarrassing.

        Where did I indicate anywhere that Intel **planned from the start** to use Optane to prop up the Xeon line?

        They are reacting to the new market realities by locking it down to Xeon only. No doubt for a very nebulous “technical reason” that will in time be proved to be utter BS.

    • psuedonymous
    • 1 year ago

    Watching the unveil webcast (Anandtech have a [url=https://www.anandtech.com/show/12826/intel-persistent-memory-event-live-blog]liveblog if you missed it[/url]) it's pretty clear this is not really a suitable product for client workloads. Or at least, any client workloads that currently exist ([i]maybe[/i] if Atomontage suddenly becomes the standard overnight then big point-cloud datasets will become a consumer thing for gaming). You'd be much better off using Optane as a transparent cache in front of your SSD/HDD than trying to convince everyone who writes any piece of software you run to re-architect how it accesses data. Even 'just' threading workloads has not had huge penetration outside of already embarrassingly parallel tasks (encoding, rendering); getting them to use [url=https://www.snia.org/tech_activities/standards/curr_standards/npm]NPM[/url] is about as likely as everyone switching to IA64 on the desktop (or developing for Heterogeneous Compute).
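
    For a sense of what that re-architecting involves at the code level, here is a minimal sketch using PMDK's libpmem, one implementation of the SNIA programming model; libpmem and the file path are assumptions for illustration, not something named in the comment above.

    /* Hedged sketch assuming PMDK's libpmem; /mnt/pmem/state is hypothetical
     * and would live on a DAX-mounted filesystem. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *path = "/mnt/pmem/state";
        size_t mapped_len;
        int is_pmem;

        /* Map (creating if needed) a 4 KiB region of persistent memory. */
        char *buf = pmem_map_file(path, 4096, PMEM_FILE_CREATE, 0600,
                                  &mapped_len, &is_pmem);
        if (buf == NULL) { fprintf(stderr, "pmem_map_file failed\n"); return 1; }

        /* Stores land in CPU caches; the application must flush them before
         * it can consider the data durable. */
        strcpy(buf, "durable without a block-device write");
        if (is_pmem)
            pmem_persist(buf, mapped_len);   /* cache-line flush + fence      */
        else
            pmem_msync(buf, mapped_len);     /* msync fallback on other media */

        pmem_unmap(buf, mapped_len);
        return 0;
    }

    The unusual part for most application developers is the explicit persist step: ordinary stores sit in CPU caches until the program flushes them, which is exactly the discipline existing software was never written around.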

    • brucethemoose
    • 1 year ago

    No mention of endurance? Warranty length?

    I’m not sure how much RAM is really written to, but it seems like a high-end Xeon could blow past 10 DWPD on an I/O-intensive load, which is exactly what these are aimed at.
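
    For scale, a back-of-the-envelope sketch with entirely hypothetical numbers: a 512 GB module absorbing a sustained 5 GB/s of writes, which is comfortably within a single DDR4 channel's bandwidth.

    /* Back-of-the-envelope endurance arithmetic; all inputs are hypothetical. */
    #include <stdio.h>

    int main(void)
    {
        const double capacity_gb  = 512.0;    /* per-module capacity (GB)     */
        const double write_gbps   = 5.0;      /* assumed sustained write rate */
        const double secs_per_day = 86400.0;

        double gb_per_day = write_gbps * secs_per_day;   /* 432,000 GB per day */
        double dwpd       = gb_per_day / capacity_gb;    /* full drive writes  */

        printf("%.0f GB/day written -> %.0f DWPD\n", gb_per_day, dwpd);
        return 0;
    }

    That works out to roughly 844 drive writes per day, so even a modest fraction of memory-channel bandwidth spent on writes lands far beyond the 10 DWPD mentioned above.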

      • Anonymous Coward
      • 1 year ago

      RAM is written to [i]a lot[/i]. Huge amounts of scratchspace-style temporary usage stand behind every operation being performed. That kind of stuff has no business being on persistent storage; it's like an incomplete thought in your own head. In many cases that kind of thing can't be safely written to persistent storage without being encrypted, which I guess would have to be done without writing any temporary values to the persistent storage in question.

    • Shobai
    • 1 year ago

    Looks like I’ve missed the mention, but have they detailed which chipset/s this will work with?

    • Shobai
    • 1 year ago

    [quote]these DIMMs will be idea for [/quote] That should probably be "ideal".

    • Chrispy_
    • 1 year ago

    Intel’s made the technology.
    Microsoft just need to make use of it.

      • chuckula
      • 1 year ago

      And Apple needs to make it [b]insanely[/b] great!

      • Freon
      • 1 year ago

      Right, if the app/OS can be made aware, it would be interesting to mix in 2 channels of DDR4 with 2 channels of this stuff.

        • Waco
        • 1 year ago

        I think you’ll find that desktop uses for this are pretty limited.

    • Waco
    • 1 year ago

    The latency increase / bandwidth decrease over DRAM along with the lack of ecosystem to use them effectively (at the moment) is going to severely hamper their adoption in anything other than large database applications for the next couple years after they’re available to buy.

    They could be interesting devices for filesystem/storage logs to be written to if you don’t care about HA (or write to multiple servers via other mechanisms).

      • Jeff Kampman
      • 1 year ago

      MIT once demonstrated that if systems performing distributed computing needed to request data from disk just 5% of the time, performance was no better than it would be if the RAM wasn’t even there: [url]http://news.mit.edu/2015/cutting-cost-power-big-data-0710[/url]

      I have to imagine that for big data applications, having better-performing media like this in NAND-like capacities communicating directly with the CPU's memory controller is going to be a boon where size matters more than speed.
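
      A quick worked example of why a small miss rate dominates; the latency figures here are assumed round numbers for illustration, not figures from the MIT study.

      /* Effective access time when a small fraction of requests miss to a
       * slower tier; both latencies below are assumptions, not measurements. */
      #include <stdio.h>

      int main(void)
      {
          const double t_dram_ns  = 100.0;      /* assumed DRAM access, ns       */
          const double t_flash_ns = 100000.0;   /* assumed flash access, ~100 us */
          const double miss_rate  = 0.05;       /* 5% of requests go to flash    */

          double effective = (1.0 - miss_rate) * t_dram_ns + miss_rate * t_flash_ns;
          printf("effective access: %.0f ns (%.0fx DRAM alone)\n",
                 effective, effective / t_dram_ns);
          return 0;
      }

      With those assumptions the average access takes roughly 5 µs, about 50x DRAM alone, which is why media that narrows the gap between DRAM and storage is attractive once data sets spill out of RAM.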

        • Waco
        • 1 year ago

        Out of core computation, thankfully, isn’t something I should ever have to deal with. Going out of DRAM at all (or dropping to a lower-bandwidth “memory” tier) is disastrous to performance in HPC codes.

        “Big data” is slowly learning all of the hard lessons that the industry learned over the last 30 years. If you make it too simple, you pay the price in either hardware or performance.

        • guardianl
        • 1 year ago

        OK, in big data land, if you can’t fit your data in your current RAM, you just buy more RAM. If your data won’t fit in max RAM capacity, it probably won’t fit in 4x-sized Optane sticks either, because that’s actually a narrow window.

        However, let’s look at the niche where your data does fit in that window.

        Now you *just* have to meet all these conditions to justify Optane usage:
        – You are willing to buy brand new servers, compatible with Optane (least of your worries!)
        – Your software isn’t impacted by slow Optane access latency (compared to DRAM)
        – Your software isn’t impacted by slow Optane bandwidth (compared to DRAM)
        – Your software isn’t impacted by Optane endurance limits (compared to DRAM)
        – Your software shows a meaningful performance improvement over just using NVM SSD arrays etc.
        – The performance improvement is large enough that it’s worth being vendor-locked to Optane-equipped systems.
        – You don’t need to modify the software (assuming you even can, which is not a given) to work with Optane. If you can modify it, justify the software developer time required. Every day your devs work on Optane optimization/compatibility is a day you could have been running your dataset through existing hardware or writing other features. From experience, stakeholders always choose features > performance. ALWAYS.

        Optane will probably find some niche users in big data because of a huge marketing push from Intel, but I bet Optane DIMM sticks die out within ~5 years. Check back on this comment in 2023 pls.

          • NoOne ButMe
          • 1 year ago

          Optane enables double the max capacity that RAM does for Intel’s platforms: 3 TB versus 1.5 TB.

          Pretty big if it works properly, as it kills the advantage that AMD and ThunderX (whoever owns it now) have of offering 8 memory slots for up to 2 TB.

            • guardianl
            • 1 year ago

            You cannot just dump Optane DIMMs in as a replacement for RAM; that’s not how this works. Intel hasn’t published endurance numbers for the DIMMs afaik, but the enterprise Optane drives are rated for ~12,000 write cycles. That’s about 2.5X enterprise SSDs (5,000 cycles).

            Lots of COTS software would blow through that in a matter of hours. Intel isn’t pretending that Optane is a replacement for RAM, but some of the media reporting has been messing up the message. From Intel:

            “As technologies like Intel Optane DC persistent memory come to market, systems architects and developers should consider new methods for data access and storage, and uncover opportunities to remove throughput bottlenecks. These new methods could also result in deriving more value from data. The combination of Intel Optane DC persistent memory with our performance-optimized Intel Optane SSDs and next-generation cost-optimized 3D NAND SSDs with Quad-Level Cell (QLC) technology will further deliver storage efficiency to warm data as an alternative to relying on HDDs.”

            They are talking about replacing hard drives. “Warm data”. All hints as to what you can actually use Optane for. NVDIMM Optane is just a special tier of SSD storage, and one that effectively needs custom software to take advantage of. In some cases it might be usable as a special cache to avoid needing RAM, but if your software needs more RAM, Optane won’t fix that.

            • tuxroller
            • 1 year ago

            It’s 60 DWPD x 365 days x 5 years = 109,500 drive writes.

            • Anonymous Coward
            • 1 year ago

            I’m not seeing that double the capacity is a big deal, considering the compromises. You have to be a pretty big business to spend time re-architecting for this, just like it’s only huge players that have messed around with IBM’s recent CPUs, when an advantage can be found doing so.

        • uni-mitation
        • 1 year ago

        I will preface this by saying it is beyond my training & skill level, but I believe what the MIT team did was offload some of the major computational lifting to the controllers of the SSDs. To wit:

        [quote]The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.[/quote]

        So I think it is a bit misleading to cite this source for mainstream data centers that rely on system RAM, because I imagine you don't have the luxury of outsourcing that kind of big-data work to the storage controllers beforehand. It is my naked speculation that without the ability to push those computations down to the specific controllers in storage, the claim of no latency penalty for accessing storage instead of DRAM on large computational workloads would look quite different.

        The research, in my layman interpretation (again, no computer science background), means something quite different from saying there will be no latency penalty for using this new Optane technology when you could be better served by just adding more RAM, because it is quite hard to guess what kind of data you will need to crunch. If it is a niche product, that really reduces its usability, and therefore I don't think it will have mass acceptance in the market.

        I would love more knowledgeable folks to chime in because I have my doubts about this conclusion. Beyond my comfort zone here.

        uni-mitation

    • chuckula
    • 1 year ago

    [quote]Broad availability for the technology is slated to begin in 2019.[/quote]

    Yeah, I've heard that one before, Intel.

    [Downthumbed? Oh wait I forgot: Intel FINALLY introduces 512GB DIMMs after artificially stopping us from having them for YEARS. THANK YOU AMD!]

      • Srsly_Bro
      • 1 year ago

      Cascade Lake should hopefully use these. Optane DIMMs might be less costly than DRAM. Make a few phone calls or wait until you return to work tomorrow and speed the process up.

        • chuckula
        • 1 year ago

        [quote]Make a few phone calls or wait until you return to work tomorrow and speed the process up.[/quote]

        Trust me, if I could speed things up we'd be launching Ice Lake next month and not Skylake++++++!
