Optane DIMMs and companion CPUs will arrive in 2018

Intel's Optane Memory tech showcases what 3D XPoint memory can do on a limited scale, but the potential for Optane extends well beyond hard-drive acceleration. Intel began shipping Optane DIMMs to partners for testing early this year, and yesterday the company committed to delivering Optane DIMMs as a product called "Intel persistent memory" alongside a refresh of its Xeon Processor Scalable family called Cascade Lake. Both the persistent memory product and Cascade Lake CPUs will arrive sometime in 2018.

An Optane DIMM. Source: Intel

Intel says that Optane DIMMs will upend the traditional "small, volatile, and expensive" view of system memory by providing a much denser and persistent way of putting data close to the processor, all at lower prices than DRAM. The company showed off these potential benefits at the SAP Sapphire conference by running the ERP giant's HANA analytics tool on a pool of Optane DIMMs. Intel says that for in-memory database apps like HANA, these DIMMs will be a no-brainer for improving performance because of the much larger data sets they'll put closer to CPUs.
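
To make "persistent data close to the processor" concrete, here's a minimal sketch in C of how an application might treat such memory as ordinary load/store-addressable data rather than as a block device. It assumes a hypothetical persistent-memory region exposed as the file /mnt/pmem/table, in the style of Linux DAX mounts; this is not SAP HANA code or an Intel API, and real persistent-memory libraries use finer-grained flushing than msync.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical persistent-memory-backed file, e.g. on a DAX mount. */
    int fd = open("/mnt/pmem/table", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1UL << 30;                       /* map 1 GiB of it */
    long *data = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    data[0] += 1;                         /* plain loads and stores, no read()/write() */
    msync(data, sizeof(long), MS_SYNC);   /* request durability for the updated bytes  */

    munmap(data, len);
    close(fd);
    return 0;
}
```

The point is simply that the data lives at memory addresses and survives a power cycle, which is what would let an in-memory database keep a much larger working set hot without rebuilding it from storage at startup.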

Without a doubt, Intel Optane DIMMs have the potential to reshape how we think about computer architecture in general. That said, questions about whether 3D XPoint is ready to endure memory-like workloads have dogged the technology since its commercial introduction earlier this year. Presumably, Intel is taking time to improve the fundamental characteristics of the medium before it begins large-scale deployment of the tech in a DIMM form factor. Still, the fact that Optane DIMMs are up and running is a promising sign for Intel's convergent vision of DRAM and persistent storage.

Comments closed
    • MrJRtech
    • 2 years ago

    Toms Hardware just (May 24, 2017) posted a preview/review of what consumers can expect when we can buy Optane 3D XPoint SSDs in the sizes that we need to really do something. They RAID-0'd three Optane 32 GB parts (96 GB) and posted the results:
    [url<]http://www.tomshardware.com/reviews/intel-optane-raid-report,5060-2.html[/url<]

    When you read it you will see that it totally "smokes" NVMe SSDs. Once again, "SMOKES" NVMe drives. Yeah, RAID 0, so what? I can RAID 0 my NVMe drives, then what would you say? I would say that consumer NVMe 3D XPoint drives (Optane) would be manufactured as a RAID 0 part that you would install as if it were just another NVMe drive. Many SSDs are already made this way... internally RAIDed. The numbers did unexpectedly falter on writes, but this experiment is just a proof of concept, a preface to what can be expected when the first batch of consumer drives hits the market. I think that Optane still deserves the recognition it has gotten. It's just a matter of time before we talk about how slow our SSDs are compared to Optane. Let's just hope that I'm correct.
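
    For what it's worth, the "internally RAIDed" idea the commenter mentions is plain striping. Here's a minimal sketch in C, with a made-up chunk size and device count rather than anything from Intel's controllers, of how RAID 0 maps a logical offset onto a device and an offset within that device:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical RAID 0 geometry: 3 devices, 128 KiB stripe chunks. */
    #define NUM_DEVICES 3
    #define CHUNK_BYTES (128 * 1024)

    /* Map a logical byte offset to (device, offset-on-device). */
    static void raid0_map(uint64_t logical, int *device, uint64_t *dev_offset)
    {
        uint64_t chunk  = logical / CHUNK_BYTES;       /* which stripe chunk  */
        uint64_t within = logical % CHUNK_BYTES;       /* offset inside chunk */

        *device     = (int)(chunk % NUM_DEVICES);      /* round-robin device  */
        *dev_offset = (chunk / NUM_DEVICES) * CHUNK_BYTES + within;
    }

    int main(void)
    {
        int dev;
        uint64_t off;

        raid0_map(5 * 1024 * 1024, &dev, &off);        /* 5 MiB into the array */
        printf("device %d, offset %llu\n", dev, (unsigned long long)off);
        return 0;
    }
    ```

    Sequential transfers get spread across all the devices, which is why RAID 0 numbers look strong, while a single small read still lands on just one device and sees that device's native latency.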

    • MrJRtech
    • 2 years ago

    Am I the only one who remembers how most tech is given to consumers? Consumers first get the "get ready for something new" version. Then they get the "better than before" version. Then the technology levels out and it becomes a "multiple manufacturers making the same thing, but some make it better than others" version. With Intel putting out the first really good consumer SSDs, who would have guessed that Samsung's SSDs would end up outperforming all others at the consumer level? The new Optane accessory small-capacity modules are only the first step. The first SSDs available to consumers were almost exactly the same size: just enough to install an OS on, but not large enough for much else. Also, the speeds were slower, albeit on SATA and before NVMe. Optane is a new tech that will need time to sort it all out. Will it be like SSDs and need a new standard to get the most out of it, like NVMe did for SSDs? Possibly! Optane is a bit-level non-volatile memory tech. Instead of needing to erase an entire block when you only need to write a bit, you can just write the new bit. It's a new dynamic that, while related to DIMMs, is far better than the TRIM commands on SSDs. There are other advantages too.
    I predict that within 12 months, Optane will start to show its advantages as some of the Optane SSDs being used by enterprise filter down to consumers in capacities that can be used for more than an OS install. And we will stand in line to get one.
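
    A quick sketch in C of the write-granularity difference described above, using a simulated in-memory medium and made-up sizes rather than any real controller firmware: a byte-addressable medium can update a single byte in place, while NAND-style flash has to read, erase, and reprogram an entire block.

    ```c
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE 4096                 /* hypothetical NAND erase-block size */

    /* Byte-addressable medium (DRAM- or 3D XPoint-style): update in place. */
    static void byte_addressable_write(uint8_t *medium, size_t addr, uint8_t val)
    {
        medium[addr] = val;                             /* one store, no erase   */
    }

    /* NAND-style medium: read, erase, and reprogram the whole block. */
    static void nand_style_write(uint8_t *medium, size_t addr, uint8_t val)
    {
        uint8_t block[BLOCK_SIZE];
        size_t  base = (addr / BLOCK_SIZE) * BLOCK_SIZE;

        memcpy(block, medium + base, BLOCK_SIZE);       /* read the old block    */
        block[addr - base] = val;                       /* patch the single byte */
        memset(medium + base, 0xFF, BLOCK_SIZE);        /* "erase" the block     */
        memcpy(medium + base, block, BLOCK_SIZE);       /* reprogram it          */
    }

    int main(void)
    {
        uint8_t *medium = calloc(1, 1 << 20);           /* simulated 1 MiB medium */
        if (!medium) return 1;

        byte_addressable_write(medium, 12345, 0xAB);    /* touches 1 byte         */
        nand_style_write(medium, 12345, 0xCD);          /* touches 4,096 bytes    */

        free(medium);
        return 0;
    }
    ```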

    • BehemothJackal
    • 3 years ago

    Jeff, is part two of the Ryzen 1600s still coming?

    • Sahrin
    • 3 years ago

    $20 says Optane gets Intel in front of a District Court Judge for Sherman Act violations within 10 years.

      • the
      • 3 years ago

      What is the logic behind that?

    • sophisticles
    • 3 years ago

    I can definitely see this being useful even in the average desktop or laptop. Picture the average motherboard with only 2-4 DIMM slots: you spend well under $100 on 8 GB of RAM, put a massive 1 TB worth of Optane DIMMs in the remaining slots, and install the OS and apps on it so that all of them are as close to the CPU as possible.

    I think this could redefine general computing as we know it by eliminating I/O bottlenecks for all practical purposes.

      • LostCat
      • 3 years ago

      It's also possible it'll improve to the point where we won't need separate memory and disks in some devices. I've been expecting that to happen someday.

      • Liron
      • 3 years ago

      But the Optane SSDs are only 16 to 32 GB.
      How can the DIMM form-factor Optanes be bigger than the SSD form-factor ones?
      Because if they aren’t orders of magnitude bigger you’re going to need 32 to 63 DIMM slots to get a TB of Optane storage.

        • sophisticles
        • 3 years ago

        I'm going by the fact that they are talking about Optane DIMMs being used in database apps like HANA; if you look up typical HANA server configurations, they have 4 TB to 16 TB of RAM, and Intel is talking about pricing comparable to SSDs in cost per size, so I'm thinking it will probably be feasible for 256 GB to 512 GB Optane DIMMs to be available.

        • chuckula
        • 3 years ago

        Optane SSDs have (at most) two Optane chips on them in the 32 GB models.

        The form factor isn’t dictated by the sizes of those chips, but by the fact that you need to have a device that properly fits the slot.

        • Ifalna
        • 3 years ago

        Give it some time, these are essentially early prototypes.

        The tech will mature and density will increase.

      • freebird
      • 3 years ago

      Its price will have to come down a lot before that happens… Intel's 375 GB SSD on a PCIe card goes for $1,520.

      CA-CHING$$$$

      So NO, it would be cheaper to load up your PC with 64GB of memory and an NVMe drive like I did.

      • ptsant
      • 2 years ago

      A 1TB Optane would cost in the thousands. You would almost certainly be better served by $400 on RAM and $500 for a nice NVMe SSD. You also get to keep the change.

      Optane will make sense only if its price drops faster than RAM price. There are always niche cases where persistent storage will be a plus, but the performance difference is huge and the cost structure currently does not make sense.

    • LostCat
    • 3 years ago

    It'd be cool if you could have DDR4/5/whatever and this, so you'd have the OS in the non-volatile RAM and anything else in the normal stuff.

    • Vhalidictes
    • 3 years ago

    Doesn’t anyone know or care that Optane performance is currently ten times worse than DRAM?

    Sure it’s nonvolatile, and that will help some use cases, but this is a huge step backwards in general RAM performance…

      • chuckula
      • 3 years ago

      [quote<]Doesn't anyone know or care that Optane performance is currently ten times worse than DRAM?[/quote<]

      Does anybody know that? If you can link to the Optane DIMM memory benchmarks that you ran showing this performance, then we can talk.

      Of course, nobody, and especially not Intel, has said that Optane is designed to outperform or outright replace regular DRAM. But that's not the same thing as hard proof that it's ten times slower, or even that having something ten times slower than RAM (allegedly) is a bad idea compared to storing the same bits on an enterprise-grade SSD array that's 100 to 1000 times slower than RAM.

      • maxxcool
      • 3 years ago

      This is not for desktops, or even 99% of all workstations.

      • UberGerbil
      • 3 years ago

      And what’s the performance compared to DRAM that is constantly getting paged to SSD?

      Edit: started to add a link to an MIT experiment where they replaced all the RAM with SSDs; after a few edits decided to move it into its own comment (see below).

        • Ifalna
        • 3 years ago

        The point of DRAM is that once the data is in there it’s basically the fastest we’ve got so we can work with it as fluently as possible.

        IF you run out of DRAM and caching starts to happen, performance goes into the crapper, sure. The easy solution is to throw more DRAM at it, b/c consumer software doesn’t need that much DRAM to begin with.

        So far, I don’t see much of a use case for Optane in the consumer sector.

          • Waco
          • 3 years ago

          Good thing they aren’t really pitching it to consumers yet, eh?

          Short of cache drives, this isn’t intended for consumer workloads. However, I could see it making its way into cheap devices to save cost (it’s still far cheaper than DRAM).

          • Beahmont
          • 3 years ago

          Where these are going, even systems with hundreds of gigabytes of RAM still have to do tons of caching.

          Caching is easily at least 100 times slower than DRAM. These new DIMMs are supposed to be at least half again to more than triple the capacity of DRAM DIMMs, and no less than 10 times slower than DRAM, while being persistent, so unexpected power-offs don't destroy the data stored on them.

          That means these things are miles ahead of even DRAM-and-SSD-only systems that have to make routine calls to storage.

          Essentially the Big Iron these things will go in are trading a little bit of overall top speed for much better cruising speed.

            • the
            • 3 years ago

            Intel was claiming 1 TB 3D XPoint DIMMs were possible, so that'd bring capacity up to 12 TB per socket. Combine that with an eight-socket system for 96 TB online in a coherent fashion. SGI/HPE have coherent glue ASICs that can link up to 256 sockets for those few data sets that are even larger. That'd be 3 petabytes of memory, for those keeping track.

            • UberGerbil
            • 3 years ago

            A couple of years ago there was [url=http://news.mit.edu/2015/cutting-cost-power-big-data-0710<]this research project at MIT[/url<], where they replaced [i<]all[/i<] the RAM with SSDs. From the [url=http://www.itworld.com/article/2947839/big-data/mit-comes-up-with-a-no-memory-solution-for-big-data.html<]ITWorld summary[/url<]:

            [quote<]if the nodes in a cluster need to request data from disk as little as five percent of the time, the overall performance of the task drops to a level comparable to that of the experimental flash-storage-only cluster. So if it had to hit the swap file more than 5% of the time, the DRAM became pointless. It was no faster than the all-SSD/no-DRAM solution. "40 servers with 10 terabytes’ worth of RAM couldn’t handle a 10.5-terabyte computation any better than 20 servers with 20 terabytes’ worth of flash memory, which would consume only a fraction as much power," the researchers wrote in their paper.[/quote<]

            Now, if you can add more DRAM, of course that's the better way to go. But if you can't, either because it costs too much or there are just no more DIMM slots, and Optane means you can increase the amount of "memory" by an order of magnitude or more, so that a lot more problems fit, then it's a huge win.
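
            To put rough numbers on that five-percent figure, here's a back-of-the-envelope effective-access-time calculation in C. The latencies are illustrative assumptions (roughly 100 ns for DRAM, 100 µs for NVMe flash, 10 µs for an Optane-class medium), not measurements:

            ```c
            #include <stdio.h>

            /* Illustrative latencies in microseconds: assumptions, not measurements. */
            #define T_DRAM   0.1     /* ~100 ns DRAM access        */
            #define T_FLASH  100.0   /* ~100 us NVMe flash access  */
            #define T_XPOINT 10.0    /* ~10 us Optane-class access */

            /* Average access time for a given hit rate into the fast tier. */
            static double effective(double hit_rate, double t_fast, double t_slow)
            {
                return hit_rate * t_fast + (1.0 - hit_rate) * t_slow;
            }

            int main(void)
            {
                double hit = 0.95;   /* 5% of accesses miss DRAM, per the MIT finding */

                printf("DRAM + flash misses : %.2f us\n", effective(hit, T_DRAM, T_FLASH));
                printf("DRAM + Optane misses: %.2f us\n", effective(hit, T_DRAM, T_XPOINT));
                printf("DRAM alone          : %.2f us\n", T_DRAM);
                return 0;
            }
            ```

            Even a 5% miss rate to flash drags the average access to roughly 5 µs, around 50 times DRAM's latency, which is why the DRAM in the MIT cluster stopped mattering; a lower-latency backing tier shrinks that penalty proportionally.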

            • nexxcat
            • 3 years ago

            So I used to be in the low-latency world, where we profiled system performance and shaved single-digit microseconds, because our customers would ask us why their median order acknowledgement times went up by 20 µs and whether we were aware of the performance degradation.

            We would very carefully make sure our workloads not only fit in RAM, but were addressed in such a way that they would spend as much time in the processors' L2 cache as possible (L1 is sadly frequently too small, whilst L3 is typically shared between the cores on the same socket and has unpredictable performance characteristics), and we came up with a novel sorting algorithm that was still O(N log N) but orders of magnitude faster than what was available, because we worked on sets that fit in L2 cache as much as possible.

            While admittedly we were working with tens of gigabytes of data at a time (and thus data three orders of magnitude smaller than the MIT researchers'), I would be very surprised if a carefully calibrated algorithm suffered that steep a performance loss, especially if the systems were appropriately loaded and the various workloads were appropriately distributed amongst the various processes; while one workload was waiting for I/O, another could be happily calculating away on the same core.
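
            A minimal sketch in C of the cache-blocking idea described above, not nexxcat's actual algorithm: sort chunks sized to fit in L2 independently, so each sorting pass stays cache-resident, then merge the sorted runs. The L2 size, chunk sizing, and use of qsort are assumptions for illustration.

            ```c
            #include <stdlib.h>
            #include <string.h>

            #define L2_BYTES (256 * 1024)                   /* assumed per-core L2 size */
            #define CHUNK    (L2_BYTES / sizeof(int) / 2)   /* keep working set in L2   */

            static int cmp_int(const void *a, const void *b)
            {
                int x = *(const int *)a, y = *(const int *)b;
                return (x > y) - (x < y);
            }

            /* Merge sorted runs [lo,mid) and [mid,hi) from src into dst. */
            static void merge(const int *src, int *dst, size_t lo, size_t mid, size_t hi)
            {
                size_t i = lo, j = mid, k = lo;
                while (i < mid && j < hi) dst[k++] = (src[i] <= src[j]) ? src[i++] : src[j++];
                while (i < mid) dst[k++] = src[i++];
                while (j < hi)  dst[k++] = src[j++];
            }

            /* Cache-blocked merge sort: still O(N log N), but the first pass
             * works on runs that fit in L2, so it stays cache-resident.     */
            void blocked_sort(int *a, size_t n)
            {
                /* Pass 1: sort cache-sized runs in place. */
                for (size_t i = 0; i < n; i += CHUNK) {
                    size_t len = (n - i < CHUNK) ? n - i : CHUNK;
                    qsort(a + i, len, sizeof(int), cmp_int);
                }

                /* Pass 2: merge runs of doubling width until one run remains. */
                int *tmp = malloc(n * sizeof(int));
                if (!tmp) return;                       /* runs stay sorted, just unmerged */
                int *src = a, *dst = tmp;
                for (size_t width = CHUNK; width < n; width *= 2) {
                    for (size_t lo = 0; lo < n; lo += 2 * width) {
                        size_t mid = (lo + width     < n) ? lo + width     : n;
                        size_t hi  = (lo + 2 * width < n) ? lo + 2 * width : n;
                        merge(src, dst, lo, mid, hi);
                    }
                    int *swap = src; src = dst; dst = swap;
                }
                if (src != a) memcpy(a, src, n * sizeof(int));
                free(tmp);
            }
            ```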

            • UberGerbil
            • 3 years ago

            Yeah, latency is crucial for OLTP systems. These aren’t OLTP systems though. These are “big data” HPC systems, where throughput matters and latency doesn’t. I’m not saying these systems couldn’t be tuned better (this was an academic project, after all) but almost by definition the data is not going to fit into memory (and if it does, they’re just going to increase the data set until it doesn’t).

            • nexxcat
            • 3 years ago

            I’m not quite sure what “big data” means anymore 🙂

            In my low-latency days, our big data was backtracking sector-level data a few years, including all the aggregated news. We also backtested our NLP models with regard to impact to both the markets and individual companies’ shares, and used that information to add appropriate weights to incoming news. The backtesting bits we really didn’t care if it took longer, but I remember our data sizes being measured in 100s of TB.

            I suppose they’re working on 1, maybe 2 or even 3 orders of magnitude greater though, and I totally see what you mean. We optimised our system for latency; they do for throughput and they can be very different tradeoffs.

            • ptsant
            • 2 years ago

            Everything comes down to the relative cost of a server and DIMMs vs Optane. Naples will support 2TB in a single server (not a cluster!).

            Plus, using SSDs as RAM will chew through NAND cycles in a few weeks. OK, maybe a little more. But your RAM will work 3 years later, for example.

      • Waco
      • 3 years ago

      It’s not meant to replace DRAM. It’s meant to supplant NVMe at a similar cost for many times better performance in terms of latency.

      Databases are going to LOVE this. Intent logs for filesystems will fly on it. Anything low queue depth will shine if it can’t fit in memory.
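
      A minimal sketch in C of why a low-queue-depth, latency-bound write path like an intent log benefits: each record has to be durable before the caller can move on, so the device's write latency sits directly on the critical path. The file path and record format here are hypothetical.

      ```c
      #include <fcntl.h>
      #include <unistd.h>

      /* Open the intent log so every write() blocks until data is on stable media. */
      int open_intent_log(const char *path)          /* path is hypothetical */
      {
          return open(path, O_WRONLY | O_APPEND | O_CREAT | O_DSYNC, 0600);
      }

      /* Append one record; returns 0 only once the record is durable.
       * At queue depth 1 the log's throughput is bounded by the device's
       * write latency, not its bandwidth, which is where a low-latency
       * medium pays off the most.                                        */
      int log_intent(int fd, const void *record, size_t len)
      {
          return (write(fd, record, len) == (ssize_t)len) ? 0 : -1;
      }
      ```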

      • DragonDaddyBear
      • 3 years ago

      I think the main point of this stuff is the capacity you can get out of it at NVMe speeds. Some working sets and DB’s really need memory, and lots of it. And when they don’t have enough they dump it to a disk cache or have to read the data again. If you can keep all of that in memory then performance could increase. It’s a niche technology but one that could have some big, expensive buyers.

    • tsk
    • 3 years ago

    Does this have any benefit for general consumers?

      • chuckula
      • 3 years ago

      Generals sure.
      Colonels, not so much.

        • Neutronbeam
        • 3 years ago

        You should have kept that Private.

          • UberGerbil
          • 3 years ago

          A decent kernel keeps everything private.

            • CuttinHobo
            • 3 years ago

            I find all this punnery very admiralable.

            • DragonDaddyBear
            • 3 years ago

            That’s some first-class puns right there.

            • Grahambo910
            • 3 years ago

            You guys must be pun specialists.

            • Growler
            • 2 years ago

            When the situation warrants, we step up.

      • Michelob
      • 3 years ago

      Major Waiting…

      • the
      • 3 years ago

      Short term, not at all.

      Medium term, consumers will have 3D Xpoint options as conventional storage devices behind a SSD controller. The first products like this just arrived on the market but as high end options. It’ll take time for it to trickle down into the mainstream.

      Long term? More than likely, but the time frame could easily be upwards of a decade. The reason is that to really take advantage of this technology to its fullest, software and/or operating systems have to be rewritten. There is still a chance that this technology will make a surprise appearance before then in a market segment that gets built from the ground up in the coming years, negating the need to migrate legacy applications over.

      The uptake of this in the data center is highly variable as well. I do see some key projects adopting it quickly, as it permits some big data problems to move from a clustered approach (forced by memory/storage requirements) to a single logical system instance. Performance gains there are huge because the overhead involved with networking and storage is removed. Due to cost, many will simply wait until prices come down and more software has been rewritten to take advantage of this technology.

      This technology is a game changer, but the game [i<]just[/i<] got started. There are plenty of innings left in this ball game.

    • JosiahBradley
    • 3 years ago

    Intel wants us to spend less on memory so we spend more on CPUs, oh and the memory is from them too, it’s a win-win for them.

      • Ninjitsu
      • 3 years ago

      It’s almost like the 70’s all over again…

    • chuckula
    • 3 years ago

    Who cares about the companion CPUs.

    I want to know when the companion cubes are launching.

    If you really want to see where this is headed, Linux has been receiving patches to implement 5-level page tables that boost the maximum physical memory size in a single memory space up to 4 Petabytes: [url<]http://www.phoronix.com/scan.php?page=news_item&px=Intel-5-LVL-Paging-4.12-MM[/url<]
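
    Rough arithmetic behind that figure: each page-table level resolves 9 bits of the address, plus a 12-bit page offset, so five levels cover 5 x 9 + 12 = 57 bits of virtual address, and the architectural physical-address limit rises to 52 bits, which works out to 4 PiB.

    ```c
    #include <stdio.h>

    int main(void)
    {
        int levels  = 5;                        /* page-table levels with LA57    */
        int va_bits = levels * 9 + 12;          /* 9 bits per level + page offset */
        int pa_bits = 52;                       /* physical address width w/ LA57 */

        printf("virtual  : %d bits = %llu PiB\n", va_bits, (1ULL << va_bits) >> 50);
        printf("physical : %d bits = %llu PiB\n", pa_bits, (1ULL << pa_bits) >> 50);
        return 0;
    }
    ```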

      • Waco
      • 3 years ago

      I care. :shrug:

      An Intel guy at MSST yesterday was asked about Optane endurance. He made no specific claim, but said it was designed for a five-year lifetime in a machine. He did follow that up with the comment that “you HPC folks will probably find a way” to exhaust it.

      I’m hopeful we’ll be pleasantly surprised by the endurance. The reduction in latency is incredible compared to anything other than DRAM.

        • chuckula
        • 3 years ago

        Rowhammer type attacks might not leak information in the same way that DRAM leaks, but they could wear out portions of an Optane DIMM prematurely.

          • Waco
          • 3 years ago

          Perhaps. I’ll have a few systems to play with in a few months. 🙂

            • chuckula
            • 3 years ago

            I thought you’d be too busy putting together an EPYC server cluster!

            • Waco
            • 3 years ago

            As soon as they’re available, that too.

            • RAGEPRO
            • 3 years ago

            Very interested to hear your thoughts on both. 🙂

        • freebird
        • 3 years ago

        …with Intel saying that Optane SSDs can safely be written 30 times per day, compared to a typical 0.5-10 whole-drive writes per day (with a 5-year expected life, I believe). So 30 x 365 x 5 = ~55K whole-drive writes, depending on whether they count leap-year days or not… 😀

        [url<]https://arstechnica.com/information-technology/2017/03/intels-first-optane-ssd-375gb-that-you-can-also-use-as-ram/[/url<]

          • Waco
          • 3 years ago

          Sure, but that number includes *all* workloads, not just streaming writes. I’d bet it’s well north of 100K cycles.
