SanDisk’s ULLtraDIMM is an SSD on a memory stick

It’s an SSD, it’s a DIMM, it’s… both? SanDisk’s new ULLtraDIMM storage device puts non-volatile flash on a DDR3 memory module. We’ll indulge the funky capitalization, because the concept is kind of cool. It’s also fairly straightforward. Instead of serving up flash storage via Serial ATA or PCI Express, the ULLtraDIMM takes a more direct path to the CPU through its memory interface.

Traditional DIMMs are based on volatile DRAM memory that doesn’t hold data when the power is cut, but ULLtraDIMMs retain their, ahem, memory when unpowered. They’re also available in much larger capacities. SanDisk is rolling out individual modules with 200GB and 400GB of 19-nm MLC flash storage.

Putting the flash right next to the CPU gives the ULLtraDIMMs especially low latency. SanDisk claims a read latency of 150 µs and a write latency of under 5 µs. The modules are said to deliver 150k random read IOPS and 65k random write IOPS. Sequential I/O is pegged at 1GB/s for reads and 760MB/s for writes, so these puppies are pretty fast all around. Impressively, SanDisk claims that performance scales linearly with additional modules, and that write latencies remain consistently low. The modules will plug into existing servers, too.

Thanks to Guardian Technology, a collection of flash management features from SanDisk-owned SMART Storage Systems, ULLtraDIMMs should have excellent endurance. They’re rated for 10 full drive writes per day for five years, which works out to over seven petabytes for the 400GB model. The flash management algorithms are also smart enough to adapt to the changing characteristics of the NAND as it wears. On top of that, the Guardian tech includes backup capacitors, end-to-end data protection, and what appears to be a RAID-like flash redundancy scheme.
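
For the curious, the math behind that endurance figure is straightforward:

400GB/module × 10 drive writes/day × 365 days/year × 5 years ≈ 7.3PB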

SanDisk says the ULLtraDIMMs are "shipping for qualification," so enterprise types should be able to get their hands on the modules soon. IBM has already signed on to use them in its System x3850 and x3950 X6 servers, which will offer configurations with up to 12.8TB of flash.

Comments closed
    • chrissodey
    • 6 years ago

    Maybe motherboard manufacturers can add two dedicated slots for storage with RAID capabilities.

      • Krogoth
      • 6 years ago

      Not going to happen anytime soon.

      This memory isn’t going toward normal desktops and enthusiasts. It is going straight for the professional market: datacenters and number-crunching boxes.

    • SonicSilicon
    • 6 years ago

    Here’s a video featuring the president/CEO of the software co-developer, Diablo Technologies, giving a more in-depth explanation:
    [url<]http://www.youtube.com/watch?v=GWVw6Ku8XTI[/url<]

    In terms of hardware it seems to be like a typical SSD: a front-end memory management system with a high-speed cache that writes to and reads from the actual flash memory. At some point he mentions that software is needed by the operating system for this to work. Unfortunately, the video doesn't really explain how this will appear to the system. My best guess is that the cache on the DIMM will act like standard RAM during POST and boot. Once the OS loads the driver/software, it treats that section of addressable space differently, utilizing only the regular RAM for processing.

    • Arclight
    • 6 years ago

    Gentlemen, this be the future... once we create universal memory.

    • Krogoth
    • 6 years ago

    This is a very niche product. It is meant for certain datacenters and applications that need ultra-high bandwidth along with extremely low latencies.

    It is overkill for 90%+ of the systems that are out there.

      • Mourmain
      • 6 years ago

      I’m pretty sure I read “640K” somewhere in what you just said, but can’t find it anymore…

        • Krogoth
        • 6 years ago

        There’s demand for it, but it doesn’t exist for the vast majority of people out there, and that will be the case for another decade or two barring any ground-breaking advances in computing technology. FYI, semiconductors are reaching their physical/economic limits within this decade.

      • Krogoth
      • 6 years ago

      Just saying what this product is being pitched towards.

      SSDs (desktops, lower-end workstations/servers, and gaming rigs) and PCIe SSD cards (higher-end workstations/servers) are able to handle the vast majority of the workloads out there. These guys are meant for situations where you need tons of bandwidth along with very tight latencies that the SSDs and PCIe SSD cards on the market can’t deliver.

      It is accurate to say that this product is for a tiny niche. Getting it for a normal system is the epitome of “epenis and having far more $$$$ than sense.”

      • Goofus Maximus
      • 6 years ago

      I think all those thumbs-down you’re getting are coming from gamers who think “this bandwidth would make my games load up in no time flat!” 😉

        • Krogoth
        • 6 years ago

        Game load times are mostly CPU-bound anyway. Any I/O bottleneck can be removed by a modest 2.5″ SSD. A PCIe SSD card is just flat-out overkill. These ULLtraDIMMs would be bloody overkill to the tenth power…

    • Anovoca
    • 6 years ago

    So if you slot these in DIMM 1 & 3 or 2 & 4 will it RAID? 😀

      • Goofus Maximus
      • 6 years ago

      … it makes you wonder how the drivers for this will be written. If the drivers understand multi-DIMM multi-channel memory, I’d at least think it could work like that, but I’m not technical enough to know.

      My knowledge of electronics pretty much ends at the very start of the digital logic era. I’m more comfortable in the land of PNP and NPN junction transistors, and even vacuum tubes, than with modern electronics…

    • Wirko
    • 6 years ago

    This could only work if memory controllers already support different timings for each slot, AND indefinite latencies (i.e., can wait until the device is ready). Write latencies in SSDs are very unpredictable and can extend to hundreds of milliseconds in the worst case.

      • just brew it!
      • 6 years ago

      That’s why I think this is going to behave more like a conventional SSD, which just happens to use a DDR3 memory bus to pass commands and data back and forth, with a DRAM cache to buffer the data to/from the CPU. That way you can run the DDR3 bus at full speed, and the bulk transfers to/from the cache behave just like normal DRAM accesses.

      Yes, that means it isn’t a “drop in” replacement for normal DIMMs, but that’s not what it’s intended to be. The write wear issue means you couldn’t use it like a regular DIMM regardless.
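
      Rough numbers, purely under that assumption: a single DDR3-1600 channel peaks at about 12.8GB/s, while the quoted flash throughput is 1GB/s for reads and 760MB/s for writes. So the DRAM buffer could be emptied onto the bus more than an order of magnitude faster than the flash behind it can refill it:

      12.8GB/s ÷ 1GB/s ≈ 13× (reads)
      12.8GB/s ÷ 0.76GB/s ≈ 17× (writes)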

        • Wirko
        • 6 years ago

        It certainly isn’t a drop-in replacement for RAM. Such a module must not even say a [i<]word[/i<] before it's woken up by a driver; it would prevent booting if it did. Operation would certainly require additional signaling between the CPU and the SSD (data ready for reading/write buffer full), and I don't know if such signals/commands/interrupts can be passed through the memory controller.

    • UnfriendlyFire
    • 6 years ago

    No go for AMD APU setups. Feeding a GPU with single-channel RAM…

      • Krogoth
      • 6 years ago

      It would be pretty worthless in that application, because DRAM is much faster than flash.

      Flash’s advantage over DRAM is that it is non-volatile. Video memory doesn’t need to be persistent, and the performance penalty isn’t worth it.

    • Shoki
    • 6 years ago

    I want this.

    • DarkMikaru
    • 6 years ago

    Wow, that is awesome. I wonder what the performance will mean for us normal people once it trickles down. Great idea… can’t wait to see this implemented.

    • ronch
    • 6 years ago

    Perhaps future CPUs will have dedicated on-die storage controllers to directly access non-volatile storage such as this. Those AMD rumors about Kaveri having on-die SSD controllers seem to have caught on with SanDisk.

    Great piece of innovation. I love it.

    • Aliasundercover
    • 6 years ago

    DIMM slots are precious and memory is better than flash. Flash can go on PCIe or SATA.

      • nafhan
      • 6 years ago

      In reality the situation is going to be application dependent. For instance, if your dataset is larger than system memory, this might be faster. It’s definitely not a general purpose solution.

        • derFunkenstein
        • 6 years ago

        But if your data set is larger than system memory and you have free memory slots….

        …………
        ….

        YOUR HEAD A SPLODE

          • smilingcrow
          • 6 years ago

          A 400GB DIMM is a larger capacity than many servers have in total.

            • nafhan
            • 6 years ago

            Yep. A current-gen HP BL460c maxes out at 512GB. You can get more than that with full-height blades or rackmounts of course, but nowhere near what you could potentially do with these SSD-on-a-memory-module things.

            At the same time, I think they’re pushing the “low latency” thing because for sequential access you could probably do just as well with a PCI Express SSD.

      • ColeLT1
      • 6 years ago

      All 10 of my (work) servers have 24 RAM slots, only 4 of which are populated (16GB modules), so there’s plenty of room for this. Also, my servers have no SAS/RAID card, just 2 SD cards in RAID1 for hypervisor OS storage, then iSCSI to a 24x400GB SSD SAN. I was looking at adding local PCIe-based SSD storage for some roaming-profile servers, but this would work too.

      • just brew it!
      • 6 years ago

      It is definitely a niche product.

      I could see it being popular for rackmount servers though, especially 1U form factor where expansion slots and drive bays are at a premium.

    • internetsandman
    • 6 years ago

    Judging from the comments, I’m thinking there was one key detail missing: namely, how the hell this works at all, let alone alongside traditional RAM. Perhaps in a dual-processor system one CPU is linked to storage and the other to RAM, but then you would need to update the firmware to tell the system what to use as RAM and what to use as storage. Not to mention that, in terms of the architecture, the memory bus isn’t a storage interface to begin with; it can be pressed into service as one in software, but it’s not a native storage interface, and I would imagine that kind of bodged-together system architecture wouldn’t be conducive to the rock-solid stability that is mandatory for servers.

      • Nevarre
      • 6 years ago

      That’s really the question– they say this is going out for validation to vendors, but how is it expected to work in practice? What kind of specialized setup does the vendor need to do, and how does this present to an OS?

      It doesn’t look like this is configured for an RDIMM system, as that would add unwanted latency.

      Between the press release and article, there are a lot of unanswered questions.

        • just brew it!
        • 6 years ago

        The extra latency of RDIMM is per transaction, not per word transferred. For a burst transfer you only take the hit once, at the start of the transfer.
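
        To put a number on it, assuming the typical one-clock register delay: at DDR3-1600 the command/address bus runs at 800MHz, so the register adds roughly 1 ÷ 800MHz = 1.25ns per transaction, whether the burst that follows is 64 bytes or 4KB. Next to the module’s claimed 5µs write latency, that’s noise.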

      • just brew it!
      • 6 years ago

      I would bet money that this requires a special ULLtraDIMM-aware BIOS and a custom device driver to do anything useful. The most likely result from shoving one into a “normal” system is probably a POST failure.

      • Geonerd
      • 6 years ago

      Rather cool idea, IMO. The bandwidth and latency will be unrivaled!

      I’d think this would require BIOS-level support.

      Best would be to have the BIOS sniff out the SSDIMMs and map them to an unused address range. Boot the OS and run a driver that can come along and set that area up as a drive or cache.

      In a pinch, perhaps (?!) the SSDIMMs can emulate standard DDR? If so, the system could boot normally until it’s able to load a low-level driver that can remap everything.

      As mentioned, you’d probably need per-slot memory timing configuration.

    • jdaven
    • 6 years ago

    I think you left out another ‘L’ in the product name.

      • Nevarre
      • 6 years ago

      ULL = Ultra Low Latency.

        • jdaven
        • 6 years ago

        ULLtra means super awesome
        ULLLtra means super duper awesome

    • DPete27
    • 6 years ago

    My prediction is coming true!! These look like they’ll be SUPER expensive though.

      • just brew it!
      • 6 years ago

      Yes. You can count on them being priced way beyond the budgets of mere mortals. TLAs like the NSA will probably be big customers though.

    • tipoo
    • 6 years ago

    So if you pop these in a memory spot, the system will see it as a storage drive rather than as RAM? And it won’t affect the detected speed of the other RAM? Interesting concept. Probably pretty niche though, as people may want to spread regular RAM to all their slots for multi-channel operation.

    • DragonDaddyBear
    • 6 years ago

    So, would this be perfect for a PCI-E RAM drive? Because it’s not much use as system memory.

    EDIT: it’s old, but this is what I’m talking about. [url<]http://news.softpedia.com/news/RAM-Drives-the-New-Trend-in-Storage-73962.shtml[/url<]

      • cynan
      • 6 years ago

      At that point, you’re better off (cost, if for no other reason) just going with a PCIe SSD.

    • zenlessyank
    • 6 years ago

    I see what you did there. 😉

    Will SanDisk ULLtraDIMM SSD eliminate OS boot drive??

    If the only reason we have a hard drive (other than long-term storage, etc.) is to load the OS into RAM, and these bad boys don’t lose their contents on a power cycle, then am I to assume an almost instant-on system, minus the POST time?

    Link….

    [url<]http://www.guru3d.com/news_story/sandis[/url<] ... m_ssd.html

      • puppetworx
      • 6 years ago

      I imagine that could save quite a bit of energy in a server farm.

      It shouldn’t be long until we see this approach in mobile devices.

      • silentbrains
      • 6 years ago

      Goodbye, hiberfil.sys?
      Or would it be more like, hello, infinite recursive hiberfil.sys?

    • colinstu12
    • 6 years ago

    So instead of +/-25GB/s we’re looking at only 1GB/s? I’ll stick with traditional memory for now.

    This is the future though. Moving everything closer and closer to the processor, eventually right into it.

    12 cores + HT + GTX Titan + all of its memory + 32-64GB of CPU RAM + 256+GB SSD, all on a single die. Sounds way far off now, but I think we’ll all see the day.

      • tipoo
      • 6 years ago

      We’re going to start hitting some major hurdles after 14nm, though, unless the fabs already have materials other than silicon ready to go in their secret labs.

    • ApockofFork
    • 6 years ago

    Wait wait wait…. How does this even work? I don’t think I’m wrong to assume that the standard memory bus on all computers is not a general-purpose interface. I’m pretty sure you can’t just put 200GB of memory on a DIMM, stick it in there, and expect it to work. What kind of black magic are they using to make this happen?!

      • Duct Tape Dude
      • 6 years ago

      I can imagine some OCer sticking this (ha) in their rig alongside other DIMMs, only to find the entire memory channel operating at 5µs timings with 760MB/s throughput.

      I find this scenario much more amusing than I should.

      • just brew it!
      • 6 years ago

      As I noted in post #43, I’ll bet it is logically something like a SATA or SAS interface that just happens to be using a DRAM bus as the physical transport. They’re probably mapping certain addresses on the DIMM as control/status registers. Set up a read or write operation for one or more blocks by diddling the registers, wait for the device to say it is ready to send/receive the data, then burst the data over the memory bus at DDR3 speeds.
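
      Purely to illustrate the kind of scheme being guessed at here, a minimal driver-style sketch might look like the following. Every offset, opcode, and the mapping itself are invented for the example; nothing below reflects SanDisk’s or Diablo’s actual interface.

          /* Hypothetical command/status-register protocol over the DIMM's address
           * space. All offsets and bit masks are made up for illustration. */
          #include <stdint.h>
          #include <string.h>

          #define REG_CMD      0x0000   /* write: command opcode                */
          #define REG_LBA      0x0008   /* write: starting block address        */
          #define REG_COUNT    0x0010   /* write: number of 4KB blocks          */
          #define REG_STATUS   0x0018   /* read:  bit 0 = busy                  */
          #define DATA_WINDOW  0x1000   /* burst data staged here by the module */

          #define CMD_READ     0x01
          #define STATUS_BUSY  0x01

          static volatile uint8_t *dimm;  /* base of the module's mapped addresses */

          /* Read `count` 4KB blocks starting at `lba` into `buf`. */
          static int ulldimm_read(uint64_t lba, uint32_t count, void *buf)
          {
              *(volatile uint64_t *)(dimm + REG_LBA)   = lba;
              *(volatile uint32_t *)(dimm + REG_COUNT) = count;
              *(volatile uint32_t *)(dimm + REG_CMD)   = CMD_READ;

              /* Poll until the module has staged the data in its on-DIMM buffer.
               * A real driver would sleep or use an interrupt/doorbell instead. */
              while (*(volatile uint32_t *)(dimm + REG_STATUS) & STATUS_BUSY)
                  ;

              /* Burst the staged data out of the DIMM's buffer at DDR3 speed. */
              memcpy(buf, (const void *)(dimm + DATA_WINDOW), (size_t)count * 4096);
              return 0;
          }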

    • Theolendras
    • 6 years ago

    Nice, this will get rather confusing when we talk about RAM disks.

    • Jigar
    • 6 years ago

    Cost? And also desktop compatibility?

      • Walkintarget
      • 6 years ago

      If you have to ask …..

      Generally, when a price is not even guesstimated, it’s ALWAYS a LOT more than you expect.

    • blastdoor
    • 6 years ago

    Sounds interesting… but how does this actually work? Wouldn’t there need to be OS support? Otherwise, wouldn’t the OS just see this as RAM — but really slow RAM? How would the OS know what to put in the really slow RAM versus the regular fast RAM?

      • nico1982
      • 6 years ago

      This. More info about this ‘detail’?

      • kcarlile
      • 6 years ago

      Yup, that’s my question. I can think of a number of applications for this (VMware vSAN not the least among them), but the big question is OS support. Hell, being able to throw MySQL up against it would be awesome.

      Even better, if Enterprise NAS vendors could use them… maybe a call to my Isilon rep is in order…

      They seem to be claiming compatibility with existing servers, which is a big surprise.

      • Sargent Duck
      • 6 years ago

      Yeah, that’s the first thing that was going through my mind as well.

      • just brew it!
      • 6 years ago

      Yes, it would need both BIOS and OS support, because you can’t just treat it like normal RAM. They’re using the existing DRAM bus electrical spec to get a *really* high bandwidth channel in/out of the CPU, nothing more.

    • drfish
    • 6 years ago

    Sweet! Wonder if a BIOS update would let us run a pair of 8GB RAM modules alongside a pair of 200GB SSDRAM modules?

      • Sahrin
      • 6 years ago

      The memory bus runs at the speed of the slowest module, so I don’t think this is a good idea.

        • grantmeaname
        • 6 years ago

        So can you not run these in a computer that also has RAM?

          • grantmeaname
          • 6 years ago

          These have got to be unsuitable for use as RAM with how few times they can be written to (comparatively)…

            • bwcbiz
            • 6 years ago

            I have to think that they would have a DDR “cache” as their front end that operates at full bus speeds. I’m curious to see if it appears as a drive/mountpoint (or possibly a RAMDisk) to the OS, or if it behaves more like a memory-mapped file in UNIX.
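
            For what it’s worth, if the driver did expose it as a mappable device node, application access could be as mundane as a standard mmap() call. A minimal sketch, with the device path invented for the example:

                /* Illustrative only: the /dev/ulltradimm0 path is hypothetical. */
                #include <fcntl.h>
                #include <stdio.h>
                #include <sys/mman.h>
                #include <unistd.h>

                int main(void)
                {
                    int fd = open("/dev/ulltradimm0", O_RDWR);
                    if (fd < 0) { perror("open"); return 1; }

                    size_t len = 1 << 20;   /* map the first 1MB of the device */
                    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                    if (p == MAP_FAILED) { perror("mmap"); return 1; }

                    p[0] = 'x';                /* ordinary loads/stores reach the device */
                    msync(p, len, MS_SYNC);    /* flush dirty pages back to the flash    */

                    munmap(p, len);
                    close(fd);
                    return 0;
                }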

            • just brew it!
            • 6 years ago

            Yup, I’m pretty sure they would need to do some significant caching on the module to match the DDR3 bus speed to the (slower) flash chips. How it appears to the OS is a device driver detail.

            • just brew it!
            • 6 years ago

            Yup. Don’t think of it as non-volatile RAM; think of it as an SSD that happens to physically fit into a DRAM slot.

        • drfish
        • 6 years ago

        That’s why I’m wondering if it can be done in the BIOS or if it has to be custom hardware.

        • Airmantharp
        • 6 years ago

        Consider that an X79 system has four memory channels, yet even with six or eight cores the CPU doesn’t really ‘need’ all of them. You could dedicate one channel to two ‘SSD DIMMs’ without unduly impacting normal performance for all but the most specialized, memory-bandwidth-hungry applications.

        • bwcbiz
        • 6 years ago

        They just have to have a DDR cache as their front end to avoid impacting bus speeds. Say, 2GB.

        • just brew it!
        • 6 years ago

        As long as the electrical interface can handle normal DDR3 clock speeds, that shouldn’t be an issue. There have got to be some RAM-based buffers on the module to hold the data being transferred to/from the bus, and a limit on how much data you can write to the thing before the system needs to wait for the flash array to catch up.

        I’d be willing to bet that while the electrical interface is DDR3, there’s a logical interface layered on top that is nothing like “normal” DIMMs. Probably a range of reserved addresses on the module that act as command and status registers. At the driver level it may even act more like a traditional block-based SATA interface. That would actually make sense, as it would require less re-work on the driver/OS side.

      • internetsandman
      • 6 years ago

      For some reason I was thinking this sentence would end in something regarding SLI:

      I wanna run my 200GB storage DIMMs in SLI for maximum performance
