Intel and Micron say their quad-level-cell NAND is ready to rip

Intel and Micron jointly announced today that they've begun production and shipments of what they claim is the first quad-level-cell (QLC) NAND flash memory in the industry. With a 64-layer 3D structure, the chips achieve one terabit (125 gigabytes) of density per die. If that's not enough bits for you, the companies also claim that their third-generation 96-layer 3D NAND is in the works to allow for even higher density per die.

Most NAND these days combines triple-level-cell (TLC) storage with 3D fabrication techniques, so the storage of four bits of information per cell represents a substantial increase in per-cell capacity. The flip side of higher bit capacity per cell is a reduction in write performance due to the increased complexity of programming those cells, as well as a reduction in endurance.
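To see why each extra bit per cell gets harder to deliver, note that the number of distinct charge levels a cell must be programmed to and sensed at grows exponentially with bits per cell, while the density payoff shrinks. A minimal sketch of that arithmetic (the cell-type figures are standard NAND terminology, not vendor specs):

```python
# Bits per cell for each common NAND type.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    levels = 2 ** bits  # distinct voltage levels the cell must hold
    print(f"{name}: {bits} bits/cell -> {levels} voltage levels")

# Moving from TLC to QLC adds only one bit per cell...
gain = 4 / 3 - 1
print(f"QLC density gain over TLC: {gain:.0%}")  # ~33%
# ...but doubles the number of voltage levels (8 -> 16), which is where
# the slower programming and reduced endurance come from.
```

The exponential growth in voltage levels, against a mere 33% density gain, is the core trade-off the article describes.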

Image: Micron

Intel says that IMFT QLC NAND mitigates these performance liabilities somewhat by using what it calls "CMOS under the array" technology, or CuA. CuA apparently allows Intel and Micron to organize QLC NAND across a greater number of planes—four in IMFT flash versus two in competitors' NAND—and that higher plane count lets this NAND write more data in parallel.
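A simplified model shows why the plane count matters: if each plane can program one page at a time, aggregate write throughput scales linearly with the number of planes. The page size and program time below are illustrative assumptions, not IMFT specifications:

```python
def parallel_write_mibps(page_kib: float, program_time_us: float, planes: int) -> float:
    """Aggregate write throughput (MiB/s) when `planes` pages program concurrently."""
    per_plane = (page_kib / 1024) / (program_time_us / 1_000_000)
    return per_plane * planes

# Same hypothetical cell characteristics, different plane counts:
two_plane = parallel_write_mibps(page_kib=16, program_time_us=2000, planes=2)
four_plane = parallel_write_mibps(page_kib=16, program_time_us=2000, planes=4)
print(two_plane, four_plane)  # four planes double the aggregate rate
```

Whatever the real per-plane figures, doubling the plane count doubles the parallelism available to offset QLC's slower per-cell programming.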

In any case, Intel isn't making any performance claims for the new media yet, nor is it announcing a shipping product with QLC NAND inside. The company says its product announcements will come later this year at the Flash Memory Summit, to be held August 6 through August 9 in Santa Clara, CA.

Micron, on the other hand, is introducing the 2.5" 5210 Ion SATA enterprise SSD today. The company claims the 5210 is the industry's first shipping drive with QLC NAND inside. The 5210 Ion will have capacities ranging from 1.92 TB to 7.68 TB. Micron says this density will allow businesses to serve read-heavy workloads with higher storage capacities per rack node, all with higher performance and lower latencies than what it calls a comparable setup using multiple 10K-RPM SAS hard drives.

Image: Micron

The company provided sample benchmarks using three read-heavy workloads from the Yahoo! Cloud Serving Benchmark running on the Apache Cassandra application. A single Micron 5210 Ion delivers a claimed 2.2x to 3.9x speedup in the number of YCSB operations per second versus a quartet of 10K-RPM drives across those three workloads, all with much lower read latencies than the spinning rust.

The flip side of this arrangement is that update operations for QLC NAND are only in the ballpark of the four-hard-drive-per-node legacy configuration that Micron cites. The company cautions that write-heavy workloads will be better served by other products in its lineup. That may be down to not just performance, but also the 1,000 program-erase cycles the media can endure. Compare that to 3,000 P-E cycles for enterprise TLC NAND and 10,000 P-E cycles for enterprise MLC.
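Those P-E cycle figures translate directly into how many full-drive writes per day (DWPD) the cells can sustain over a warranty period. A back-of-the-envelope sketch, assuming a five-year service life and ignoring write amplification and over-provisioning (so real-world figures will be lower):

```python
def drive_writes_per_day(pe_cycles: int, years: float = 5) -> float:
    """Full-drive writes per day sustainable if every cell lasts `pe_cycles` erases.

    DWPD is independent of capacity: total writable bytes scale with capacity,
    but so does the size of one "drive write."
    """
    return pe_cycles / (years * 365)

for media, cycles in [("QLC", 1_000), ("TLC", 3_000), ("MLC", 10_000)]:
    print(f"{media}: ~{drive_writes_per_day(cycles):.2f} drive writes/day over 5 years")
```

At roughly half a drive write per day, QLC's rating lines up with the read-heavy positioning Micron gives the 5210 Ion.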

Given these characteristics, it'll be interesting to see whether initial generations of QLC are well-suited to client storage applications given the mix of reads and writes that most users perform on their PCs. Most people aren't performing multiple drive writes per day on their client SSDs, so the extra density afforded by QLC could be just the thing for storing massive data sets like hundreds of Steam games.

Micron says the 5210 Ion is shipping to select customers today, and that broad availability will begin this fall.

Comments closed
    • uni-mitation
    • 2 years ago

    I want to take this time to properly and genuinely give my heartfelt appreciation for all those brave early-adopters that will take the mantle of technological progress. Your sacrifice will not be forgotten.

    Semper NAND!

    uni-mitation

    • davidbowser
    • 2 years ago

    “You are my… density!”

      • ronch
      • 2 years ago

      How about make like a tree and get outta here!

    • DragonDaddyBear
    • 2 years ago

    I presume the power requirements are significantly lower just sitting there idle compared to a hard drive spinning. In theory, if the drives are just sitting there idle most of the time, wouldn’t an SSD be more reliable than spinning disks? This could be awesome for a home NAS or similar where you want speed but don’t really write to it a lot.

      • blastdoor
      • 2 years ago

      If an HDD is idle most of the time, it can be put to sleep so that it isn’t spinning.

      The penalty, of course, is a big pile of latency when you have to wait for it to spin back up again.

      A solution to that problem would be a block-level file management scheme in which frequently used files, or at least pieces of the files, are placed on an SSD so as to mask the latency. Perhaps something like AMD’s StoreMI.

    • GrimDanfango
    • 2 years ago

    Could QLC be the point where it’s just-about-viable to use these as a bulk-storage option for a home-office server I wonder? 4x8TB drives in a RAID10 would probably suffice for my needs, although 8×8 would be better. Wonder how cheap they’ll get.
    It’d be nice to compact my huge 4U, 12×3.5″ slab down to something ITX-sized 🙂

      • psuedonymous
      • 2 years ago

      Why RAID10 when you can RAIDZ2? If you’re bothering with RAID at all you care about on-line failure tolerance, but RAID10 just half-arses it.

        • GrimDanfango
        • 2 years ago

        What’s the performance like? I use my server for storing/reading very large data sets for fluid simulations, renders, etc. I did a fair bit of research at the time I set the server up, and found the recurring theme was that any RAID that required parity calculations was pretty much useless for high-throughput sequential performance without preposterously expensive hardware host cards.
        In the end, I put OmniOS+Napp-It on, set up a ZFS pool across a 12-disk RAID10 array, all hosted on cheap LSI basic non-RAID drive controllers, added in some SSD cache drives, a decent CPU, and a bunch of RAM cache too, and the thing absolutely flies with any workload I throw at it, easily maxing out 4x 10GbE connections.

        Could RAIDZ2 keep up with that without expensive extra hardware?

          • DragonDaddyBear
          • 2 years ago

          It would depend on the CPU. If extreme I/O is your concern, then btrfs and ZFS aren’t exactly the fastest because of COW and checksumming. That said, they’re very reliable.

          EDIT: check out phoronix.com. He does all kinds of benchmarks for file systems. You can search his results, too, on openbenchmarking.org.

        • stdRaichu
        • 2 years ago

        I might need to refresh my calculations on this one, but parity RAIDs typically result in much higher write amplification than modes like RAID10 due to their read-modify-write nature. Therefore, all other things being equal, discs like this should have a longer lifespan under RAID10 and similar.

          • Waco
          • 2 years ago

          That’s true with most traditional file systems. ZFS isn’t much different thanks to its CoW nature.

      • BurntMyBacon
      • 2 years ago

      [quote="GrimDanfango"]Wonder how cheap they'll get.[/quote]

      Considering that you only get a 33% increase in storage per cell over TLC, I wouldn't get my hopes up. The 96 layer QLC should be more interesting as it will allow ~50% more cells per die footprint area than the current 64 layer version. The 96 layer QLC should allow for twice the capacity at the same cost as 64 layer TLC. This assumes, of course, that the cost savings are entirely dependent on die size (unlikely) and no extra die area is used by the 96 layer QLC vs 64 layer TLC (hard to say). Other considerations are yields/spare cells/acceptable defect density, manufacturer margins, controller complexity/cost, and differences in arrangement.

    • gamoniac
    • 2 years ago

    [quote]In any case, Intel isn't making any performance claims for the new media yet,...[/quote]

    [quote]Micron, on the other hand, is introducing the 2.5" 5210 Ion SATA enterprise SSD today.[/quote]

    Intel to Micron: Yo bro, you go first.

    • meerkt
    • 2 years ago

    [quote]The flip side of higher bit capacity per cell is a reduction in write performance due to the increased complexity of programming those cells, as well as a reduction in endurance.[/quote]

    And, presumably, retention. Any way to get manufacturers to publish this?

    • Eversor
    • 2 years ago

    If the endurance claims are correct, these will be all over the place. 2D TLC was around 500 erase cycles, sometimes less.
    These are diminishing returns though: now it’s only a 33% increase in density.

    • ronch
    • 2 years ago

    Should be fine as a data drive for storing your pron though.

      • davidbowser
      • 2 years ago

      That’s what I use my current Micron drives for!

      • MOSFET
      • 2 years ago

      cm0n n0w pr0nch

    • chuckula
    • 2 years ago

    Meh. When they announce octo-cell parts then and only then will I say THANK YOU AMD!

      • Srsly_Bro
      • 2 years ago

      I think a thank you Mr. Scott “The Almighty Savior of PC Gaming” Wasson will be appropriate.

    • Ultracer
    • 2 years ago

    Ready to … rest in pieces???

      • Srsly_Bro
      • 2 years ago

      Yup. Rip with the trash tier endurance ratings imo.
