Micron integrates ECC controller into next-gen flash memory

Moving the production of NAND flash memory to finer fabrication processes is sure to drive down the price of SSDs. The more memory chips can be squeezed onto a wafer, the cheaper each one becomes. Next-generation solid-state drives are expected to use NAND chips built on 25-nm process technology, and as one might expect from the cutting edge, there are some challenges associated with the shift.

As this article over at AnandTech points out, flash endurance and error rates present bigger problems as the size of each NAND cell shrinks. The move from 50- to 34-nm NAND cut the write-erase cycle endurance from 10,000 cycles to just 5,000, and the 25-nm flash chips currently coming off the line are reportedly lasting only 3,000 cycles. There doesn’t seem to be a solution to weakening write-erase endurance, putting the onus on controller designers to devise ways to lower their write amplification factor. I expect we’ll see more solutions like the compression-infused black box of technologies inside SandForce’s SF-1000 series SSD controllers.
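
For a rough sense of what write amplification means for endurance, here’s a minimal sketch. The 1.5x amplification factor and the workload figures are assumptions for illustration, not numbers for any particular controller:

```python
# Write amplification: the controller writes more data to the NAND than the host
# asks for (garbage collection, metadata, read-modify-write of partial blocks).
# All numbers here are illustrative assumptions, not vendor specs.

host_writes_gb = 100            # data the OS asked the drive to write
nand_writes_gb = 150            # data the controller actually committed to flash
write_amplification = nand_writes_gb / host_writes_gb      # 1.5

pe_cycles = 3000                # rated program/erase cycles for hypothetical 25-nm NAND
effective_cycles = pe_cycles / write_amplification          # what the host effectively gets

print(write_amplification, effective_cycles)                # 1.5 2000.0
```

Pushing that factor down, as SandForce’s compression-centric controllers try to do, claws back some of the endurance lost to smaller process nodes.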

In addition to running out of steam before their 34-nm predecessors, 25-nm flash chips also have higher error rates. Rather than requiring drive makers to combat this at the controller level, Micron has introduced a family of ClearNAND chips that integrate a 24-bit ECC engine. The chips themselves appear as standard flash devices, making the error correction transparent to the controller. Two flavors of ClearNAND will be available: a standard version with 4-way command queuing and transfer rates up to 50 MT/s, and an enhanced derivative that’ll queue 16 commands and push up to 200 MT/s. The former will be available in 8-32GB capacities, while the latter starts at 16GB and goes up to 64GB. I expect we’ll find enhanced ClearNAND in more than a handful of next-gen SSDs, perhaps including Intel’s follow-up to the X25-M G2.

Comments closed
    • anotherengineer
    • 9 years ago

    “The move from 50- to 34-nm NAND cut the write-erase cycle endurance from 10,000 cycles to just 5,000, and the 25-nm flash chips currently coming off the line are reportedly lasting only 3,000 cycles.”

    Wow, I think I want 50nm flash please.

    I wonder how many cycles a conventional hard drive can do before it wears out??????????

      • OneArmedScissor
      • 9 years ago

      That doesn’t mean older 50nm drives last longer. Their controllers were much less efficient than even today’s, and wrote all sorts of garbage. Newer controllers will just be even more efficient.

        • anotherengineer
        • 9 years ago

        Yep, I know that also. I meant 50nm flash with a new controller.

          • Duck
          • 9 years ago

          I think you really want 25nm still.

      • just brew it!
      • 9 years ago

      If you /[

        • anotherengineer
        • 9 years ago

        Obviously. An HDD will blow a bearing before the platter wears out; I was just wondering how many cycles a platter can take compared to an SSD.

      • Farting Bob
      • 9 years ago

      l[

        • voodootronix
        • 9 years ago

        Dramatic as that is, you can’t really compare a rotation of the platter (which doesn’t imply any actual read/write activity) to the write/erase/write cycles being measured on the SSDs.

          • Farting Bob
          • 9 years ago

          Well, he asked how many cycles a mechanical HDD can do.
          And HDDs can read/write the same bit over and over without it “wearing out”. You get mechanical wear and tear, but that is not dependent on how much you write to it.

            • The Wanderer
            • 9 years ago

            l[

            • just brew it!
            • 9 years ago

            Writing (or erasing) flash memory essentially involves punching electrons through a thin insulating layer, storing them on (or removing them from) what is called a “floating gate”. Eventually the insulator sustains enough damage from the write/erase process that it isn’t effective any more, allowing electrons to leak into or out of the floating gate. This causes data errors.

            On a hard drive, you’re magnetizing small regions of a metallic coating. Magnetizing and re-magnetizing magnetic material does not wear it out; you can do it as many times as you want.

            • UberGerbil
            • 9 years ago

            g[

            • just brew it!
            • 9 years ago

            Yup… in a normal (non-floating-gate) transistor, you never intentionally punch electrons through an insulating layer. If it /[

            • The Wanderer
            • 9 years ago

            Yeah – that’s why I didn’t question / object to the omission very much, because I did know that there were very good reasons why the similarity might be merely superficial. On closer examination, I more or less figured it out, but I didn’t think it was worth posting to announce the fact.

    • not@home
    • 9 years ago

    How about some perspective. How many write-erase cycles does a regular mechanical hard drive get?

      • ncspack
      • 9 years ago

      Infinite write/erase cycles; mechanical hard drives do not have write/erase cycle limits. That is a problem unique to flash memory.

        • just brew it!
        • 9 years ago

        Well, ultimately you’ll be limited by the fact that the mechanical components of the drive will wear out, so it isn’t really “infinite”. But assuming you’ve got an enterprise-class drive (i.e. one that is certified for continuous 24×7 operation), you ought to be able to write to the drive continuously for years without causing a problem.

      • just brew it!
      • 9 years ago

      Write cycles for a hard drive are limited only by the mechanical durability of its moving parts (spindle motor and head actuator). Absent any catastrophic failures (e.g. head crash), the magnetic coating does not “wear out”.

      • derFunkenstein
      • 9 years ago

      I’d rather see an answer to the question of how many write cycles the average client actually needs. I’m curious whether 3,000 is adequate for a drive I’d want to use for several years.

        • Farting Bob
        • 9 years ago

        The average client needs exactly x number of cycles.
        It depends entirely on the controller and workload. SF controllers are very good at keeping write amplification down; older ones are bad. But then older chips would have better longevity.
        And the question shouldn’t be “how many write cycles will I need”, it’s “how long will x write cycles last me”.

          • just brew it!
          • 9 years ago

          Well, assuming you want the drive to last at least n years, the questions are roughly equivalent.
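
To put a rough number on the “how long will x write cycles last me” framing, here’s a back-of-the-envelope sketch. The workload (10 GB of host writes per day) and the 1.5x write amplification are assumptions for illustration, and it presumes the wear leveling spreads writes evenly:

```python
# Back-of-the-envelope SSD lifetime, assuming even wear leveling.
# Workload and write amplification below are illustrative assumptions.

def lifetime_years(capacity_gb, pe_cycles, host_gb_per_day, write_amplification):
    """Years until the NAND write budget (capacity * cycles) is exhausted."""
    nand_gb_per_day = host_gb_per_day * write_amplification
    return capacity_gb * pe_cycles / nand_gb_per_day / 365.0

# A hypothetical 60 GB, 3,000-cycle drive absorbing 10 GB of host writes a day at 1.5x:
print(round(lifetime_years(60, 3000, 10, 1.5), 1), "years")   # ~32.9
```

Under those assumptions the flash outlasts the drive’s usefulness by a wide margin; heavier write workloads change the picture.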

    • Sahrin
    • 9 years ago

    /[

      • jpostel
      • 9 years ago

      mmmm… honey glazed

    • lilbuddhaman
    • 9 years ago

    How about making 50nm parts cheaper / higher yield? Those SSDs are very small as is; stuff ’em with more.

      • AlvinTheNerd
      • 9 years ago

      There is still the cost of silicon. A wafer coming out costs a huge amount. If you don’t increase GB per wafer but the cost per GB keeps dropping, you eventually lose profitability.

      Increasing yield is a first-order improvement in GB per wafer, but past about 80-90% yield it doesn’t return the investment anymore. Moore’s Law is going to put you out of business. Decreasing node size is a second-order improvement and the only way to keep up.

      The only way around it is to charge a premium for your GBs based on durability, which I don’t think many around here would buy.

        • just brew it!
        • 9 years ago

        Yup. And the people who really care about durability are still using devices based on SLC flash chips, which is a whole different (niche) market segment.

    • blastdoor
    • 9 years ago

    I suppose the write-erase cycle decline can be partially offset by having larger capacities, at least in some contexts.

      • Firestarter
      • 9 years ago

      Well, isn’t 60GB with 3000 erase cycles at least functionally equivalent to 30GB with 6000 erase cycles?

        • Waco
        • 9 years ago

        If you provision the 60 GB drive as a 30 GB drive with 30 GB of extra space to compensate for the original 30 GB going bad – yes.

        They’ll both absorb the same number of writes before going static.

          • mcnabney
          • 9 years ago

          No, I don’t think you would need to ‘block-off’ half of the drive for that to work.

          As long as the usage of the drive is the same, I don’t see why making a larger drive to spread out the wear wouldn’t be an effective way to prolong life in a desktop environment. Server environments would be different, though.

            • just brew it!
            • 9 years ago

            Yup, agreed. If the wear leveling is doing its job, a 2x larger drive should wear roughly 1/2 as much for a given amount of write activity, regardless of how it is partitioned at the OS level.
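
As a quick sanity check on the capacity-for-cycles trade discussed above: with ideal wear leveling, the total data a drive can absorb is roughly capacity times rated cycles, so the two hypothetical drives come out even:

```python
# Total NAND write budget ~= capacity * rated P/E cycles (ideal wear leveling assumed).
print(60 * 3000)   # 180,000 GB of NAND writes for a 60 GB, 3,000-cycle drive
print(30 * 6000)   # 180,000 GB of NAND writes for a 30 GB, 6,000-cycle drive
```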

    • Jigar
    • 9 years ago

    What exactly will ECC do?

      • bcronce
      • 9 years ago

      SSDs have massive amounts of error correction to combat corrupted data. Since you don’t know when an individual bit may go bad, you put in tons of ECC.

      • just brew it!
      • 9 years ago

      ECC provides enough redundancy to detect and correct bit errors when they occur. Mechanical hard drives have already been using it internally for years; RAM in servers and high-end desktops also incorporates ECC (ECC DIMMs are 72 bits wide instead of 64, to accommodate the extra ECC info).
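
For the curious, here’s a toy sketch of how a few extra check bits can both detect and correct an error: a Hamming(7,4) code, where four data bits get three parity bits, enough to pinpoint and repair any single flipped bit. Real flash and DIMM ECC uses far wider codes (an ECC DIMM carries 8 check bits per 64 data bits, as noted above); this is an illustration of the principle, not Micron’s actual algorithm:

```python
# Toy Hamming(7,4) code: 4 data bits + 3 parity bits, corrects any single-bit error.
# Illustrative only; real flash/DIMM ECC uses much wider codes.

def encode(d):                 # d = [d1, d2, d3, d4]
    bits = [0] * 8             # positions 1..7; parity bits live at positions 1, 2, 4
    bits[3], bits[5], bits[6], bits[7] = d
    for p in (1, 2, 4):        # each parity bit covers positions whose index has bit p set
        bits[p] = sum(bits[i] for i in range(1, 8) if (i & p) and i != p) % 2
    return bits[1:]

def correct(code):             # code = 7 bits, at most one of them flipped
    bits = [0] + list(code)
    syndrome = 0
    for i in range(1, 8):
        if bits[i]:
            syndrome ^= i      # XOR of set positions points straight at the flipped bit
    if syndrome:
        bits[syndrome] ^= 1    # repair it
    return [bits[3], bits[5], bits[6], bits[7]]

data = [1, 0, 1, 1]
stored = encode(data)
stored[4] ^= 1                 # simulate a single bit flipping in storage
assert correct(stored) == data # the original data comes back intact
```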
