Tom’s Hardware hammers an Intel 600p SSD for science

Intel's first stab at making an affordable PCIe NVMe storage device resulted in last summer's crop of 600p-series SSDs packing 3D TLC flash. The company offers these drives in four capacities ranging from 128GB up to 1TB, but curiously launched them with the same 72 TBW endurance level for all versions. Those numbers were revised later on with figures as high as 576 TBW. TR has previously shown that most SSDs are capable of reliably writing data far past their specified endurance rating. Now, Chris Ramseyer over at Tom's Hardware decided to see what would happen if he pushed an Intel 600p 256GB drive to the bloody limit, and then shared the results with the world.

In our testing, a now somewhat-ancient Intel 335 Series 240 GB SSD was able to write just over 700TB of random data before its internal monitors triggered autophagy. At the very end of its life, the drive went into a read-only state, and after a reboot, the data on it was completely inaccessible. For testing the 600p 256GB unit, Tom's used an even harsher torture test with non-stop 4K write operations.

Bearing the difference in testing methods in mind, it's still a little surprising to see that the 600p that Tom's tested switched to read-only mode after less than 110TB of writes. On a brighter note, the data on the drive was recoverable even after the drive was unpowered for a full thirty days. The Intel 600p 256GB drive bears an endurance rating of 144 TBW, though that figure is almost certainly not derived from the extremely punishing methods used in Tom's testing.

Ramseyer's report includes additional information about the drive's performance degradation nearing its time of death, as well as more general information about the SSD's error-checking and correction. The piece is an interesting read and well worth checking out.

Comments closed
    • bbmap
    • 3 years ago

    I’m not interested in buying an SSD until manufacturers figure out how to let you keep reading your data once the drive wears out. Currently, SSDs are cute. But as far as I can tell, from this and TR’s torture test, they are completely broken, and choose some random point to declare, “Sorry, I decided you no longer can access your data.”

    I would never trust data to a device designed like that.

      • bbmap
      • 3 years ago

      However, I would be *very* interested in a TR article about which SSDs you can buy that will correctly go into read-only mode and allow you to recover data upon end of writability. Drives that randomly destroy themselves are not very interesting to me...

      • Krogoth
      • 3 years ago

      That’s how flash memory works. SSDs were never meant to be used for archival data storage. HDDs have their own set of issues as well. The sectors on modern HDDs are becoming so small that it is next to impossible to recover data from them.

      • drfish
      • 3 years ago

      1) Data is never safe if it’s only in one place
      2) You’re missing out on a lot of awesome

      • MrDweezil
      • 3 years ago

      I mean, the sun is designed to wear out at some point in the future and fail catastrophically but I’m not turning down its benefits in the meantime. Your SSD isn’t going to wear out under a non-server workload. Take backups like you would for a mechanical drive and come join the rest of us in the glorious modern technological age of 2011.

        • UberGerbil
        • 3 years ago

        This reminded me of an old friend of my dad’s who refused to buy a modern car with fuel injection because, and I quote, “I know how to rebuild a carburetor. I don’t know how to rebuild fuel injectors.”

      • Ifalna
      • 3 years ago

      That’s why the IT Gods invented backups.

    • NeelyCam
    • 3 years ago

    This is a bit unrelated, but I’m slightly disappointed that Tech Report hasn’t picked up the news on Qualcomm being investigated by the FTC over using its modem market position to kill WiMAX:

    https://www.bloomberg.com/news/articles/2017-01-17/qualcomm-forced-apple-to-exclusively-use-modem-chips-ftc-says

    I would love to see analysis and commentary by TR on these sorts of shenanigans.

      • blastdoor
      • 3 years ago

      Analysis:
      By eliminating competitors, firms can increase their profits.

      Commentary:
      I don’t like it when firms do that, unless I bought a lot of their stock before they implemented their shenanigans. Since I have never owned Qualcomm stock, I think this sucks — go FTC!

    • odizzido
    • 3 years ago

    “At the very end of its life, the drive went into a read-only state, and after a reboot, the data on it was completely inaccessible.”

    I still don’t understand what Intel was thinking when they decided to make your data inaccessible instead of leaving it as read-only. It seems completely insane.

      • just brew it!
      • 3 years ago

      Probably wasn’t their original intent (i.e. a bug in the drive’s handling of wearout).

      • MOSFET
      • 3 years ago

      It appears that is handled more sensibly on the newer 600p than it was on the older 335.

    • DPete27
    • 3 years ago

    Nice to see they go into read-only when the NAND wears out now. Locking the drive is BS. That’s called a brick. Nobody wants that.

      • Waco
      • 3 years ago

      This. Although expiring prior to actually writing the rated amount is troubling as well…

        • mczak
        • 3 years ago

        They clearly state the way they did it (using 4K random writes) should be more stressful than “normal,” so I don’t think that’s much of an issue.
        That said, I’d like to know how 4K vs. sequential writes actually affects this; with all the sector-remapping voodoo SSDs do, I really have no idea…
        But in any case, IMHO the drive failing gracefully is way more important than whether it falls a bit short of the TBW rating (it’s at least in the same ballpark).

          • Waco
          • 3 years ago

          Oh, absolutely. It didn’t brick, and that’s a blessing.

      • jihadjoe
      • 3 years ago

      I seem to recall reading over at PCPER that the bricking behaviour is likely a remnant of Intel SSDs’ server roots. In that use case the drive is almost certainly mirrored or in a stripe with parity, so in a failure event, rather than dealing with a sketchy drive possibly causing I/O errors, it’s better to just have it fail outright so it can be replaced.

        • UberGerbil
        • 3 years ago

        That would make some sense: going off-line is the right behavior in a server context. But ideally there’d be some fall-back mechanism to get it to come back in read-only mode.

        • yuhong
        • 3 years ago

        The right solution would be to have the RAID controller deal with read-only mode properly; there is a lot that could be done, including rebuilding the array by copying the data off.

    • tipoo
    • 3 years ago

    No one came into this article thinking TomsHardware actually took a hammer to an SSD as an endurance test, right?

    No? G-good, m-me neither.

      • nico1982
      • 3 years ago

      Me one minute ago: “A hammer? What’s the point? What a dumb kind of tes… oh”.

      • the
      • 3 years ago

      *raises hand*

      Bad things happen to electronics that fail me.

      I also have the reputation of walking into a room with some misbehaving equipment that gets fixed by my mere presence.

        • tipoo
        • 3 years ago

        In university one of my co-op jobs ran out of things for me to do, so between other stuff I had a basket of hard drives to destroy however I chose 😛

        Took most of them apart and stuck the rare earth magnets to a door sill for anyone to take.

          • not@home
          • 3 years ago

          I had the pleasure of destroying about a hundred hard drives once. I took the first two apart like you. Then I decided that was too much work. I had to be supervised while destroying them, so I invited my “supervisor” to go to the gun range after work with me. The next day he told our boss that we volunteer to destroy any and all future hard drives. I think he had fun. He had never shot a gun before that.

            • UberGerbil
            • 3 years ago

            Back when Comdex Las Vegas was a thing, there was a tradition of taking …”problematic” or otherwise-despised hardware to one of the ranges in town, and destroying it with all sorts of high-caliber weapons. I saw a lot of prototypes and balky demo machines wrecked. Watching a full-tower desktop getting taken out by a .50 cal is highly entertaining.

      • K-L-Waster
      • 3 years ago

      “SSDs are truly amazing devices, but one question remains: can you construct a deck with them?”

    • Chrispy_
    • 3 years ago

    The endurance rating is only an estimate at best. NAND pages can be erased and re-written *x* times, and the smaller the writes are relative to the page size, the higher the write amplification will be. Intel clearly used an average write size when giving it a 144TBW value, and you can assume that it wasn't "4K".
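
    A minimal sketch of how those pieces fit together, with made-up P/E-cycle and write-amplification numbers purely for illustration (these are assumptions, not Intel's figures):

    ```python
    # Rough endurance model: rated TBW scales with the NAND's P/E-cycle budget and
    # capacity, and shrinks as write amplification grows. All numbers below are
    # illustrative assumptions, not Intel specs.
    def estimated_tbw(capacity_gb, pe_cycles, write_amplification):
        """Terabytes of host writes before the NAND's P/E budget is exhausted."""
        nand_budget_tb = capacity_gb * pe_cycles / 1000  # GB -> TB
        return nand_budget_tb / write_amplification

    # Hypothetical 256GB TLC drive rated for ~1000 P/E cycles:
    for wa in (1.5, 4, 16):  # friendly, mixed, and punishing 4K workloads
        print(f"WA {wa:>4}: ~{estimated_tbw(256, 1000, wa):.0f} TBW")
    ```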

      • Waco
      • 3 years ago

      This was only 7.8 TB of host writes though. Drives are rated based off of the actual writes to flash, not the host writes…no?

      NAND writes are NAND writes, however the drive manages them.

        • notfred
        • 3 years ago

        No. 7.8TB of writes as part of testing for the review of the drive when it was released, followed by ~98TB of 4k writes to kill the drive for a grand total of 105.6TB before it went to read-only state.

          • Waco
          • 3 years ago

          Ah, I see. I’m usually not so bad at reading comprehension. 😛

          I would like to know the total NAND writes under such a workload – it could be anywhere from just over the host writes to many times that.

            • Chrispy_
            • 3 years ago

            I don’t think THG used a tool that exposes raw NAND writes. It looks like they used only CrystalDiskInfo, which does not.

            Host writes are all that really matter, since NAND writes are never ever exposed to the host other than as a stat via utilities like smartmontools. If you torture a drive with 4K writes when it’s full, it is going to be forced, in a synthetic and unrealistic way, to constantly erase entire pages. If the page size is 1MB, then 4K writes could have a worst-case write amplification of 256x.

            • Waco
            • 3 years ago

            Right, we covered that. I failed at reading. Lol

    • chuckula
    • 3 years ago

    Interesting article.

    As a point of reference, a quick invocation of smartctl on a 240GB SSD I am using reveals a total of 3.68 TB written over the life of the drive (3.47 powered-on years). This drive holds both a Linux OS and a Windows 10 VM image that is used daily, so there are effectively two different operating systems running on the drive concurrently.
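
    For anyone curious, here is a sketch of one way to automate that check. It assumes a SATA drive that exposes SMART attribute 241 (Total_LBAs_Written) counted in 512-byte LBAs, which isn't universal; the device path and parsing are illustrative:

    ```python
    # Sketch: read lifetime host writes from smartctl on a SATA SSD. Assumes the
    # drive reports attribute 241 (Total_LBAs_Written) in 512-byte LBAs; attribute
    # names and units vary by vendor, so check your own drive's documentation.
    import subprocess

    def total_tb_written(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "Total_LBAs_Written" in line:
                lbas = int(line.split()[-1])   # RAW_VALUE is the last column
                return lbas * 512 / 1e12       # bytes -> terabytes
        return None

    print(total_tb_written())  # e.g. ~3.68 on a drive like the one above
    ```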

      • morphine
      • 3 years ago

      Back then I had a 256GB Samsung 830. I used it intensively as a daily driver with lots of VM work. After ~2.5 years, all it had was 17TB written.

      These seemingly small numbers may look scary at first, until one realizes he’s going to die of old age before the drive burns through its spare NAND.

        • PBCrunch
        • 3 years ago

        True, true, but 110TB is a lot less than 700TB. Let’s all hope some technology appears to stop the bleeding and endurance doesn’t keep dropping as drive speeds increase.

        70GB per day of writes is like 35 seconds of continuous writing per day on some of the new crop of NVMe drives. In that context, things sound pretty dire.
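
        For what it's worth, the arithmetic behind that 35-second figure, and what it means against the 600p's rating, assuming a round 2 GB/s sequential write speed for a fast NVMe drive (an illustrative number, not a spec):

        ```python
        # Back-of-the-envelope numbers: 70GB/day of writes vs. an assumed 2 GB/s
        # sequential write speed and the 600p 256GB's 144 TBW rating.
        daily_writes_gb = 70    # per-day figure quoted above
        seq_write_gbps = 2      # assumed sequential write speed, GB/s (illustrative)
        rating_tbw = 144        # Intel's revised rating for the 256GB 600p

        print(f"{daily_writes_gb / seq_write_gbps:.0f} s of continuous writing per day")
        print(f"{rating_tbw * 1000 / daily_writes_gb / 365:.1f} years to reach "
              f"{rating_tbw} TBW at {daily_writes_gb} GB/day")
        ```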

          • DPete27
          • 3 years ago

          The 256GB drive has an endurance rating of 144TB (http://www.tomshardware.com/news/intel-600p-endurance-tbw-warranty,32798.html). 110TB isn't too far off, everything considered.

            • Waco
            • 3 years ago

            Sure, but it didn’t even make it to the rating.

            • derFunkenstein
            • 3 years ago

            What’s unfortunate is that this, just like TR’s own endurance testing, has a sample size of one. Some samples may surpass the rating while others may fail even sooner. It’s just hard to draw conclusions despite all the work that goes into it.

          • DavidC1
          • 3 years ago

          Since the torture tests are based on 4K writes, and NVMe drives' 4K speeds are nowhere near their sequential write speeds, it’ll take longer than that. I believe 4K random write performance hasn’t increased a lot compared to a SATA drive?

            • Waco
            • 3 years ago

            Good drives can maintain writes in the 50-100K 4K random IOPS range. That’s 200-400 MB/s; it doesn’t take long to hit the daily endurance limits for small or low-endurance drives.

          • BurntMyBacon
          • 3 years ago

          “True, true, but 110TB is a lot less than 700TB. Let's all hope some technology appears to stop the bleeding and endurance doesn't keep dropping as drive speeds increase.”

          This statement is interesting and possibly cause for concern. Let's evaluate. The 335 launched in Oct. 2012, uses 20nm planar MLC NAND, and wrote over 700TB before bricking. The 600p launched in Sep. 2016, uses (?nm?) 3D TLC NAND, and wrote ~110TB before bricking. A quick search gives me no feature size for the Intel/Micron 3D NAND (I'm not sure it is comparable anyway). For reference, the Micron planar process was 16nm. At face value, it looks like we lost 6.36x the endurance over a four-year period. At the same rate of degradation, in four more years SSDs will only be capable of a little over 17TB of writes (which would make users like morphine a little nervous). However, several considerations must be made:

          1) The tests were not equivalent. It is speculated that the drive would have reached the full 144TBW under a more normal load; it is pretty well a certainty that it would have done better than it did. Let's assume a degradation of 5x less endurance for simplicity's sake. Now we are up to 140TB today and 28TB in four years. This still leaves some, though not a lot of, margin for morphine-type users.

          2) We are comparing MLC NAND to TLC NAND. Looking at multiple generations of MLC and TLC processes, it appears that MLC generally ends up with roughly 3x the endurance of TLC built on the same process. While this is neither exact nor guaranteed to hold true for future generations, let's assume it is for the sake of this discussion. Under this assumption, a TLC equivalent of the 335 could only handle around 233TB of writes. This gives us a number closer to half the endurance, and people like morphine don't get nervous for more than a decade yet. Of course, with QLC flash starting to show up, we may alternately assume we will see a similar degradation in four years' time.

          3) We are comparing planar NAND to floating-gate 3D NAND. Intel/Micron have chosen to use floating gates, whereas Samsung/Toshiba/Hynix are using charge traps. Charge traps induce less silicon stress and consequently have higher endurance than floating gates. Should Intel/Micron choose to move away from floating gates, it may delay any practical concerns for an indefinite amount of time.
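
          For anyone who wants to poke at those numbers, here is the extrapolation above in runnable form; the 5x and 3x adjustment factors are the assumptions stated in the comment, not measurements:

          ```python
          # Recreating the back-of-the-envelope math above. The 700TB and ~110TB
          # figures come from the TR and Tom's tests; the 5x and 3x factors are the
          # commenter's stated assumptions.
          tr_335_tb    = 700     # Intel 335 240GB, TR's endurance test (planar MLC, 2012)
          toms_600p_tb = 110     # Intel 600p 256GB, Tom's test (3D TLC, 2016)

          face_value = tr_335_tb / toms_600p_tb
          print(f"Face-value drop over ~4 years: {face_value:.2f}x")                # ~6.36x
          print(f"Same rate again: ~{toms_600p_tb / face_value:.0f} TB in 4 years")  # ~17 TB

          assumed_drop = 5   # assumption: a gentler workload narrows the gap to 5x
          print(f"Adjusted: ~{tr_335_tb / assumed_drop:.0f} TB today, "
                f"~{tr_335_tb / assumed_drop**2:.0f} TB in 4 years")                 # 140 / 28

          mlc_vs_tlc = 3     # assumption: MLC lasts ~3x longer than same-process TLC
          print(f"Hypothetical TLC '335': ~{tr_335_tb / mlc_vs_tlc:.0f} TB")         # ~233 TB
          ```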

            • Waco
            • 3 years ago

            The workloads are not equivalent; you can’t compare the worst-case workload that Tom’s used with the workload TR put on its drives (which was FAR more friendly).

            The difference in write amplification could be anywhere from 1x (in a perfect world with a perfect controller…which doesn’t exist) to [flash page size / 4K]. The drive Tom’s hammered could have easily done 64-128 times as many writes if the controller was forced to flush pages for every 4K write. In reality, it’s probably closer to 5-10x amplification, which still puts the endurance of the drive north of the older model under the TR workload.

        • Srsly_Bro
        • 3 years ago

        I still have one and it’s still working fine.

        • dyrdak
        • 3 years ago

        I believe the write numbers may skyrocket once the drive gets almost full and new data is being written to it. Moving pieces of existing data around (to wear-level) while adding new data may be the real killer. I bet the NAND on my 16GB iPhone is getting stressed out by podcast churn (aided by an iOS/iTunes issue where podcasts disappear from the GUI yet leave little free space for new content).

        • BurntMyBacon
        • 3 years ago

        I still have an 830 going strong. I’ve logged about 14TB writes on it so far. A buddy of mine uses one for video production and encoding at his job and has logged closer to 40TB writes. No issues.

      • Parallax
      • 3 years ago

      Another data point: 120GB Intel drive, 33.91TB written, 5.94TB read, 2 power-on years. OS and basic applications only, everything else on another drive. Apparently much higher values are possible with normal use.

        • chuckula
        • 3 years ago

        “33.91TB written, 5.94TB read”

        That sounds awfully out of whack for a regular system drive. You sure those numbers aren’t reversed? Outside of specialized applications, there tends to be a lot more reading than writing during operation of drives.

          • mczak
          • 3 years ago

          Excessive logging or autosave somewhere could cause this. A well-known case is Firefox session restore – it can easily write 10GB per day with just a couple of open browser windows (and of course, unless Firefox crashes, the data is never read back).

            • Parallax
            • 3 years ago

            Those numbers are correct. I don’t use session restore that often, but I thought the pagefile might contribute to this somewhat. IDK what else would write that much.

            • Fearless Leader
            • 3 years ago

            Guess I shouldn’t rip my Blu-ray collection. One disc per day would exceed 10GB per day of writes.

          • MOSFET
          • 3 years ago

          Intel reports host writes and NAND writes, so you can presumably do some calculation on write amplification. The 530 and 535 tend to have about 50TB NAND writes for every ~5TB host writes, in my systems.

          Kingston HyperX SandForce 2×00 drives do the same reporting, but are closer to 1:1 after 4+ years on 3x 120GB drives. Here is a screenshot I took some time ago from Kingston’s Toolbox that illustrates this (ignore the highlighting, that was for something else): SMART line 233 is NAND writes and line 234 is Host Writes. It would appear that SandForce compression actually helps. Intel drives with the same SandForce controller go the other way, big time. I don’t have a snip right now, but it’s well documented.

          https://dl.dropboxusercontent.com/u/18719755/Kingston-SSD-Toolbox.PNG
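
          If a drive does expose both counters like that, the write-amplification math is just a ratio. A tiny sketch using the round figures mentioned above (attribute IDs and units are vendor-specific, so treat these as placeholders):

          ```python
          # Lifetime write amplification factor from the two SMART counters described
          # above (NAND writes / host writes). The figures are the rough examples
          # from the comment, not readings from a real drive.
          def write_amplification(nand_writes_gb, host_writes_gb):
              return nand_writes_gb / host_writes_gb

          print(write_amplification(50_000, 5_000))   # ~10x, like the Intel 530/535 example
          print(write_amplification(5_100, 5_000))    # ~1x, like the Kingston drives
          ```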

      • Freon
      • 3 years ago

      My Samsung 830 (128GB) had recorded something like 13TBW after roughly 3.5-4 years as the boot drive on my primary system. It still lives on in a second machine; I’ve stopped checking, though, and that system gets such light use that I hate to use it as a reference point.

      I could see someone doing some light video editing or even audio engineering have a drastically higher value, though. Maybe a whole order of magnitude.

      • Ifalna
      • 3 years ago

      Another data point:
      120GB 520 series Intel SSD

      Operational: 5 years, used as a system drive, including moving games back and forth as needed.

      16.5 TB of writes.
