The SSD Endurance Experiment: Two freaking petabytes

More than a year ago, we drafted six SSDs for a suicide mission. We were curious about how many writes they could survive before burning out. We also wanted to track how each one’s performance characteristics and health statistics changed as the writes accumulated. And, somewhat morbidly, we wanted to watch what happened when the drives finally expired.

Our SSD Endurance Experiment has left four casualties in its wake so far. Representatives from the Corsair Neutron Series GTX, Intel 335 Series, Kingston HyperX 3K, and Samsung 840 Series all perished to satisfy our curiosity. Each one absorbed far more damage than its official endurance specification promised—and far more than the vast majority of users are likely to inflict.

The last victim fell at 1.2PB, which is barely a speck in the rear-view mirror for our remaining subjects. The 840 Pro and a second HyperX 3K have now reached two freaking petabytes of writes. To put that figure into perspective, the SSDs in my main desktop have logged less than two terabytes of writes over the past couple of years. At this rate, it would take me a couple of millennia to reach that total.

So, yeah. Pretty insane. It’s time for another check-up.

The story so far

If this is your first encounter with our endurance experiment, I recommend reading this introductory article. It has more details about our subjects, methods, and test rigs than we’ll rehash here. Here’s the TL;DR version:

The experiment explores a weakness inherent to the very core of flash memory. NAND stores data by trapping electrons inside billions of individual memory cells. The cells are walled off by an insulating layer that normally prevents electrons from getting in or out. Applying voltage to a cell induces electron flow through that barrier via a process called tunneling. Electrons are drawn in when data is written and expelled when data is erased.

Tunneling is a pretty slick feat of nanoscale engineering, but it comes at a cost. The accumulated traffic slowly breaks down the physical integrity of the insulator, degrading its ability to trap electrons in the cell. Some electrons also get caught in the insulator, imparting a negative charge that narrows the cell’s usable voltage range. The more that window shrinks, the more difficult it is to read and write data reliably—and quickly.

When cells become more trouble than they’re worth, fresh blood is called up from the SSD’s overprovisioned “spare” area. These replacement cells ensure the drive maintains the same user-accessible capacity regardless of any underlying flash failures.

Although all SSDs are living on borrowed time, they can take different paths to the end of the road. Intel’s 335 Series is designed to go out on its own terms, after a pre-determined volume of writes. Ours took its own life after 750TB—but not before its wear indicator bottomed out and multiple SMART warnings were issued.

Our first HyperX 3K only made it to 728TB. Unlike the 335 Series, which was almost entirely free of failed flash, the HyperX reallocated nearly a thousand sectors before it ultimately expired. Again, though, the wear indicator and SMART warnings provided plenty of notice that the end was nigh.

All but a few of the HyperX’s reallocated sectors hit after 600TB of writes. The Samsung 840 Series started reporting reallocated sectors after just 100TB, likely because its TLC NAND is more sensitive to voltage-window shrinkage than the MLC flash in the other SSDs. The 840 Series went on to log thousands of reallocated sectors before veering into a ditch on the last stretch before the petabyte threshold. There was no warning before it died, and the SMART attributes said ample spare flash lay in reserve. The SMART stats also showed two batches of uncorrectable errors, one of which hit after only 300TB of writes. Even though the 840 Series technically made it past 900TB, its reliability was compromised long before that.

Corsair’s Neutron GTX was our most recent casualty. Despite being the picture of health up to 1.1PB, it suffered a rash of flash failures over the next 100TB. SMART errors also began to appear, foretelling the drive’s imminent doom. The Neutron ultimately reached 1.2PB, and it completed the usual round of tests at that milestone. However, it failed to power up properly after a subsequent reboot.

After the Neutron GTX failed to answer the bell, the 840 Pro and second HyperX 3K pressed on to 2PB without issue. They also completed their fifth unpowered retention test. This time, the SSDs were left unplugged for 10 days. Both maintained the integrity of our 200GB test file.

To be fair, the official JEDEC specs require that drives accurately retain data for much longer unpowered periods. We had to make a few concessions to accelerate the timeline for this experiment.
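For the curious, the retention check boils down to confirming that a big test file reads back bit-for-bit after the drives sit unpowered. Below is a minimal Python sketch of that kind of verification: hash the file before shutdown and compare the digest afterward. The file path is a hypothetical placeholder, and this illustrates the general idea rather than our exact tooling.

```python
import hashlib

def sha256_of(path, chunk_size=8 * 1024 * 1024):
    """Stream a large file through SHA-256 so a 200GB test file never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: record the hash before the drive is unplugged,
# then recompute it after the unpowered retention period and remount.
before = sha256_of("/mnt/ssd/retention_test_file.bin")
# ...leave the drive unpowered for the retention interval...
after = sha256_of("/mnt/ssd/retention_test_file.bin")
print("data intact" if before == after else "retention failure")
```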

Our two remaining subjects have passed the same retention tests and absorbed the same volume of writes, but their individual stories are very different. On the next page, we’ll take a closer look at how each one is coping with the continuous barrage of incoming data.

 

The war of attrition continues

We’ll start with the Samsung 840 Pro, which has an unblemished record despite mounting reallocated sectors.

Flash failures started piling up after 600TB of writes. Apart from a few undulations, the retirement rate has been fairly consistent since.

The 840 Pro is now up to 5591 reallocated sectors, which translates to over 8GB of flash. That may sound like a lot, but it’s only 3% of the drive’s 256GB total. The SMART data indicates that we’re only 61% into the used block reserve.
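If you want to check that math yourself, here’s a minimal sketch of the arithmetic. The roughly 1.5MB of flash per reallocated sector is an inference from the figures above rather than a published Samsung spec, so treat it as an assumption.

```python
# Back-of-the-envelope math for the 840 Pro's reallocated flash.
# The ~1.5MB-per-sector figure is inferred from this article's numbers,
# not an official Samsung specification.

REALLOCATED_SECTORS = 5591      # SMART reallocated-sector count
FLASH_PER_SECTOR_MB = 1.5       # assumed flash retired per reallocation
DRIVE_CAPACITY_GB = 256

failed_flash_gb = REALLOCATED_SECTORS * FLASH_PER_SECTOR_MB / 1024
percent_of_drive = failed_flash_gb / DRIVE_CAPACITY_GB * 100

print(f"Failed flash: {failed_flash_gb:.1f} GB ({percent_of_drive:.1f}% of {DRIVE_CAPACITY_GB} GB)")
# -> roughly 8.2 GB, or about 3% of the drive
```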

That reserve counter seems to be the best gauge for the 840 Pro’s remaining life. The wear-leveling count is supposed to be related to drive health, but it expired 1.5PB ago.

Given what’s supposedly still in the tank, 3PB doesn’t seem impossible. The “good” health rating reported by Samsung’s Magician software is encouraging, too, though it’s hard to put a lot of faith in that assessment. Our failed 840 Series had the same health rating before its sudden demise. Unlike its fallen sibling, the 840 Pro has at least remained completely free of uncorrectable errors.
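That 3PB guess comes from a naive linear extrapolation of the used-block-reserve counter. The sketch below shows the arithmetic; it assumes the reserve keeps depleting at the same average rate, which real flash is under no obligation to honor.

```python
# Naive linear extrapolation of the 840 Pro's remaining life from the
# used-block-reserve SMART attribute. Purely illustrative; flash failures
# rarely progress this predictably.

HOST_WRITES_PB = 2.0            # writes absorbed so far
RESERVE_USED_FRACTION = 0.61    # 61% of the spare-block reserve consumed

projected_total_pb = HOST_WRITES_PB / RESERVE_USED_FRACTION
print(f"Reserve exhausted at roughly {projected_total_pb:.1f}PB of host writes")
# -> about 3.3PB, which is why 3PB doesn't seem out of reach
```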

Before digging into the Kingston HyperX’s vital signs, we should point out that this drive is running a different race than the 840 Pro and the other SSDs. The HyperX is based on a SandForce controller that compresses incoming writes to reduce flash wear (and to accelerate performance). Thanks to this DuraWrite mojo, the HyperX has squeezed the experiment’s 2PB of host writes into just 1.4PB of flash writes.
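The compression bookkeeping boils down to a single ratio between host writes and flash writes. Here’s a small illustrative sketch using the figures above; the variable names are ours, not actual SandForce SMART attribute labels.

```python
# Illustrative only: compare host writes to the flash writes actually
# committed by a compressing controller. The figures come from this article;
# the names are placeholders rather than real SMART labels.

host_writes_tb = 2000.0     # what the test rig asked the drive to write
flash_writes_tb = 1400.0    # what the controller actually wrote to NAND

write_ratio = flash_writes_tb / host_writes_tb
print(f"Effective flash-to-host write ratio: {write_ratio:.2f}")
# -> 0.70, i.e. DuraWrite trimmed roughly 30% off the flash wear
```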

Now, that compression ratio is only applicable to our particular write workload. The surviving HyperX has been getting a stream of sequential data using the 46% incompressible “applications” setting in Anvil’s Storage Utilities. To ensure an even playing field, completely incompressible data has been used for the other drives, including the first HyperX. The graph below illustrates the impact that difference has on the HyperX’s compressed writes attribute, which tracks the true flash footprint of inbound data.

Write compression clearly isn’t the only factor responsible for the remaining HyperX’s survival. If that were the case, the drive would have quit around 1.1PB, when its flash writes matched those of its deceased twin. Digging deeper into the SMART data provides some additional insight on this candidate’s exceptional endurance.

The lifetime attribute leveled out long ago, triggering a warning that the drive is in a “pre-failure” state. Despite that ominous message, flash failures have been few and far between. Only 31 reallocated sectors have been reported through 2PB of writes, which translates to a mere 124 megabytes of failed flash. The death toll has risen slightly since we last checked in, but the total still represents no more than a blip.

Less than half of the reallocated sectors have been prompted by program or erase failures. The HyperX recovered gracefully from those hiccups, but it also logged two uncorrectable errors just before reaching 1PB. Uncorrectable errors can compromise data integrity, so we recommend taking SSDs out of service if any appear. While the HyperX remains in the experiment, a black mark taints its permanent record—and an asterisk denotes its compressible payload.

Caveats aside, there’s no denying that the flash in this particular unit is incredibly robust. Since the HyperX is designed to keep writing data until its sector reserves are exhausted, this one may have a lot of life ahead. Then again, failure could be just around the corner. The other Kingston SSD went from 10 reallocated sectors to nearly 1000 over its last 128TB of writes.

With our health check-up complete, it’s time to see if the aging survivors can keep up with their former, fresher selves. On to the benchmarks!

 

Performance

We benchmarked all the SSDs before we began our endurance experiment, and we’ve gathered more performance data after every 100TB of writes since. It’s important to note that these tests are far from exhaustive. Our in-depth SSD reviews are a much better resource for comparative performance data. What we’re looking for here is how each SSD’s benchmark scores change as the writes add up.
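For a rough sense of the cadence, the sketch below outlines the control loop: keep writing, and every 100TB pause to snapshot SMART data and run benchmarks. It illustrates the schedule only; the real workload is generated by Anvil’s Storage Utilities, and the helper functions here are hypothetical placeholders.

```python
# Sketch of the endurance-test cadence, not the actual Anvil-based setup.
# write_one_loop(), snapshot_smart(), and run_benchmarks() are hypothetical
# placeholders for whatever tooling drives the real experiment.

CHECKPOINT_TB = 100
TB = 10**12

def endurance_run(drive, write_one_loop, snapshot_smart, run_benchmarks):
    total_bytes = 0
    next_checkpoint = CHECKPOINT_TB * TB
    while True:
        total_bytes += write_one_loop(drive)   # fill the drive with data, then delete it
        if total_bytes >= next_checkpoint:
            snapshot_smart(drive)              # log reallocated sectors, wear, errors
            run_benchmarks(drive)              # sequential/random read and write tests
            next_checkpoint += CHECKPOINT_TB * TB
```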

As the experiment progresses, the 840 Pro is becoming somewhat more prone to slower performance in Anvil’s sequential write speed test. Those slowdowns have been relatively minor so far, and they’re still inconsistent. For example, the 840 Pro didn’t skip a beat during its last two benchmarking sessions.

The minor variance in the HyperX’s random read scores doesn’t seem to be related to wear. That drive has otherwise performed consistently, a common trend throughout the experiment.

Unlike our first batch of results, which was obtained on the same system after secure-erasing each drive, the next set comes from the endurance test itself. Anvil’s utility lets us calculate the write speed of each loop that loads the drives with random data. This test runs simultaneously on six drives split between two separate systems (and between 3Gbps SATA ports for the HyperX drives and 6Gbps ones for the others), so the data isn’t useful for apples-to-apples comparisons. However, it does provide a long-term look at how each drive handles this particular write workload.
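The per-loop numbers behind these graphs are simply bytes written divided by elapsed time. A trivial sketch of that calculation, with invented example values, follows.

```python
# Average write speed for one endurance loop: bytes written / elapsed time.
# The example figures below are invented for illustration.

def loop_write_speed_mbps(bytes_written, seconds_elapsed):
    return bytes_written / seconds_elapsed / 1e6   # MB/s, decimal megabytes

example = loop_write_speed_mbps(bytes_written=190 * 10**9, seconds_elapsed=950)
print(f"{example:.0f} MB/s")   # -> 200 MB/s
```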

Again, there’s some evidence that the 840 Pro’s write performance is slowing slightly. While the average write speed per run has oscillated wildly since the beginning, the peaks have been a little lower over the past 300TB.

With the exception of regularly spaced spikes associated with secure-erasing the SSDs before each round of benchmarks, the HyperX has maintained steady write speeds since the beginning. Credit compressible data for the drive’s performance advantage over its incompressible counterpart.

 

Until the last SSD standing

I knew it was possible for some of the SSDs in our endurance experiment to survive 2PB of writes, but I didn’t really expect any of them to make it this far. Two petabytes is a staggering amount of data for consumer-grade drives.

To be fair, our sample size is too limited to draw definitive conclusions about the drives we tested. Flash wear is tied to the physical integrity of individual cells, so it can be influenced by normal semiconductor manufacturing variances. One needs to look no further than the experiment’s twin HyperX units to see that, even within the same family, some SSDs simply have more durable NAND than others.

The results of our experiment do, however, point to some more general conclusions about SSDs as a whole. Although only two drives made it to 2PB, all six wrote hundreds of terabytes without issue, vastly exceeding their official endurance specifications. More importantly, the drives all survived far more writes than most users are likely to generate. Typical consumers shouldn’t worry about exceeding the endurance of modern SSDs.

With 2PB in the bag, our survivors are already on the lonely road to the next milestone. Their ongoing battle reminds me a little of the Iron War, an infamous showdown between Dave Scott and Mark Allen during the 1989 Ironman triathlon world championship. After matching each other through the race’s 2.4-mile swim and 112-mile bike, the two legends ran side-by-side for much of the marathon that followed. Allen ended up pulling ahead in the final miles to win the eight-hour race by less than a minute.

Yeah, I’ve written enough of these endurance updates that I’m now tapping the well of obscure sports references to keep things fresh. But the Ironman is all about endurance, just like this experiment.

Right now, it’s hard to say which of our remaining subjects will be the last SSD standing. As the lone survivor to remain free of serious errors, the 840 Pro is already a victor of sorts. The question is whether it can outlast the last HyperX, which refuses to give up despite stumbling through a couple of uncorrectable errors. The HyperX has write compression on its side and plenty of spare flash in reserve, so the final duel could go on for a while. We’ll be watching.

Comments closed
    • RGB
    • 4 years ago

    Very interesting article. One suggestion for the conclusion of the article. Can you please identify which of the dead SSDs were still able to retrieve information off of the drive. Maybe a test that tries to read as much data off the drive and shows in percentage terms how much data it was able to retrieve right after it died, followed by a two month power off and the same test to read all the data in the dead drive would be helpful to readers. Another power off test for a full year would also be helpful so that readers would have an idea for how long they could access an SSD after it died.

    I’m assuming that “dead” here means that the drive is no longer able to write data reliably, but still can read data without too many problems. I had an SSD die on me a few years ago after less than 2 years use and managed to retrieve everything off of it. But, I don’t know if that would apply to all SSDs – and have no idea if that could be expected after a “torture test” like this…

    Like someone else mentioned, would love to see a similar study done with some of the newer SSDs after this one has completed.

    • AJZA
    • 5 years ago

    I’ve read the previous articles and I’ve created my account just to say thank you for your efforts Geoff. 🙂

    Andrew

    • Sireangelus
    • 5 years ago

    On my laptop, in 6 months my 840 EVO has accumulated over 2.7TB of writes… Can you explain to me how in the world you managed to have 1TB in 2 years?

    • kilkennycat
    • 5 years ago

    Geoff, have you any idea whether the early-failure HyperX used the same source and type of flash silicon as the one that has endured? Hopefully, you have kept the dead body. And if the same source, maybe the manufacturing batch dates of the flash silicon are significantly different?

    With a bug-free controller, the weakest link in any SSD is the manufacturing quality of the flash-silicon.

    • gsteele531
    • 5 years ago

    I confess to not having read the earlier article mentioned, which I will do shortly, but I wanted to muse about a few things while fresh in my mind. One, there is this notion that SSDs have a limited retention life for data that ranges from X to Y, depending on the drive, technology, manufacturer, model, claim, etc. It’s not clear whether that applies to power-off hours, like a battery, or to power on, but lack of read activity hours, somewhat analogous to the refresh cycle of dynamic memory. That is, does the longevity figure apply to files written on an SSD that is installed in a computer that is always on, or only to drives kept on a shelf with no power applied?

    Two, most computers are not constantly rewriting everything. An OS gets installed and pretty much stays in the same locations on the disk throughout its life. Data files may move around as they are extended and then defragmented, but many are written and thereafter only occasionally or perhaps never subsequently read or moved. Other files are in constant flux and rewritten constantly – for example, a pagefile or to a lesser extent, hibernation file. So the drive is not the element to which a “life” attribute should be applied, but rather the allocation unit, depending on the particular type of use to which the allocation unit is put. And certainly, it wouldn’t take long for a swapfile to exceed the rewrite life compared to that of a hiberfile, yet the swapfile is a system component far more critical to operating integrity.

    If you are writing linearly – i.e., to fill the drive and then overwrite starting back at sector zero – then 240 GB takes only about 8,300 overwrites to hit 2 PB written. A few thousand overwrites seems a trivial number in the context of a swapfile. Of course, conventional wisdom is that you relocate the swapfile to a second, magnetic disk, or eliminate the swapfile in favor of more main memory and no swapping. But it’s still an issue that a synthetic linear overwrite life measure does not reflect accurately.

    How does one characterize the life – and more appropriately, the ability to continue to operate – of a device with so complex a variety of usage profiles across the range of files stored thereon? Wouldn’t it make sense to have some sort of monitoring utility built into the driver of an SSD that kept track of the abuse to which each given allocation unit is subjected, perhaps with a proactive reallocation strategy built in, but at least with an impending-failure warning built in? A “low tech” device like a car can warn you when tire pressure is low, oil hasn’t been changed, the car needs maintenance, etc., but an SSD – or for that matter, a hard drive – doesn’t have a similar monitoring and alert system? Seems out of touch with the technology tasked with holding our vital data.

    The conventional reply is “back up your data”; that’s like saying: “keep a spare car around.” Too unsmart by half. Seems worth thinking about and debating.

      • meerkt
      • 5 years ago

      I’m guessing online retention is not a problem. If nothing else, because the drive is free to “refresh” the cells. But I haven’t read anything specific about it, and it would be nice to have official confirmation with more details.

      Regarding static vs. dynamic files, it doesn’t matter because the drive does wear leveling. There’s logical-to-physical mapping that happens internal to the drive. The SSD keeps track of how many times each page/sector was written to, and remaps behind the scenes during normal use. The OS may rewrite the same logical sector a million times, but internally the drive will regularly move it around to different physical places. In HDDs sectors are remapped only when they’re damaged. In SSDs, where there’s not much penalty to non-linear access, remapping happens all the time to spread the writes.

      • sschaem
      • 5 years ago

      The Samsung 840 had unrecoverable errors at 300TB written.

      And we have to take into account that TR uses a test that allows near-perfect wear leveling.
      Most people have data sitting on the drive, so the drive has to work harder at wear leveling, potentially increasing the write cycles by 2x.

      So in real life, you might only have to write 150TB before file errors start appearing on a Samsung 840. Not very reassuring…

      If this test is ever done again, it should be done on a drive that is 80% full.
      This will test not only NAND quality, but also firmware quality in terms of wear leveling.

      The picture might come out completely differently.

    • Razor512
    • 5 years ago

    For me, with a workload of gaming, occasional video editing, and lots of photo editing, I put around 45-50TB of writes per year on my SSD. I consider an SSD to be ready for replacement when it starts to suffer from reallocated sectors at a regular rate, as there is an increased risk for data corruption.

    • RobShaver
    • 5 years ago

    I shoot and edit video for myself and have some five terabytes on various non-SSD hard drives. In the past I had the original video tape as backup, but now all my acquisition is digital. I fill hard drives and put them on the shelf and hope they will spin up in the future if I need something from my archive.

    All this is to say, I wonder if SSDs will retain data more reliably than the spinning disk hard drives? Having worked in the tape industry 25 years ago, I know that even tape becomes unreadable in less than two decades (not that my archive will be worth anything after even one decade :-).

      • w76
      • 5 years ago

      Oh no, you’re in dire need of one of the unlimited backup services out there for those disks. CrashPlan works nicely, but it may not play nice with disconnected disks that it doesn’t see after a period of time. Zoolz is more expensive than I recall for unlimited backup, but they’d never delete the files no matter how long those disks were disconnected. I’d shop around a bit.

      But that’s not exactly what you asked about, just my distrust of all drives, spinning or solid. The word in forums seems to indicate some drives, some times, have data integrity problems after long periods of being powered down. That’s almost unheard of in modern, traditional disk drives, AFAIK.

      • meerkt
      • 5 years ago

      No, I believe it’s quite the opposite. SSDs are for active use, not for archival.

      This 2012 paper says that JEDEC’s JESD47G.01 requires retention of 10 years after 10% P/E cycles (and 1 year at 100%, which I recall having read in some JEDEC standard):
      https://www.usenix.org/conference/fast12/optimizing-nand-flash-based-ssds-retention-relaxation

      But I’m not sure under what conditions, and on a very brief search I couldn’t find more details on that 10 years at 10% wear in the referenced JEDEC standard. See the Google-cached version of it:
      http://www.jedec.org/sites/default/files/docs/JESD47G-01.pdf

      From a 2011 Dell document, page 6: “In MLC and SLC, this can be as low as 3 months and best case can be more than 10 years. The retention is highly dependent on temperature and workload.”
      http://www.dell.com/downloads/global/products/pvaul/en/Solid-State-Drive-FAQ-us.pdf

      A 2010 presentation that includes a table on retention times, based on modeling by Intel. I believe it’s for cells that have used up their P/E cycles. See page 27:
      http://www.jedec.org/sites/default/files/Alvin_Cox%20%5BCompatibility%20Mode%5D_0.pdf

      The trend has been for newer/smaller process geometries to lead to fewer P/E cycles and worse performance (see this 2012 paper, for example: http://cseweb.ucsd.edu/~swanson/papers/FAST2012BleakFlash.pdf), so I believe also worse retention. But it appears the latest MLC generation is rated at 3,000 cycles just like the previous one, so maybe the trend flatlined. I assume TLC cells, with their 1,000 P/E cycles, are also worse at retention.

        • epicmadness
        • 5 years ago

        SSDs can be a superior archive to mechanical drives because there’s no mechanical component that deteriorates over time.

        The SSDs’ problem is that NAND loses charge over time, so reading becomes a blur, but you can refresh it by keeping archives docked in a very low-power bank whose only purpose is to refresh them once or twice a year.

        That can effectively maintain good reliability for decades,
        whereas mechanical drives can not only lose magnetic charge but also suffer from mechanical parts rusting or distorting.

          • meerkt
          • 5 years ago

          If manufacturers published specs on retention and provided official means to check the status and to do a refresh, then indeed it may be an option to consider. But I’m yet to see any official word, or even unofficial, on retention or refreshing.

          I guess no one yet is counting on SSDs for archival or nearline.

    • esterhasz
    • 5 years ago

    The Iron War reference is so great. I only did a 70.3 and will probably never make it to the full distance, but that story really gave me some awesome imagery: two lonely SSDs at 110 degrees on Ali’i Drive…

    • PhilipMcc
    • 5 years ago

    When one drive remains will the test be over? Or will the victor press on, like Voyager or Buzz Lightyear?

      • stdRaichu
      • 5 years ago

      You’re lucky – gold subscribers will get to see the final SSD enter into a swordfight with Clancy Brown in an abandoned silicon fab. Silver subscribers get to see an intimate behind-the-scenes portrayal of Sean Connery settling down with the Neutron GTX, and misers like me will be shown autoplay flash ads of Christopher Lambert selling refurbished OCZs.

        • PhilipMcc
        • 5 years ago

        Excellent reference.

    • DarkMikaru
    • 5 years ago

    Just like many others on the site, I greatly appreciate this test. Great job, TR… can’t thank you enough.

    • travbrad
    • 5 years ago

    I’m really glad you guys did these tests. You have pretty much proven there is no reason to worry about NAND flash durability with current SSDs. They could certainly have other issues, but the durability of the flash isn’t one of them. We all suspected it probably wasn’t an issue, but I don’t think most of us realized just how much of a non-issue it is, and it’s nice to have hard data to back it up.

      • UberGerbil
      • 5 years ago

      A few years ago those of us who insisted it was a non-issue didn’t really have anything except first principles and general knowledge of the technologies involved to back us up when arguing with the SSD skeptics. Now we have a compelling argument based on real-world data. Of course, a sample size of one of each of a small subset of drives on the market is only indicative, not conclusive; but it’s still pretty compelling. And you’re absolutely right that other things can go wrong — but that was true of HDs as well. And we know the fundamental spinning platter tech in HDs does fail eventually; now we know that in normal consumer usage it very likely will fail before the fundamental NAND tech in an SSD does.

      • designerfx
      • 5 years ago

      I like your idea, but it’s not true.

      These are two drives they obtained that are lasting. We can’t even remotely say that this will be true of all SSD’s, even the same models from the same manufacturers.

      It’s great to see them last and shows the durability can be good, yes – but it’s going to take (even longer) to get a bigger picture on durability across large swaths of consumers.

        • Waco
        • 5 years ago

        Except that even the worst drive exceeded its write endurance many times over…

      • cphite
      • 5 years ago

      Agreed… but we have one example. And while I certainly appreciate what TR is doing here, and it’s great to have that one example… at the end of the day, this needs to be replicated before we can draw any solid conclusions.

      If they were to run the same test with the same drives, and get the same results – that would be compelling. If they run the same test and the drives fail in a different order, that’s something completely different.

        • travbrad
        • 5 years ago

        I wouldn’t ever use these results to say x model/brand is better than y model/brand, but rather as a demonstration that SSDs in general do not have issues with flash endurance. Even the drives that failed the quickest were far exceeding the amount of data a normal user (or even most power users) would ever write to them.

        The SSD naysayers weren’t saying specific models didn’t have enough write endurance, they were claiming all SSDs (or at least all MLC/TLC-based drives) would have issues and that traditional spinning disks would outlast them.

      • sschaem
      • 5 years ago

      The Samsung 840 is expected to have a hard failure at ~150TB written… how is that a non-issue?

      • CBHvi7t
      • 5 years ago

      “with current SSDs”

      Yes, but the 1.5PB is a constant of the technology, so there will be a problem if writes increase. This is mostly a testament to the controllers.

    • odizzido
    • 5 years ago

    The fact that the Pro is the only one without data loss is pretty significant, I think. I’m hoping that when it finally fails to write any more, the drive will still be available for use as read-only. Unlike that Intel one bricking itself instead of going read-only on purpose? WTF is with that?

    • Ph.D
    • 5 years ago

    That 840 Pro is ridiculously good. I know it’s also very expensive, which makes the Kingston look more amazing, but still. This makes me start fantasizing about building an all-flash memory system.

    I love that you’re doing this Geoff/Techreport.

      • CeeGee
      • 5 years ago

      Is the Kingston amazing or lucky, same for the Samsung?

      The other Kingston died a long time ago, and the surviving one logged two uncorrectable errors just before reaching 1PB, which is probably the point at which you’d want to replace the drive. That’s still a great performance, of course, but I do wonder how representative these results are with only one example of each drive tested (except the Kingston), and those two drives had considerably different life spans.

      Any new iteration of this test, if it comes, should have at least two or three of each drive tested.

    • Milo Burke
    • 5 years ago

    Maybe next time we can get five of the same drive to start to get a picture on consistency across a single model number. Perhaps the 840 EVO or the MX100?

      • CeeGee
      • 5 years ago

      Three of each of those would be good IMO. Two different manufacturers and MLC versus TLC and three of each to see how consistent they are?

        • AJZA
        • 5 years ago

        And don’t forget the more enterprise-y SLC drives. Oh, and what about PCIe drives?

        Perhaps there should be a team of people on this… 😉

    • sschaem
    • 5 years ago

    2,000TB / 256GB = ~7,812 write cycles

    My understanding is that SLC is rated at 100K write cycles,
    MLC at 5-10K.

    So I would expect the drives still in the run to fail hard at about 3-4 petabytes in this test.

    Edit: TLC seems to be rated at 1 to 5K.

    Another comment: this test, as performed, is not indicative of how a drive will perform in typical use.
    Many drives are 80%+ allocated, and wear leveling amplifies write cycles.

    This test is actually very concerning for people with TLC drives that are near full.

      • Waco
      • 5 years ago

      Drives do internal housekeeping to keep from hammering any one particular part of the drive. At least, they’re supposed to.

      That’s not to say it doesn’t burn up extra write cycles though…

    • kristi_johnny
    • 5 years ago

    It would be interesting to see an endurance test with the Samsung 850 Pro, Intel 730, SanDisk Extreme Pro, and Seagate 600 Pro, just to mention a few.
    Some of those drives are enterprise derivatives. Maybe in the next endurance test.

      • Ethyriel
      • 5 years ago

      I’d love to see this for the Samsung 850 Pro. I’m excited about 3D NAND’s prospects for things like ZFS SLOG, but have no real idea how it actually stacks up.

    • Chrispy_
    • 5 years ago

    It’s a shame Sandforce as a company has been through so many acquisitions and reshuffles over the years – from an independent firm working with OCZ, to acquisitions by LSI, then Avago and now Seagate.

    The lasting performance and consistency of the SF-2281 controller is still pretty impressive to me. It’s just a shame we haven’t seen or heard much about its successor – the SF3700 family. There were some “coming soon” rumblings in June at CES2014, but I’m not aware of an official launch yet.

      • DPete27
      • 5 years ago

      My guess is that nobody wanted to touch Sandforce with a ten foot pole after their gen 2 debacle.

        • Chrispy_
        • 5 years ago

        Exactly, that’s why they seem to have been bounced around so much.

        And yet, with the bugs fixed, the gen-2 products are still competitive with today’s offerings despite being four-year-old products that launched at the start of SATA3.

          • the
          • 5 years ago

          The flip side is that many other SSD manufacturers have had embarrassing bugs too. Intel and Samsung both come to mind.

          Only recently would I argue that the SSD market has finally matured, though it is on the verge of the PCIe transition.

    • UnfriendlyFire
    • 5 years ago

    Someone at one of the SSD design facilities:

    “Why don’t we reduce the over-provisioning on consumer SSDs to save costs since the write endurance is already ridiculously high? And just to be safe, we could claim that we have ‘proprietary methods’ or ‘smart methods’ of extending the SSD’s life…”

    EDIT: On a side note, you should run some of the newer SSDs, such as the M500s.

      • mczak
      • 5 years ago

      The M500 is quite old. I guess, though, that a new test could use some newer ~500GB SSDs – I’d be particularly interested in results from those using 3D NAND, such as the Samsung 850 Pro / EVO (the latter to be released in, I dunno, maybe the next week, since some stores claim to have them in stock now, datasheets are available, etc.). But also cheap “ordinary” ones like the MX100.
      I’ve got a feeling that since the manufacturers now have more experience with this, they could cut the reserved sectors down and still be confident of easily reaching their claimed write endurance, so the results might not be as good. But who knows…

      • AJZA
      • 5 years ago

      ^^ This.

    • deruberhanyok
    • 5 years ago

    I love that this series is still going on, that these drives haven’t given up the ghost yet.

    I realize it has been a bit of a time sink, but I’ll echo Alvin’s question about a new batch of drives. Do you guys plan to do another once these finally fail? I’d be curious to see it tried with a new set picked after the last one of each batch fails.

    • Midnotion
    • 5 years ago

    There’s a typo at the start of the third paragraph: “The last victim fell at 1.2TB, which is barely a speck”, when it should be 1.2PB.

      • Dissonance
      • 5 years ago

      Fixed. I’m still not entirely used to dealing with units that large.

        • gigafinger
        • 5 years ago

        That’s what she said! Sorry, I just had to say it.

          • Waco
          • 5 years ago

          Everything seems so much smaller next to such a large unit…

        • dashbarron
        • 5 years ago

        Seeing as you’re more of an active / outdoors man, I think you should get a tattoo — sponsored by TR — that you survived the 2PB endurance run.

    • AlvinTheNerd
    • 5 years ago

    I know that these reviews are becoming tedious, but thank you for the work you are doing here.

    Will this be a one-off review? Or are you going to accept a new batch of drives and make this a regular thing?

    Also, at the 1PB update, the 840 was sent back to get further analysis. Have you gotten anything back from them?

      • Ninjitsu
      • 5 years ago

      Would there be a point of repeating this experiment, though?

        • Ethyriel
        • 5 years ago

          Yes. Both to gauge improvements between generations, and to increase the number of like drives to get some amount of statistical relevance.

          • Ninjitsu
          • 5 years ago

          Well, all drives lasted a few hundred terabytes, and really that was the point of this whole thing: will ordinary consumers ever wear out a drive?

            Average write rates for system drives seem to be about 10GB per 24 hours of operation, which works out to roughly 3.65TB per year of literally continuous usage. But since you’re unlikely to be using your system for more than 10 hours every day on average, it’ll be more like 4GB per day, or about 1.48TB per year, which I know to be fairly accurate.

            Even if you round it off to 2TB per year of non-continuous operation, you’re still looking at 50 years of writes before you may, even with statistical variance, run into any issues.

            So yeah, they could repeat this experiment, but I doubt we’d gather anything new before another year or two. I’m just wondering if we’re simply asking them to waste their time by starting another batch immediately.

            • absurdity
            • 5 years ago

            If you’re shooting for statistical relevance, you’d have to do a pretty large scale test. Another batch of 5 drives isn’t going to tell us much that we don’t already know.

            • Ninjitsu
            • 5 years ago

            I’m not the one asking for statistical relevance (which would be good, of course, but as you say, it needs a lot more effort).

            But I don’t believe that if 5 out of 5 drives lasted say 500TB of writes at the very least without significant (or any) issues, then testing more from a similar price tier of reputed brands (and from chronologically nearby product lines) will show us many (any?) that can’t even make it to 100TB, which is already more than good enough.

            • cphite
            • 5 years ago

            Yeah, it seems like they’ve pretty much established that for the ordinary consumer, it’s just a non-issue. You aren’t going to wear out these drives as an average user; or even as a hardcore user. Not even close.

              I’m interested in this test mainly because I want to see how these drives survive very non-ordinary usage. For example, if I decide to use SSDs on my database servers, where they’re going to be thrashing 24/7, is it ever going to be an issue relative to an HDD?

            Based on their results, the answer appears to be, resoundingly, “no” but it’d be nice to have more than one test run to confirm that 😀

            • AJZA
            • 5 years ago

            I’m also interested in SSD use in database servers, but more in line with data warehousing than normal OLTP databases.

      • Ochadd
      • 5 years ago

      Yes. Thanks for the review.

    • LaChupacabra
    • 5 years ago

    2 petabytes written to a consumer SSD? That’s gotta be a first.

      • the
      • 5 years ago

      Not sure. It’d be a great question to ask an SSD vendor about their known record holders.

      I know of some servers that have had OCZ Vertex 2s in them since close to when those drives launched in 2010. They’re not writing data at the same rate as this test, but they’ve had several years to make up the write difference. The speed difference of SSDs over spinning disks was great enough to warrant the jump, with the appropriate redundancy in place (RAID 1, etc.) in case of drive failure.

      • sschaem
      • 5 years ago

      Seems to be within the rated limits of MLC NAND.

      But I think it’s getting close… by 3 petabytes, all the remaining drives should start to fail in some manner.
