The SSD Endurance Experiment: Only two remain after 1.5PB

You won’t believe how much data can be written to modern SSDs. No, seriously. Our ongoing SSD Endurance Experiment has demonstrated that some consumer-grade drives can withstand over a petabyte of writes before burning out. That’s a hyperbole-worthy total for a class of products typically rated to survive only a few hundred terabytes at most.

Our experiment began with the Corsair Neutron GTX 240GB, Intel 335 Series 240GB, Samsung 840 Series 250GB, and Samsung 840 Pro 256GB, plus two Kingston HyperX 3K 240GB drives. They all surpassed their endurance specifications, but the 335 Series, 840 Series, and one of the HyperX drives failed to reach the petabyte mark. The remainder pressed on toward 1.5PB, and two of them made it relatively unscathed. That journey claimed one more victim, though—and you won’t believe which one.

Seriously, you won’t. But I’ll stop now.

To celebrate the latest milestone, we’ve checked the health of the survivors, put them through another data retention test, and compiled performance results from the last 500TB. We’ve also taken a closer look at the last throes of our latest casualty.

If you’re unfamiliar with our endurance experiment, this introductory article is recommended reading. It provides far more details on our subjects, methods, and test rigs than we’ll revisit today. Here are the basics: SSDs are based on NAND flash memory with limited endurance, so we’re writing an unrelenting stream of data to a stack of drives to see what happens. We pause every 100TB to collect health and performance data, which we then turn into stunningly beautiful graphs. Ahem.

Understanding NAND’s limited lifespan requires some familiarity with how NAND works. This non-volatile memory stores data by trapping electrons inside minuscule cells built with process geometries as small as 16 nm. The cells are walled off by an insulating oxide layer, but applying voltage causes electrons to tunnel through that barrier. Electrons are drawn into the cell when data is written and out of it when data is erased.

The catch—and there always is one—is that the tunneling process erodes the insulator’s ability to hold electrons within the cell. Stray electrons also get caught in the oxide layer, generating a baseline negative charge that narrows the voltage range available to represent data. The narrower that range gets, the more difficult it becomes to write reliably. Cells eventually wear to the point that they’re no longer viable, after which they’re retired and replaced with spare flash from the SSD’s overprovisioned area.

Since NAND wear is tied to the voltage range used to define data, it’s highly sensitive to the bit density of the cells. Three-bit TLC NAND must differentiate between eight possible values within that limited range, while its two-bit MLC counterpart only has to contend with four values. TLC-based SSDs typically have lower endurance as a result.
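
For a rough sense of why bit density matters, here’s a small Python sketch of our own back-of-envelope reasoning (not anything from the drive makers). It assumes a hypothetical 1V voltage window and shows how the margin between adjacent states shrinks as bits per cell increase:

```python
# Back-of-envelope: more bits per cell means exponentially more voltage states
# packed into the same window, leaving less margin between adjacent states.
# The 1V window is a made-up figure purely for illustration.
def margin_per_state(bits_per_cell, window_volts=1.0):
    states = 2 ** bits_per_cell         # SLC = 2, MLC = 4, TLC = 8
    return window_volts / (states - 1)  # nominal spacing between adjacent states

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    print(f"{name}: {2**bits} states, ~{margin_per_state(bits):.2f} V between states")
```

Wear eats into an already-thin margin, which is part of why TLC drives tend to tap out sooner.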

As we’ve learned in the experiment thus far, flash wear causes SSDs to perish in different ways. The Intel 335 Series is designed to check out voluntarily after a predetermined number of writes. That drive dutifully bricked itself after 750TB, even though its flash was mostly intact at the time. The first HyperX failed a little earlier, at 728TB, under much different conditions. It suffered a rash of reallocated sectors, programming failures, and erase failures before its ultimate demise.

Counter-intuitively, the TLC-based Samsung 840 Series outlasted those MLC casualties to write over 900TB before failing suddenly. But its reallocated sectors started piling up after just a few hundred terabytes of writes, confirming TLC’s more fragile nature. The 840 Series also suffered hundreds of uncorrectable errors split between an initial spate at 300TB and a second accumulation near the end of the road.

So, what about the latest death?

Much to our surprise, the Neutron GTX failed next. It had logged only three reallocated sectors through 1.1PB of writes, but SMART warnings appeared soon after, cautioning that the raw read error rate had exceeded the acceptable threshold. The drive still made it to 1.2PB and through our usual round of performance benchmarks. However, its SMART attributes showed a huge spike in reallocated sectors:

Over the last 100TB, the Neutron compensated for over 3400 sector failures. And that was it. When we readied the SSDs for the next leg, our test rig refused to boot with the Neutron connected. The same thing happened with a couple of other machines, and hot-plugging the drive into a running system didn’t help. Although the Neutron was detected, the Windows disk manager stalled when we tried to access it.

Despite the early warnings of impending doom, the Neutron’s exit didn’t go entirely by the book. The drive is supposed to keep writing until its flash reserves are used up, after which it should slip into a persistent read-only state to preserve user data. As far as we can tell, our sample never made it to read-only mode. It was partitioned and loaded with 10GB of data before the power cycle that rendered the drive unresponsive, and that partition and data remain inaccessible.

We’ve asked Corsair to clarify the Neutron GTX’s sector size and how much of the overprovisioned area is available to replace retired flash. Those details should give us a better sense of whether the drive ran out of spare NAND or was struck down by something else. For what it’s worth, the other SMART attributes suggest the Neutron may have had some flash in reserve.

The SMART data has two values for reallocated sectors: one that counts up from zero and another that ticks down from 256. The latter still hadn’t bottomed out after 1.2PB, and neither had the life-left estimate. Hmmm.

Although the graph shows the raw read error rate plummeting toward the end, the depiction isn’t entirely accurate. That attribute was already at its lowest value after 1.108PB of writes, which is when we noticed the first SMART error. We may need to grab SMART info more regularly in future endurance tests.
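
For anyone who wants finer-grained monitoring than our every-100TB checkpoints, a simple polling loop around smartctl will do the trick. This is a minimal sketch, assuming smartmontools is installed and the drive lives at /dev/sda; the attribute labels below are the generic smartmontools names, and vendors sometimes report theirs differently.

```python
# Minimal sketch: log a few SMART attributes at regular intervals via smartctl.
# Assumes smartmontools is installed and the target SSD is /dev/sda.
import datetime
import subprocess
import time

WATCHED = ("Reallocated_Sector_Ct", "Raw_Read_Error_Rate")  # labels vary by vendor

def snapshot(device="/dev/sda"):
    report = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True).stdout
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    for line in report.splitlines():
        if any(attr in line for attr in WATCHED):
            print(stamp, line.strip())

while True:
    snapshot()
    time.sleep(3600)  # hourly; tighten the interval during heavy write runs
```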

Now that we’ve tended to the dead, it’s time to check in on the living…

 

Two keep on truckin’

The Samsung 840 Pro and second Kingston HyperX 3K both reached 1.5PB with little drama. They also completed another unpowered retention test. After writing 1.5PB, the drives were loaded with a 200GB test file and then left unplugged for over a week. Both subsequently passed the MD5 hash check we use to verify data integrity.

A second hash check is integrated into Anvil’s Storage Utilities, the application we use to write data to the drives. This test is configured to verify a smaller 720MB file after roughly every terabyte of writes, and there haven’t been any inconsistencies yet.
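
The retention check itself is nothing exotic; it boils down to hashing the file before the drive is unplugged and comparing the result afterward. Here’s a minimal sketch of the idea in Python (the file path is hypothetical, and our actual runs rely on Anvil’s built-in check plus a standalone MD5 tool):

```python
# Illustrative retention check: hash the test file before the unpowered rest,
# then hash it again afterward and compare. The path is hypothetical.
import hashlib

def md5sum(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

before = md5sum("E:/retention/testfile.bin")  # recorded before the week unplugged
# ...drive sits unpowered for a week or more...
after = md5sum("E:/retention/testfile.bin")
print("data intact" if before == after else "retention failure")
```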

Let’s examine the survivors in greater detail, starting with the 840 Pro, which continues to accumulate reallocated sectors.

The burn rate has slowed slightly since the initial uptick, but over 3400 sectors have been compromised so far. At 1.5MB each, that’s about 5GB of flash lost to cell degradation.

According to the SMART data, less than 40% of the flash reserves have been consumed. There’s still plenty on tap to cover future failures.
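
As a sanity check, the numbers hang together. Assuming the 840 Pro pairs 256GiB of raw NAND with 256GB of user-visible capacity (our assumption, not a published Samsung figure), the overprovisioned area works out to roughly 19GB, and the retired flash amounts to about a quarter of it:

```python
# Back-of-envelope check on the 840 Pro's retired flash versus its spare area.
# The raw-NAND capacity below is our assumption, not a published Samsung spec.
sectors_retired = 3400
sector_size_mb = 1.5
flash_lost_gb = sectors_retired * sector_size_mb / 1000   # ~5.1 GB

raw_nand_gb = 256 * 2**30 / 1e9    # 256 GiB expressed in GB, ~274.9 GB (assumed)
user_capacity_gb = 256
spare_area_gb = raw_nand_gb - user_capacity_gb             # ~18.9 GB

print(f"flash lost: ~{flash_lost_gb:.1f} GB")
print(f"spare area consumed: ~{flash_lost_gb / spare_area_gb:.0%}")
```

That works out to roughly 27% of the spare area, which squares with the SMART attribute’s claim that less than 40% of the reserves are gone.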

The wear leveling count is supposed to be related to drive health, but it ran aground after just 500TB, and the 840 Pro has been fine through a petabyte of writes since. The health indicator in Samsung’s SSD Magician utility software has given the drive a “good” rating since the beginning of the experiment, which seems like a more accurate assessment. Then again, the same utility gave the 840 Series a clean bill of health even after the drive had suffered hundreds of uncorrectable errors.

Practical limits restrict our experiment to one example of each SSD, but we have two HyperX 3K drives. One was tested like all the others, with randomized data that can’t be compressed by the DuraWrite mojo in SandForce controllers. The other has been getting a lighter diet based on the Anvil utility’s 46% incompressible setting. You can probably guess which one is still alive.

We can measure the effectiveness of SandForce’s compression scheme by tracking host writes, which come from the system, and compressed writes, which are committed to the NAND. The host writes are identical for both HyperX configs, but the compressed writes are not.

The HyperX 3K writes much less to the flash with the partially compressible payload. 1.5PB of host writes translates to only 1.07PB of compressed writes. On the other setup, compressed writes are slightly higher than host writes due to write amplification.

(The sequential transfers that dominate the endurance test have relatively low amplification, at least compared to the more random workloads typical of client systems. DuraWrite’s effectiveness in this particular scenario isn’t necessarily indicative of how the scheme will perform with other workloads.)
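
One way to put the compression gap in perspective is as a write amplification factor: the data committed to the NAND divided by the data sent by the host. Plugging in the running totals above (a quick illustration, nothing more):

```python
# Write amplification factor = NAND (compressed) writes / host writes.
# Totals below are the running figures reported by the drives at 1.5PB.
def waf(nand_writes_pb, host_writes_pb):
    return nand_writes_pb / host_writes_pb

print(f"HyperX, 46% incompressible data: WAF ~{waf(1.07, 1.5):.2f}")  # ~0.71
# The all-incompressible config sits slightly above 1.0, since DuraWrite can't
# shrink random data and the usual amplification overhead takes over.
```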

If compression were the only factor in the remaining HyperX’s survival, the drive would have hit the wall around 1.1PB, when it reached the same volume of compressed writes that crippled its twin. The built-in health indicator even suggested the end was coming around that mark:

But the flash in this particular SSD has proven surprisingly resilient. Just 12 sectors have been reallocated through 1.5PB, a far cry from the thousands accrued by the other HyperX.

Our sample size isn’t large enough to confirm which result is the outlier. Chip-to-chip variance is common in semiconductor manufacturing, though. Some dies are simply better than others, whether it’s the clock speeds that CPUs can attain or the write/erase cycles that NAND can survive.

The two HyperX SSDs arrived at the same time, and we used the highly scientific “eeny, meeny, miny, moe” method to determine which one got the partly compressible workload. If that drive also had a few cherry chips under the hood, it got lucky twice—and should probably buy a lottery ticket.

Digging deeper into the SMART data reveals that the surviving HyperX hasn’t been entirely flawless.

We didn’t notice it at the time, but the drive reported two uncorrectable errors between 900TB and 1PB of writes. Those episodes occurred during the same span as the first two reallocated sectors, though we can’t know for sure if the two are related. In any case, uncorrectable errors are very serious. They can corrupt data, crash applications, and even bring down entire systems.

The program and erase failures aren’t as critical. In those cases, the drive should be able to move on to another sector without risking the user’s data. Performance may suffer, but only momentarily.

Speaking of performance, the next page explores whether any of the SSDs lost a step over the last stretch.

 

Performance

We benchmarked all the SSDs before we began our endurance experiment, and we’ve gathered more performance data after every 100TB of writes since. It’s important to note that these tests are far from exhaustive. Our in-depth SSD reviews are a much better resource for comparative performance data. What we’re looking for here is how each SSD’s benchmark scores change as the writes add up.

With only a few exceptions, all the SSDs have performed consistently in these tests. The Neutron GTX stumbled with sequential reads a second time before it died, but it didn’t skip a beat elsewhere.

Unlike our first batch of results, which was obtained on the same system after secure-erasing each drive, the next set comes from the endurance test itself. Anvil’s utility lets us calculate the write speed of each loop that loads the drives with random data. This test runs simultaneously on six drives split between two separate systems (and between 3Gbps SATA ports for the HyperX drives and 6Gbps ones for the others), so the data isn’t useful for apples-to-apples comparisons. However, it does provide a long-term look at how each drive handles this particular write workload.

The 840 Pro and compressed HyperX react differently to Anvil’s stream of randomized files, but their behavior hasn’t changed over time. The Samsung’s write speeds continue to oscillate from one run to the next. The Kingston’s writes remain smooth and steady apart from the brief spikes associated with the secure-erase performed before each benchmarking round.

Unlike the other drives, the Neutron GTX actually sped up slightly as the writes piled up. Its average speed fell off a cliff toward the end, though. Here’s a tighter crop of its final steps versus the other failures:

All the SSDs slowed before their deaths, but none as dramatically as the Neutron. No wonder it couldn’t carry on.

 

On to 2PB

Our SSD Endurance Experiment has claimed many victims since it began over a year ago. We’re not done yet, but we’ve already learned some valuable lessons. For example, modern SSDs appear to have more than enough endurance for typical client workloads. All six of our subjects wrote hundreds of terabytes without issue, which is far more data than even most power users will need to write during the useful lives of their drives.

When SSDs exceed their write/erase tolerance, failure can manifest in different ways. Most of our casualties at least provided warnings of their imminent demise, including the latest victim, Corsair’s Neutron GTX. That drive didn’t enter the read-only mode it was supposed to assume when running out of steam, but it still wrote 1.2 petabytes without generating any errors. Pretty impressive.

Among the survivors, the Samsung 840 Pro seems to be on track to outlast its rivals. The drive has reallocated thousands of sectors to circumvent worn-out NAND, but its SMART attributes suggest substantial reserves are still available. There have been no errors thus far.

In a sense, the 840 Pro has already won. The only other survivor, the Kingston HyperX that we’re testing with compressible data, has suffered a couple of uncorrectable errors to date. Those hiccups haven’t been enough to cause a complete failure, but they have left a black mark on the drive’s permanent record. That record already had an asterisk to denote the fact that SandForce’s DuraWrite compression scheme has dramatically reduced the amount of data actually written to the flash.

The 840 Pro and remaining HyperX are already on their way to 1.6PB. We’re committed to killing them both, and we may have to write a lot more data to achieve that goal. Stay tuned for the next chapter.

Comments closed
    • CBHvi7t
    • 5 years ago

    Did I understand this right?
    They wrote >5500 times the volume of the drive to the drive?
    That is not surprising at all, but a testament to the working of the wear leveling.
    The current generation will allow 3000 writes or the very same 1.5PB for a 500GB disk.

    • phufighter
    • 5 years ago

    Hmm.. well. this Intel 330 240GB should last me a lot longer than expected. Thanks! I’ve had it for 1.5 years and it seems I’ve only written 4.16TB (host writes), and 7773 NAND writes. And those numbers are probably only that high because of the whole-disk encryption used. woot. Thanks for this article!

    • yuhong
    • 5 years ago

    Read only mode should be standard practice for SSDs and any failure to go to read only mode in tests like this should be considered a bug.

    • Mopar63
    • 5 years ago

    "That record already had an asterisk to denote the fact that SandForce's DuraWrite compression scheme has dramatically reduced the amount of data actually written to the flash."

    Why does this need an asterisk? It is a feature of the drive and should be taken into account. The drive was built to do this and should be used in any testing because it is the way it is designed to be used.

    • AllanJones
    • 5 years ago

    ” This non-volatile memory stores data by trapping electrons inside miniscule cells built ” (p.1)

    You mean ‘minuscule’, not ‘miniscule’.

    • cphite
    • 5 years ago

    This is an awesome test…

    The one thing that I wonder is would the results hold up with a larger sample size? We’ve all had HDD’s from reputable manufacturers die earlier than expected, and also some that lasted longer than expected.

    It would be interesting (I think) to run another comparison, maybe between the last model standing, the first one to fail, and one somewhere around the middle; just to see if the results are similar.

    Note: Before the flame throwers are unleashed, I completely get that they limited the sample size to one of each for practical reasons, and that the point of the article was not to compare the drives but to get a more general view of SSD reliability; I’m not criticizing, just pondering 😀

    • Anovoca
    • 5 years ago

    When you set out on this particular quest for knowledge, did it ever occur to you, that by the time you collected the data about these particular drives, their NAND technology would be outdated and most of the drives would be almost 2 generations old?

      • Convert
      • 5 years ago

      No, someone doing this kind of thing for ~14 years would never stop to think that technology would evolve over time.

      Clearly there’s nothing to be gleaned from this testing.

    • crsh1976
    • 5 years ago

    Thanks for this awesome test and recount, it makes my Monday morning all that much better.

    Now I have this weird craving to torture-test a SSD to see what I can squeeze out of it, thankfully I’m at work and nowhere near an innocent, unsuspecting SSD.

    • crystall
    • 5 years ago

    Excellent article as usual; if only there had been a good old Intel 320 in the mix as well as a Marvell-based drive…

    • Inverter
    • 5 years ago

    These results are generally encouraging, however I would mostly apply them to writing large files. Would it be possible to do a similar test with many small files, and many meta-data updates, in order to provoke high write-amplification factors due to the difference in filesystem and drive sector/block sizes?

    • stmok
    • 5 years ago

    Samsung 840 Pro: "One shall stand, one shall fall."

    • UnfriendlyFire
    • 5 years ago

    I’m looking forward to the next SSD Endurance test with SSDs such as the Crucial MX100 1TB. And Kingston’s and PNY’s “different components, same model number” SSDs.

    You should also test enterprise-grade SSDs, including the PCI-E types.

    I do believe that in the future, SSD manufacturers would start to reduce over-provisioning in their consumer SSDs in hopes that the SSDs don’t wear out within their warranty period.

    I would not be surprised if some of the SSD companies are paying attention to these SSD Endurance tests.

    • Roo5ter
    • 5 years ago

    This is my absolute favorite series of articles ever written by TR.

      • Buzzard44
      • 5 years ago

      I agree completely!

      So this seems to suggest about 5000 write/erase cycles in the best case for 2x nm flash. Pretty nice!

      Checking my Vertex 4, I’ve only got ~35 write/erase cycles on it after nearly two years. Sweeeeet.

    • Bobs_Your_Uncle
    • 5 years ago

    CONFESSION: I’ve followed this project with interest (as have no doubt many, MANY others), though I will confess to not having hung upon every word you have crafted with devotion & love. Neither have I read each & every related comment.

    That soul cleansing out of the way, should you ever be tempted back into the belly of this torturous & tortuous beast, I’d really be interested in seeing how current drives from OCZ fare, now that they are residing within Toshiba’s crib.

    Many people are quite willing to recount their pain & suffering at the hands of OCZ SSDs & while I don’t dismiss or deny the truths they tell, or the validity of basis for such sincerely held scorn, I’d wager that the OCZ we see today is NOT your father’s OCZ of yesteryear. While I suspect that OCZ generally performs within acceptable statistical parity of it’s market rivals, an exercise such as this might serve to exorcise a few nagging demons.

    So there’s that & the fact that the Barefoot controller has promised a slightly different approach to things, which may prove interesting as well.

    FWIW – Thanks!

      • south side sammy
      • 5 years ago

      I suggested the Toshiba thing a couple of times. but we have to remember i guess…….. lots of these drives, one way or another, are Toshiba underneath.

    • Forge
    • 5 years ago

    I did a couple of years of research before I picked up the Samsung 830 Pros that live in my desktop and main laptop. I’m glad to see that the same factors I based my choice on are still holding firm, that makes me feel that I’ll go with a pair of 512GB Samsung 850 Pros in the future, when I upgrade. Might even have 860 Pros, by then.

      • beck2448
      • 5 years ago

      Im on 512 840 PRO and no issues at all. Not even 1% of the usage in TR’s test after two years so should be good for quite a while, lol.

        • south side sammy
        • 5 years ago

        as great as some of these drives are, in a few short years it’ll be like comparing a 6600agp to a titan.

    • not@home
    • 5 years ago

    I love this test. It is by far the most interesting thing TR has done/is doing (well, most interesting to me at least). I wish TR had the resources to do a wider variety of drives and multiples (maybe two or three) of each drive model. I especially would like to see how Samsung’s new V-nand in both MLC and TLC fares. I guess that is just wishful thinking though. It seems like a lot of work and a lot of electricity to do all this. Thanks for everything you have done.

    • Rza79
    • 5 years ago

    Still, respect to Corsair for lasting so long. It’s a good performance from a small company that needs to buy everything from outside companies (unlike Samsung).

    • meerkt
    • 5 years ago

    Thanks for those extra few 100TBs. 🙂

    I have to bring up the obligatory “retention checking is more interesting now”, but I had an epiphany: retention testing in higher temperatures! At least, that’s what *they* seem to do in accelerated aging tests. Maybe for the next batch of drives? (After preliminary testing to see what happens to the drives after they cool off, then what temperature is a good compromise.)

      • willmore
      • 5 years ago

      The right way to do it, as you point out, is ALT. The problem is you have to know the coefficients of the Arrhenius equation which governs the event. Without knowing that, you can’t say that X degrees C for A hours is equivalent to Y degrees C for B hours.

      Guess who knows that data and isn’t likely to share it? 🙁

        • meerkt
        • 5 years ago

        Even without being accurate perhaps it gives some indication.

        But maybe there is data available. A random hit:
        http://www.sandisk.com/media/65675/LDE_White_Paper.pdf

        It's a 2008 Sandisk whitepaper, not focused on retention, but it suggests (page 21): "1 year data retention @ 25C may be tested at: 1. Stress temperature of 85C require bake time of ~13 hours"

          • willmore
          • 5 years ago

          Different geometry and you’re talking a different set of equations. 🙁

          But, it might be interesting to see if the drives comply to the 1 year spec you mentioned.

    • willmore
    • 5 years ago

    First off, I love the idea of this test and I respect the amount of work that has gone into the testing and the writing of these articles–of which I have read every word.

    But:
    http://www.smbc-comics.com/comics/20140917.png

    Also FLASH retention specs are generally in terms of years of storage and not one week.

    I ran a test on EEPROMs years and years ago. The idea was to determine if the 100K write cycles value spec'ed by the manufacturer was accurate or meaningful. So I wrote some code to write and verify the EEPROM constantly and to log the results. I let it run over a weekend. Coming back the next week, I found that it hadn't failed until 850K cycles! "Yay!", I thought, "The cycle life is way higher than 100K!"

    Oh, wait, the 100K value says "yeah, you can do 100K cycles and then unplug the chip for 25 years and store it at 40C *and the data will survive to 1x10-(some big number) bit errors*". My test was 'until it basically becomes DRAM'. Also, much cooler in my office than 40C.

      • meerkt
      • 5 years ago

      SSDs aren’t SLC NOR flash, but MLC (or worse) NAND. They aren’t specced to retain data, unpowered, decades after all P/E cycles are used. The relevant JEDEC standard just calls for 1 year retention at 30°C for consumer drives at the end of their writable life. I recall reading a few years ago something that suggested 10 years retention for new cells, but that was for much older drives that had better retention properties.

        • willmore
        • 5 years ago

        Excellent, thanks. So, one year or 52 weeks.

    • Phartindust
    • 5 years ago

    Thanks for doing this marathon torture test for us. It’s great to see that there is a very good chance of these drives lasting a good long time. I think it’s time to put to rest reliability worries, as it should get even better as we move forward. And as a nice bonus, performance doesn’t really drop over time. Add in that a 256GB drive can be had for around $150, and it seems time that Hdds under 1TB fade to black.

    • HERETIC
    • 5 years ago

    Perhaps we can combine this with Friday’s conspiracy theory-
    SSD’S WILL go into read only mode before dying???????????????

    • SS4
    • 5 years ago

    I’m so glad i got a really good deal on a 840 Pro and that its the SSD i picked. I had a feeling it was above the rest 😛

      • south side sammy
      • 5 years ago

      i didn’t go back to the beginning of the testing but I don’t remember there being ten of each drive. only running one drive against 1 single drive from each competitor doesn’t tell everything.

    • chuckula
    • 5 years ago

    THERE CAN BE ONLY ONE!!

    • south side sammy
    • 5 years ago

    might as well say it’s over………. I was rooting for the Corsair..

    how’s about trying again with more modern drives. see if the “el cheapos” can really stand the punishment. ( mx/adata/lx/etc. )

    • James296
    • 5 years ago

    the entire time I was reading the article, the “Eye of the Tiger” by Survivor, was playing in the background.

    • sparkman
    • 5 years ago

    Right about now I’m feeling smug about my choice of the Samsung 840 Pro in my main build.

    • Wirko
    • 5 years ago

    Corsair, as it seems, had a brilliant idea: when the drive is close to failure, the write speed starts to fall until it reaches an ISP-like bandwidth limit of 256 kbit/s. The idea was implemented but of course could not be tested in advance, and of course didn’t work out as expected.

    • Wirko
    • 5 years ago

    So if I eat eight burgers every day, I may lose my “smart” attributes over time, then become ignorant of "fat" tables (http://www.health.com/health/gallery/thumbnails/0,,20393387,00.html). In the end, after six tons of burgers or so, I may even find myself dead. A scary thought.

    • Ochadd
    • 5 years ago

    Thanks for continued testing. The insanity of pushing consumer storage this hard is entertaining and well worth the read. I expected the Neutron GTX to be the last one standing.

    I’d love to know how a rating or warranty like 80 terabytes written comes to pass if they have drives going 700-1500 terabytes regularly. I mean what kind of failure range/rate would they be seeing out of a 500 drive test?

      • Waco
      • 5 years ago

      This is what I’m wondering as well. The highest rated consumer drives are barely getting to 100 TB…yet the flash and controllers seem capable of MANY times that…

        • willmore
        • 5 years ago

        Because this test doesn’t measure how long it can hold that data unpowered. FLASH specs are normally *decades* long after the write cycles have been completed. *one week* powered off doesn’t test that. It’s apples vs oranges.

          • meerkt
          • 5 years ago

          SSDs aren’t specced to last decades after all P/E cycles are used. More details in a reply to your other comment. 🙂

            • willmore
            • 5 years ago

            Thank you for that! One year, but that’s well more than one week. 🙂

          • Waco
          • 5 years ago

          If flash is rated for 1000 P/E cycles at the minimum, retention is factored into that (as far as I know).

          That would imply at the very least 240 TB of writes to any 240 GB drive assuming the worst flash and a not very good controller…

            • willmore
            • 5 years ago

            Actually, isn’t that assuming a perfect controller?

            • Waco
            • 5 years ago

            Well, it’s assuming halfway decent (with streaming writes and no compression). Modern controllers do tend to hit < 1 write amplification on standard workloads.

            • willmore
            • 5 years ago

            Are you using write amplification differently than I am used to? A 1x write amplification means it gets stored and *never moved*. If it gets copied at least once, that would be 2x write amplification.

            Are we using different meanings?

            • Waco
            • 5 years ago

            Yes.

            Write amplification is when you write X amount and X +/- Y% actually gets written to flash. Modern controllers tend to be pretty good at keeping Y at near zero or even into the negatives depending on the workload.

            IE: If I write 1 TB, and 1 TB hits the flash, that’s 1X write amplification. If only 500 GB hits the flash, it’s a .5X write amplification. 1.5 TB hits…you get the idea.

            Even the junkiest flash is rated for 1000 P/E cycles (at least that’s the lowest I’ve seen) and even with a 2X write amplification on a crappy controller / terrible workload we’re looking at 120 TB of endurance on a 240 GB drive…still 50% higher than the warranty on most drives.

      • odizzido
      • 5 years ago

      It probably has something to do with the fact that most SSDs don’t die because their flash wore out but because of something else. If they put an 80TB limit on the warranty, then after that they don’t have to offer replacements.

        • gamoniac
        • 5 years ago

        Make sense on both counts.

    • Rakhmaninov3
    • 5 years ago

    They’re like Maytag washers before Mexico happened

    • TwoEars
    • 5 years ago

    Kingston is my #1 most trusted RAM manufacturer, even more so than Corsair.

    And samsung is there… naturally…

    I’m not at all surprised.

    Could we turn this into horse racing or something? I’d make a killing! 🙂

    • derFunkenstein
    • 5 years ago

    This is hilarious. All of these survived long enough to be succeeded by new models, at least, and at writes/day many many times what would be normal.

    • superjawes
    • 5 years ago

    "We're committed to killing them both"

    You monster!

      • TwoEars
      • 5 years ago

      Would you have preferred that they were put up for sale on ebay as “lightly used”?

      😀

        • cmrcmk
        • 5 years ago

        You could legitimately list this as, “Only owned for a year, this drive was in a system I barely touched and still runs reliably.”

          • TwoEars
          • 5 years ago

          How about “proven itself very reliable!”?

            • Flying Fox
            • 5 years ago

            With an asterisk and a foot note of “Past results do not reflect future performance.”

      • Ninjitsu
      • 5 years ago

      For Science!

    • tanker27
    • 5 years ago

    Never in a million years would I have thought that a Samsung SSD would be in the running as last man standing.

    • ronch
    • 5 years ago

    As much fun as it is to see these two last men standing, it doesn’t really mean much for folks who plan to buy an SSD. Chances are they’ll end up with a different model or (wait for it…) the same model with switcheroos. A manufacturer’s quality may vary from model to model and even from unit to unit. Nonetheless, I guess it’s good to know one doesn’t need to worry about endurance when it comes to SSDs. I did at first when I got my EVO 250GB, but I thought, what the heck.

      • Thrashdog
      • 5 years ago

      At this point, the exercise is fairly academic. Flash wear concerns were one of the biggest sources of FUD when SSDs first came on the scene, but *every* drive in the test has already demonstrated that it can far outlast its own technological obsolescence in anything but a write-heavy datacenter scenario.

      For the next one, how about SSDs for offline archival storage? Fill one up, let it sit in a closet for ten years, and see if it’s still readable in 2024!

        • ronch
        • 5 years ago

        And then there’s the question of how future SSDs built on smaller process nodes will stand up to the same endurance tests.

    • Pez
    • 5 years ago

    An excellent update, thank you for your continuing hard work on this one 🙂

    • Chrispy_
    • 5 years ago

    If there’s an 840 Pro sweepstake, I’m putting ten bucks on 3.3PB

      • Duct Tape Dude
      • 5 years ago

      I’ll say 2.1PB just to keep it interesting.

        • shank15217
        • 5 years ago

        1.999PB SAMSUNG
        1.9999PB with 5 uncorrectable errors KINGSTON

          • Kurkotain
          • 5 years ago

          Samsung dies at 2.6 PB
          Kingston death spiral at 2.1 PB dies at 2.2 PB

      • f0d
      • 5 years ago

      ill go a little lower and say exactly 3.0PB

    • Shambles
    • 5 years ago

    I feel like an ancient Roman in the Colosseum watching a bloodbath as an entire field of combatants are laying on the ground dead with only two, bloody and bruised, remaining. I imagine the two drives struggling to kill each other despite the fact that they can barely stand on their feet.

      • albundy
      • 5 years ago

      for sparta then!

        • Dposcorp
        • 5 years ago

        This is much better:
        http://www.youtube.com/watch?v=FsqJFIJ5lLs

    • Concupiscence
    • 5 years ago

    The last page should be titled, “On to 2PB.” Just letting you know!

      • Dissonance
      • 5 years ago

      Way too many instances of PB, TB, and GB to keep straight in this article! Fixed.

        • Captain Ned
        • 5 years ago

        Yeah, keeping the units straight in this write-up requires the fingers and toes of at least 2 humans and a pack of pets.
