The SSD Endurance Experiment: 200TB update

Solid-state drives have revolutionized the PC storage industry. Their wicked-fast access times deliver a palpable improvement in overall system responsiveness, and prices have fallen enough to make decent-sized drives affordable for all. There’s just one catch: due to the nature of flash memory, SSDs have limited endurance.

Flash writes erode the structure of the individual memory cells. Eventually, cells degrade enough that entire blocks of them have to be retired. Those bad blocks are replaced by fresh ones pulled from the SSD’s spare area, and business proceeds as usual.

SSDs only have so much spare area at their disposal, though. That area is also used to accelerate performance, so we’ve been curious about what happens to SSDs as wear accumulates. Do drives burn out or do they fade away—and what happens to performance as write cycling takes a toll on the flash? We’re attempting to answer those questions in our SSD Endurance Experiment.

If you’re unfamiliar with the experiment, I recommend reading our introductory article on the subject. Here’s the short version: we have six SSDs from Corsair, Intel, Kingston, and Samsung, and we’re hammering them with writes until they expire. We’re also testing performance at regular intervals.

We last checked in on our subjects after 22TB of writes, which works out to 20GB per day for three years. There was no drama to report at the time. However, we’ve now written 200TB to the drives, and the first cracks are starting to show.

For one of the SSDs, the first signs of weakness appeared when we polled the field after 100TB of writes. The SMART attributes of our Samsung 840 Series 250GB SSD revealed 11 reallocated sectors—bad blocks, in other words. Since the 840 Series’ three-bit TLC NAND has lower write endurance than the two-bit MLC flash typically found in consumer-grade SSDs, we weren’t surprised that it was the first to exhibit failures. All of our other candidates use MLC flash.
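If you want to keep an eye on the same attribute on your own drive, any SMART utility will do. As a rough illustration only, here's how one might poll the reallocated-sector count with smartmontools on Linux; the device path is a placeholder, and attribute IDs and names vary from vendor to vendor, so treat it as a sketch rather than a universal recipe.

    import subprocess

    def reallocated_sectors(device):
        """Return the raw reallocated-sector count (SMART attribute 5), if reported."""
        # smartctl -A prints the drive's SMART attribute table, one attribute per row.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows begin with the attribute ID; ID 5 is Reallocated_Sector_Ct
            # on drives that expose it, and the raw value is the last column.
            if fields and fields[0] == "5":
                return int(fields[-1])
        return None  # the drive doesn't report attribute 5 in this form

    # Example (requires smartmontools and root): print(reallocated_sectors("/dev/sda"))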

Despite those first bad blocks, the 840 Series’ performance and user-accessible capacity remained unchanged. The same was true for the other SSDs, so we set our sights on 200TB.

At the latest milestone, the 840 Series is up to 370 reallocated sectors. It’s not the only one with bad blocks, either. One of our Kingston HyperX 3K 240GB drives—the one we’re testing with incompressible data like the other SSDs—reports four bad blocks. We also have an identical HyperX drive that’s being tested with 46% compressible data, but it remains free of flash failures. That drive has only written 143TB to the flash thanks to its SandForce-powered compression tech, so we’re not surprised that it’s in better shape than its twin.

The Kingston SSDs have 4MB blocks, so the one with reallocated sectors has lost 16MB of total flash capacity. Samsung has yet to answer our questions about the 840 Series’ block size. However, based on information published by AnandTech, that drive appears to have 1.5MB blocks. With 370 of those blocks now retired, the total flash hit works out to 555MB.
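The arithmetic behind those figures is simple enough to check. A quick back-of-the-envelope calculation, using the block sizes quoted above (the 840 Series figure is AnandTech's estimate rather than an official Samsung number):

    # Flash retired so far, per drive: reallocated blocks multiplied by block size.
    drives = {
        "Kingston HyperX 3K": (4,   4.0),   # (reallocated blocks, block size in MB)
        "Samsung 840 Series": (370, 1.5),
    }
    for name, (bad_blocks, block_mb) in drives.items():
        print(f"{name}: {bad_blocks * block_mb:.0f}MB of flash retired")
    # -> Kingston HyperX 3K: 16MB of flash retired
    # -> Samsung 840 Series: 555MB of flash retired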

Because bad blocks are replaced by flash reserves held in each SSD’s spare area, the HyperX and 840 Series drives have the same storage capacities as they did when our testing began. The HyperX offers 224GB in Windows, while the 840 Series serves up 234GB. Both have 256GB of flash onboard, leaving plenty to spare for bad block replacement. We’ve barely dipped into the HyperX’s reserves, and we’ve only consumed a fraction of the 840 Series’ spare area.

According to Samsung’s SSD Magician utility, our 840 Series SSD is in “good” health despite the bad block tally. Hard Disk Sentinel, the software we’re using to capture SMART data, is less optimistic. Here’s how that application rated the health of our contenders at 100 and 200TB:

Drive                              100TB    200TB
Corsair Neutron GTX 240GB           100%     100%
Intel 335 Series 240GB               88%      73%
Kingston HyperX 3K 240GB            100%      98%
Kingston HyperX 3K 240GB (Comp)     100%     100%
Samsung 840 Pro 256GB                78%      51%
Samsung 840 Series 250GB             66%      19%

For what it’s worth, the Samsung utility says our 840 Pro is also in good health. The same goes for Intel’s equivalent app and the 335 Series SSD. Corsair’s software doesn’t have a general health indicator, and Kingston’s utility doesn’t work with our test system’s storage drivers.

The media wear and SSD life attributes we’ve been tracking haven’t budged since testing began, so it’s hard to know which numbers to trust. It’s important to keep things in perspective, though. We’ve written 200TB to the drives—the equivalent of more than 100GB per day for five years—and most of the SSDs are completely intact. Even though a decent-sized portion of the 840 Series’ flash has expired, the drive appears to be far from failure.
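For reference, the daily-write equivalents we keep quoting fall out of simple division. A quick check, using decimal units as drive makers do:

    # Convert a write total and a time span into an average daily write volume.
    def gb_per_day(total_tb, years):
        return total_tb * 1000 / (years * 365)

    print(f"{gb_per_day(22, 3):.0f}GB per day")    # 22TB over three years -> ~20GB/day
    print(f"{gb_per_day(200, 5):.0f}GB per day")   # 200TB over five years -> ~110GB/day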

Now, let’s look at the performance picture.

Performance

We tested the SSDs after 100TB of writes and again after 200TB, and they were pretty much as fast as they were fresh out of the box. The differences between our original scores and the results after 200TB work out to 2% or less:

You may recall that the HyperX drives were much faster in the random read test after 22TB than they were in a pristine state. Those higher scores persisted after 100TB, but after 200TB, performance has returned to the same levels we measured initially.

We can also track how fast Anvil’s endurance benchmark runs on each drive. The endurance test writes a series of files with random sizes until it hits a predefined limit. Those files are then deleted before the next stream of writes begins. Let’s see how the average speed of each loop has changed since testing began.
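Anvil's utility isn't open source, so we can't show its actual code, but the basic fill-and-delete pattern is easy to sketch. The snippet below is only an illustration of that kind of loop; the target directory, file-size range, and per-loop limit are arbitrary placeholders rather than Anvil's real parameters.

    import os, random, time, uuid

    def endurance_loop(target_dir, loop_limit_bytes, min_mb=1, max_mb=64):
        """Write random-sized files until loop_limit_bytes is reached, delete them
        all, and return the loop's average write speed in MB/s."""
        written, paths = 0, []
        start = time.time()
        while written < loop_limit_bytes:
            size = random.randint(min_mb, max_mb) * 1024 * 1024
            path = os.path.join(target_dir, f"fill-{uuid.uuid4().hex}.bin")
            with open(path, "wb") as f:
                f.write(os.urandom(size))   # incompressible data, like most of our test payload
                f.flush()
                os.fsync(f.fileno())        # make sure the writes actually reach the drive
            written += size
            paths.append(path)
        elapsed = time.time() - start
        for path in paths:                  # clear the files before the next loop begins
            os.remove(path)
        return written / (1024 * 1024) / elapsed

    # Example: one 10GB loop against a scratch directory on the drive under test.
    # print(endurance_loop("/mnt/ssd/scratch", 10 * 1024**3))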

First, a disclaimer. These drives are running simultaneously on a mix of 6Gbps and 3Gbps SATA ports connected to a pair of identical test systems. The HyperX drives are connected to 3Gbps ports, while the rest have 6Gbps connectivity. We’re not interested in the relative differences between the SSDs; instead, we’re curious about how each one’s write speed changes over time.

Those spikes in the Kingston and Intel results correspond to the breaks we took at 22 and 100TB. We secure-erase all the SSDs before testing performance at each interval, and that makes the SandForce drives notably faster in their first endurance run of the next wave.
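For anyone curious about the mechanics: a secure erase wipes the drive's flash translation tables and returns it to a factory-fresh state. On Linux, one common way to issue an ATA Secure Erase is through hdparm; the sketch below is illustrative only and not necessarily the method we use (the device path and temporary password are placeholders), it assumes the drive isn't security-frozen, and it will destroy everything on the target drive.

    import subprocess

    def ata_secure_erase(device, password="temppass"):
        """Issue an ATA Secure Erase via hdparm. This erases the entire drive."""
        # Setting a temporary user password enables the ATA security feature set...
        subprocess.run(["hdparm", "--user-master", "u",
                        "--security-set-pass", password, device], check=True)
        # ...and the erase command wipes the drive and clears the password when it finishes.
        subprocess.run(["hdparm", "--user-master", "u",
                        "--security-erase", password, device], check=True)

    # Example (requires root, and the drive must not be frozen or in use):
    # ata_secure_erase("/dev/sdX")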

The Samsung 840 Pro speeds up after each secure erase, too, but its performance has been erratic overall. Although most of the SSDs maintain largely stable write speeds, the 840 Pro spikes frequently. This behavior goes back to our early endurance runs, so it’s likely attributable to garbage collection and internal management routines rather than flash wear. That said, it’s worth noting that the 840 Pro achieved higher peak speeds in earlier runs.

Although the 840 Series doesn’t exhibit the run-to-run variance of its sibling, the TLC drive has slowed somewhat. Since write speeds began their slow decline immediately, the recent rash of bad blocks isn’t to blame.

Interestingly, the Corsair Neutron GTX has actually gotten slightly faster since we kicked off our endurance test. The Neutron’s write speeds have leveled off over the last 50TB, though.

So concludes the latest chapter in our SSD Endurance Experiment. Already, we’ve demonstrated that modern SSDs can absorb an awful lot of writes without suffering ill effects. The Samsung 840 Series is spitting out an increasing number of bad blocks, though. It will be interesting to see what happens over the next 100TB and beyond. Everything we’ve learned thus far suggests we’ll be at this for a while.

Comments closed
    • ronch
    • 6 years ago

    I haven’t been paying attention to these SSD endurance articles until I got my 840 EVO a few days ago. Being new to SSDs I’m a bit overly cautious with mine. It’s one thing to render the drive useless after many years of use but it’s also quite another to have lower performance after just a few months of use, which is what I’m more afraid of. This cautiousness also stems from knowing that what I have here uses the least durable flash tech of all.

    Great article, guys. It’s articles like this that make TR my No. 1 go-to tech site.

    • NovusBogus
    • 6 years ago

    Good stuff, keep it coming. As an SSD-less troglodyte and noted technology paranoiac I’m anxious to see some of my own FUD put to rest.

    • glugglug
    • 6 years ago

    Can we get some clarification on the exact test methodology?

    Specifically, if I understand it right, your stress test loop is:
    1. Fill drive with files.
    2. delete all the files
    3. Goto step 1.

    This is pretty much a **BEST** case scenario, because when you delete **everything at once** from the drive, it is able to TRIM every page entirely, and not have to worry about copying data off those blocks for the next read/modify/write cycle. It’s good numbers to know about the flash itself, but it doesn’t really reveal that much about the controller.

    I would like to see a follow-up with a much more difficult scenario for the drive:

    1. Fill drive to 75% with stuff
    2. Create a new scratch directory.
    3. fill the drive the rest of the way with files in that scratch directory.
    4. Delete files in the scratch directory, but leave the drive 75% full with the files that were there before that dir was created.
    5. goto step 3.

    The number of logical writes per cycle in this scenario is 4x lower, so a theoretically perfect controller will get 4 times as many loops out of it. A really crappy controller might actually get fewer loops out of this even with 4x less writes per loop.

    Separately, for absolute worst case, you can do the complete drive filling and emptying, but without TRIM. But the scenario I described above I think is more interesting because we want to see that the drive handles blocks being partially TRIMed well.

      • indeego
      • 6 years ago

      And neither yours nor TR’s is anywhere close to real-world (i.e. shutdown/restart cycles, repeated TRIMs, and other such stresses).

      • NovusBogus
      • 6 years ago

      I agree about the methods but this is still far more useful knowledge than SSD marketing people telling us the drives are reliable because they say so.

      • HTWingNut
      • 6 years ago

      I did just that. See here: [url<]http://forum.notebookreview.com/solid-state-drives-ssds-flash-storage/710497-samsung-840-120gb-endurance-testing.html[/url<] 😉

        • glugglug
        • 6 years ago

        This is a much better real-world test, exactly what I was looking for, thanks!

        If only there were other drives (especially other controllers) to compare using the same methodology.

        One flaw:

        It makes no sense, intuitively at least, that the read performance should degrade while write performance does not. I believe what you are seeing is that the 100MB default file size used by CrystalMark is fitting in cache (either OS or drive, not sure), and the writes aren’t all reaching the flash before the same clusters are overwritten again with other data. I predict if you try it with a larger setting you will see a very different result.

        Actually, after investigating a bit further, Samsung knew writes directly to the flash would be an issue with the TLC, so it has a huge SLC buffer. On the 120GB model used in your test, the SLC buffer is 3GB, so any crystalmark test file less than 3GB will only be using that buffer for the writes, and spooling them to the TLC in the background during idle. So if you want to see the real steady-state write rate, the file needs to be larger than 3GB.

    • Pez
    • 6 years ago

    Just wanted to say thanks for the update (and the article as a whole).

    Great to see such in-depth testing taking place, excellent job so far!

    • provoko
    • 6 years ago

    Fiiiinally someone dispels the myths around SSD endurance.

      • indeego
      • 6 years ago

      Ahhhh, a newbie scientist appears!

        • Visigoth
        • 6 years ago

        Ahhhh, an old troll makes its voice heard!

          • derFunkenstein
          • 6 years ago

          Ahhh, a Viking emerges from the shadows..

            • NeelyCam
            • 6 years ago

            How about a Viking Troll?

    • WaltC
    • 6 years ago

    I’d love to see similar tests with mechanical hard drives–say 7200rpm/multi-platter/xTB’s in size. Would be interesting to make that comparison instead of the usual speed comparisons. Just to sort of find out how much of a difference, and if any negative difference at all, we’ll see from the “moving parts” drives…;)

      • Chrispy_
      • 6 years ago

      There’s plenty of existing data for that.

      Google’s internal study of several thousand drives showed that mechanical drive failure is basically unrelated to whether the disk is thrashed for its entire life, or sits there mostly dormant.

      Mechanical drives fail because their manufacturing tolerances are so unforgiving, and they don’t like being too hot, too cold, or jiggled around. Write endurance on them is just not a relevant metric for such fragile technology.

    • sschaem
    • 6 years ago

    I think the most data ‘I’ wrote on a disk was ~200TB on a 1.5TB HDD used as a ‘whole house’ DVR (in operation since 2007, still going strong)
    But if this drive ever fails, I would most likely go with a 2TB HDD. Even ‘cheap’ TLC would be cost prohibitive, and as I see here, those TLC SSDs will start to give out sooner than later.

    “Since write speeds began their slow decline immediately, the *recent rash of bad blocks* isn’t to blame.”

    • sschaem
    • 6 years ago

    There is a difference if you erase a 240GB SSD, then fill it, erase it, etc.,
    compared to using an 80% full SSD like anyone would.

    Without HW wear leveling:

    a) You have 200TB spread over all 256GB worth of cells
    b) 200TB over ~50GB == 5 times the wear

    If this test exercised wear leveling that way, I would expect more spikes, lower performance, and/or faster failures.

    “this handy little app includes a dedicated endurance test that fills drives with files of varying sizes before deleting them and starting the process anew”

    What the test should do is “Fill the drive 80%”, then do the test as described.
    I think a very different picture would emerge at all levels.

    • itachi
    • 6 years ago

    So the test still isn’t finished yet? It will be nice to see when things start to really get ugly.

    So far the results look good. 200TB of data written? For a normal user, how many years do you estimate that could be the equivalent of?

    • UnfriendlyFire
    • 6 years ago

    SSD wearing is the least of a consumer’s concern.

    The main issues are:

    1. Firmware. A bad one can result in an SSD being reduced to like 40 MB of capacity, or losing its data randomly.

    2. Power surge. Some SSDs have capacitors that flush their cache, but some don’t, and thus are likely to lose data when the power goes out.

      • meerkt
      • 6 years ago

      2. You can say the same about HDDs.

        • UnfriendlyFire
        • 6 years ago

        That only applies to the 1990s-2000s HDDs when their arms would simply crash onto the platters instead of snapping back to a safe position in the event of a power loss.

        Stop using that IDE 20GB HDD if you’re so worried.

          • travbrad
          • 6 years ago

          Power surges might not be a big problem for HDDs anymore, but they do fail randomly in plenty of other ways. If you get a good solid SSD at least the failure is more predictable, instead of completely random the way most HDDs fail. Obviously you still need to BACKUP everything important no matter what kind of drive you use though.

          • Zyxtomatic
          • 6 years ago

          Whew, glad I’m still using my good ole 20MB MFM drive in my XT.

    • Wirko
    • 6 years ago

    Geoff, have you made any estimations of the write amplification in this test?

    • dale77
    • 6 years ago

    Love the real science you guys are doing. Excellent!

    • kilkennycat
    • 6 years ago

    Geoff,

    How about testing the SSDs under real world conditions and adding the much-requested DATA RETENTION tests. Now that you have a few SSDs showing SMART errors, why not fully load them with known data, power them down for, say, a week and then check for real data errors… The results may be rather amusing….

      • internetsandman
      • 6 years ago

      That actually hadn’t occurred to me but it’s a good point. Data integrity is arguably more important for server workloads than outright performance degradation

    • Freon
    • 6 years ago

    Ouch. A big damning for the Samsung 840 drives. *hugs my 830*

      • UberGerbil
      • 6 years ago

      Not really. The 830 should be compared to the 840 Pro, but even the regular 840 doesn’t look all that bad here. Maybe not your pick if you were going to stick it in a high-write enterprise environment, but who would be doing that anyway?

        • Freon
        • 6 years ago

        Oh, I meant BOTH the pro and non-pro. They’re both at the bottom of the pack and the Pro looks schizo. I know there is more to it than that one average write speed, but it’s still disappointing.

      • cobalt
      • 6 years ago

      It seems like you may be drawing an awfully broad conclusion from one chart. This isn’t a raw performance test, it’s an endurance test (as mentioned in the title). In fact, to quote the article, “We’re not interested in the relative differences between the SSDs; instead, we’re curious about how each one’s write speed changes over time.” And in that case, to quote again: “The differences between our original scores and the results after 200TB work out to 2% or less” across sequential and random read and write tests.

      Sounds to me like they’re all performing fantastically. The only negative for the 840 non-pro is that the TLC is going to degrade faster than MLC, but we knew that, and the actual endurance is still extremely respectable.

      If you want an actual review that includes performance, I think most (maybe all) of these drives have been reviewed here at some point.

    • Geonerd
    • 6 years ago

    Wouldn’t mind seeing all of the SMART parameters. Maybe post some simple screenshots?

    A cohesive table, listing the drives, their attributes, data written so far, advertised lifespan, etc. (as found on the Xtremesystems SSD Endurance Test that you guys strangely refuse to acknowledge) would also make this project a lot more useful for the average user.

      • indeego
      • 6 years ago

      To their credit, that “test” is a mishmash of different people’s experiences on a very difficult-to-follow forum page (that appears dead?). Few updates lately, too. If you can follow it, my hat goes off to you.

    • house
    • 6 years ago

    Thank you TechReport for putting this SSD endurance issue to rest. Too many folks out there are fearmongering about how dangerous TLC NAND was for data. For the average user or even power user this is a non-issue.

    • Aliasundercover
    • 6 years ago

    It would be interesting to know how long these worn but still working drives retain data. When removed from power do they forget in a day? A week? A month? A year?

    I suggest at some point before genuinely destroying these drives with excess writes change the test to data retention. Stick them on a shelf and run checksum reads at expanding intervals. It would be more interesting to know they still hold data after 200TB than exactly how many TBs let the white smoke out.

    Anyway, thanks for these excellent articles. It is good to have a source on this question who isn’t out to sell me.

    • liquidsquid
    • 6 years ago

    The real question is if they can withstand operating in a puddle of cat pee for more than 5 days. I know a brand new mechanical drive did not make it. Traces and wires corroded right off the controller board! I mean come on, can’t these things withstand anything?!?!?

    Kidding aside,
    That’s what I get for leaving it running outside of the case on top of a litter-pan sized box with old old cats around. Nice and warm, a gentle vibration… angry cat from new dog in the house. Revenge of the hard disk! My reaction was not my proudest moment.

      • DarkMikaru
      • 6 years ago

      LMAO… are you serious? That is classic!!! My cat used to tag the outside of my PC case until it neutered him. Problem solved.

        • stdRaichu
        • 6 years ago

        So did you take the fan grilles off for that purpose or was it just serendipity? 🙂

      • indeego
      • 6 years ago

      When I first started as a junior sysadmin I got lazy and put a user’s drive in a foam container within the drive cage in their system, not bothering to screw it in. I figured, they wouldn’t know the difference. A year later it was dead, I opened it up and it had melted through the foam and was covered in foam goop that was probably overheating it. Shortcuts usually end up burning you in the end…

      The user’s data was fine. Thankfully I didn’t hold such an attitude about backing them up…

        • Mr. Eco
        • 6 years ago

        Oh, man. That was rough 🙂

      • WaltC
      • 6 years ago

      Ever seen a cow tinkle? I did, once. It was like a pressurized water nozzle at full power and capacity…;) Amazing things, cows.

    • oldDummy
    • 6 years ago

    Great stuff.
    This endurance is surprising to me.
    SSD’s current downside is price right now.
    This will be corrected in the fullness of time.
    Thanks again.

    • Dazrin
    • 6 years ago

    The chart on page 1 shows the Intel drive at 73% health according to Hard Disk Sentinel, yet you don’t mention it at all regarding bad blocks. Any idea why it is reported so low with no apparent failures yet?

      • Dissonance
      • 6 years ago

      Nope! HD Sentinel reports an even lower health rating for the 840 Pro, which also has no bad blocks. The fact that each drive has a different mix of SMART attributes probably has something to do with it. That’s why we’re tracking a bunch of different variables as part of the experiment.

    • Flatworm
    • 6 years ago

    Interesting. Y’know, HDDs break down, too, probably more frequently, evidently.

    I am sorry, and a little bit surprised, that you didn’t include OCZ Vertex drives, mainly for selfish reasons because that’s what I’ve got. I’ve had a Vertex 3 for two years now in a very heavily used desktop PC and the health is still showing 100.0%.

    I know you can’t include everything, but the OCZ drives are pretty popular.

      • internetsandman
      • 6 years ago

      I had a 60GB Vertex 2 a while back that I used for my OS and Steam games, and within a few months it started boot cycling. I couldn’t figure out what was wrong with my system, and I’m usually pretty good with troubleshooting; I actually had to take it to NCIX and pay for a diagnostics test, and they figured out the drive was toast. Given OCZ’s old reputation for RAM and that I certainly wasn’t the only OCZ customer who had a problem with their flash products, I’m staying away from them now.

        • DarkMikaru
        • 6 years ago

        LOL, I totally forgot they used to make RAM. Reminds me of my first 16GB DDR3 kit I purchased years ago. You remember, when 8GB was 20 bucks! Ahh.. the memories. That first kit was OCZ and it failed in spectacular fashion. I remember RMA’ing it and Newegg didn’t even want it back. They just refunded the money and basically told me to keep ’em. We also had a 60GB Vertex 2 at work that lasted all of 2 months in one of our lawyers’ machines. Rough…

        • f0d
        • 6 years ago

        i have a 60gb vertex 2 also ever since it was released but i havnt ever had a single problem with it, after i upgraded to an 840evo i ended up using it for a cache drive (so it gets written to a lot still without issue)

        in no way at all am i saying that they are good drives – with so many people having issues with them there is obviously something wrong with them, im just suprised that i havnt had a single issue at all with mine yet and it still has 100% life left on it after all these years (around 3 years i think i purchased it in 2010)

        why wont it die.!!!!

      • Chrispy_
      • 6 years ago

      I stopped using OCZ when things like the Samsung 830 and Intel 330-series started undercutting them because of sales. For the most part they were cheap, fast drives using known decent silicon (Sandforce + Micron NAND, the same basic building blocks as some of the massively reliable Intel drives).

      I don’t have anything against Sandforce drives, but whatever OCZ do to the firmware or hardware is beyond doubt. I have bought several OCZ drives over the years, including twelve Agility drives, ranging from 1st gen Indilinx to 3rd gen Sandforce. The last to fail worked its way into a laptop that failed to detect the SSD about two weeks ago, but that signifies the twelfth failure out of twelve. Every time I received a replacement SSD from OCZ under warranty, I dumped it unopened on eBay.

      OCZ may have cleaned up their act now, but drives sold over the last half decade will continue to fail and sully their already low-credibility name. I’d be amazed to see OCZ still in the SSD business in a few years from now.

        • nerdrage
        • 6 years ago

        Friends don’t let friends OCZ.

    • Random Guy
    • 6 years ago

    Long time lurker here. I have been reading TechReport for at least 5 years, and I decided to register solely to thank TR for articles such as this which not only address those burning “what-if” questions that any computer enthusiast has about their purchases, but at the same time reveals underlying technological behaviours that are not covered by anyone else on the web.

    Massive thumbs up guys! You are doing an amazing job 🙂

      • Chrispy_
      • 6 years ago

      Heh, I was waiting for an “etc” article to come along so I could ask when we were going to get an update on the SSD endurance testing.

      I’m looking forward to finding out just how soon the 840 fails (and I’m assuming it’ll be the first because it has flash with the lowest rated endurance, but I could easily be wrong if the Kingston is showing bad blocks on MLC already)

    • anotherengineer
    • 6 years ago

    Interesting.

    I would have expected better results from the Samsung 840 Pro since it’s supposed to be toggle nand.

    The Corsair Neutron GTX is Toshiba 19nm toggle NAND, so I wonder if it is the quality of the flash chips or the fact that the GTX has 16GB reserved for bad blocks that is the reason for its endurance, while the Sammy Pro has none?

      • Stickmansam
      • 6 years ago

      Also consider the fact that they use different controllers (LAMD vs Samsung), which can account for differences in garbage collection, affecting endurance.

      • MadManOriginal
      • 6 years ago

      I thought toggle NAND just added sequential bandwidth. How does it affect endurance?

    • tipoo
    • 6 years ago

    I always have to chuckle when people get HDDs over SSDs because they worry about SSD reliability. This is more data than the average user would write if they kept the drive for 100 years (and stayed alive that long). 20GB a day for 27 years!

    They did have issues early on, but in 2013 picking a good controller isn’t that hard. I also understand it’s scary because a hard drive may generally give warnings of dying while an SSD may not, but I’ve seen my fair share of hard drives stop powering up completely all of a sudden as well. And either one should be backed the !@#$ up.

      • albundy
      • 6 years ago

      even longer than 27 years for those that are paranoid enough to move all writes to an hdd! i just cant believe MLC degrades that fast!

      • mcnabney
      • 6 years ago

      It depends what you do. I have been doing about 100-150GB/day in video editing. Probably 50TB a year if I keep it up (unlikely) since this is not a long term project. So the results do interest me. And yes, I know that my usage is highly atypical.
      I would note that I am NOT using an SSD. I am using four 640GB Black drives, striped, backing up to server nightly. I get nearly SSD speeds when load/saving and the drives were purchased pre-flood.

        • tipoo
        • 6 years ago

        This is true, and I did qualify what I said with “average”. I’m a heavy user but nothing multimedia related, I probably don’t even hit 10GB/day.

      • ChronoReverse
      • 6 years ago

      I’m honestly more scared for the data on my HDD’s than on my SSD’s.

      And the HDD’s are in RAID-1 combined with a third drive for File History for maximum redundancy overkill.

        • DarkMikaru
        • 6 years ago

        Good Job! I can’t emphasize enough to my clients, friends, family… back that *&^% up!!! Amazing how people think “I moved all my files to my external drive” and call it a day. Then I bring up the question, well..what happens if/when it dies or is stolen? What then? To which they quickly nod and reply.. “yeah, I see your point”. Didn’t mean to rattle on… Just love how savvy you guys all are. Ever since my 60GB WD drive died on me during a Win XP build several years ago… I learned my lesson!

        With all that said, the only thing that worries me about my SSD’s is that if they do happen to die how does one recover the data? I’ve always considered them to be like USB flash drives, whatever was there is totally gone forever. So, as reliable as my Samsung 830 / 840 drives are I still back them up. You just never know.

          • Diplomacy42
          • 6 years ago

          [quote<]what happens if/when it dies or is stolen? What then? [/quote<] buy another external drive and backup to that?

            • DarkMikaru
            • 6 years ago

            Right… perhaps I needed to elaborate on this a bit. What people would tell me they were doing is moving ALL their files to the external and they’d call that a backup. Not copying to external…moving to external. So if that drive dies or is stolen they are sol. I apologize if I did not make my point clear. So yeah, that isn’t a backup at all. So what I also had to start telling people was “a backup is if you have your file/s in more than one place”. That simple explanation usually worked.

            • indeego
            • 6 years ago

            Backup to another source 25+ miles away in a secure facility. Test restores as time permits. A backup ain’t a backup without data validation of integrity and the ability to access it.

            Use encryption YOU control, end-to-end.

            Backup your backup logs and access Logs as well.

            Backup seems simple, but there are TONS of gotchas. Early Windows had path limitations where one application in Windows could write past 256 characters, and backup applications might not be able to back it up, or they might be able to back up but not restore.

            Don’t go cheap on your backup applications. Backup of databases should be done with Native API calls, not third party.
            Follow vendor recommendations.

            If you use a hardware device, make sure you are up to date on drivers, firmware, configuration guidelines. In most cases it’s better to use older, [tested] technology versus cutting edge. By older I mean 6 months to 1 year, minimum. Let everyone else test the bugs for you.

            Oh, I could go on and on and on. Being a sysadmin has made me paranoid about data integrity beyond your wildest dreams.

            • moose17145
            • 6 years ago

            90% of what you mentioned about how to do a proper backup, though correct, is completely unnecessary for most people. What you are describing is server backup for a company. Most people do not need an off-site backup 25 miles away. They do not need to have it encrypted. They do not need backups of logs, let alone backups of the backups.

            Most people just need to learn how to store their data on two separate devices is all and they would be fine. I work at a small computer repair shop that does data recovery / backup as a 79 – 99 dollar service (the price is determined on how difficult it is to recover data and what we need to do to recover it. If we cannot recover any data the customer is not charged anything). And all that most people have that they do not wanna lose are family photos and maybe a few MS Word / excel docs and stuff like that. I tell them they need to get an external hard drive and make sure that ANYTHING they do not wanna lose needs to be on at least two devices. Be it their laptop and the external HDD, or two different external hard drives. That way their entire house needs to burn to the ground for them to lose anything, at which point… honestly… most people have more pressing concerns than “OMG MAH PICTURES!”

            For a business that relies on that data to survive, then yes… you are correct in that you need to take your back up strategy to a new level… but just little family and personal stuff… a simple copying of data to an external hard drive is enough.

            Personally… I don’t bother to back up most of my desktop. I just have too much non-critical data. And any documents I do not wanna lose are either on two hard drives, or I have them saved in my gmail account somewhere in my inbox (old school style emailing stuff to yourself ftw). That or I have it saved on my old 512MB flash drive. Seriously, all the stuff I really do not wanna lose can fit onto a 512MB flash drive. Documents and simple things like that just do not take up that much space.

            My terabytes of movies, hundreds of GB of saved music, my saves for my computer games… if I lost it all… it would suck… but I would move on. I do not consider any of that stuff to be “mission critical”.

            EDIT: Grammar fail.

      • Freon
      • 6 years ago

      I think it has much more to do with weird failures of certain SSDs (*cough* OCZ *cough*) and controllers together with new/different = scary.

    • meerkt
    • 6 years ago

    Please test retention time once the base experiment is over! I’m much more worried about retention than endurance; endurance is at least more or less controllable, predictable, visible.

    Can you expound on “the media wear and SSD life attributes we’ve been tracking haven’t budged since testing began”? No change to SMART attributes 0xE9 and 0xE7? That sounds odd.

      • Dissonance
      • 6 years ago

      It would be great if every drive had the same array of SMART attributes reporting in the same way, but that’s definitely not the case. On the Neutron, attribute 231 is labelled temperature, and there’s no 233. For the HyperX drives, 233 covers compressed writes, which we’re tracking, but 231, which is tagged SSD Life Left, has been at 0 since the beginning. There’s no attribute 231 on the 335 series, and 233, the media wear indicator, has been at 0 since the start. Then there are the Samsung SSDs, which don’t have attribute 231 or 233.

        • meerkt
        • 6 years ago

        Ah, that’s sad. I really don’t understand why the relevant standard bodies don’t standardize SMART attributes. They’ve had, what, 15-20 years to get their act together?

        Re Intel’s 0xE9, something unclear. According to [url<]http://www.anandtech.com/show/6388/intel-ssd-335-240gb-review/2[/url<] the "normalized" value changes, even though the raw stays 0. I'm guessing SMART returns both values and they don't have to be related.

          • Chrispy_
          • 6 years ago

          Doesn’t HDTune give you a decoded readout of the SMART attributes in plain English?

            • Kougar
            • 6 years ago

            If it can understand them. Keep in mind HD Tune can’t even properly read the temperature sensor on some SSD models and lists it incorrectly as 128°C. Given that SMART attributes and their meanings differ between brands, I wouldn’t always trust a generic program to get them correct.

    • xii
    • 6 years ago

    This is a very interesting experiment. I know it is hard to do long-term reliability or endurance tests – we are all mortal after all and specific hardware editions are even more so – but I wish more sites would put effort in doing tests like these.

    It’s also good to see that the results are pretty encouraging so far.

    • Melvar
    • 6 years ago

    Just to put this in perspective: at the beginning of April this year, I bought a new system that has a 120GB 840 and a 1TB hard disk. I have made a point of installing only a few games on the SSD (most go on the HDD), but all the other programs I use are on the 840. My temp files and my swap file are on the 840.

    I’m looking at Samsung Magician right now, and it says I’ve written a total of 0.73TB to the drive so far. At this rate, in 10 years I will have put about 10% as much wear on the drive as this test has so far put on its victims. I’m feeling pretty good about this drive’s long term prospects right now.

      • indeego
      • 6 years ago

      Not to mention users typically put the most data (OS, programs, user data) on their devices at the beginning of their use pattern, and things rarely change thereafter.

        • Melvar
        • 6 years ago

        Over the long term that stuff will only be a fraction of the data written. Temp files, swap files and software/driver/OS updates will probably account for most of the wear for a typical user, and those things keep happening for the lifetime of the system.

    • FireGryphon
    • 6 years ago

    In other words, unless you have these drives in a server, you’ll barely see any degradation in performance? 200 TB of data is more than any regular user would see over many years, right?

      • tanker27
      • 6 years ago

      That’s my guess too. A ‘normal’ user shouldn’t worry too much about SSD degradation.

        • moose17145
        • 6 years ago

        Very true. That being said, those 10k and 15k RPM drives still serve a niche market. I see too many people say they don’t even understand why they still make those high-RPM drives when SSDs are faster. Well… I still know people who do HD video rendering and stuff like that as their day jobs, and they chew through a couple hundred GB of writes a day on average. For people like those, a couple of high-RPM drives in a RAID 0 used as a swap file makes lots of sense. But that is far from a typical usage scenario.

        One person I knew had 4 Raptors in RAID 0 trying to eke out maximum reads and writes. A reliable array being RAID 0? Oh god no. But it was only used as a swap partition while he was working on a given project. Everything was also properly backed up every night to his server, and he had an off-site backup in a safety deposit box at a bank that he updated once a week. So at most he would lose a day’s worth of work even if the entire machine failed. A week’s worth if his house burned down or something like that. Gotta admit… that was a wickedly fast array. Had some very impressive read and write speeds.

    • spuppy
    • 6 years ago

    Samsungs are performing just as I would expect. I wish some Crucial m4 drives could be included, as I think they will have similar issues.

    Also would be curious how Kingston’s V300 Sandforce+Toggle flash drives would compare – they perform just like any Sandforce MLC drive, so endurance should be interesting to see.

      • Visigoth
      • 6 years ago

      I agree. Some Crucial M500’s would be awesome to include as well.

        • DarkMikaru
        • 6 years ago

          My Crucial C300 64GB is still toiling away in my old Acer 5536 laptop I gave to my mom almost 2yrs ago now. I wonder how it’s faring lol.

          • ChronoReverse
          • 6 years ago

          My 256GB Crucial C300 is still working great as my system drive.

    • PainIs4ThaWeak1
    • 6 years ago

    Hmm… Makes me question my recent 840 EVO purchase.

      • f0d
      • 6 years ago

      why?
      where im from they are much cheaper than the competition and theres no way you would write as much as these tests in normal usage over its lifetime

      if anything im impressed with my 840 evo purchase because if it performs similarly to the 840 (in these tests) i wont be writing that much for many years (maybe 2TB a year means i have around 100 years of usage)

      i havnt even written a TB to my 840 yet and i have had it since it came out

      • derFunkenstein
      • 6 years ago

      There’s no 840EVO in this group.

        • indeego
        • 6 years ago

        Not only are people making conclusions about their own drives based on this test alone, but they are making it about drives not even in the test! I wonder if someone warned us about this in the comments way back when?

          • derFunkenstein
          • 6 years ago

          not to mention the low sample size – this is a fascinating experiment but to do it on a large scale is very costly – both in terms of money and time. I think in some cases you can generally guess how your drive is affected. There is a Sandforce drive in the batch so other sandforce drives MIGHT be affected. But I wouldn’t let these results sway me, especially now that they’ve all written 200TiB of data.

    • Faiakes
    • 6 years ago

    Great article Geoff.

    Wasn’t the Vector out when you started it?

    The Neutron is looking very nice indeed.

    • Barbas
    • 6 years ago

    Looking at that chart makes me wonder about the 840’s performance.

    It comes at about the same price as the Kingston HyperX but it seems to be 50% slower.

    Does anyone have some experience/benchmarks comparing the two drives?

      • Spunjji
      • 6 years ago

      Do you mean the 840 or the 840 Pro? The 840 isn’t meant to (and doesn’t) compete with the HyperX in performance terms. Here in the UK it’s an awful lot cheaper, though it has just been superseded by the much more nimble 840 Evo.

        • Barbas
        • 6 years ago

        Well here they (840 EVO and HyperX 3K) have the same price and the 840 Pro is priced at around 50% more.

        I guess that the HyperX is a really good value here because this chart suggests it’s consistently faster than the 840 Pro

          • CampinCarl
          • 6 years ago

          840 EVO != 840 Series.

            • Chrispy_
            • 6 years ago

            Yeah, the 840 is the bargain basement model – usually significantly cheaper than the Evo or the HyperX.

      • Chrispy_
      • 6 years ago

      The graph only gives write speeds, which is of little value when sequential writing of gigabytes at a time is pretty rare for a desktop SSD. Most of the SSD benefit comes from reads and small writes. About the only common use for sequential write speeds on an SSD is when transferring from a mechanical drive to the SSD, and as long as the SSD can sustain more than about 100MB/s it’s going to be plenty fast enough not to matter.

      The reason the Samsung 840 does so well in review (and day-to-day desktop use) is because it’s one of the fastest drives for reads, and the IOPS are amazing for smaller file sizes. 4K reads are almost twice as fast as the HyperX, which makes up for the slower sequential writes and then some.

    • moose17145
    • 6 years ago

    First page – ” That drive has only written 143GB to the flash thanks to its SandForce-powered compression tech, so we’re not surprised that it’s in better shape than its twin.”

    I suspect you mean it had written 143TB, not GB.

    Very nice article! It’s interesting watching this little endurance experiment unfold!

      • ClickClick5
      • 6 years ago

      No. 143GB. The compression is top notch. 😉
