The SSD Endurance Experiment: Casualties on the way to a petabyte

I feel for the subjects of our SSD Endurance Experiment. They didn’t volunteer for this life. These consumer-grade drives could have ended up in a corporate desktop or grandma’s laptop or even an enthusiast’s PC. They could have spent their days saving spreadsheets and caching Internet files and occasionally making space for new Steam downloads. Instead, they ended up in our labs, on the receiving end of a torturous torrent of writes designed to kill them.

Talk about a rough life.

We started with six SSDs: the Corsair Neutron GTX 240GB, Intel 335 Series 240GB, Samsung 840 Series 250GB, Samsung 840 Pro 256GB, and two Kingston HyperX 3K 240GB. They all exceeded their endurance specifications early on, successfully writing hundreds of terabytes without issue. That’s a heck of a lot of data, and certainly more than most folks will write in the lifetimes of their drives.

The last time we checked in, the SSDs had just passed the 600TB mark. They were all functional, but the 840 Series was burning through its TLC cells at a steady pace, and even some of the MLC drives were starting to show cracks. We’ve now written over a petabyte, and only half of the SSDs remain. Three drives failed at different points—and in different ways—before reaching the 1PB milestone. We’ve performed autopsies on the casualties and our usual battery of tests on the survivors, and there is much to report.

If you haven’t been following along with our endurance experiment, this introductory article is a good starting point. It spends far more time detailing our test methods and system configurations than the brief primer we’ll provide here.

The premise is straightforward. Flash memory has limited endurance, so we’re writing data to a stack of SSDs to see how much they can take. We’re checking health and performance at regular intervals, and we’re not going to stop until all the drives are dead.
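
To make the workload concrete, here’s a minimal Python sketch of that kind of write loop. It’s illustrative only: the experiment itself uses Anvil’s Storage Utilities rather than custom code, and the file path and sizes below are hypothetical placeholders.

```python
import os

# Illustrative endurance-style write loop. The real experiment uses Anvil's
# Storage Utilities; the path and sizes here are hypothetical placeholders.
TARGET = "testfile.bin"          # file on the SSD under test (assumption)
CHUNK = 4 * 1024 * 1024          # 4MB of fresh random data per write
LOOP_BYTES = 10 * 1024**3        # rewrite 10GB per pass

def write_pass():
    written = 0
    with open(TARGET, "wb") as f:
        while written < LOOP_BYTES:
            f.write(os.urandom(CHUNK))   # incompressible payload
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())             # make sure the data actually reaches the drive
    return written

total = 0
while True:                              # in the experiment, this only stops when the drive dies
    total += write_pass()
    print(f"{total / 1024**4:.3f} TB written so far")
```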

The root cause of NAND’s limited endurance is a little complicated. Flash stores information by trapping electrons inside nanoscale cells; the associated voltage defines the data. The “tunneling” process used to move electrons in and out of the cell is destructive, not only eroding the physical structure of the cell wall, but also causing stray electrons to become stuck in it. These errant electrons impart a negative charge of their own, reducing the range of voltages available to represent data. The narrower that range becomes, the more difficult it is for SSDs to perform writes and to verify their validity.

Electron build-up is especially problematic at higher bit densities. MLC NAND needs to differentiate between four possible values within the flash’s shrinking voltage window, but TLC NAND must track twice as many. It’s more sensitive to normal flash wear as a result, which is why our 840 Series has been burning through more of its flash than the MLC-based drives in the experiment.
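
For a sense of scale, here’s the arithmetic on voltage states per cell. The 6V window used below is purely an illustrative assumption; real flash windows vary by process and shrink with wear.

```python
# States per cell is 2^bits; the usable voltage window is divided among them.
# The 6.0V figure is an illustrative assumption, not a measured value.
WINDOW_V = 6.0
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = 2 ** bits
    print(f"{name}: {states} voltage states, ~{WINDOW_V / states:.2f} V between levels")
```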

Continued write cycling eventually causes cells to become unreliable, at which point those cells are retired and replaced by flash harvested from the drive’s “spare area.” This reserve of fresh flash ensures the SSD maintains its user-accessible capacity even if cell failures incapacitate some of the NAND. Eventually, of course, that reserve runs out, and the drive begins to fail.
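
The toy simulation below illustrates the general mechanism: blocks fail at an accelerating rate as write cycling continues, the spare area absorbs the casualties, and the drive is done once the reserve runs dry. Every number in it is invented for illustration and describes no particular drive.

```python
import random

# Toy model of spare-area depletion. All parameters are made up; they do not
# describe any drive in the experiment.
USER_BLOCKS = 10_000        # blocks backing the user-visible capacity
SPARE_BLOCKS = 700          # reserve blocks available for reallocation
FAIL_RATE_PER_TB = 1e-6     # per-block failure probability per TB written (assumption)

spare = SPARE_BLOCKS
tb_written = 0
while spare > 0:
    tb_written += 1
    p = FAIL_RATE_PER_TB * tb_written   # wear accelerates as cycling continues
    failures = sum(1 for _ in range(USER_BLOCKS) if random.random() < p)
    spare -= failures

print(f"Spare area exhausted after roughly {tb_written} TB of writes")
```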

Now that we’ve laid the groundwork, it’s time to inspect the casualties. The first failures came as something of a surprise, even though failure was always the inevitable endpoint of this experiment. When we checked on our lab rats after 700TB of writes, we found SMART messages warning that the Intel 335 Series and one of the Kingston HyperX 3K units were at risk of failure. Both drives are based on MLC NAND, so we didn’t expect them to falter before our lone TLC contender.

Although the failure-prone drives were fully functional at 700TB, neither one made it to 800TB. The HyperX 3K expired at 728TB, while the 335 Series croaked at 750TB. We’ll deal with the Intel first, since its demise was a little more straightforward.

The 335 Series’ flash was almost entirely intact when the SMART warning hit. Only one reallocated sector had been logged up until that point, and it appeared way back at the 300TB mark, so it didn’t inspire the warning. Instead, the slow decline of the media wearout indicator (MWI) was responsible.

This SMART attribute starts at 100 and decreases as the NAND’s rated write tolerance is exhausted. It’s completely unaffected by the number of reallocated sectors, and it’s been ticking down steadily since the experiment began. The remaining life estimate in Intel’s SSD Toolbox utility is based on the MWI, and so is the general health assessment offered by HD Sentinel, the third-party tool we’ve been using to grab raw SMART data.
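
For readers who want to poke at the same raw attributes themselves, smartmontools exposes them on the command line. Here’s a rough sketch that shells out to smartctl and pulls out the wear-related attribute; attribute names vary by vendor (Intel labels it Media_Wearout_Indicator, Samsung uses Wear_Leveling_Count), so treat the matching list as an assumption rather than a complete one, and the device node as hypothetical.

```python
import subprocess

# Sketch: dump SMART attributes with smartmontools' smartctl and pick out a
# wear-related one. Attribute names differ by vendor; this list is not
# exhaustive. DEVICE is a hypothetical device node.
DEVICE = "/dev/sda"
WEAR_NAMES = {"Media_Wearout_Indicator", "Wear_Leveling_Count", "SSD_Life_Left"}

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    fields = line.split()
    # smartctl -A rows: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
    if len(fields) >= 10 and fields[1] in WEAR_NAMES:
        print(f"Attribute {fields[0]} ({fields[1]}): "
              f"normalized value {fields[3]}, raw {fields[9]}")
```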

Our journey to 700TB drove the MWI all the way down to one, which is supposed to put the 335 Series in a read-only, “logical disable” state. The flash is deemed unreliable at this point, and in typically conservative fashion, Intel doesn’t want to perform a write that isn’t guaranteed. The SMART readout might have truncated a decimal place, though, because we were still able to run our usual performance tests and kick off the next 100TB of writes.

The 335 Series was fine until about 50TB into that run, when write errors started appearing in Anvil’s Storage Utilities, the application tasked with flooding the SSDs with writes. The Anvil app actually froze, though we were able to load it again and extract the performance log stored on the drive. We’ll take a closer look at those results in a moment.

Oddly, the 335 Series wouldn’t return SMART information after the Anvil write errors appeared. The attributes were inaccessible in both third-party tools and Intel’s own utility, which indicated that the SMART feature was disabled. After a reboot, the SSD disappeared completely from the Intel software. It was still detected by the storage driver, but only as an inaccessible, 0GB SATA device.

According to Intel, this end-of-life behavior generally matches what’s supposed to happen. The write errors suggest the 335 Series had entered read-only mode. When the power is cycled in this state, a sort of self-destruct mechanism is triggered, rendering the drive unresponsive. Intel really doesn’t want its client SSDs to be used after the flash has exceeded its lifetime spec. The firm’s enterprise drives are designed to remain in logical disable mode after the MWI bottoms out, regardless of whether the power is cycled. Those server-focused SSDs will still brick themselves if data integrity can’t be verified, though.

SMART functionality is supposed to persist in logical disable mode, so it’s unclear what happened to our test subject there. Intel says attempting writes in the read-only state could cause problems, so the fact that Anvil kept trying to push data onto the drive may have been a factor.

All things considered, the 335 Series died in a reasonably graceful, predictable manner. SMART warnings popped up long before write errors occurred, providing plenty of time—and additional write headroom—for users to prepare. On the next page, we’ll explore what happened to the HyperX 3K.

 

More casualties

There are actually two Kingston HyperX 3K SSDs in the experiment. One is being tested like all the others, with 100% incompressible data that’s immune to SandForce’s DuraWrite compression mojo. The second HyperX is identical to the first, but it’s getting a stream of compressible data via Anvil’s “applications” preset.

The HyperX’s SMART attributes log host and flash writes separately, giving us a glimpse of DuraWrite in action. After 700TB of writes from the host, the incompressible HyperX config showed 738TB of flash writes, while its compressible sidekick indicated only 501TB.
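
Those two totals are enough to back out an effective write-amplification factor for each config; the quick calculation below just reuses the numbers quoted above.

```python
# Effective write amplification = flash writes / host writes, using the
# totals reported by each drive's SMART attributes at the 700TB mark.
host_tb = 700
flash_tb = {"incompressible HyperX": 738, "compressible HyperX": 501}

for name, flash in flash_tb.items():
    print(f"{name}: {flash / host_tb:.2f}x write amplification")
```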

As one might expect, the imminent-failure warning came from the incompressible drive. The warning was displayed by both HD Sentinel and the Intel storage driver used on our test systems. Then, after 725TB of writes, we got another cautionary message, this time from the OS. “Windows detected a hard disk problem,” the dialog box read. “Back up your files immediately to prevent information loss.” 3TB later, Anvil started reporting write errors. The drive was still accessible, and we were able to dump one last batch of SMART data, but it bricked after a reboot.

On the HyperX 3K, the SSD life left attribute tracks flash wear. Like Intel’s media wearout indicator, it counts down from 100 and is tied directly to the rated lifespan of the NAND.

When this attribute reaches 10, the flash’s specified endurance has been exhausted, and the SMART warning is triggered. Kingston urges users to back up their data and move to a new SSD at this point. The firm describes the SMART message as being similar to the warning light on a car’s gas gauge. There’s still some fuel in the tank when the light comes on, but you should pull over at the next opportunity to fill up.

The HyperX is designed to keep writing for as long as the NAND is viable, regardless of its rated endurance. Flash blocks are only retired if there’s a programming failure, an erase failure, or if the acceptable ECC tolerance has been exceeded.

Programming and erase failures are logged by separate SMART attributes, and they really ramped up toward the end of the drive’s life. So did the number of reallocated sectors. By the end, there were 986 reallocated sectors, 111 programming failures, and 381 erase failures. Those figures suggest about half of the retired sectors were taken out of commission due to ECC issues.
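
That estimate is just the leftover after subtracting the logged program and erase failures from the reallocated-sector total, on the assumption that each logged failure retires one sector:

```python
# Rough breakdown of retired sectors by cause, from the HyperX's final SMART log.
# Assumes one retired sector per logged program or erase failure.
reallocated = 986
program_failures = 111
erase_failures = 381

ecc_related = reallocated - program_failures - erase_failures
print(f"ECC-related retirements: {ecc_related} "
      f"({ecc_related / reallocated:.0%} of the total)")
```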

The HyperX 3K has loads of overprovisioned area, but sections of it are reserved for internal management routines and for RAISE, the RAID-like redundancy feature available in SandForce SSDs. Only a small portion is dedicated to “spare” blocks that can fill in for reallocated sectors. Once this extra NAND is consumed, the HyperX is finished. Kingston says the drive will fail to mount if the power cycles, which explains why ours wasn’t detected after a reboot.

Unlike the 335 Series, which checked out on its own terms, the HyperX appears to have failed after burning through all of the NAND available for writes. We still received multiple warnings before the failure, and there was additional write headroom after each one. A normal user would have had plenty of time to prepare for the failure.

Our third casualty was the Samsung 840 Series, which we expected to fail first due to the shorter theoretical lifespan of TLC NAND. Our accumulated SMART data supported that assumption, too. The 840 Series started logging reallocated sectors after only 200TB of writes, and it’s reported thousands of them since our experiment began—far more than any other SSD. However, the 840 Series also allocates more spare area to replace bad blocks, so it’s tuned with the TLC’s relative frailty in mind.

When we checked on the SSDs after 900TB of writes, the 840 Series was still functional, and Samsung’s own SSD Magician software gave it a clean bill of health. The 840 Series didn’t make it to a petabyte, though. It died suddenly in the last leg, without any preceding SMART warnings.

We’re not entirely sure what caused the failure. The Anvil utility crashed, and the drive disappeared from not only the Windows device and disk managers, but also from the SSD Magician and HD Sentinel utilities. The Intel storage driver detected the 840 Series as an unnamed Samsung SATA drive, but we couldn’t actually do anything with it. We weren’t even able to grab a log of the last batch of writes or a final accounting of the SMART status. We can, however, analyze the SMART data collected up to 900TB.

The wear-leveling count is sort of like the MWI and life-left attributes on the Intel and Kingston SSDs. It’s “directly related to [the] lifetime of the SSD,” according to Samsung, and it bottomed out after 300TB of writes. HD Sentinel bases its health estimate on this attribute, so it’s had a dim assessment of the 840 Series since the 300TB mark. But Samsung’s own software pronounced the drive in good health after 300TB, as it did at every subsequent milestone.

The SMART attributes also track how much of the 840 Series’ spare block reserve has been consumed by reallocated sectors. That attribute suggested there were plenty of spare blocks at the 900TB mark, so the flash’s mortality rate would have to have spiked dramatically for insufficient reserves to cause the eventual failure. Without SMART details from the time of death, we can’t be certain about what happened. We can quantify the reallocated sectors along with another important attribute: uncorrectable errors.

Uncorrectable errors can compromise data integrity and potentially cause application or system crashes, so they’re kind of a big deal. The first bunch appeared after 300TB of writes, apparently during preparation for our first unpowered retention test. The 200GB file we use to check data integrity failed multiple initial hash checks and had to be recopied before proceeding. Although the 840 Series ultimately passed the retention test and a similar one after 600TB of writes, the uncorrectable errors put a mark on its permanent record.
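
Our retention checks boil down to hashing a large test file before and after the drives sit unpowered, then comparing the digests. Here’s a minimal sketch of that sort of verification with hypothetical file paths; it isn’t the exact tooling we use.

```python
import hashlib

def sha256_of(path, chunk=8 * 1024 * 1024):
    """Stream a large file through SHA-256 without loading it all into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical paths: a known-good reference copy and the copy on the test SSD.
reference = sha256_of("reference_copy.bin")
on_ssd = sha256_of("testfile_on_ssd.bin")
print("Retention check:", "PASS" if reference == on_ssd else "FAIL - recopy and retest")
```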

Between 800 and 900TB of writes, the 840 Series logged 119 more uncorrectable errors, bringing the total to 295. Anvil didn’t report any hash failures during that period, but we have its built-in integrity test set to run relatively infrequently—after each 1TB of writes—and on a 700MB file, that covers only a small portion of the flash. Regardless of whether the last spate of uncorrectable errors resulted in incorrect data, it’s probably no coincidence the 840 Series died shortly after.

When we kicked off this experiment, Samsung told us to expect warning messages before the 840 Series’ demise. Failure would resemble a compatibility error, the company said, and it could manifest in a BSOD or “other failure notice.” Since we didn’t get any warnings or failure messages, something may have gone awry at the end of the line. The 840 Series’ lifeless body is being returned home for further analysis, which we hope will shed light on the drive’s final moments.

All these casualties are bumming me out, so let’s turn our attention to the survivors…

 

The petabyte club

As their comrades fell around them, the Corsair Neutron GTX, Samsung 840 Pro, and compressible Kingston HyperX 3K drives soldiered on to 1PB without issue. That’s kind of miraculous, really: a bunch of consumer-grade SSDs withstanding one freaking petabyte of writes. None of these drives are rated for more than 200TB.

Reaching such an important milestone warrants a closer look at the health of the remaining candidates, especially since one of them might not be with us for very long. Along the way to 1PB, the second HyperX posted a pre-failure SMART warning.

Thanks to its compressible payload, this HyperX logged only 716TB of flash writes for 1PB of host writes. Don’t read too much into the magnitude of the savings, though. The stream of sequential writes in our endurance test isn’t indicative of real-world client workloads. Those workloads write far too slowly to stress SSD endurance in a reasonable timeframe.
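
The difference between the two payloads is easy to demonstrate: random bytes barely compress, while repetitive data shrinks dramatically. The sketch below is illustrative and doesn’t reproduce Anvil’s actual data patterns.

```python
import os
import zlib

# Compare how a random payload and a repetitive, text-like payload compress.
# Anvil's real presets use their own patterns; this is only illustrative.
size = 1024 * 1024  # 1MB samples

incompressible = os.urandom(size)
compressible = (b"log entry: operation completed successfully\n" * (size // 44 + 1))[:size]

for name, payload in [("incompressible", incompressible), ("compressible", compressible)]:
    ratio = len(zlib.compress(payload)) / len(payload)
    print(f"{name}: compresses to {ratio:.0%} of original size")
```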

Apart from its declining life indicator, the compressible HyperX is in excellent shape. It’s logged only two reallocated sectors and no program or erase failures so far. The flash seems to be in much better condition than that of its incompressible twin, which had hundreds of reallocated sectors and lots of program and erase failures with a similar volume of flash writes. The difference between the two configs suggests there may be some variance in flash endurance from one SSD to the next, even within the same family. Our sample size is far too small to draw a definitive conclusion, though.

Given how the HyperX is designed to behave, death probably isn’t imminent. I wouldn’t expect a failure until the number of reallocated sectors starts increasing substantially.

Next up: the Samsung 840 Pro.

After the 840 Series’ sudden demise, it’s hard to know what to expect from the Pro. This drive has the same SMART attributes as its TLC counterpart, including the wear-leveling count that’s supposed to be related to health. The thing is, that attribute hit its lowest point after 400TB of writes, and Samsung’s SSD utility said the drive was still in good shape. SSD Magician indicated that everything was cool at 1PB, too, although the drive’s shrinking reserve of spare blocks points to a growing number of reallocated sectors.

The number of reallocated sectors started ramping up after 700TB, hitting 1836 at the 1PB mark. Based on its 1.5MB sector size, the 840 Pro has retired 2.7GB of its total flash capacity. There’s plenty left, but whether we burn through it all remains to be seen.
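
That figure is simply the sector count multiplied by the sector size:

```python
# Flash retired by the 840 Pro at the 1PB mark.
reallocated_sectors = 1836
sector_mb = 1.5   # per-sector size for this drive, in MB

print(f"Retired flash: ~{reallocated_sectors * sector_mb / 1024:.1f} GB")
```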

Our last survivor is Corsair’s Neutron GTX. Several of this drive’s SMART variables are obfuscated by vague, “vendor-specific” titles, but Corsair’s Toolbox utility identifies attribute 231 as “SSD life left.” HD Sentinel lists the same attribute as temperature, but the profile fits what we’d expect from a lifespan indicator, albeit one that thinks the Neutron is going to be around for a very, very long time.

If the current rate of decline continues, the life attribute won’t hit zero until after more than 4PB of writes. That seems a tad optimistic for a consumer-grade SSD, so we’ve asked Corsair to clarify exactly how the value is calculated. It’s possible the slope could steepen in response to reallocated sectors. The drive hasn’t logged any of those yet, though.
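
The 4PB projection is nothing fancier than a linear extrapolation of the attribute’s decline. The sketch below shows the arithmetic with a placeholder reading, since we haven’t published the raw attribute values; the 78 is purely hypothetical.

```python
# Linear extrapolation of a countdown-style life attribute.
# current_value is a placeholder, not the Neutron GTX's actual reading.
start_value = 100
current_value = 78          # hypothetical
writes_so_far_pb = 1.0

decline_per_pb = (start_value - current_value) / writes_so_far_pb
print(f"Projected zero point: ~{start_value / decline_per_pb:.1f} PB of writes")
```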

Before moving on to our performance results, I should clarify that simply writing a petabyte isn’t sufficient for entry into our exclusive club. After reaching that milestone, the survivors faced another unpowered data retention test. They were left unplugged for seven days, and they all returned with our 200GB test file fully intact.

Now that we know which SSDs lived and which ones died, let’s see if any of them slowed down over the last stretch.

 

Performance

We benchmarked all the SSDs before we began our endurance experiment, and we’ve gathered more performance data after every 100TB of writes since. It’s important to note that these tests are far from exhaustive. Our in-depth SSD reviews are a much better resource for comparative performance data. What we’re looking for here is how each SSD’s benchmark scores change as the writes add up.

For the most part, all the drives have performed consistently since we began. We’ve observed a few blips here and there, including a potential one for the Neutron GTX in the last sequential read speed test. The drive hit roughly the same speed through five runs, so it was consistent in that sense, just short of previous efforts. We’ll have to see what happens at 1.1PB and beyond.

Accumulated writes don’t affect performance in most of these tests. However, the read speeds on the Samsung 840 Series are a little slower in our last set of results. Hmmm. Perhaps our other performance data will be more enlightening.

Unlike our first batch of results, which was obtained on the same system after secure-erasing each drive, the next set comes from the endurance test itself. Anvil’s utility lets us calculate the write speed of each loop that loads the drives with random data. This test runs simultaneously on six drives split between two separate systems (and between 3Gbps SATA ports for the HyperX drives and 6Gbps ones for the others), so the data isn’t useful for apples-to-apples comparisons. However, it does provide a long-term look at how each drive handles this particular write workload.

Again, the SSDs have mostly behaved consistently. The 840 Pro’s run-to-run inconsistency is kind of its thing, while the Neutron GTX’s slowly increasing pace has been evident from the start. Pay no attention to the regular spikes for some of the SSDs; those are related to the secure erase we perform before running our performance benchmarks every 100TB.

Our casualties maintained consistent write speeds for much of their lives, but there’s evidence of sputtering toward the end. Let’s zoom in for a closer look. The Intel and Kingston SSDs are covered through their final runs, but we don’t have data for the Samsung beyond 900TB.

Even without its last gasps on record, the 840 Series clearly started breathing more erratically over the last few hundred terabytes. The HyperX barely staggered in its final steps, while the 335 Series suffered a short but noticeable bout of wheezing before it hit the wall.

Ok, so maybe that’s a stretch—the noticeable bit, not the drawn-out running metaphor. These are ultimately minor reductions in write speeds, at least for the Intel and Kingston SSDs. It’s possible the Samsung got substantially slower closer to the end of its life, though I wouldn’t bet on it based on the data we have.

 

To the next milestone and beyond

Our SSD Endurance Experiment has killed off more main characters than a season of Game of Thrones. Ok, maybe not that many. But we suffered multiple failures on the road to 1PB, thinning the herd by half. It’s only appropriate, I think, to pause for a moment of silence.

The casualties didn’t die in vain. They showed little performance slowdown toward the ends of their lives, suggesting SSDs can keep up the pace until the very end. And they showed us that drives can expire in different ways. The Intel 335 Series checked out voluntarily, at a predetermined cycle limit, while the Kingston HyperX 3K tried to squeeze every last drop out of the flash. Both provided ample warning of their demise and enough headroom for users to back up their data.

The Samsung 840 Series died suddenly, but the writing had been on the wall for a while in the form of mounting block failures. Some of those failures may have been responsible for the uncorrectable errors that struck during the drive’s life. The 840 Series’ error correction apparently wasn’t strong enough to overcome a few hurdles, albeit ones that didn’t arise until after hundreds of terabytes of writes.

Given our limited sample size, I wouldn’t read too much into exactly how many writes each drive handled. The more important takeaway is that all of the SSDs, including the 840 Series, performed flawlessly through hundreds of terabytes. A typical consumer won’t write anything close to that much data over the useful life of a drive.

Even with only six subjects, the fact that we didn’t experience any failures until after 700TB is a testament to the endurance of modern SSDs. So is the fact that three of our subjects have now written over a petabyte. That’s an astounding total for consumer-grade drives, and the Corsair Neutron GTX, Samsung 840 Pro, and compressible Kingston HyperX 3K are still going!

Three down. Three to go. Stay tuned.

Comments closed
    • DevilsCanyonSoul
    • 5 years ago

    This entire effort is a testament to hard work, values and commitment.
    Thanks to everyone at TR that put this thing together.

    As an aside, the FA Report on that dead soldier (Samsung 840) is a critical component to understanding better what transpired here – especially since the sample basis = 1. I’d also like to point out that this continued argument over what constitutes a gracious death (transition to read-only mode as a feature notwithstanding…) is tedious at best. Under most modern Windows OSes those unrecoverable error events would have posted up warning messages in both the event log AND via screen dialogs – so anyone expecting a “good to the last drop” experience measured to the last gasp of highly accelerated test environments – such as this test bed has been to date – is smoking the funny stuff and setting up unrealistic real world expectations. A drive on the slide, be it rotating media or not, can be detected well before going off-the-cliff in most circumstances by *drivers* who are paying attention.
    … pun intended …

    Again, kudos to the TR team!
    You’ve added another loyal follower to your readership.

    • mtcn77
    • 5 years ago

    Love the commentary, lol! Every line is a speck of genius.

    • MustSeeMelons
    • 5 years ago

    Really enjoying the series, will be waiting for the next update! And possibly buying an SSD..

    • glugglug
    • 5 years ago

    I’m really surprised at the life left reading on the Neutron GTX. Just how different are the regular Neutron and the GTX supposed to be? My 512GB Neutron has an 88% SSD life left reading after less than 16TB of writes.

    • torquer
    • 5 years ago

    Reading some of these posts I can’t understand what people are so butt hurt about with these SSDs. Do you not realize that these behaviors happen LONG LONG past the warranty? Complaining about what it does way beyond the advertised and expected life is a little like throwing a tantrum because your engine throws a rod at 400k miles with a 100k mile warranty.

    Pat yourself on the back that it lasted so long, or pay heed to the pre-failure warnings and back up your stuff. Typical internet comment/forum posts – nothing but FUD, trollery, and arguing stupid semantics.

    • Ninjitsu
    • 5 years ago

    Interesting note: on my Intel 320 and 313 series drives, MWI counts up from 0 i think (as the raw column reads 0 for both, and I’ve only written 3.49 TB and 254 GB respectively).

    EDIT: My Samsung 840 (250 GB) has had 1.55 TB of writes and is currently reporting a wear-leveling count of 10.

    • ronch
    • 5 years ago

    The worst thing that could happen is that NAND flash makers, realizing the ridiculous longevity of MLC and TLC, will adopt flash cells that can hold more bits (QLC for 4 bits per cell?). Not only will they theoretically make more money with every byte sold, they’ll also cause SSDs to die more quickly (all things being equal) and spur people to buy new SSDs more often.

    • Tracker
    • 5 years ago

    Thank you very much for the very informative report. I am not that tech savvy so I have a few questions:

    1. We have to date used normal HDDs. We swap photoshop files sized 100-300 mb on a regular basis (print shop) and have not had any disk failures for the past 5 years. What does the 1PB number mean in actual business use?

    2. Are there SSDs out there which allow one to retrieve the data even when the disk has reached end of life (read only)?

    3. Would we have to re-think our backup strategies of Apple Time Machine plus several daily back-ups, e.g doing backups during the day?

    Thanks
    Peter

      • BrewingHeavyWeather
      • 5 years ago

      1. That you don’t have that many HDDs :). Disk failures tend to be random. The more disks you have, and then the more you have per chassis, the more regular their failures will be, doubly so if many are in notebooks. As far as data transfer, pretty much that a good SSD is going to last or outlast the system it is installed in.

      If it doesn’t, then chances are very good that you already have a good ROI from using one, because if it doesn’t, that means you’re taking advantage of the huge performance increases they can provide over HDDs (HDDs are, optimistically, 10-20x slower in random IOPS…in practice, due to the lower latency combined with the higher bandwidth, you can safely add a zero). Since you mention Time Machine, you clearly use Apples. You should be aware that Apple is transitioning to SSD-only machines (like the normal PC OEMs aught to be doing), so you will end up, in the future, with HDDs only in external enclosures.

      2. Maybe. Intel’s are designed to commit suicide, to make it practically impossible for a 3rd party to get data from a worn out SSD. It also prevents 1st parties from getting their data, of course. In general, always have backups, whether HDD or SSD. There are lots of variables, and killing SSDs like this takes so much time, that it’s basically impossible to answer in the affirmative.

      3. No. Those backups only read, and there’s no good reason to not use HDDs as backup targets. Backing up the contents of an SSD will not affect its longevity, outside of theoretical hair-splitting.

      • Dissonance
      • 5 years ago

      A few answers…

      1. It really depends on how often you write those files. 300MB = ~0.00000028PB. I’ll let you do the math, but it doesn’t sound like you’d be writing files frequently enough to exhaust the endurance of a good SSD.

      2. Intel’s enterprise-grade SSDs are designed to stay in logical disable (read-only) mode after they exhaust their write capacity. We haven’t actually pushed any of them that far, though.

      3. All the evidence I’ve seen suggests that SSDs are less failure-prone than HDDs. They shouldn’t require different or more frequent backups than what you’re doing now.

        • Tracker
        • 5 years ago

        Thank you very much. We are convinced. This report and your answers finally gave us a deeper understanding, so we’ll get one or two SSDs and start integrating them in our system.

    • prima.king
    • 5 years ago

    an identical test should be done with the SSDs used within Apple computers

      • Chrispy_
      • 5 years ago

      The SSDs here are used in Apple computers. Apple sources drives from all over the place, quite often Samsung.

    • f0d
    • 5 years ago

    that samsung 840 did much better than some people give it credit for, i often see people say “dont get TLC drives they wont last long”
    it lasted long enough to be considered ok imo

    • UnfriendlyFire
    • 5 years ago

    I can imagine an executive looking at this statistics and think:

    “Well, if our SSD can last for 50 years under normal consumer usage… Why not reduce the wasteful over-provisioning so they last for 10 years instead? Maybe that would encourage some more people to buy our enterprise SSDs.”

    And meanwhile, Kingston and PHY plays the game of bait-and-switch by reducing the over-provisioning in their SSDs right after the reviews are published, gambling that nobody is going to use their SSDs for more than a decade.

    EDIT: And if you don’t get the Kingston and PHY reference, this is an interesting read: http://www.extremetech.com/extreme/184253-ssd-shadiness-kingston-and-pny-caught-bait-and-switching-cheaper-components-after-good-reviews

      • JustAnEngineer
      • 5 years ago

      “We’re only including a 3-year warranty. You need to redesign this thing to ensure that it fails at exactly 37 months of operation.”

        • UnfriendlyFire
        • 5 years ago

        No no, this is more likely to happen:

        “Instead of improving our SSD, why don’t we just rebadge some of our older ones that nobody is buying? Look at AMD and Nividia!”

        (Fun fact, GT 705m is a rebadge of GT 610m, which itself is a rebadge of Fermi GT 520m. And just look at AMD’s entire mobile GPU lineup.)

        • ronch
        • 5 years ago

        Why, if I were some sleezy marketing guy I’d convince the powers that be to include a 10-year warranty on my drives. Hey, it worked for Hyundai. Besides, in 10 years it’s either the user has moved on to something more advanced or the company that sold the SSD is already dead, given how many defunct companies litter Silicon Valley.

          • JustAnEngineer
          • 5 years ago

          When I was in college, a friend of mine ran his BBS on an old (even at that time) Apple IIe with a Sider hard-drive. While the drive was sold with a one-year warranty, in 24/7 service on the BBS, it couldn’t make it nine months. When they sent him the fifth (20MB) warranty replacement for his original 10 MB drive, the company begged him to upgrade to newer hardware and to quit running the old product, since it cost them more to keep replacing the drive under warranty than it would have cost for a new one.

    • SomeOtherGeek
    • 5 years ago

    1,125,899,906,842,624 bytes!! That is such a huge number. Just thinking about all the work – PC wise – to read/write that much info.

    I love this review, the end-game of the drives and how things will look at their end-times. Love it. I love the way the post started with show of compassion. Laughed at that.

    Like everyone is saying, it would be nice to have a more up-front warning and the ability to read data after complete drive failure. My question would be: Can the drive be cloned to another drive? Can you test that or is that something that is impossible?

    Again, keep up the awesome work!

      • Wirko
      • 5 years ago

      That nasty number has an equally nasty name: “Pebibyte”. PiB.

        • derFunkenstein
        • 5 years ago

        Processor in Box?

    • Freon
    • 5 years ago

    Great stuff. I would really love to see another round and batch of drives be put through this test!

    • jmke
    • 5 years ago

    as with the first cars, these SSDs are build from a consumer point of view. outlasting by far lengths what the marketing people put on the box.

    as time passes we’ll see focus shift to the manufacturer point of view, reliability will reduce greatly after the warranted TB written are hit. As they fine tune their process and squeeze every last ounce of profitability from their products.

    enjoy these type of products while they last!

      • Freon
      • 5 years ago

      Sadly, I think you may be right.

        • jmke
        • 5 years ago

        and it has already begun:

        http://www.extremetech.com/extreme/184253-ssd-shadiness-kingston-and-pny-caught-bait-and-switching-cheaper-components-after-good-reviews

        “When Tweaktown inquired as to the situation, PNY sent back the following: ‘Yes we did ship some Optima SSD’s with SandForce controllers, but only if they meet the *minimum advertised performance levels*’”

    • Wirko
    • 5 years ago

    Geoff, as you’re still looking for new ways to stack the SSDs: place three of them belly up.

    • ronch
    • 5 years ago

    It looks like SSDs take so long to kill themselves that their cost/GB probably will have reached HDD levels by the time a typical user’s SSD gives up the ghost.

    • ronch
    • 5 years ago

    Terrific article. It’s good to know one really need not worry too much about his/her SSDs. Besides, it’s a fact of life that everything degrades. Your car degrades, your LCD display, your room A/C, the magnetic strip on your credit card after so many shopping sprees, your shoes, heck, even you. If it breaks, it breaks. Time to buy a new one. Given TR’s findings it even looks like SSDs are pretty durable compared to many other things we use.

      • meerkt
      • 5 years ago

      But without retention testing this is just a partial picture.

        • ronch
        • 5 years ago

        Hear that, Scott?

        • Dissonance
        • 5 years ago

        We performed unpowered retention tests after 300TB, 600TB, and 1PB of writes.

          • meerkt
          • 5 years ago

          Thanks for that. But the great mystery is how long retention holds.

      • Wirko
      • 5 years ago

      LCD and magnetic strip excluded, things that you’ve listed often develop retention problems when old.

    • bipin.nag
    • 5 years ago

    Please cover which of the drives last the longest. I would like to know how many petabytes the remaining drives can take and see which one wins the durability test.

    • itachi
    • 5 years ago

    What you guys advise to be the “required” optimization for an SSD, like disable superfetch, indexing, pagefile maybe ? etc, or does it not even matter ?

      • Pwnstar
      • 5 years ago

      Yes, superfetch is not needed for SSDs but it won’t hurt to have it on, either.

      Turning off indexing and pagefile reduces the number of writes to your SSD, but as this test shows, it really doesn’t matter in the end, as they can withstand 800 terabytes of writes.

    • Stickmansam
    • 5 years ago

    I always thought SSDs failed into a read-only mode

    It’s good that you’re doing the experiment so we learn new things

      • oldDummy
      • 5 years ago

      The SSD Endurance Experiment is a major reason I’m a gold member.
      Just saying.
      Keep up the good work guys.

    • Dirge
    • 5 years ago

    @Geoff “Both provided ample warning of their demise and enough headroom for users to back up their data.”

    It seems other than periodically checking SMART attributes or firing up the the manufacturers utility there is no way to know if these warnings are occurring. How does the OS report a failing SSD to the user?

    I know theses drives have been absolutely punished and show very respectable endurance, but I find it concerning since the SSDs appear to brick themselves instead of going into read only mode, once they have reached their write limits.

      • Dissonance
      • 5 years ago

      Intel’s RST storage driver delivered the SMART warnings on our test system. Then there was the separate Windows error on the HyperX. We didn’t get that error on the Intel, though.

        • Dirge
        • 5 years ago

        Thanks for your reply, I find it very interesting. I think it would be optimal if Windows would alert the user to the impending doom of their storage device.

          • nanoflower
          • 5 years ago

          It’s the same problem that exists with mechanical hard drives. With mechanical drives the errors may get logged in the Windows event log but unless you check it or have a program to show you SMART errors you may not realize a HD is going south until it is too late. Windows really should be more aggressive about alerting the user of such a potential fatal error such as a storage device failure by giving the user a visible warning.

            • UnfriendlyFire
            • 5 years ago

            Windows XP doesn’t provide any warning at all, even when the HDD suffered a head crash. I’m fairly sure the HDD’s firmware would’ve noticed something odd when that kind of failure occurs.

            • Klimax
            • 5 years ago

            Correction: If SMART reports significant problems then Windows throw massive warning dialog with IIRC red bar in top part of it with three options. (Immediate back up, postpone or disable)

            So Windows are already aggressive. I had a drive (Samsung 2TB) going bad with large number of bad and relocated sectors. (About months later I tried to use it again for temp storage, so many errors that system nearly halted until I disconnected drive…)

            ETA: Windows 7

            • nanoflower
            • 5 years ago

            Never saw such a warning when I had a hard drive go south last year. Ended up losing some data because Windows didn’t warn me in a visible manner, but it did log the data into the Event log. It’s why I now try and keep a SMART monitor program running at all times so I will get visible notice when an error occurs (currently running Acronis Drive Monitor.) (Windows 8.1)

            • Klimax
            • 5 years ago

            Since Windows 7. I think it depends how Smart is done on drive and by controller. Also many faults can’t get detected by SMART.

            Something like: http://i.stack.imgur.com/uKXcK.jpg

            ETA: Reminder, parameters reported by the drive can be fairly opaque to the OS and even other tools.

            ETA2: And yes, I misremembered how the warning looks...

            • Dirge
            • 5 years ago

            Hi Klimax thanks for your post, I had until now always wondered what a warning from the OS would actually look like.

    • LoneWolf15
    • 5 years ago

    Great ongoing

    Still hoping you take a look at Crucial SSDs for this sometime; they are widely in use. Would love to see how the M500, M550, and the new MX100 hold up.

      • Visigoth
      • 5 years ago

      This 1000x!

    • NovusBogus
    • 5 years ago

    This is where it gets interesting. I’m surprised that the drives are crapping out so suddenly, I figured that aggressive wear leveling would manifest itself as slowly shrinking available area instead of one day throwing in the towel.

    • yuhong
    • 5 years ago

    I hope RAID controllers has been updated to detect read only mode and allow the RAID array to be recovered by data copy.

    • Ochadd
    • 5 years ago

    Thanks for the great experiment; I look forward to every update.

    Maybe Intel figures a company could possibly use these consumer grade drives like you’d use gasoline? If the drives go permanent read only there would be one less reason to go enterprise grade. Just swap out the drives with spares and continue on without the risk of data loss. Since the enterprise drives can do it, it’s surely a cleverly laid firmware landmine.

    • Chrispy_
    • 5 years ago

    My 840 is up to 36TB written, and I feel that I use it *a lot* compared to your average consumer doing gaming/office/internet/media-consumption. The good news is that after two years of intensive usage, I can reasonably expect it to go for something like another 47 years before I hit 900TB. I wonder if I’ll live for another 47 years (http://upload.wikimedia.org/wikipedia/commons/1/17/Mountain_bike_in_downhill_race.jpg, http://ridedaily.com/wp-content/uploads/2013/05/1280x960-jump-snowboard-wallpaper.jpg, http://cdn.velonews.competitor.com/files/2010/06/000_Par33007781.jpg).....

      • UnfriendlyFire
      • 5 years ago

      My 13 year old desktop rig has a 32GB HDD.

      Back during Windows 2000/ME era, that was probably a respectable amount of storage space.

      Now? Windows 7/8 installation would probably take up most of the HDD space. And we have SD cards that reach up to 512GB of storage space.

      EDIT: I know a friend’s parent who once spent a pretty penny for a 1 MB HDD back during the early DOS computing era.

        • UberGerbil
        • 5 years ago

        I was using a 40GB Intel SSD as a boot drive for a Windows 8 installation when it was still in beta. It was tight but quite doable (I even had a hiberfile and a 4GB pagefile). Not a lot of room for applications or user data, of course.

        And actually, 32 GB of data back in the late 90s was quite a lot. I had a laptop (Pentium 133 with MMX!) with an easy-to-swap HD bay, and I kept my Win98 system on a 6GB drive while my NT install lived on a 4GB. Those were 2.5″ of course; desktop systems tended to sport larger drives like yours.

        And yes, you could spend a pretty penny (https://c1.staticflickr.com/3/2108/2373941595_834cc7169d_b.jpg) back in the day (that’s 1986: PC10 was 10MB, PC20 was 20MB).

      • DarkMikaru
      • 5 years ago

      Right! My Samsung 830 64GB in my work rig is at 2.39TB and exactly 2yrs old. So I’m no where near that magical failure point. But I will say, it did disturb me that the 840 didn’t speak a word before it kicked the bucket. That is a little worrisome, but honestly we really don’t have anything to worry about.

      How long have you had your 840? How in the world are you at 36TB already?

        • Chrispy_
        • 5 years ago

        About 18 months I guess. 36TB takes some real doing – I use it for virtual machines and it effectively runs multiple operating systems at once, and all the writes that go with them.

        In contrast, my laptop is a light-use machine with a 256GB 830 in it and I cant remember what that’s on but last time I checked about a year ago it was around 2TB.

      • ssidbroadcast
      • 5 years ago

      Is there a utility out there that tells you how many writes you’ve done on your SSD?

        • UberGerbil
        • 5 years ago

        There isn’t a standard for how the data is reported (SMART was a good idea, incompletely implemented). Some mfrs supply utilities that do this, eg Samsung’s Magician and Intel’s Toolbox.

      • UberGerbil
      • 5 years ago

      I have a 2.5 year old 830 (120GB) that is the system/apps drive in my main work machine, and it has a pagefile that gets used a fair bit (even with 16GB of memory, I find even just a bunch of browser windows can chew up more than that if I let them). It has 11.33 TB written.

      • ronch
      • 5 years ago

      I’ve had my EVO 250 GB for 6 months now and Samsung Magician says I’ve written only ~0.78TB on it. Given the current trend I expect to reach around 1.6TB in a year. It would take 187 years to reach 300TB. Last time I checked, normal humans don’t even reach 100.

      • LoneWolf15
      • 5 years ago

      I kept around an 80MB Western Dig IDE stepper motor drive for a long time that ran just fine, just because I could. Before the days of advanced 100MB and larger drives with voice actuator coils (like my uber-fast 106MB Seagate ST3120A). Still working a decade later.

      It was a 3.5″ half-height, and likely an MFM or RLL drive with a redesigned circuit board to adapt it.

      http://offog.org/notes/archiving/misc-hard-disks/wd93044a_size1600.jpg

    • WillBach
    • 5 years ago

    “... the 335 Series died in a reasonably graceful, predictable manner.” I’m disappointed. It should have died fighting if it wanted to enter Valhalla

      • superjawes
      • 5 years ago

      It must have realized its own failure and fallen on its sword instead.

      • UnfriendlyFire
      • 5 years ago

      *Sets firmware to ignore all errors/faults and tell the OS that everything is fine*

      What could possibly go wrong?

        • stdRaichu
        • 5 years ago

        “*Sets firmware to ignore all errors/faults and tell the OS that everything is fine*”

        I didn’t think they used any OCZ drives in this test, did they?

          • UnfriendlyFire
          • 5 years ago

          I’m pretty sure many of the flawed OCZ drives fail hard enough to not even boot.

    • edwpang
    • 5 years ago

    My work machine has an OCZ-Vertex2 100GB as boot drive. According to SMART info, it has been powered on for 31339 hours (3.5 years!), with host writes of 13.81TB and remaining SSD life of 91.

    • HisDivineOrder
    • 5 years ago

    Sandforce drives failing? /gasp

    Sandforce drives failing before the TLC drive? Now that is actually interesting. Two of them doing it is really something.

    Then again, Sandforce was always kinda buggy and no amount of Intel QA could remove them all. They should have picked a better controller when they went third party initially.

    As for the TLC drive failing, well… pretty much expected.

      • Chrispy_
      • 5 years ago

      You didn’t read the article properly and you clearly have a biased opinion of Sandforce.

      Firstly, both Sandforce drives failed gracefully, predictably, and with plenty of warnings – and both drives failed once their NAND lifetime counters had hit zero. The Sandforce controllers themselves were faultless and the NAND was the weak link.

      Your expectation of the TLC drive failing would be based on TLC being lower-endurance NAND, I assume, but you either didn’t comprehend or maybe you just skipped reading the bit where the Samsung’s controller failed. According to the data up to 900TB, the Samsung had around 50% of its spare area left and most graph analysts would probably expect the 840 to run out of NAND at around 1.4PB.

      The drive is bricked (controller dead) and it’s been sent to Samsung to see if they can disclose why.

        • Duck
        • 5 years ago

        Both Sandforce drives failed gracefully? Only one Sandforce drive failed. The others were the Intel and the Samsung TLC.

        edit: Argh for some reason I thought the Intel 335 wasn’t Sandforce based. Maybe I got it mixed up with the 320.

      • Meadows
      • 5 years ago

      The point of the experiment is to make them fail, you moron.

        • Damage
        • 5 years ago

        Please be civil and don’t call names, folks.

          • Meadows
          • 5 years ago

          Oh okay.

    • anotherengineer
    • 5 years ago

    Would have been nice if there was a plextor SSD in there also, le sigh.

    • deepblueq
    • 5 years ago

    I bought a 40 GB Intel 320 (I wasn’t familiar with the parallelism issues back then) a few years ago, and took special care in setting up the systems to not make unnecessary writes to it for the sake of longevity. It’s been the only drive in three systems now, all fairly heavily used (everything except gaming, which is a different machine). All three have been running Linux, the first two had swappiness = 1 (nothing ever got as far as swap), and the current one doesn’t have a swap partition at all. This article made me curious about the smart data on this drive.

    Host_Writes_32MiB = 12040

    I have only made 300-something GB of lifetime writes to this thing, despite it having spent the last 2.75 years as the only drive in whatever was my primary computer at the time. Nothing has driven home just how much data is being handled in this experiment quite like seeing that for comparison. I think I can stop optimizing for low write volumes now. 😉

    Many thanks to TR and Geoff – this is some great stuff.

    • wierdo
    • 5 years ago

    Thanks for the tests guys, good stuff to share with friends who haven’t jumped on the SSD train yet.

    Things are starting to get interesting, I’m disappointed that the drives bricked, but at least in the case of Intel and Kingston there was some good warning signs.

    I’m curious to see how Samsung will respond to your inquiry about their drives failing ungracefully like that though, I think it’s a big deal, I’d want my SSD drive to tell me when it’s time to move my data safely to another drive, right?

    • Anovoca
    • 5 years ago

    Continued write cycling is murder! Continued write cycling is murder!
    The Association for Fair and Ethical Treatment of Consumer Grade Solid State Drives (AFFETCGSSD) does not condone your testing methods. Cease your testing immediately and release these drives into the wild.

      • Wirko
      • 5 years ago

      i.e., sell them as “10 months old, not used in servers”.

      • Steele
      • 5 years ago

      “AFFETCGSSD” looks like something I type when I’m really frustrated and just slam my hands on the keyboard.

        • entropy13
        • 5 years ago

        To be fair, it makes you wonder if all AFFETCGSSD members are fat, computer-generated, solid state drives…

    • balanarahul
    • 5 years ago

    For a drive that voluntarily went into read only mode, I didn’t expect it to get bricked, especially for a consumer drive. I am disappointed with Intel’s engineers.

      • Farting Bob
      • 5 years ago

      It’s a shame it (and the other 2 failed drives) didn’t enter a read-only state where you could still get at everything on the drive.

      But still, the only number Intel gives for endurance is 22TB (20GB per day, every day, for the length of its 3-year warranty). The fact that it went 34 times that distance and performed as normal for 95% of that life, with warnings that it was going to die soon while the drive was still readable and writeable, is, I think, very good.

      • jihadjoe
      • 5 years ago

      From the article it seems like Anvil’s repeated attempts to write to it were what caused it to brick.

      • yuhong
      • 5 years ago

      Yep, given that even SMART don’t work it looks like a bug. Hopefully Intel can push out a firmware update to fix it to go into read only and that it will become standard practice in the future.

      • meerkt
      • 5 years ago

      Not only Intel but all three. Read only, even with some corrupt data, is far far better than becoming inaccessible. The three failed miserably, SMART warnings and writing-much-over-spec notwithstanding.

        • Firestarter
        • 5 years ago

        I imagine Intel could revive the drives to read-only mode again if you send them in

    • JosiahBradley
    • 5 years ago

    These articles are why I subscribed. Keep the bytes coming.

    • koaschten
    • 5 years ago

    I am kind of curious why the SSDs don’t fail “gracefully”, as in going to read-only mode, but it seems like they just go into “brick” mode. It would be a good selling point saying “even if it wears out, you can still get your data out”

      • DPete27
      • 5 years ago

      I was going to say the same thing. IMO, read-only is unusable as a system drive (obviously), but at least you can get your data off the drive (corrupted or not) once it fails…

        • yuhong
        • 5 years ago

        I wonder how Windows currently deals with read only SSDs as the boot drive.

          • stdRaichu
          • 5 years ago

          Windows doesn’t deal with read-only drives at all. If you try and boot windows through a write-blocker, you’ll get maybe three seconds of progress before an error message.

            • yuhong
            • 5 years ago

            I suspected so. It would be a good idea to update Windows to deal with them, maybe by booting into safe mode or something like that.

            • robliz2Q
            • 5 years ago

            You can get your data off, by booting from a Linux data rescue image like – System Rescue

            • UberGerbil
            • 5 years ago

            Windows Embedded can boot from read-only media, so it presumably would be something they could add to normal Windows without too much difficulty (particularly if it was restricted to Safe Mode).

            • glugglug
            • 5 years ago

            Sure it does. Ever heard of Windows PE? it’s designed for running off a CD.

      • Forge
      • 5 years ago

      This, again, more so.

      I’d like to be able to read my data, even most of my data, if my SSD has failed. Disabling writes seems quite fair, disabling reads seems unnecessary and potentially cruel, depending on what data has been laid down since the last backup.

      • mark84
      • 5 years ago

      This. An SSD that came with the feature of turning read-only on expiry instead of bricking would get my money over other options.

      Maybe when they get to 5% of available reserve flash left they auto enter ‘safe mode’ and become read only so the data can be still retrieved? Would be nice.

      • Buub
      • 5 years ago

      It sounded like they did fail gracefully, but instead of replacing the drives at the first signs of failure, they kept on pounding them. If I read correctly, one of the drives had an additional 3TB (three TERABYTES) of data written to it after it started indicating failure, before it failed completely. I’d say that’s a pretty damn reasonable margin of error.

        • nanoflower
        • 5 years ago

        I’m sorry but a drive that turns into a brick isn’t a graceful fail. If it just stopped accepting writes then that would be graceful but turning into a brick is a problem that needs to be fixed. Any SSD that fails should fail into read-only mode so that the user has the opportunity to backup any data. That isn’t what happened and I hope the companies involved will fix their SSD firmware so that they do fail gracefully in the future even if someone does keep attempting to write to them.

      • Thue
      • 5 years ago

      The TechReport article seems to accept Intel’s claim that bricking instead of going into read-only mode is how it is supposed to work. That is completely absurd. What TechReport should have concluded is that Intel’s firmware is dangerously buggy.

      I assume that this is a case of Intel etc paying (or at least giving test hardware for free) to TechReport, and TechReport not wanting to bite the hand that feeds it.

        • Damage
        • 5 years ago

        I think one of the worst things a person can do on the web is make this sort of drive-by accusation, harming somebody’s reputation without having all of the facts.

        Intel doesn’t advertise with us. In fact, they’re pretty poor about supporting the PC enthusiast community. For that very reason, we bought the Intel 335 Series SSD used in testing.

        And you apparently didn’t read the article. We didn’t have a clearly defined expectation of the failure mode, and the Intel drive failed slightly more gracefully than the other two. It gave ample warnings about an impending failure and remained readable until after a reboot. Remaining accessible in read-only mode ad infinitum would be nice, but I’m not sure that’s feasible. We’re talking about an extreme edge case here. The other two drives were bricked eventually, too.

        Yet here you are, throwing out this accusation of corruption and worse. You need to go sit over in the corner of the Internet and think about what you did.

          • mczak
          • 5 years ago

          I think you were quite gentle not bashing the manufacturers (not just intel) that the drives are not accessible read-only after failure though :-). I see no technical reason why it shouldn’t be possible, if there is one I’d like to know why but since the “pro” drives can apparently do it I really don’t think there’s any (other than this just being reserved for the expensive SSDs). I’m not sure if the Samsung one also typically would be no longer readable (as it seems to have failed in an unexpected way not how it was designed) but that’s kind of a moot point if it doesn’t fail how it should in the first place, I’d say firmware bugs…

          If anything I’d have accused SSD manufacturers of having a deal with data recovery companies :-). But yes, it’s highly unlikely to be a problem in practice.

            • Damage
            • 5 years ago

            Our sample size is one. Of course we were gentle.

            That doesn’t mean anyone needs to be accused of anything.

            • mczak
            • 5 years ago

            Oh yes I definitely don’t agree with the accusations.
            It would be nice, though, if you could ask the manufacturers why the drives can’t stay in read-only mode after a reboot, should you get the opportunity.

            • ImSpartacus
            • 5 years ago

            If you want to protect yourself from accusations, you need to take measures such as increasing that sample size.

            You’ve been working on the internet long enough. You shouldn’t need to be told how you can mitigate these kinds of accusations.

            • bean7
            • 5 years ago

            Yes, increasing the sample size would help fix these kinds of problems. But that’s an easy thing to say and a hard thing to do. I’m not a statistician, but how large does the sample size need to be? Five drives? Five hundred? Five thousand? And just buying more drives wouldn’t necessarily be enough. How much effort are you willing to expend to make sure that your sample of X drives is sufficiently random to be representative of the population (however you choose to define it)?

            Saying “increase your sample size” and “you should already know to do this”, as if it was easy and they were just too lazy to do it, seems like a further unwarranted accusation to me.

            • ImSpartacus
            • 5 years ago

            You have to make assumptions on the failure distributions to start to calculate confidences based on sample size. Selection bias is also an elusive thing to measure. Honestly, it’s just not worth getting that detailed.

            For benchmarking, I think TR repeats everything three times and then reports the median results.

            So I think it would be perfectly adequate and expected for them to be consistent and test 3 of each drive.

            In fact, considering that they strayed from the methodology standards that built the reputation of TR, I think accusations [u<]are[/u<] warranted, though hateful remarks are never warranted.

            • UberGerbil
            • 5 years ago

            [quote<]For benchmarking, I think TR repeats everything three times and then reports the median results. So I think it would be perfectly adequate and expected for them to be consistent and test 3 of each drive.[/quote<]
            But they don't buy three CPUs of each type and benchmark them. So what you're asking for is a false equivalence. Moreover it's not at all clear that we'd have any more confidence in a sample of three than we have in a sample of one.

            • Corion
            • 5 years ago

            @Damage – Any chance of an HDD Endurance Experiment? Can we make this a yearly thing? Maybe get a bracket going?

            This kind of data is valuable and I’d like to see it with a larger sample size.

          • Thue
          • 5 years ago

          Then TechReport should not make completely absurd statements, such as that a drive deliberately bricking itself is OK. I don’t know whether TechReport is under some kind of influence, but that is a far more believable explanation than that the writer actually thinks deliberate self-bricking is OK.

            • Damage
            • 5 years ago

            Geoff never said the drive’s behavior was “OK.” He simply relayed Intel’s sentiment that the behavior was expected. You’ve added a layer of implicit approval on Geoff’s part where none was stated. Then, pretending that exists, you got mad and accused us of corruption because that’s apparently the most plausible explanation for something you read into the text.

            You were wrong in your premises, wrong in your conclusions, wrong on the facts, and you would have been wrong even if an Intel ad were running on the page next to the article. Your assumptions are not a good lens for interpreting our behavior, since we don’t work like you think we do.

            • Corion
            • 5 years ago

            Bricking the drive kind of sucks. Even an error-prone readable mode might be better than being told “NO” – if the reads could occur without corrupting the drive.

            But ultimately what we have here is a drive that’s still absurdly durable and lets you know when things are about to hit the fan. I wish I’d had that when some of my previous HDDs crashed.

            • kamikaziechameleon
            • 5 years ago

            If you’re going to troll, please actually read the article first. Then you could better quote the things that were clearly never written, so your paraphrase wouldn’t expose the fact that you never passed the reading-comprehension portion of your 8th-grade English class. Just a recommendation; it would really inform the whole conversation, IMHO.

            • torquer
            • 5 years ago

            Way to come into one of the few pretty obviously unbiased tech sites and throw around ridiculous accusations that have no basis in reality.

            What would you have them do? Leave a burning bag of dog crap on Intel’s doorstep in retaliation for a self-bricking drive doing what it was designed to do, LONG after the warranty and any reasonable expectation of usable life had passed?

            People say a lot of stupid things in these comments but accusing TR of bias based on something like this takes the cake.

          • albundy
          • 5 years ago

          +1! you had me cracking up all day!

          [i<]you bad now! you bad boy! ooooh, you have shamed us all! maw, git mah belt![/i<] although you may be correct, he just presumed what many other sites already do. i think many of us are guilty of being unbeknownst and type what human perception concedes.

        • ColeLT1
        • 5 years ago

        The drive gave warnings terabytes before it failed. It’s like your car’s gas light coming on and then you keep driving for a few weeks instead of getting gas.

        If all my past failing hard drives had given some warning like this I would be ecstatic; normally you just lose everything, or maybe you get lucky and see bad-sector warnings from a chkdsk.

        • Bensam123
        • 5 years ago

        I don’t believe this is because of kickbacks, but it is interesting he didn’t think this behavior was bad. I noticed this when I read the review, but didn’t say anything.

        The other drives, like the Samsung, worked until they literally ran out of gas, but the Intel just decided it was done and shut off, which I think isn’t the right thing to do, for a lot of different reasons. That’s like an ink cartridge refusing to print because it’s ‘out of ink’ even though you can clearly still see ink left in it and the maker wants to maintain ‘quality’, which is just a way to get you to buy another one sooner.

        They should be doing something like telling you that the drive will be unreliable past this point and your data integrity may be compromised, you know, the usual disclaimer. Let people run them into the ground if they so choose.

        Almost no one is going to reach these write limits, but it’s still interesting. People may reach them on drives like the Samsungs, but those don’t just brick themselves.

        • auxy
        • 5 years ago

        Spoken like a true freetard! (*’▽’)

      • derFunkenstein
      • 5 years ago

      I’m willing to give everyone a pass because they all threw up warnings ahead of time. If they had suddenly gone read-only without warning and then lost everything for good after that, I’d agree. The solution, of course, is a good backup.

      • ronch
      • 5 years ago

      Because SSD makers want to screw you for using SSDs for so long instead of buying a new one every time you reach 5TB of writes.

      • oldDummy
      • 5 years ago

      Consider average storage writes of 10GB/day [which is high].
      That renders your question moot.

      • Voldenuit
      • 5 years ago

      [quote<]I am kind of curious why the SSDs don't fail "gracefully", as in going to read-only mode[/quote<]
      While that would be nice, one of the main uses of an SSD is as a system drive. I'm not 100% sure, but a system drive that is read-only would probably fail to load modern OSes like Windows, which expect to be able to write to the drive.

      But yeah, it would be nice to be able to at least recover some of the data from a 'bricked' drive, even if you have to unplug it and use it as a non-system drive to get at the data.

    • UnfriendlyFire
    • 5 years ago

    How many years of regular consumer usage would be needed to reach one PB on an SSD?

    Also, any plans of testing the newer SSDs in the future?

      • Farting Bob
      • 5 years ago

      1PB is 1 million GB, so base your numbers off of that. On some machines you will never average even close to 1GB of writes a day; if you are using a machine intensively, you may write 10-15GB a day (for example, doing multimedia editing using only the SSD). Intel rates the 335’s warranty at 20GB/day for 3 years, as an example, and Intel is known for being cautious.

      Even at a 20GB/day average, you’re looking at 37,500 days before that Intel drive would have hit 750TB of writes. That’s over 100 years.

      Of course the SSD will fail long before that for many other reasons, regardless of how much you use it, but it puts into perspective how people complain and worry about write cycles in SSDs when they really don’t need to worry unless they have an already-defective drive.
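
      A minimal back-of-the-envelope sketch of that arithmetic in Python, assuming decimal units and a perfectly steady write rate (the 750TB and 20GB/day figures are the ones from the comment above):

      # Rough endurance projection: days until a drive reaches a given total of
      # host writes at a steady daily write rate. Decimal units throughout.
      def days_until_writes(endurance_tb, writes_gb_per_day):
          return endurance_tb * 1000 / writes_gb_per_day

      days = days_until_writes(750, 20)    # 750TB of writes at 20GB/day
      print(f"{days:,.0f} days, about {days / 365:.0f} years")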

      • Ifalna
      • 5 years ago

      I’ve had my SSD for roughly 2 years now.
      8587h of operation
      3775 GB of host writes according to Crystal Disk Info.

      The SSD is used as a system drive, hosts World of Warcraft, and now and then gets fed a Steam game whose loading times annoy me.

      That puts me at 5.1GB / day.
      1024²GB / 5.1GB = 202771 days = 555.5 years.

      I don’t plan to live that long. 😀
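
      For anyone who wants to run the same projection against their own drive’s SMART data, here is a sketch of that calculation (the 3775GB of host writes and the two-year service life come from the comment above; binary and decimal units are mixed loosely, just as in the comment, and the write rate is assumed to stay constant):

      # Project time-to-a-petabyte from the host-writes total reported by a
      # SMART utility such as CrystalDiskInfo.
      host_writes_gb = 3775          # total host writes reported so far
      days_in_service = 2 * 365      # roughly two years of ownership
      target_gb = 1024 ** 2          # 1PiB expressed in GiB (1,048,576)

      gb_per_day = host_writes_gb / days_in_service
      years_to_target = target_gb / gb_per_day / 365
      print(f"{gb_per_day:.1f} GB/day, roughly {years_to_target:.0f} years to write a petabyte")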

        • jessterman21
        • 5 years ago

        [quote<]I don't plan to live that long. :D[/quote<] Pessimist.

        • cphite
        • 5 years ago

        Yes, but at the end of those 555.5 years it just STOPS?? Unacceptable!!

    • nico1982
    • 5 years ago

    I don’t know why, but the crude ‘pro’ sticker on the 840 Pro makes me laugh.

    • Sargent Duck
    • 5 years ago

    How does 700+ PB writes compare to a traditional HDD?

      • anotherengineer
      • 5 years ago

      Agreed. It’s too bad two or three regular HDDs were not included in this test to ‘test’ their longevity under constant writes vs. the SSDs.

      Oh well, sounds like another review in the making.

        • stdRaichu
        • 5 years ago

        I suspect that, even with sequential IO, you would be waiting a long, long time to get a platter-based drive into the petabyte club. Mix in some random IO like TR have done in the performance testing and you’d have to keep running the test for years to achieve throughput parity.

          • just brew it!
          • 5 years ago

          With purely sequential I/O you could easily reach a petabyte on a mechanical drive in just a few months. That’s not a very realistic usage scenario for a hard drive though (other than some very specific use cases). Relevance aside, I would expect the drive to still have plenty of life (in terms of wear) after 1 PB of sequential writes.

          Throw in a lot of random I/O, and yes you’re correct that the test could easily take years. I would also expect the seek mechanism on a consumer HDD to fail at some point if you’re hammering at it with lots of random I/O, as it is subjected to some rather extreme accelerations every time a seek occurs.

          Uncorrectable errors are yet another story. Mechanical HDDs all have an expected uncorrectable error rate, and you’re pretty much guaranteed to hit a few somewhere along the way to 1 PB of data written. But unlike with SSDs, these are not an indication that the drive is failing unless they start happening frequently; there’s just an expected base level of uncorrectable errors for mechanical HDDs. (And this is just one more reason — among many — why backups are important, and one more reason why mission critical systems all use some form of RAID.)
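
          As a rough illustration of that last point, here is a sketch using the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits read; that figure is a datasheet worst case, and real drives usually do considerably better, which is why hitting only a few errors over a petabyte is plausible:

          # Expected unrecoverable read errors while reading back ~1PB of data,
          # using the worst-case 1-per-1e14-bits spec common on consumer HDDs.
          bits_read = 1e15 * 8          # 1PB expressed in bits
          spec_error_rate = 1 / 1e14    # unrecoverable errors per bit read (spec ceiling)
          expected_errors = bits_read * spec_error_rate
          print(f"Up to {expected_errors:.0f} unrecoverable errors at the spec ceiling")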

            • Wirko
            • 5 years ago

            [quote<]With purely sequential I/O you could easily reach a petabyte on a mechanical drive in just a few months. That's not a very realistic usage scenario for a hard drive though (other than some very specific use cases).[/quote<]
            It's not a very realistic scenario for an SSD, either. However, it is possible to rewrite the data on a single cylinder of an HDD tens of millions of times in one year. Does a magnetic medium withstand such a large number of writes?

            • just brew it!
            • 5 years ago

            AFAIK the magnetic properties never wear out. Since the head does not actually touch the media there should not be physical wear either (provided there aren’t any head crashes).

            However… I suppose it is possible that the intense localized air currents caused by having the head constantly positioned over the exact same track might eventually cause some wear of the coating? I’m just speculating here.

            • stdRaichu
            • 5 years ago

            Magnetic domains do eventually degrade, although usually not enough over the lifetime of a drive to render drives or sectors unreadable.

            And I don’t know if it’s a problem with modern drives, but certainly with some older ones there were tales of the oxide eventually coming off the platters due to friction with the air; there’s an urban legend about a NetWare box that ran unattended for so long that the platters had been scraped clean and the box had been running from memory ever since. I have actually seen platters scraped clean myself, but that was always due to a head crash rather than friction. Have a look at the Wikipedia entry for “stiction” to see some examples of the problem: [url<]http://en.wikipedia.org/wiki/Stiction#Hard_disk_drives[/url<]

            Wear and tear on the bearings and actuators is a very real and much more common problem, though.

            • jihadjoe
            • 5 years ago

            My guess is it’s either the mechanism that moves the heads around or the motor that spins the platters that will give out first.

            • just brew it!
            • 5 years ago

            The spindle motor doesn’t have any additional stress put on it by lots of I/O. My bet is on the head actuator.

        • R2P2
        • 5 years ago

        The huge difference in write speeds would probably make adding mechanical drives tricky. They’d take 2 or 3 times as long to write everything, and the test isn’t exactly going quickly with the SSDs.

        • ronch
        • 5 years ago

        I bet the mech hard drives would still be at around 100TB by the time the SSDs have reached 700TB, that is, if they even live long enough to reach 100TB given their mechanical nature.

      • travbrad
      • 5 years ago

      The problem with testing mechanical hard drives is they tend to fail more randomly, rather than gradually wearing out the way SSDs do with their flash memory, so you would really need to have several HDDs to even begin to get an accurate picture.

        • UnfriendlyFire
        • 5 years ago

        Wasn’t there a graph of HDD failures showing most of them clustered in the first three years of operation and then again after 7-10 years?

          • mnemonick
          • 5 years ago

          The guys at [b<]Backblaze[/b<] online backup have posted some interesting mechanical drive data: [url=http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/<]Backblaze blog[/url<]. Be sure to check the earlier posts listed there, they've got even more charts and graphs! 🙂 [sub<] Edit: fixed url code[/sub<]

        • Deanjo
        • 5 years ago

        [quote<]The problem with testing mechanical hard drives is they tend to fail more randomly, rather than gradually wearing out the way SSDs do with their flash memory,[/quote<]
        Every SSD that has failed on me has been a random failure, no different from a mechanical drive. In fact, I've had far more luck retrieving data off a mechanical drive, whereas SSDs just "disappear" too often without any warning. None of my SSD failures have been due to write limits, either.

          • travbrad
          • 5 years ago

          OCZ?

      • Meadows
      • 5 years ago

      Hold on, 700+ [b<][i<]PB[/i<][/b<]?

      • magila
      • 5 years ago

      It doesn’t really make sense to talk about HDD longevity in terms of bytes written because write endurance is not the limiting factor for an HDD’s life. If nothing else the spindle bearing will seize before write endurance starts to become an issue, and the head assembly is likely to wear out before that.

      For what it’s worth, the typical design life of a consumer HDD is 5 years which is enough to write several PB if you stick to mostly pure sequential workloads.

      • Captain Ned
      • 5 years ago

      According to the speed charts in this review:

      [url<]https://techreport.com/review/22794/western-digital-velociraptor-1tb-hard-drive/5[/url<]

      A 1TB Velociraptor would reach 1PB in 123 to 246 days, while a 2TB Caviar Black would need 183 to 578 days. All those numbers are exclusive of the time needed to erase the drives so they could be rewritten, so add a large bugger factor to those numbers.
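
      To see roughly where day counts like those come from, here is a small sketch that converts a sustained write rate into time-to-1PB (the rates below are placeholders, not figures taken from the linked review, and time spent erasing the drive for rewrites is ignored, as the comment notes):

      # Days needed to write 1PB at a given sustained rate. Decimal units;
      # erase/rewrite overhead is ignored and the rates are placeholders.
      def days_to_petabyte(mb_per_second):
          petabyte_mb = 1e9                           # 1PB = 1e9 MB
          return petabyte_mb / mb_per_second / 86400  # 86,400 seconds per day

      for rate in (50, 100, 200):
          print(f"{rate} MB/s: {days_to_petabyte(rate):.0f} days")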

    • deeppow
    • 5 years ago

    Statistically, a sample of 1 tells you nothing about the variance. Still, your take-home conclusion is a very useful one: far more capability and life than we (the average user) can use.

    Good job!

      • travbrad
      • 5 years ago

      Yeah, I wouldn’t necessarily conclude that x drive will always fail before y drive, just because 1 of them did in these tests. I don’t think that was ever the purpose of these tests either. It was more to get a general picture of how many writes SSDs can withstand. As it turns out, they can withstand more than 99% of people will ever throw at them.

    • Sargent Duck
    • 5 years ago

    I was starting to wonder when we’d see an update to this.

    Good way to start a Monday morning!

      • Pez
      • 5 years ago

      Indeed 🙂 A great update, Geoff; thanks for your continuing effort. Nothing like this on any other tech site!
