The SSD Endurance Experiment: They’re all dead

I never thought this whole tech journalism gig would turn me into a mass murderer. Yet here I am, with the blood of six SSDs on my hands, and that’s not even the half of it. You see, these were not crimes of passion or rage, nor were they products of accident. More than 18 months ago, I vowed to push all six drives to their bitter ends. I didn’t do so in the name of god or country or even self-defense, either. I did it just to watch them die.

Technically, I’m also a torturer—or at least an enhanced interrogator. Instead of offering a quick and painless death, I slowly squeezed out every last drop of life with a relentless stream of writes far more demanding than anything the SSDs would face in a typical PC. To make matters worse, I exploited their suffering by chronicling the entire process online.

Today, that story draws to a close with the final chapter in the SSD Endurance Experiment. The last two survivors met their doom on the road to 2.5PB, joining four fallen comrades who expired earlier. It’s time to honor the dead and reflect on what we’ve learned from all the carnage.

Experiment with intent to kill

Before we get to the end, we have to start at the beginning. If you’re unfamiliar with the experiment, this introductory article provides a comprehensive look at our test systems and methods. We’ll offer only a quick run-down of the details here.

Our solid-state death march was designed to test the limited write tolerance inherent to all NAND flash memory. This breed of non-volatile storage retains data by trapping electrons inside of nanoscale memory cells. A process called tunneling is used to move electrons in and out of the cells, but the back-and-forth traffic erodes the physical structure of the cell, leading to breaches that can render it useless.

Electrons also get stuck in the cell wall, where their associated negative charges complicate the process of reading and writing data. This accumulation of stray electrons eventually compromises the cell’s ability to retain data reliably—and to access it quickly. Three-bit TLC NAND differentiates between more values within the cell’s possible voltage range, making it more sensitive to electron build-up than two-bit MLC NAND.
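To get a feel for why that extra bit matters, consider how many distinct charge levels each cell type must tell apart within the same fixed voltage window. Here’s a minimal sketch; the window figure is an arbitrary illustration, not a real device spec:

```python
# Each extra bit per cell doubles the number of charge states that must fit
# inside the same voltage window, shrinking the margin between adjacent
# states and leaving less room for error from stray trapped electrons.
VOLTAGE_WINDOW = 6.0  # volts; illustrative figure, not a real device spec

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    states = 2 ** bits
    margin = VOLTAGE_WINDOW / states
    print(f"{name}: {states} charge states, ~{margin:.2f}V between levels")
```

Halving the margin between levels is why TLC feels the effects of trapped electrons sooner than MLC does.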


Watch our discussion of the SSD Endurance Experiment on the TR Podcast

Even with wear-leveling algorithms spreading writes evenly across the flash, all cells will eventually fail or become unfit for duty. When that happens, they’re retired and replaced with flash allocated from the SSD’s overprovisioned area. This spare NAND ensures that the drive’s user-accessible capacity is unaffected by the war of attrition ravaging its cells.
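A toy model of that bookkeeping illustrates the idea. All the numbers here are invented, and real firmware is vastly more sophisticated, but the retire-and-replace mechanic is the same:

```python
import random

# Toy model: a drive with user-visible blocks plus an overprovisioned spare
# pool. Worn-out blocks are retired and replaced from the spares, so the
# user-visible capacity holds steady until the spares run out.
USER_BLOCKS, SPARE_BLOCKS, PE_LIMIT = 1000, 70, 3000  # invented figures

active = [0] * USER_BLOCKS   # per-block program/erase counts
spares = SPARE_BLOCKS
writes = 0

while True:
    i = random.randrange(len(active))  # ideal wear-leveling: uniform spread
    active[i] += 1
    writes += 1
    if active[i] >= PE_LIMIT:          # block worn out: retire and replace
        if spares == 0:
            break                      # reserves exhausted; drive is done
        active[i] = 0                  # a fresh spare takes over this slot
        spares -= 1

print(f"Drive survived {writes} block writes before exhausting its spares")
```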

The casualties will eventually exceed the drive’s ability to compensate, leaving unanswered questions. How many writes does it take? What happens to your data at the end? Do SSDs lose any performance or reliability as the writes pile up?

This experiment sought to find out by writing a near-constant stream of data to six drives: Corsair’s Neutron GTX 240GB, Intel’s 335 Series 240GB, a pair of Kingston’s HyperX 3K 240GB, Samsung’s 840 Series 250GB, and Samsung’s 840 Pro 256GB.

The first lesson came quickly. All of the drives surpassed their official endurance specifications by writing hundreds of terabytes without issue. Delivering on the manufacturer-guaranteed write tolerance wouldn’t normally be cause for celebration, but the scale makes this achievement important. Most PC users, myself included, write no more than a few terabytes per year. Even 100TB is far more endurance than the typical consumer needs.
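The arithmetic behind that claim is simple. A quick sketch, assuming a fairly heavy 2TB of host writes per year (my own tally is lower):

```python
# Years of service implied by a given write endurance at an assumed
# (fairly heavy) consumer rate of 2TB of host writes per year.
WRITES_PER_YEAR_TB = 2

for endurance_tb in (100, 300, 700, 2400):
    years = endurance_tb / WRITES_PER_YEAR_TB
    print(f"{endurance_tb:>5}TB of endurance ~= {years:.0f} years of use")
```

Even the first drive to show cracks in this experiment would, by that math, outlive any PC it was installed in.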

Clear evidence of flash wear appeared after 200TB of writes, when the Samsung 840 Series started logging reallocated sectors. As the only TLC candidate in the bunch, this drive was expected to show the first cracks. The 840 Series didn’t encounter actual problems until 300TB, when it failed a hash check during the setup for an unpowered data retention test. The drive went on to pass that test and continue writing, but it recorded a rash of uncorrectable errors around the same time. Uncorrectable errors can compromise data integrity and system stability, so we recommend taking drives out of service the moment they appear.

After receiving a black mark on its permanent record, the 840 Series sailed smoothly up to 800TB. But it suffered another spate of uncorrectable errors on the way to 900TB, and it died without warning before reaching a petabyte. Although the 840 Series had retired thousands of flash blocks up until that point, the SMART attributes suggested plenty of reserves remained. The drive may have been brought down by a sudden surge of flash failures too severe to counteract. In any case, the final blow was fatal; our attempts to recover data from the drive failed.

Few expected a TLC SSD to last that long—and fewer still would have bet on it outlasting two MLC-based drives. Intel’s 335 Series failed much earlier, though to be fair, it pulled the trigger itself. The drive’s media wear indicator ran out shortly after 700TB, signaling that the NAND’s write tolerance had been exceeded. Intel doesn’t have confidence in the drive at that point, so the 335 Series is designed to shift into read-only mode and then to brick itself when the power is cycled. Despite suffering just one reallocated sector, our sample dutifully followed the script. Data was accessible until a reboot prompted the drive to swallow its virtual cyanide pill.

The reaper came for the Kingston HyperX 3K next. As with the 335 Series, the SMART data’s declining life indicator foretold the drive’s death and triggered messages warning that the end was nigh. The flash held up nicely through 600TB, but it suffered a boatload of failures and reallocated sectors leading up to 728TB, after which it refused to write. At least the data was still accessible at the end. The HyperX didn’t respond after a reboot, though. Kingston tells us the drive won’t boot if its NAND reserve has been exhausted.

The next failure occurred after the 840 Series bit the dust. Corsair’s Neutron GTX was practically flawless through 1.1PB—that’s petabytes—but it posted thousands of reallocated sectors and produced numerous warning messages over the following 100TB. The drive was still functional after 1.2PB of writes, and its SMART attributes suggested adequate flash remained in reserve. However, the Neutron failed to answer the bell after a subsequent reboot. As with the other corpses, the drive wasn’t even detected, nixing any possibility of easy data recovery.

And then came the calm. The remaining two SSDs carried on past the 2PB threshold before meeting their ultimate ends. On the next page, we’ll examine their last moments in greater detail.

The final casualties

The next victim totally had it coming, but it still deserves our respect. Bow your head in a moment of silence for the second HyperX 3K.

SandForce-based SSDs like the HyperX (and the Intel 335 Series) use write compression to shrink the flash footprint of incoming data. To prevent this feature from tainting the results of the experiment, we tested the drives with incompressible data. We also hammered a second, identical HyperX with compressible data that would cooperate with SandForce’s special sauce. This twin was fed a diet of 46% incompressible data from Anvil’s Storage Utilities, the application used to accumulate writes and test performance.

From the very beginning, the second HyperX’s compressible payload measurably reduced the volume of writes committed to the NAND. The following plot shows the host and compressed writes accumulated by both HyperX drives. Host writes denote data written by the system, while compressed writes represent the corresponding impact on the flash.

The incompressible HyperX wrote slightly more data to the flash than it received from the host, an expected result given the low write amplification of our sequential workload. Meanwhile, its compressible twin wrote 28% less to the NAND.
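Expressed as a write amplification factor (flash writes divided by host writes), the two drives land on opposite sides of 1.0. Here’s a sketch with illustrative totals; the host figure loosely mirrors the first HyperX’s 728TB, and the flash figures are assumptions built from the percentages above:

```python
# Write amplification factor (WAF) = flash writes / host writes.
# Above 1.0, the drive writes more to the NAND than it receives from the
# host; SandForce compression can push the ratio below 1.0.
def waf(flash_tb, host_tb):
    return flash_tb / host_tb

host_tb = 728  # host writes; loosely mirrors the first HyperX's total
print(f"incompressible twin: WAF ~{waf(host_tb * 1.05, host_tb):.2f}")  # slightly above 1.0 (assumed 5%)
print(f"compressible twin:   WAF ~{waf(host_tb * 0.72, host_tb):.2f}")  # 28% less flash than host writes
```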

As the graph illustrates, the compressible HyperX didn’t hit the same volume of flash writes that killed its sibling until around 1.1PB. The drive evidently wasn’t ready to go quietly into the night, either. It went on to write another freaking petabyte before failing. To get a sense of how far the drive exceeded its life expectancy, check the next plot of the life-remaining attribute:

The life attribute takes compression into account, so it’s clear this HyperX survived on more than just SandForce mojo. The low number of reallocated sectors suggests that the NAND deserves much of the credit. Like all semiconductors, flash memory chips produced by the same process—and even cut from the same wafer—can have slightly different characteristics. Just like some CPUs are particularly comfortable at higher clock speeds and voltages, some NAND is especially resistant to write-induced wear.

The second HyperX got lucky, in other words.

It also didn’t lead a perfect life. On the leg between 900TB and 1PB, the HyperX logged a couple of uncorrectable errors along with its first reallocated sectors. Even two uncorrectable errors are too many, so the HyperX carried on with the same asterisk attached to it that the 840 Series earned after its own brush with the problem. Not counting correctable program and erase failures, the drive was error-free after that.

The HyperX is designed to keep writing until its flash reserves run out, which seems to be what happened with the first drive. The circumstances surrounding the second’s death are obscured by a power outage that struck after 2.1PB of writes. This interruption occurred over the Christmas holidays, while I was away from the lab. The machine booted without issue when I returned, but it hard-locked as soon as I tried to access the HyperX, and the drive wasn’t detected after a subsequent reboot. Attempts to recover data and SMART stats also failed.

With the data available, it’s impossible to tell whether the outage precipitated the failure or occurred after it. To the HyperX’s credit, messages warning of impending failure started appearing after the life attribute flattened out, long before the drive’s eventual demise.

And so the Samsung 840 Pro soldiered on as the last SSD standing.

The 840 Pro was among the most well-behaved drives in the experiment. It remained free of uncorrectable errors until the very end, and it accumulated reallocated sectors at a surprisingly consistent rate.

Reallocated sectors started appearing in volume after 600TB of writes. Through 2.4PB, the Pro racked up over 7000 reallocated sectors totaling 10.7GB of flash. Samsung’s Magician utility gave the drive a clean bill of health, though, and the used-block counter showed ample reserves to push past 2.5PB:
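Incidentally, the capacity figure follows directly from Samsung’s reallocation unit. As discussed in the comments below, these drives retire flash in 1.5MB “sectors,” so the math is a straight multiplication (the exact sector count here is an assumption consistent with “over 7000”):

```python
# Retired flash capacity from a reallocated-sector count, assuming the
# 1.5MB reallocation unit Samsung uses on these drives.
SECTOR_MB = 1.5
sectors = 7300   # assumed exact count, consistent with "over 7000"

retired_gb = sectors * SECTOR_MB / 1024
print(f"{sectors} reallocated sectors ~= {retired_gb:.1f}GB of retired flash")
```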

As I prepared to leave the drive unattended during a week-long vacation at the end of February, I thought, “What could possibly go wrong?” Famous last words.

When I logged into the endurance test rig upon returning last week, Anvil’s Storage Utilities was unresponsive, as was HD Sentinel, the program used to pull SMART data from the drives. The interfaces for both applications were blank, and Windows Explorer crashed when I tried to access the 840 Pro. Then a message from Intel’s SATA drivers appeared to say that the drive was no longer connected to the system. The 840 Pro took its last gasp in my arms—or, rather, at my fingertips—and it’s been completely unresponsive.

As with the demise of Samsung’s TLC-based 840 Series, death struck without warning or mercy. A sudden burst of flash failures may have been responsible.

Before moving on to the performance analysis on the next page, I should note that the 840 Pro exhibited a curious inflation of writes associated with the power outage after 2.1PB. The SMART attributes indicate an extra 38TB of host writes during that period, yet Anvil’s logs contain no evidence of the additional writes. Weird. Maybe the SMART counter tripped up when the power cut out unexpectedly.

Performance

We benchmarked all the SSDs before we began our endurance experiment, and we’ve gathered more performance data after every 100TB of writes since. It’s important to note that these tests are far from exhaustive. Our in-depth SSD reviews are a much better resource for comparative performance data. What we’re looking for here is how each SSD’s benchmark scores change as the writes add up.

Apart from a few hiccups, all the SSDs performed consistently as the experiment progressed. That said, the Neutron GTX stumbled in the sequential read speed test near the end of its life. The 840 Pro’s propensity to post slightly lower sequential write speeds increased as the experiment wore on, as well. Even though flash wear doesn’t appear to have a clear impact on SSD performance, the data suggest that drives can become more prone to stumbling as writes accumulate.

Unlike our first batch of results, which was obtained on the same system after secure-erasing each drive, the next set comes from the endurance test itself. Anvil’s utility lets us calculate the write speed of each loop that loads the drives with random data. This test runs simultaneously on six drives split between two separate systems (and between 3Gbps SATA ports for the HyperX drives and 6Gbps ones for the others), so the results aren’t useful for apples-to-apples comparisons. However, they do provide a long-term look at how each drive handles this particular write workload.
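The per-loop figure is just the data written in a loop divided by the wall-clock time it took. A trivial sketch, with a hypothetical loop size and duration:

```python
# Average write speed for one endurance loop: bytes written over elapsed
# time. The loop size and duration below are hypothetical examples.
loop_bytes = 190 * 1024**3    # ~190GiB of random data per loop (assumed)
elapsed_s = 20 * 60           # 20 minutes of wall-clock time (assumed)

mb_per_s = loop_bytes / elapsed_s / 1024**2
print(f"loop write speed: ~{mb_per_s:.0f}MB/s")
```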

Samsung’s 840 Series slowed a little at the beginning and again gradually toward the end. The Intel 335 Series and the first HyperX also experienced small speed drops in their final hours, but those declines are nothing compared to the steep plunge suffered by the Neutron GTX. The fact that the Corsair SSD had been getting faster over time makes its final nosedive even more striking.

There’s no evidence that the second HyperX so much as skipped a beat. The regular spikes for that drive (and some of the others) are an artifact of the secure erase we performed every 100TB.

Similar surges are evident on the 840 Pro’s plot, where the peaks get shorter with additional writes. This drive exhibited a lot of run-to-run variance from the very beginning. The only break from that behavior is the band of narrower oscillation toward the end, which corresponds to the post-power-outage period leading up to 2.2PB. For the most part, at least, the 840 Pro was consistently inconsistent.

In the end, there can be none

The SSD Endurance Experiment represents the longest test TR has ever conducted. It’s been a lot of work, but the results have also been gratifying. Over the past 18 months, we’ve watched modern SSDs easily write far more data than most consumers will ever need. Errors didn’t strike the Samsung 840 Series until after 300TB of writes, and it took over 700TB to induce the first failures. The fact that the 840 Pro exceeded 2.4PB is nothing short of amazing, even if that achievement is also kind of academic.

Obviously, the limited sample size precludes drawing definitive conclusions about the durability and reliability of the individual drives. The second HyperX’s against-all-odds campaign past 2PB demonstrates that some SSDs are simply tougher than others. The important takeaway is that all of the drives wrote hundreds of terabytes without any problems. Their collective endurance is a meaningful result.

The Corsair, Intel, and Kingston SSDs all issued SMART warnings before their deaths, giving users plenty of time to preserve their data. The HyperX’s warnings ended up being particularly premature, but that’s better than no warning at all. Samsung’s own software pronounced the 840 Series and 840 Pro to be in good health before their respective deaths. Worryingly, the 840 Series’ uncorrectable errors didn’t change that cheery assessment.

If you write a lot of data, keep an eye out for warning messages, because SSDs don’t always fail gracefully. Among the ones we tested, only the Intel 335 Series and first HyperX remained accessible at the end. Even those bricked themselves after a reboot. The others were immediately unresponsive, possibly because they were overwhelmed by incoming writes before attempted resuscitation.

Also, watch for bursts of reallocated sectors. The steady burn rates of the 840 Series and 840 Pro show that SSDs can live long and productive lives even as they sustain mounting flash failures. However, sudden massacres that deviate from the drive’s established pattern may hint at impending death, as they did for the Neutron GTX and the first HyperX.
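If you’d rather automate that vigilance than eyeball SMART dumps, the reallocated-sector count is attribute 5 (Reallocated_Sector_Ct) in most tools. Here’s a minimal sketch built on smartmontools’ smartctl; the device path and alert threshold are assumptions, and real monitoring should persist the baseline between runs:

```python
import subprocess

# Minimal SMART watcher: flag a sudden jump in reallocated sectors.
# Requires smartmontools; device path and threshold are example values.
DEVICE = "/dev/sda"
ALERT_JUMP = 50          # sectors; an arbitrary "burst" threshold

def reallocated_count(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])   # raw value is the last column
    return None  # attribute not reported by this drive

baseline = 0             # in practice, load the last run's count from disk
current = reallocated_count(DEVICE)
if current is not None and current - baseline >= ALERT_JUMP:
    print(f"warning: reallocated sectors jumped by {current - baseline}")
```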

Given everything we’ve learned, it’s not really appropriate to end the experiment by crowning an official winner. But the 840 Pro wrote the most data, so it deserves to take center stage as the final curtain closes. It asked to perform a rendition of Gloria Gaynor’s “I Will Survive,” and I couldn’t say no.

At first I was pristine

Untouched and unwritten

And completely unaware

Of the life that I’d be livin’

But then you locked me in this case

A spectacle for all to see

And in that moment I resolved

Not to let you get to me

First we were six

All SSDs

Just lab rats in the crosshairs

Trying to cope with this disease

Electrons tunnel through our cells

With every write we slowly bleed

But all you seem to care about

Is the specs that we exceed

Go on, now watch

The gigs add up

An endless stream of files

Just to see if we’ll get stuck

As one by one my friends around me slowly fall

Do you think I’ll follow

Do you think I’m gonna hit the wall

Oh no, not I

I will survive

As long as I know how to write

I know I’ll stay alive

I’ve got all my cells to give

And a persistent will to live

I will survive

I will survive, yeah yeah

Thousands of cells retired

All just to keep me whole

And still spares in reserve

So I don’t lose control

I outlasted all my rivals

In this endurance race to death

And I won’t lie

It puts a twinkle in my eye

And now you see me as somebody new

I’m not that chaste, naive virgin

Tryna prove something to you

’cause I took all of your best shots

Without a single error shown

You know I’ve written way more data

Than all of you have ever known

Go on, now watch

The gigs pile up

More senseless random files

Just to see if I’ll get stuck

SMART money says that I’ve got miles in the tank

I ain’t gonna stop now

And you can take that to the bank

Oh no, not I

I will survive

As long as I know how to wri

This whole “new media” business demands that I ask you to follow me on Twitter.

Comments closed
    • mikato
    • 5 years ago

    You should send these off to a data recovery company to see how much data they can get just for the hell of it – that’s as long as these don’t encrypt all data by default with a key known only to a controller which no longer functions. I may know a volunteer.

    • MasterRanger
    • 5 years ago

    After all that, you didn’t even pick a winner? Weak. If you’re not picking a winner, don’t test, IMNSHO

    • Dr_b_
    • 5 years ago

    Can you do this same test with hard drives or have another round with current gen SSDs and a set of HD’s as a control?

    • jstern
    • 5 years ago

    Can you guys do the same test, but with mechanical drives?

    • land shark
    • 5 years ago

    Would any of the toolkits provided by the manufacturers, such as Intel’s Toolbox, Samsung Magician, or Corsair’s SSD Toolbox, have indicated that a failure is imminent? Kingston’s toolbox is very primitive, so I would not rely on it.

    • Tjalve
    • 5 years ago

    Wonderful conclusion to a fantastic experiment.

    If anyone is interested, I have started a similar experiment over at NordicHardware, testing 3 TLC-based drives to see how long they live (840 Evo, 850 Evo, and SanDisk Ultra II).
    You can follow it in real time here: [url<]http://www.nordichardware.se/SSD-Recensioner/vi-testar-livslaengden-pa-ssd-enheter-med-tlc-minne-foelj-vart-loepande-ssd-doedartest.html[/url<]

    Data is updated once every hour, and I will update the page as soon as something happens. I’m particularly interested to see how well Samsung’s V-NAND TLC stands up against their planar NAND. Only time will tell, though. The workload is a trace-based workload from a standard work laptop, and the “years in use” figure is based on calculations from that trace, which say that a standard user averages about 14GB of writes per day.

    • Wild Thing
    • 5 years ago

    Great article there Geoff and a satisfactory end point to that long test project.
    I liked how you continued the theme of giving the drives a “human face”, each noble struggle and then death somehow inviting empathy.
    Congrats to TR on this one.
    +2

    • wye
    • 5 years ago

    The analogy to real-life beings was fun the first thousand times. Then it was just sad.
    You pushed the theme too much.

    Poetry? Just retarded.

    There was nothing glorious or even unusual in what you did. There are tons of endurance articles on the web, and they are done more professionally and without the endless self-patting on the back.

      • MOSFET
      • 5 years ago

      [quote<]Then it was just sad.[/quote<]
      Please, do submit your alternative concluding article.

      [quote<]and they are done more professionally[/quote<]
      I must have missed those.

      [quote<]and without the endless self-patting on the back.[/quote<]
      And I sure missed that when reading the concluding article. I really don’t mean to be antagonistic, but I have a very high degree of respect for the journalists here.

    • Ph.D
    • 5 years ago

    Glad to see all SSDs did so well and thank you Techreport for taking the time and effort to do this extensive test.

    • vargis14
    • 5 years ago

    Bravo!! Another very interesting and informative test from TR.

    Any plans on testing the newer drives the same way, with the smaller-nm chips like Toshiba’s 18nm?
    I promise it’s not because that’s what’s in my AMD/OCZ R7 240GB drive with an old Barefoot controller that still does 500+MB/s reads 🙂 Also, I would love to see OCZ’s old bad name get stomped out.

    • conjurer
    • 5 years ago

    Take the “D” out of the article title, and you get an entirely different meaning.

      • Melvar
      • 5 years ago

      Change one ‘H’ to a ‘T’, and you’re talking about the meaning of otters.

    • SparkySamza
    • 5 years ago

    All this time I’ve been saying that Samsung SSDs are the best, and now Tech Report has proved my point. Thank you, boys, for this fantastic report on SSDs

    • hofct
    • 5 years ago

    Just wondering about the freaking [u<]2.4PB[/u<] for the [u<]Samsung 840 Pro[/u<]: are they talking about [b<]Host Writes[/b<] or [b<]NAND Writes[/b<]? It’s a huge difference…

    • bLaNG
    • 5 years ago

    Thx for the great test. And the witty write-up! This is the reason I still come to TechReport, even if I am not a real PC enthusiast anymore. Cheers from Germany!

    • TnF
    • 5 years ago

    No SanDisk? You suck Samsung fanboys

    • gamoniac
    • 5 years ago

    Is this now the most linked TR article, even more so than the frame-pacing issue discovered by TR? I see it [i<]everywhere[/i<], from Extremetech to Ars. [Edit] Typo.

    • Shouefref
    • 5 years ago

    ‘The 840 Series didn’t encounter actual problems until 300TB,’
    ->
    Do you mean the 840 Series would function for 150 years in a normal PC?

    ‘the 840 Series sailed smoothly up to 800TB’
    -> I don’t understand why you went on using it after the black mark at 300TB. Was there a problem at 300TB or was there not a problem at 300TB?
    If you mention ‘it recorded a rash of uncorrectable errors around the same time.’, I think that means it has become unusable.

    That also seems to mean that Intel’s 335 Series did not fail ‘much earlier’, but actually much later, at 700TB (350 years of use!).

    ‘Corsair’s Neutron GTX was practically flawless through 1.1PB’
    ->
    but what does ‘practically flawless’ mean?

    ‘The circumstances surrounding the second’s death are obscured by a power outage that struck after 2.1PB of writes. (…) The machine (…) hard-locked as soon as I tried to access the HyperX,’
    ->
    This actually means that the HyperX lived longer than your mains power supply!

    I guess I’ll order an 840 Pro 256GB.

      • epicmadness
      • 5 years ago

      he meant that those SSDs were still going strong (i.e. didn’t turn into bricks) even while suffering uncorrectable errors.
      but i agree, using “flawless” when it’s suffering errors is ironic.

      what sort of “years” are you referring to?
      ( 1MB/s * 60sec * 60min ) / 1024MB = ~3.5GB/Hr
      ( 3.5GB/Hr * 10Hr * 365days ) / 1024GB = ~12.5TB/Yr
      and this is just the tip of normal write use.

    • WaltC
    • 5 years ago

    Has TR ever done such a reliability/longevity test with platter drives? I can’t recall, and I can’t find a link for it anywhere in my “link library” (which doesn’t mean much except to me…;)) But looking at these incredible numbers…geeze, PBs!…it seems like these things should outlast mechanical drives easily…

      • moose17145
      • 5 years ago

      I have found that the length of time a mechanical drive is in service affects its reliability/probability of failure more than the amount of data it has written.

        • WaltC
        • 5 years ago

        On balance I’d agree with this…but in thinking on it…out of the dozens of hard drives I’ve used over the last few decades, I’ve only had about 2-3 of them to fail–to actually grok up and seize (If you’ve had a drive do that then you know just what I mean.) But then, the problem really is that every 2-3 years I’m buying new drives and either keeping the older ones (or giving them to my son…;)) –so probably I’ve been using them less in the sense of data writes–of course, Windows is always doing stuff in the background–so I’m really not sure of how much I’m writing to them even when I think of them as just sitting sort of quiescent…;) A louder Seagate I still have as my boot drive can be heard chugging away when I’m not strictly using it…so I guess it’s either some background task like Defender running a quick-scan or else some background OS-spawned drive defragmenting during idling…etc.

        It is very interesting that whereas mechanical drives are measured in “mean-time between failure hours” these SSDs are measured in data writes…seems to back your suspicions of longevity of use being more important to the life of platter drives…

          • moose17145
          • 5 years ago

          Well the underlying tech of the two are so different that you really can’t even compare the two.

          For sake of easier argument lets use enterprise drives that are *never* idle. They are spinning their platters 24×7. even if they are not writing.

          In that case its basically like a wheel bearing on a car. The physical spinning of the platter itself puts mechanical wear and tear on the drive. Much of the heat a mechanical drive generates is from friction.

          Even while writing, the actuator itself is taking mechanical wear from friction and vibrations from having to physically move about the surface of the platters.

          SSDs suffer ZERO wear and tear from things like mechanical motion and friction. The wear and tear they suffer from is from electrons getting caught up inside the NAND cells.

          VS a mechanical drive that is (essentially) just flipping magnetic iron filings around. You can flip iron filings an infinite number of times without them losing their magnetic properties.

      • AJSB
      • 5 years ago

      As for amount of DATA written or read to/from a HDD, the limit is virtually infinite.

      What kills a HDD is:

      1) Excessive level of vibrations.
      2) Excessive temperatures (before killing the HDD, too-high temps can erase DATA with current tech).
      3) Total of load/unload cycles.

      Notebook HDDs usually are much more tolerant of vibrations, but it’s always good to install them with rubber dampers if possible. Still, avoid dropping a HDD 😉
      Mind you, as opposed to what many think, SSDs also CAN be destroyed if you drop them; you might notice that SSD OEMs also indicate vibration limits. Under very high vibration or a strong impact, a solder joint can break, causing problems in a SSD, but this is rare compared w/ HDDs.

      Excessive temperatures are easy to monitor and neutralize.

      Excess of Load/Unload cycles is one of the most dangerous enemies of HDDs, IT CAN DESTROY A HDD IN LESS THAN A YEAR. To avoid it:

      1) Disable ALL power saving options for the HDD in OS and/or BIOS.
      2) You might also want to use a tool for WD (ONLY) drives to tune this. There is another tool for all other drives, but the setting seems to go back to normal after a reboot.
      3) NEVER buy “GREEN” HDDs.

        • WaltC
        • 5 years ago

        I’ve always avoided the “green” drives like the plague…;)

      • Krogoth
      • 5 years ago

      For older-generation HDDs, the data on the actual platters can theoretically last for centuries, barring exposure to strong magnetism, Curie temperatures, and chemical reactions (oxidation). The problem is that the mechanical portions last for a fraction of that, even if the drive is collecting dust, because the lubrication for the motor can dry up. The bits on newer generations of HDD platters are so small that it is difficult to read them in the event of emergency data recovery, and they are much more sensitive to magnetic influences.

      That’s why organizations who handle sensitive-to-classified data go to great lengths to destroy any data written onto HDDs.

      SSDs have a shorter long-term life-span because the cells start to “leak” even if the SSD has been idle for months. The problem gets worse as you shrink the cells on the SSD.

      The delicious irony is that nothing will beat information written onto stone tablets. The stuff can last for thousands of years. Any archeologist who studies ancient cultures can attest to this.

        • AJSB
        • 5 years ago

        Yeah but to write 1TB in stone will also take ages :p

        Back on topic: yes, as densities in HDDs increase, there is the issue of the magnetism being more susceptible to external magnetic fields, or even natural magnetic decay. HOWEVER:

        1) If you use the freeware Puran DiskFresh once every 3 months, you can be fully assured that magnetism is kept at an adequate level.

        2) External magnetic fields are not much of a concern nowadays, especially since the transition from CRTs to LCDs (which produce virtually no magnetic field)…hell, I have a 17″ CRT less than a foot away from a PC and I haven’t had any data loss (I do use DiskFresh, however).
        The only normal magnetic-field influence that could cause concern is big unshielded speakers, but PC speakers are usually shielded and should not be a problem.

        BTW, you talk about a decrease of the magnetic field preventing DATA recovery, but OTOH you talk about how hard it is, from a security standpoint, to truly erase a HDD.
        The truth is that no matter the decrease in magnetism, there is special software to recover DATA from very weak magnetic fields on a HDD, even for files that were deleted and whose sectors were written over with new DATA. This is why you should reformat (NOT Quick Format) a HDD 40-50 times to make sure you truly erase all DATA before recycling the drive or giving/selling it.

        As for HDD lubricant drying up, it can happen, but usually only on drives that are stored for a long, long time without being used. In practice, not a problem.
        If by any chance it happens to a HDD in use, there are warning signs that indicate the situation in time.

      • Freon
      • 5 years ago

      [url<]https://www.backblaze.com/blog/best-hard-drive/[/url<]

    • rutra80
    • 5 years ago

    I don’t know if it was discussed before, probably was, but why won’t these drives put themselves into read-only mode when bad things start to happen? It would be a great feature. There could be some low level way to switch it back into writeable mode for those who know what they’re doing, but otherwise it would be so much safer for a common consumer.
    In most cases when a HDD fails, it is very well possible to recover most of the data at home. With those bricked SSDs only a specialised lab would have a chance.

      • DarkMikaru
      • 5 years ago

      That is part of the problem. I think the Intel & Samsung drives were supposed to but didn’t. But agreed, if they shut themselves down, they need to make sure that our data is still retrievable. Or at the very least have a software solution like Samsung’s Magician that automatically makes an image or moves files automatically.

      • DrDominodog51
      • 5 years ago

      I think it was the repeated attempts to write to the SSD after it went into read-only mode that bricked them.

    • Razor512
    • 5 years ago

    My 120GB SanDisk 32nm drive is sitting at about 200TB, and my 256GB Samsung 850 Pro is sitting at 41TB (purchased in November 2014).

    If all you use your system for is reads, or making articles, or using Facebook, then you will not have many writes. If you are using your system to work with video in addition to managing a library of a bunch of large games, you will rack up writes pretty quickly.

      • DarkMikaru
      • 5 years ago

      Wow… cause I was just saying how my 830 is only now just under 4TB. But yeah, as a Scratch drive for editing or games drive, yeah… I can see that. Something to consider.

      • Visigoth
      • 5 years ago

      Sounds like you need to move up to 1 TB Samsung 850 Pro SSD’s…larger die area to work with, plus with the already higher-endurance of the 3D NAND, you should be good to go.

      • Ifalna
      • 5 years ago

      Total NAND writes 9292 / LBA written 7.5TB.
      After 4 years.
      I think my Intel 520 will outlast me at this rate. ._.

    • DevilsCanyonSoul
    • 5 years ago

    Thanks to everyone who put forth effort on this test bed.
    This sheds light on very important aspects on the future of storage.
    This study should allay any concerns people have about the durability of SSDs.

    Great work and very informative!
    TR is the cornerstone of tech coverage for me…

    • Kaotik
    • 5 years ago

    Am I just reading something wrong, or doesn’t the article state Kingston HyperX 3K bit the bullet after Intel 335, at 728TB? And then suddenly on the next page it’s still alive and kicking towards 2PB?

    edit: there’s 2 Kingstons in the test 😐

    • Jigar
    • 5 years ago

    I just want to know, did they scream before they died ? /sad face

    • FireGryphon
    • 5 years ago

    Jee-off, this is a great article! Well written, and hilarious. TR’s trademark.

    What’s the sense in an SSD bricking itself and locking your data inside? I’d think that when the drive reaches the end of its life it would remain readable as long as it possibly could. What does a reboot do that irreparably destroys it?

    • Misel
    • 5 years ago

    All good things come to an end.

    Thank you very much for this article series! 🙂

    • Pez
    • 5 years ago

    One of the best long-running articles on any tech site ever, you should be proud of the work and we appreciate the monumental effort!

    • chrisgeiser
    • 5 years ago

    You’ve got a second career if this one doesn’t work out.

    • fredsnotdead
    • 5 years ago

    Great article. Now, figure out how to get that song out of my head.

    • Buzzard44
    • 5 years ago

    Wow. Just wow. Don’t get me wrong – all the staff at the Tech Report does a superb job, but I believe this is the best writing on a tech site I’ve ever seen. And the “I will survive”? Mind blowing. Hilarious. Put a smile on my face.

    You may have lost your name, G-off/G-Funk, but you have gained prestige, honor, and style.

      • TrailBlazerDK
      • 5 years ago

      I so wanted it to be set to the Portal/Glados ‘Still alive’, that i tried to read it in that way.

    • UberGerbil
    • 5 years ago

    I didn’t realize how much I’m going to miss checking in on these poor little buggers.

    • modulusshift
    • 5 years ago

    I hadn’t read these before, I don’t think. I loved just how many unique pictures of these drives you ended up taking. That’s a lot of almost unnecessary work.

      • UberGerbil
      • 5 years ago

      “Almost”

      • Wirko
      • 5 years ago

      I’m still waiting for the terminal pic: six SSDs belly up.

        • SoM
        • 5 years ago

        with a frown upside down and X’s for eyes

    • UnfriendlyFire
    • 5 years ago

    So what will be the next SSD contestants? Maybe you guys should also throw in a 7200 RPM HDD as a comparison (in terms of how much it writes).

    EDIT: It would also be interesting to see a server-grade SSD get tested to determine how it fails.

    • HTWingNut
    • 5 years ago

    [url<]https://youtu.be/a5R_pS0h5Qk[/url<]

    • DragonDaddyBear
    • 5 years ago

    I think Geoff should be required to sing that song at the next TT BBQ.

    • zzz
    • 5 years ago

    This was awesome and surely tied up some of TR’s resources for perhaps longer than expected. The results are based on a small sample size but the fact that all of them outlasted expected usage is something notable. I hope this is done again using modern SSDs

    • DarkMikaru
    • 5 years ago

    Thank you again guys for taking the time to perform this exhaustive study! Though I kinda thought the manufacturers were “sandbagging” us with these weak 3yr warranties I had no idea by how much. We just had to have faith right?

    As it stands, my Samsung 830 64GB SSD in my work rig is currently sitting @ 3.94TB after 2.5 years. My 840 in my home server is at a laughable 0.85TB! I guess the point is, from a reliability standpoint, MLC, TLC…. controller…. all moot. Just enjoy the speed my friends…enjoy the speed. 🙂

    • davidbowser
    • 5 years ago

    Thanks so much TR. Have to renew my subscription, just to support stuff like this.

    As an aside, alt-rock fans must now go listen to the cover of [url=https://youtu.be/7KJjVMqNIgA?list=PLXYv3vwBYZ0D9uR9fhLlsBxWjemqNzjvW<]"I Will Survive" by Cake.[/url<] I had to play it after reading the end of the article.

    • Sam125
    • 5 years ago

    Great review Geoff! My reservations about SSD durability have been mostly addressed. I guess stacked NAND SSDs like the 850 Pro will be my first solid state purchase.

    Could TR by any chance also do an endurance test on 3D NAND? Not that there should be any difference in durability but it’d be nice to read an empirical study on it.

    • CB5000
    • 5 years ago

    Now I wanna see a new test with a new generation of SSDs. A lot changed in just 18 months, really curious how well vNAND would fare.

      • LoneWolf15
      • 5 years ago

      I’d really like to see some Crucial SSDs. Larger market share, good controllers, decent features, and they make the NAND chips. I tend to buy them and would like to see some reliability stats.

        • Deanjo
        • 5 years ago

        Ya, I was kind of surprised by their absence in the test.

          • travbrad
          • 5 years ago

          IIRC the Crucial drives that were available when they started these tests didn’t record SMART info about how much data had been written. I’d be very surprised if their reliability was dissimilar to the rest of these drives though, and they all greatly exceeded what the average person would do with them.

            • Deanjo
            • 5 years ago

            Even M4’s had the F6 total-host-sector-writes attribute available in their SMART data, and those drives have been around since well before this test started.

            Number of sectors reported * 512 bytes gives you the total.

          • modulusshift
          • 5 years ago

          Last paragraph of this page:
          [url<]https://techreport.com/review/25681/the-ssd-endurance-experiment-testing-data-retention-at-300tb[/url<]

            • Deanjo
            • 5 years ago

            Ya, but it does have it. Just have to do a bit of math to get the figure.

            F6 number of sectors reported * 512 bytes gives you the total.

    • LoneWolf15
    • 5 years ago

    [i<]I did it just to watch them die.[/i<] And you're stuck in Folsom Prison, now?

    • albundy
    • 5 years ago

    out of curiosity, would the drives stand up better during the long term with low to moderate data writes over time vs blasting them with TB’s of data like you guys did? or are the memory cells prone to this level of damage no matter how long it takes?

    also, what will you be doing with the drives? can the manufacturers use them to reset the nand or improve on future nand? or are the drives just a door stop or a target at the gun range?

      • Dissonance
      • 5 years ago

      Interesting question. NAND is prone to write-induced wear regardless of the workload. I’m not sure if the frequency of writes has any bearing, though. Hmmm.

      The future of the drives hasn’t been settled, but I’d like to do something special with them. Stay tuned.

        • Ifalna
        • 5 years ago

        Looking at what happens at the semi conductor level, I seriously doubt that the frequency has much impact.

        • Farting Bob
        • 5 years ago

        I say you give them an honorable funeral, viking style. At the next TR BBQ set these fallen warriors sail on a wooden ship, then set it ablaze.

        Not environmentally friendly, but these drives deserve a memorable funeral.

          • Captain Ned
          • 5 years ago

          After 3 BBQs I’ve noted that the prevailing winds argue against this as the voyage would be short and would not end in the water.

          Now, if someone can find a way to hang them as a target at the waterline, I’d be more than happy to try to spud them into oblivion (and Lake Michigan).

            • JustAnEngineer
            • 5 years ago

            With a 2½” barrel and some good-sized spud sabots, you could launch ’em a fair distance.

      • willmore
      • 5 years ago

      This is a reasonable statement, as we do know that drives perform different types of garbage collection and other tasks when they find themselves idle. Running the test in such a way as to prevent this would give unrealistic results which could not be used to predict behavior under a more normal usage pattern.

    • odizzido
    • 5 years ago

    Why why why does the intel drive brick itself? What the hell?

      • odizzido
      • 5 years ago

      oh no, I figured it out. The drive comes from the Japanese branch of Intel. When it can no longer write data, it feels it has failed and commits seppuku. Makes sense now.

      • UberGerbil
      • 5 years ago

      It’s possible it tries to preserve some state across power cycles, and when the power-on code finds bad/garbled data it gives up. I suspect it’s rather difficult to reproduce and test the various failure modes near-death NAND might produce, but you’d think they could make the code fail more gracefully than this. It’s also something that’s going to tend to get prioritized off the schedule — do you work on fixing bugs most users will encounter, or on hard-to-simulate situations that only occur in the most extreme usage cases? Nevertheless, tests like this will hopefully prod at least some of the mfrs (who don’t have bigger issues to prioritize — talking to you, Samsung) to address this in future firmware (though maybe it’ll have to be addressed in the drive circuitry itself).

        • UnfriendlyFire
        • 5 years ago

        Or they decide to reduce the amount of over-provisioning after realizing that consumers are unlikely to wear out the SSDs before the warranty expires.

      • Waco
      • 5 years ago

      That’s so you don’t use them in a datacenter. The datacenter edition drives revert to read-only state, consumer drives brick.

      It’s annoying.

        • balanarahul
        • 5 years ago

        B****h Intel!! It should stay in read only state and not fucking brick itself!!

      • Prototyped
      • 5 years ago

      Despite common claims that suggest this is intentional, my understanding is that in actuality the firmware attempts to start up and read some state. Apparently as the NAND flash wears down, it takes longer and longer to read data off it (in terms of latency) — and at some point this latency exceeds a timeout built into the firmware. Firmware is written by the same sorts of people as OS drivers are, and so there is ample scope to FUBAR behavior in this situation — and result in the drive ending up in an unrecoverable, bricked state.

      Hanlon’s razor — never ascribe to malice that which can adequately be explained by stupidity.

      People also keep claiming that most SSDs are meant to go read-only when they exhaust program/erase cycles — clearly that is untrue given that the common failure mode is just to fail to enumerate. Far too many claims, far too little actual testing — present series of articles excepted of course.

    • SomeOtherGeek
    • 5 years ago

    Geoff, you are a cruel, cruel man! So, when are you going to do it again? 😉

    • wiak
    • 5 years ago

    I was planning on saying “are they dead yet?” 😀

    • MarkG509
    • 5 years ago

    Just for grins, see if you can get any of them replaced under warranty 🙂

      • Klimax
      • 5 years ago

      Hm, depending on the country, they might actually have to. Either the seller or the manufacturer. In extreme cases it could require a suit to actually resolve that.

    • wierdo
    • 5 years ago

    Great piece guys, nice to see what to expect from these drives in terms of write endurance. I’m still concerned about actual data retention, though; I wonder how long the cells would be able to retain the data after, say, 1PB of writes.

    Also curious: Did Samsung PR ever get back to you with an explanation as to why their drives didn’t gracefully report their end of life situation? It’s concerning, though with so many writes accomplished one would hope a normal user would never reach that point any time soon.

    Still… would be nice to have the peace of mind that you’re likely to have time to react and get a replacement before the data’s lost should the need arise.

    • WhatMeWorry
    • 5 years ago

    Fine. But is the experiment reproducible? Fire it up again!

    • Geonerd
    • 5 years ago

    An interesting post over at the XtremeSystems forum, discussing their SSD Endurance Test.

    [url<]http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm&p=5236109&viewfull=1#post5236109[/url<]

    “It came to a halt because a) it’s proven that under normal or even extreme workload you won’t wear out your ssd b)… c) there was a “design fault” with these tests. After a while and thousands of p/e cycles you could continue write data to the cells and keep wearing them out, but the cells couldn’t preserve the data if you keep them out of current, even for a few hours. So what was the point of the test after that.”

    Item C raises a VERY interesting point. If the drive won’t retain data w/o power, it’s useless. Future tests really should include a periodic shutdown for a day or two, followed by a full hash check upon reboot.

      • derFunkenstein
      • 5 years ago

      GEE-off has done powerless retention tests by sticking drives in the closet. So his tests aren’t really faulty.

      • continuum
      • 5 years ago

      IIRC they did do a few periodic halts to test unpowered retention– but yeah, obviously, a proper test of unpowered retention is not feasible. 😛

      • Dissonance
      • 5 years ago

      Unpowered retention tests were performed after 300TB, 600TB, 1PB, 1.5PB, and 2PB of writes. The durations varied, but the drives were left unplugged for at least a week each time.

        • Geonerd
        • 5 years ago

        Ah, OK. Thanks.

        That’s at least something, although the granularity is a tad oversized. +/- 250TB is a pretty big error bar. 😉

        Can SSD drives be hot mounted? If so, a software controllable power supply could switch the drives completely off for a day or two as they pass given thresholds, then fire them up, remount them, and keep testing, all w/o human intervention.

          • VincentHanna
          • 5 years ago

          That would make the test take months longer for no reason. If it can retain data across an entire drive for a week, it can retain data. If the checksums had indicated a reproducible problem with data retention, that problem would have been explored further. “drilling down” a dry well is irrational.

          • Klimax
          • 5 years ago

          Depending on SATA implementation (chipset/UEFI/mainboard/drivers) it is possible. X99 seems to offer this as I have it enabled on my Gigabyte mainboard and had used it successfully.

    • XTF
    • 5 years ago

    [quote<]Through 2.4PB, the Pro racked up over 7000 reallocated sectors totaling 10.7GB of flash.[/quote<]

    What’s a reallocated sector in this context? Is a sector 1.5MB? Does a reallocated sector mean the ‘sector’ is officially dead?

    The failure modes of these drives aren’t that nice. Ideally they’d fail to write new data, but reading existing data should NOT be a problem. Bricking after a reboot seems especially stupid.

      • meerkt
      • 5 years ago

      Yeah. Conclusions:

      1. SSDs can be written to plenty.
      2. SSDs have braindead or buggy firmware that at end of life bricks the drives instead of going read-only.

      • Melvar
      • 5 years ago

      It might not be intended behavior. It seems plausible to me that the drive could degrade to a state where it could keep running if it was already running, but would not pass self tests or some other boot requirement if it had to start up again.

        • meerkt
        • 5 years ago

        I.e., stupid code or buggy. 🙂

        All it needs is to be able to run the firmware. I suppose the firmware is stored either in the controller chip, or in a dedicated area of the flash that’s not part of the normal storage pool, so it doesn’t get rewritten much and stays fresh.

      • Dissonance
      • 5 years ago

      Yeah, the Samsung drives have a 1.5MB sector size.

      Reallocated means retired. That could indicate actual death or simply the sector exceeding specific health/performance parameters. The sector is effectively dead either way, though, since it’s taken out of service.

      FWIW, Intel’s server-grade SSDs are designed to stay in read-only mode after a reboot. That functionality just doesn’t trickle down to the company’s consumer drives.

        • meerkt
        • 5 years ago

        That’s odd logic from Intel. Any drive should strive to provide as much access to as much of the data as possible, but if anything, it’s less critical in the enterprise, because better usage/backup/replacement policies are followed there.

        It’s also in contrast to the JEDEC standard which calls for just 3 months of retention at EOL for Enterprise versus 12 months for Consumer. Yeah, they all exceeded their flash-rated P/E cycles (I think?), but the original intent should still apply.

    • Bensam123
    • 5 years ago

    Good stuff… I hope there will be a continuation with new models to add more perspective. ^^

    • Firestarter
    • 5 years ago

    I’m simultaneously looking forward to and dreading the day that somebody decides to actually sing those lyrics and put it on youtube

      • UberGerbil
      • 5 years ago

      It inspired me to throw [url=https://www.youtube.com/watch?v=7KJjVMqNIgA<]this[/url<] on

    • Welch
    • 5 years ago

    Bravo Geoff, the endurance tests have been one of my favorite tests to follow on TR probably ever. Great job on the song as well… so I’m thinking we may get a chance to hear you sing your version of it on the next TR Podcast right….. 🙂

      • the
      • 5 years ago

      I’ll second this.

    • m3mb3rsh1p
    • 5 years ago

    I find the violent theme of this article inappropriate and unnecessary, despite the significant accomplishment.

      • JJAP
      • 5 years ago

      Someone shoot this guy.

        • allreadydead
        • 5 years ago

        or better, make him live&post with that name.

        • JustAnEngineer
        • 5 years ago

        [url<]http://www.youtube.com/watch?v=f7zsdL4ueps&t=00m12s[/url<]

        • tks
        • 5 years ago
      • Peter.Parker
      • 5 years ago

      It amazes me that even with all the new discoveries in the field of medicine, they were able to make a cure for the lack of a sense of smell, but nothing was done for the sense of humor.

    • bjm
    • 5 years ago

    Hah! I told you those SSD things weren’t reliable!

    -Seagate

      • Prestige Worldwide
      • 5 years ago

      11/10, would upvote again (if I could).

      • divide_by_zero
      • 5 years ago

      Well done sir!

      You are the recipient of the rare d_b_0 three thumbs up!

      • donkeycrock
      • 5 years ago

      best comment ever!

      • UnfriendlyFire
      • 5 years ago

      Except the HDDs would have less than 10% of the writing, especially if never defragmented.

      • cmrcmk
      • 5 years ago

      Now I’m wishing that there had been a disk drive thrown in the mix just for a chuckle.
      “The last SSD has died at 2.4PB but our little HDD is finally over 100TB!”

      • auxy
      • 5 years ago

      Ehh? ;つー`)

      Seagate sells SSDs, you know? (´・ω・`)

      Why is this getting upvoted so much … (´・ω・)

        • derFunkenstein
        • 5 years ago

        because of how terrible their spinning drives are perceived. I laughed.

      • Krogoth
      • 5 years ago

      Cells in SSDs do leak after being idle for a long period of time (months to years).

      The problem gets worse as the smaller the cells get.

        • geekl33tgamer
        • 5 years ago

        No one cares when they can write more data than any Seagate disk! 😉

      • ClickClick5
      • 5 years ago

      Take my laugh of the day, and +3!

      • Takeshi7
      • 5 years ago

      I just did a back of a napkin calculation. Assuming an average of 200MB/s, an enterprise hard drive can write 31.5 Petabytes within its 5 year warranty period. That’s 15 times the write endurance of even the best of these SSDs.

        • ClickClick5
        • 5 years ago

        You missed the joke.

        • DPete27
        • 5 years ago

        I would guess that “duty cycle” has a much more profound effect on hdd’s than ssds….moving parts & all…

          • derFunkenstein
          • 5 years ago

          huh huh. you said “duty”

    • Shinare
    • 5 years ago

    I realize that this must have been a HUGE amount of effort to get to this end article. From running the tests to compiling the data and formatting it in easy to read graphs. Spot on! From this gerbil to you, thanks for that, mate.

    • ccipher
    • 5 years ago

    Is the Dota 2 reference intentional?

      • Dissonance
      • 5 years ago

      Nope! Just a little Jim Morrison and Highlander.

    • NeelyCam
    • 5 years ago

    I just loved this series. Thank you.

    • Ochadd
    • 5 years ago

    Great long term test. I wonder how many of the failures occurred as designed by the OEM.

    • tks
    • 5 years ago
      • NeelyCam
      • 5 years ago

      Yes; it would’ve been really cool to have a couple of HDDs running side-by-side these SSD athletes. Like one of the WD Red models and (lol) a 3GB Seagate

        • derFunkenstein
        • 5 years ago

        They’d still be writing that first 100TB. :p

          • continuum
          • 5 years ago

          No kidding, especially with 4K random writes. :scared:

      • Deanjo
      • 5 years ago

      Well I have a couple of Maxtor Diamond Max drives that are going on 12 years of continuous R/W operations compiling all the packages for openSUSE.

        • derFunkenstein
        • 5 years ago

        As old as those drives are, they probably need all that time. #rimshot

          • Deanjo
          • 5 years ago

            7200s running in RAID 0 get you about 105 MB/s

            • derFunkenstein
            • 5 years ago

            Oh, my bad. It’d only take 3 days to hit a TB at that rate. That 100TB would take less than a year. 😉

            • Deanjo
            • 5 years ago

            Ummm a lot less than 3 days, 86400 seconds in a day.

            86400 seconds* 100 MB per second = 8.4 TB in a day “in theory” if the R/W was constant and transfer rates were consistent across the platters.

            While I couldn’t give you an exact number, I can guarantee you those refurbished Maxtors have far exceeded the amount of data that the SSD’s tested did over their 24/7 operation for the last 12 years.

            • derFunkenstein
            • 5 years ago

            bad math. I think I came up with 2.9 hours, not 2.9 days. That’d be in line with your 8.4TB

            • Takeshi7
            • 5 years ago

            Those must be very slow drives or a very bad RAID controller. I have a single 7200 RPM drive in my computer that gets ~150 MB/s

            • Deanjo
            • 5 years ago

            They are from 2002 dude! That was about as fast as you could get in those days. First gen SATA, power connector is a full sized molex.

            [url<]https://techreport.com/review/4886/ata-hard-drives-compared/2[/url<]

            • Ninjitsu
            • 5 years ago

            But 105 MB/s of sequential writes, yes? Random R/W would be a lot slower, even with RAID…

            • Deanjo
            • 5 years ago

            Absolutely. Still, over that long a time running 24/7, they exceed the lifespan and data written of every one of these SSDs.

        • SomeOtherGeek
        • 5 years ago

        Cool, can you share some numbers? This is awesome data!

          • Deanjo
          • 5 years ago

          The only real numbers I can supply you with is that it compiles about 43 Gig of openSUSE packages daily. The SMART status on hours of use flagged the warning about 11 years ago. lol

      • chubbyhorse
      • 5 years ago

      Fun fact:
      I’ve got an old Compaq ULTRA II SCSI RAID 5 array, and pulling the SMART stats on one of the 146GB drives shows 1,573,404.808 GB logged in writes.
      18,984.471 GB in reads
      293.910GB in verify mode.

      All three fields have 0 uncorrectable errors logged; with 85,238.4 hours powered on.

      2PB? Not yet, but still pretty freaking good for a drive that was assembled by Fred Flintstone himself.

    • Prestige Worldwide
    • 5 years ago

    *Pours one out for my dead SSD homies.*

    RIP in peace.

      • Takeshi7
      • 5 years ago

      Rest in peace in peace. I can’t stand when people say that. it’s so stupid.

        • UberGerbil
        • 5 years ago

        Hey, I just got through putting my PIN number into the ATM machine. The keypad was kind of grungy, though; I hope I didn’t catch the HIV virus.

        (Of course, that doesn’t stop the Sahara Desert, Lake Tahoe, and The Los Angeles Angels from being a thing)

          • Wirko
          • 5 years ago

          Are all of your SSD drives well? Er, are all of your SSD drives well and good?

        • Prestige Worldwide
        • 5 years ago

        It was deliberate / ironic / what the cool kids are saying on teh intertubes these days.

      • davidbowser
      • 5 years ago

      Tupac would be proud

        • Prestige Worldwide
        • 5 years ago

        Sending some california love to his family’s solid state storage.

      • stdRaichu
      • 5 years ago

      Reallocate In Perpetuity In Peace?

    • Wirko
    • 5 years ago

    And so the time has come to move on, to six brand new, less-nanometric SSDs.

    • deeppow
    • 5 years ago

    Love the snog, always have.

    Wonder what the variability (standard deviation) across a population is for the last two drives. There is a reason they probably don’t test such properties. At least for the last two, the next models would be out before they died.

      • sweatshopking
      • 5 years ago

      I ALSO LOVE SNOGGING. WE SHOULD SNOG EACH OTHER

        • TrailBlazerDK
        • 5 years ago

        That’s disturbing on so many levels…

    • bellyfuz
    • 5 years ago

    WOW. Very Cool. Great articles! Read each one with pent up anticipation what was going to happen. Sooo many electrons……. RIP lol.

    • Techgoudy
    • 5 years ago

    I really loved the theme of the article. Great write-up.

    • themattman
    • 5 years ago

    I sang the whole song in my head as I read the lyrics.

    • derFunkenstein
    • 5 years ago

    Great, first Cyril leaves and now GEE-off has written his way out of a job.

    /fans self frantically, looking up towards the sky, struggling to hold back the tears

    BTW I love the nihilist tone of this piece. It goes with my favorite corporate social media account, [url=https://twitter.com/nihilist_arbys<]Nihilist Arby's[/url<]

    • ronch
    • 5 years ago

    They’re all DEAD!!!

    Woe!! WOE!!!!

    • tanker27
    • 5 years ago

    18 months of torture…….let’s hope, to the naked eye, they have divulged all their secrets.

      • ludi
      • 5 years ago

      Dead drives tell no tales.

    • Duct Tape Dude
    • 5 years ago

    [quote<]Oh no, not I I will survive As long as I know how to wri[/quote<] Wow... that was fitting and glorious, Geoff.

      • Chrispy_
      • 5 years ago

      You beat me to it.
      Great song, ended perfectly [s]Geoff[/s] Gee-off!

        • the
        • 5 years ago

        Just wish they sang it on the podcast.

      • bean7
      • 5 years ago

      I wasn’t familiar with “I Will Survive,” so I looked it up after reading the article. I thought the new lyrics were funny even before listening to the song (loved the abrupt ending), but afterward…HILARIOUS! You are a master Gee-Off. A smashing end to an epic experiment. (Thanks also for expanding my musical horizons 😉 )

    • PrincipalSkinner
    • 5 years ago

    Murderers!

    • wizardz
    • 5 years ago

    [i<]"I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid."[/i<]

      • willmore
      • 5 years ago

      “Daisy, Daisy….”

        • Hyp3rTech
        • 5 years ago

        I’m sorry, Dave I can’t perform any more write cycles.

      • BiffStroganoffsky
      • 5 years ago

      Dave’s not here, man.

      • EndlessWaves
      • 5 years ago

      They’re all dead. Everybody’s dead, Dave.

        • Scrotos
        • 5 years ago

        What? Even the captain?

          • modulusshift
          • 5 years ago

          Everybody’s dead, Dave.

    • chuckula
    • 5 years ago

    They’re Dead Jim.

      • ronch
      • 5 years ago

      Stick a fork in it, Jim.
