Introducing the SSD Endurance Experiment

SSDs are pretty awesome. They’re fast enough to provide a palpable improvement in overall system responsiveness and affordable enough that even budget rigs can get in on the action. Without moving parts, SSDs also tolerate rough handling much better than mechanical drives, making them particularly appealing for mobile devices. That’s a pretty good all-around combination.

Despite the perks, SSDs have a dirty little secret. Their flash memory may be mechanically robust, but it wears out with use. Writing data erodes the nano-scale structure of the individual memory cells, imposing a ceiling on drive life that can be measured in terabytes. Solid-state drives are living on borrowed time. The question is how much time, and how many writes, they have left.

Drive makers typically characterize lifespans in total bytes written. Their estimates usually range from 20-40GB per day for the length of the three- or five-year warranty. However, based on user accounts all over the web, those figures are fairly conservative. They don’t tell us what happens to SSDs as they approach the end of the road, either.

Being inquisitive types, we’ve decided to seek answers ourselves. We’ve concocted a long-term test that will track a handful of modern SSDs—the Corsair Neutron Series GTX, Intel 335 Series, Kingston HyperX 3K, and Samsung 840 and 840 Pro Series—as they’re hammered with an unrelenting torrent of data over the coming weeks and months. And we won’t stop until they’re all dead. Welcome to the SSD Endurance Experiment.

Why do SSDs die?

Before we dive into the specifics of our experiment, it’s important to understand why SSDs wear out. The problem lies within the very nature of flash memory. NAND is made up of individual cells that store data by trapping electrons inside an insulated floating gate. Applied voltages shuffle these electrons back and forth through the otherwise insulating oxide layer separating the gate from the silicon substrate. This two-way traffic slowly weakens the physical structure of the insulator, a layer that is only getting thinner as Moore’s Law drives the adoption of finer fabrication techniques.

Another side effect of this electron traffic—tunneling, as it’s called—is that some of the negatively charged particles get stuck in the insulator layer. As this negative charge accumulates over time, it narrows the range of voltages that can be used to represent data within the cell. This form of flash wear is especially troublesome for three-bit TLC NAND, which must differentiate between eight discrete values within that shrinking window. Two-bit MLC NAND has only four values to consider.
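
A crude way to picture the TLC handicap: the number of charge states a cell must distinguish doubles with every extra bit, so each state gets a correspondingly thinner slice of the already-shrinking voltage window. The sketch below is our own simplification; real NAND does not divide the window this evenly.

```python
# Simplified illustration: more bits per cell means more states squeezed
# into the same voltage window (real NAND spacing isn't this uniform).
for bits, name in ((1, "SLC"), (2, "MLC"), (3, "TLC")):
    states = 2 ** bits
    print(f"{name}: {states} charge states, roughly {100 / states:.1f}% of the window per state")
```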

Flash cells are typically arranged in 4-16KB pages grouped into 512-8192KB blocks. SSDs can write to empty pages directly. However, they can only write to occupied pages through a multi-step process that involves reading the block, modifying its contents, erasing it, and then rewriting the whole thing. To offset this block-rewrite penalty, the TRIM command and garbage collection routines combine to move data around in the flash, ensuring a fresh supply of empty pages for incoming writes. Meanwhile, wear-leveling routines distribute writes and relocate static data to spread destructive cycling more evenly across the flash cells. All of these factors conspire to inflate the number of flash writes associated with each host write, a phenomenon known as write amplification.
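
Write amplification itself is just a ratio. Here is a quick back-of-the-envelope sketch; the figures are ours, purely for illustration.

```python
# Write amplification: flash writes actually performed vs. writes requested by the host.
# The numbers below are made up for illustration.
def write_amplification(nand_writes_gb, host_writes_gb):
    return nand_writes_gb / host_writes_gb

host_gb = 100    # what the OS asked to write
nand_gb = 130    # what garbage collection and wear leveling actually wrote to the flash
print(f"Write amplification factor: {write_amplification(nand_gb, host_gb):.2f}")  # 1.30
```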

SSD makers tune their algorithms to minimize write amplification and to make the most efficient use of the flash’s limited endurance. They also lean on increasingly advanced signal processing and error correction to read the flash more reliably. Some SSD vendors devote more of the flash to overprovisioned spare area that’s inaccessible to the OS but can be used to replace blocks that have become unreliable and must be retired. SandForce goes even further, employing on-the-fly compression to minimize the flash footprint of host writes. Hopefully, this experiment will give us a sense of whether those techniques are winning the war against flash wear.

The experiment

Clearly, many factors affect SSD endurance. Perhaps that’s why drive makers are so conservative with their lifespan estimates. Intel’s 335 Series 240GB is rated for 20GB of writes per day for three years, which works out to just under 22TB of total writes. If we assume modest write amplification and a 3,000-cycle write/erase tolerance for the NAND, this class of drive should handle hundreds of terabytes of flash writes. With similarly wide discrepancies between the stated and theoretical limits of most SSDs, it’s no wonder users have reported much longer lifespans. Our experiment intends to find out just how long modern drives actually last.
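
For those keeping score at home, the arithmetic behind those figures looks something like this. The write amplification factor is our own assumption for illustration; the real number depends on the workload and the firmware.

```python
# Back-of-the-envelope endurance math for a 240GB drive.
# The write amplification factor is an assumption, not a vendor spec.
capacity_gb = 240
pe_cycles = 3000               # assumed write/erase tolerance of the NAND
write_amplification = 1.5      # assumed; varies with workload and firmware

rated_tb = 20 * 365 * 3 / 1000                                    # 20GB/day for three years
theoretical_tb = capacity_gb * pe_cycles / write_amplification / 1000

print(f"Rated endurance:     ~{rated_tb:.0f}TB")      # ~22TB
print(f"Theoretical ceiling: ~{theoretical_tb:.0f}TB")  # ~480TB
```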

The ideal workload for endurance testing would be a trace of real-world I/O like our DriveBench 2.0 benchmark, which comprises nearly two weeks of typical desktop activity. There’s just one problem: it’s too darned slow. Reaching the 335 Series’ stated limit would take more than a month, and we’d have to wait substantially longer to approach the theoretical limits of the NAND.

We can push SSD endurance limits much faster with synthetic benchmarks. There are myriad options, but the best one is Anvil’s imaginatively named Storage Utilities.

Developed by a frequenter of the XtremeSystems forums, this handy little app includes a dedicated endurance test that fills drives with files of varying sizes before deleting them and starting the process anew. We can tweak the payload of each loop to write the same amount of data to each drive. There’s an integrated MD5 hash check that verifies data integrity, and the write speed is more than an order of magnitude faster than DriveBench 2.0’s effective write rate.
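
To give a sense of what the endurance test is doing under the hood, here is a heavily simplified sketch of a similar write-delete-verify loop. This is our own illustration, not Anvil's actual code; it skips the compressibility controls and pacing of the real thing, and the target path is hypothetical.

```python
# Simplified sketch of an Anvil-style endurance loop (not Anvil's actual code).
import hashlib
import os
import random

TARGET_DIR = "E:/endurance"          # the SSD under test (hypothetical mount point)
LOOP_PAYLOAD = 100 * 1024**3         # bytes to write per loop

def write_and_verify(path, size):
    data = os.urandom(size)                      # incompressible payload
    digest = hashlib.md5(data).hexdigest()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                     # make sure the data actually hits the drive
    with open(path, "rb") as f:
        assert hashlib.md5(f.read()).hexdigest() == digest, "data mismatch!"

while True:
    written, i = 0, 0
    while written < LOOP_PAYLOAD:
        size = random.randint(1, 16) * 1024**2   # files of varying sizes, 1-16MB
        write_and_verify(os.path.join(TARGET_DIR, f"file_{i}.bin"), size)
        written += size
        i += 1
    for name in os.listdir(TARGET_DIR):          # delete the loop's files and start anew,
        if name.startswith("file_"):             # leaving any static data alone
            os.remove(os.path.join(TARGET_DIR, name))
```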

Anvil’s endurance test writes files sequentially, so it’s not an ideal real-world simulation. However, it’s the best tool we have, and it allows us to load drives with a portion of static data to challenge wear-leveling routines. We’re using 10GB of static data, including a copy of the Windows 7 installation folder, a handful of application files, and a few movies.

The Anvil utility also has an adjustable incompressibility scale that can be set to 0, 8, 25, 46, 67, or 100%. Among our test subjects, only the SandForce-based Intel 335 Series and Kingston HyperX 3K SSD can compress incoming data on the fly. We’ll be testing all the SSDs with incompressible data to even the playing field. To assess the impact of SandForce’s DuraWrite tech, we’ll also be testing a second HyperX drive with Anvil’s 46% “applications” compression setting.
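
Anvil generates its payloads internally, but one crude way to approximate a partially compressible buffer, purely as an illustration of the idea, is to mix random bytes with easily compressed filler:

```python
# Approximate a target compressibility by mixing random bytes with zero filler
# (our own illustration; not how Anvil builds its payloads).
import os
import zlib

def make_buffer(size, incompressible_fraction):
    rand_len = int(size * incompressible_fraction)
    return os.urandom(rand_len) + bytes(size - rand_len)

buf = make_buffer(1024 * 1024, 0.46)             # roughly Anvil's 46% "applications" setting
ratio = len(zlib.compress(buf)) / len(buf)
print(f"compressed to {ratio:.0%} of original")  # lands in the same ballpark as 46%
```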

Since the endurance benchmark tracks the number of gigabytes written to the drive, we can easily keep tabs on how the SSDs are progressing. We can also monitor the total bytes written by reading each drive’s SMART attributes. All the SSDs we’re testing have attributes that tally host writes and provide general health estimates.

There’s also a SMART attribute that counts bad blocks, giving us a sort of body count we can attribute to flash wear. As mounting cell failures compromise entire blocks, replacements will be pulled from overprovisioned spare area, reducing the amount of flash available to accelerate performance. To measure how this spare area shrinkage slows down our drives, we’ll stop periodically to benchmark the SSDs in four areas: sequential reads, sequential writes, random reads, and random writes. The drives will be secure-erased before each test session, ensuring a full slate of available flash pages. (The static data will be copied back after each endurance test.)

We’re not that interested in the performance differences between our guinea pigs; our reviews of each drive cover that subject in much greater detail. Instead, we want to observe how flash wear takes its toll on each drive. Some SSDs may age more gracefully than others.

To make testing practical, we’ve limited ourselves to one example of each SSD, plus the extra HyperX. Our sample size is too small to provide definitive answers about reliability, but testing six drives will give us a decent sense of the endurance of modern SSDs. Now, let’s meet our subjects.

Five SSD flavors

Our endurance experiment covers five distinctly different SSD configurations in the 240-256GB range. We’ll start with the latest version of Corsair’s Neutron Series GTX. We reviewed an earlier variant of this drive last year, and the Link_A_Media Devices controller hasn’t changed. However, Corsair has since upgraded the flash from 26-nm Toshiba MLC NAND to smaller 19-nm chips.

The Neutron’s new NAND comes with an accompanying price cut, bringing the GTX down to $220. That’s pretty affordable considering the five-year warranty; most SSDs in this price range are covered for only three years. Unfortunately, Corsair doesn’t list an official endurance specification for the Neutron GTX.

Given the 240GB storage capacity, one might assume Corsair has dedicated additional spare area to replace bad blocks. As far as we’re aware, though, the drive has the same ~7% overprovisioning as 256GB drives. In this case, another ~7% of the raw flash capacity is dedicated to parity data associated with the controller’s RAID-like redundancy scheme, which provides an extra layer of protection against physical flash failures.
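
Here’s the back-of-the-envelope flash bookkeeping as we understand it. The exact split is our own estimate, since Corsair doesn’t publish an official breakdown.

```python
# Rough flash bookkeeping for the Neutron GTX 240GB (our estimate, not Corsair's official numbers).
raw_gb = 256 * 1024**3 / 1000**3      # 256GiB of NAND expressed in decimal GB (~274.9GB)
user_gb = 240                         # advertised capacity
parity_gb = 0.07 * raw_gb             # assumed ~7% of the raw flash reserved for parity
spare_gb = raw_gb - parity_gb - user_gb

print(f"raw {raw_gb:.1f}GB = parity {parity_gb:.1f}GB + user {user_gb}GB + spare {spare_gb:.1f}GB")
print(f"spare area is ~{spare_gb / user_gb:.0%} of user capacity")   # ~7%
```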

Users can monitor the Neutron GTX’s health using Corsair’s SSD Toolbox software. The application is relatively new, and the interface could use a little more polish. It’ll do for our purposes, though. The information section displays the total host writes, and there’s a SMART section that reads the drive’s attributes. The host-writes figure is tied to SMART attribute 241, which keeps tabs on the number of LBAs written. Attribute 231 is the generic wear indicator, while attribute 5 tallies bad blocks.

The next SSD on our list is Intel’s 335 Series. Behold its stark metal body:

The 335 Series pairs SandForce’s SF-2281 controller with 20-nm MLC NAND produced by IMFT, Intel’s joint flash venture with Micron. Like the Neutron GTX, the 335 Series derives 240GB of storage from 256GB of NAND. Part of the “missing” capacity is devoted to RAISE, the RAID-like redundancy feature built into the SandForce controller.

Intel says the 335 Series can endure 20GB of writes per day for the length of its three-year warranty. That rating applies to typical client workloads, and it adds up to 22TB overall. Our endurance test will be able to push past the specified limit in short order.

At $220 online, the 335 Series 240GB is a tad expensive in light of its pedestrian warranty coverage. You’re paying a premium for the Intel badge—and for the excellent SSD Toolbox software.

Despite bearing the same name as Corsair’s utility, Intel’s software is much nicer. The main screen doesn’t list host writes, but it does characterize drive health, and it estimates how much life is remaining. Again, clicking the SMART button brings up an attribute tracking panel.

The Intel 335 Series tabulates writes in several ways. Attribute 225 measures host writes, 241 tracks the number of LBAs written, and 249 reports NAND writes in 1GB increments. There’s also a media wear indicator, attribute 233, that ticks down from 100 as the NAND erodes. Once again, the number of retired blocks is covered by attribute 5, a.k.a. the reallocated sector count.

Like the Intel 335 Series, the Kingston HyperX 3K is based on second-gen SandForce controller technology. Both drives are equipped with MLC NAND fabbed by IMFT, but Kingston uses older 25-nm chips. That difference gives us an opportunity to compare the endurance of similar drives based on successive flash generations.

There’s no comparison when it comes to aesthetics, though. The HyperX series is the best-looking SSD family around.

Remember that we have a pair of identical HyperX drives to test. One will be run through the wringer with the same incompressible data as the other SSDs, while the other will be given a chance to flex SandForce’s write compression tech. The HyperX will be at the center of a couple of interesting subplots.

With a $185 price tag, the HyperX 3K is a pretty sweet deal right now. The three-year warranty is standard fare, but the 192TB endurance rating is very impressive. Crossing that threshold will take some time.

Kingston’s software looks pretty dated, and it doesn’t play nicely with some versions of Intel’s RST storage drivers—including the ones installed on our test rigs. Bummer. When drives are detected correctly, the app offers the basics: a general health indicator, a firmware update feature, a secure erase tool, and access to SMART data. Kingston tells us a new version of the Toolbox app is in the works, and I hope it has broader driver support.

As a consolation, perhaps, Kingston provides a handy PDF detailing all of the SSD’s SMART attributes. We’ll be concentrating on attributes 5, 231, and 241, which cover bad blocks, overall drive health, and host writes, respectively.

Last, but not least, we have a couple of Samsung SSDs: the 840 Series and the 840 Pro. They look identical, and they’re based on the same in-house MDX controller. Their NAND is built by Samsung on the same 21-nm fabrication process, too. But the 840 Series packs three bits per cell into its TLC NAND, while the 840 Pro has two-bit MLC chips.

To account for the lower endurance of its TLC NAND, the 840 Series allocates more flash capacity to overprovisioned spare area that can be used to replace bad blocks. That’s why the drive advertises 250GB instead of the 256GB available in the 840 Pro. For what it’s worth, Samsung says it was overly conservative when defining the 840 Series’ spare area. The firm claims its first-gen TLC chips were more resilient than expected, which is why the newer, TLC-based 840 EVO uses that extra 6GB as a fancy write cache, instead.

As its $175 price tag attests, the 840 Series 250GB is a value-oriented model. You’ll have to shell out $240 for the 840 Pro 256GB, but you’ll get a longer five-year warranty in return. The 840 Series’ coverage runs out after three years. Unfortunately, Samsung hasn’t published official endurance specifications for the 840 family.

All the 840-series drives work with Samsung’s Magician utility. The application has an attractive interface that tracks total bytes written and overall drive health right there on the main screen.

Clicking the SMART button in the upper-right corner brings up the list of available attributes, and we’ll be watching a few of them. Attribute 241 tracks the total number of LBAs written, from which we can determine the number of bytes. We can also see how many write/erase cycles are consumed by watching the wear-leveling count, otherwise known as attribute 177. The number of bad blocks is tracked by attribute 5.
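
Attribute 241 reports logical sectors rather than bytes, so converting the raw value to terabytes written takes a quick bit of arithmetic. The snippet below assumes the conventional 512-byte logical sector size.

```python
# Convert SMART attribute 241 (Total LBAs Written) into terabytes.
# Assumes the conventional 512-byte logical sector size.
def lbas_to_tb(total_lbas, sector_bytes=512):
    return total_lbas * sector_bytes / 1000**4

print(f"{lbas_to_tb(43_000_000_000):.1f}TB written")  # a raw value of 43 billion LBAs is ~22.0TB
```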

Although all of the vendor SSD utilities can read SMART attributes, we’ll also be monitoring those values with third-party software. Hard Disk Sentinel lets us dump SMART values to CSV files that can be saved and digested easily.
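
Hard Disk Sentinel handles the logging on our Windows rigs, but readers who prefer to script it themselves could do something along these lines with smartctl from the smartmontools package. This is a rough sketch rather than our actual setup; the device path and attribute list are examples.

```python
# Append selected SMART attributes to a CSV using smartctl (smartmontools).
# A rough sketch, not our actual logging setup; device path and attribute IDs are examples.
import csv
import datetime
import subprocess

DRIVE = "/dev/sda"                      # example device node
WATCHED = {"5", "177", "231", "241"}    # reallocated sectors, wear leveling, health, host writes

output = subprocess.run(["smartctl", "-A", DRIVE], capture_output=True, text=True).stdout

rows = []
for line in output.splitlines():
    fields = line.split()
    if fields and fields[0] in WATCHED:     # attribute table rows start with the attribute ID
        rows.append({"time": datetime.datetime.now().isoformat(timespec="seconds"),
                     "id": fields[0], "name": fields[1], "raw": fields[-1]})

with open("smart_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["time", "id", "name", "raw"])
    if f.tell() == 0:                       # write the header only once
        writer.writeheader()
    writer.writerows(rows)
```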

Now, let’s look at the systems that will serve as test rigs for the experiment.

Custom test rigs

Our endurance experiment will likely be running for many months, so we need dedicated systems to power the endeavor. We’ve assembled two identical rigs for the task. Each one lives in a closet with three test subjects inside.

Our test machines are built inside twin BitFenix Prodigy enclosures. We could have gone with smaller cases, especially since we’re using Mini-ITX motherboards. The Prodigy has room to grow, though. The thing boasts nine 2.5″ drive mounts—more than most mid-tower ATX enclosures. We certainly have room to expand our sample size if this initial experiment goes well. The Prodigy also supports full-sized CPU coolers and PSUs, which lets us keep the systems relatively quiet without too much effort.

Gigabyte’s H77N-WiFi motherboard sits inside our Prodigy chassis. This is one of our favorite mini Ivy Bridge boards. Apart from the platform hub, it’s identical to the Z77N-WiFi we reviewed earlier this year. The H77N-WiFi serves up dual 6Gbps SATA ports in addition to two 3Gbps ones—enough I/O connectivity for our first round of testing. It also has built-in 802.11n Wi-Fi that enables us to manage the systems while they’re stuffed in the closet.

We selected an Intel platform because we’ve found the firm’s SATA controllers to be faster than those in AMD chipsets. Our testing doesn’t require a lot of CPU horsepower, so we chose a pair of older Core i3-2100 processors from the Sandy Bridge generation. At 3.1GHz, the dual-core chips have more than enough oomph to swamp our SSDs. The Core i3’s integrated GPU eliminates the need for discrete graphics cards, as well.

A pair of Thermaltake NiC F3 air towers is tasked with cooling our CPUs. These puppies combine slim radiators with three heatpipes that make direct contact with the CPU. The mounting bracket is easy to use, and the four-pin PWM fan is relatively quiet. Truthfully, we don’t need anything fancy to keep the Core i3-2100’s temperatures in check.

Unlike a lot of aftermarket coolers, the NiC F3 leaves enough clearance for taller memory modules. We decided to take advantage by using some Corsair Dominator Platinum DIMMs left over from our PC build guide. The modules have monster heat spreaders, and they were a tight fit on one of the boards, whose DIMM slots are angled toward the CPU slightly. Doh! We ended up swapping the CPU fan over to the other side of the radiator to give the memory more room to breathe.

Our endurance testing is being conducted with the target drives connected as secondary storage. That means we need a separate system drive, and why not use another SSD? They’re silent and power-efficient, and I have a growing stack of ’em in the Benchmarking Sweatshop. To match the red CPU fans, I grabbed a couple of 60GB Corsair Force GTs that have been on the shelf since our look at SSD performance scaling.

Admittedly, the Rosewill Fortress 550W PSUs are overkill. We wanted something power-efficient, though, and these are 80 Plus Platinum-certified. They’re also very quiet, and they nicely match our system’s largely blacked-out theme. The PCIe power connectors even have a splash of red.

The Fortress is technically too large for the Prodigy, but we managed to marry the two with some careful cable routing. At least the case provides plenty of places to cram excess cabling. We ended up with pretty clean systems overall.

Setting the baseline

Before we start hammering our subjects with writes, we need to establish a performance baseline. We’ll use these factory fresh results as a point of reference when looking at how flash wear changes each drive’s performance characteristics. Since Anvil’s Storage Utilities includes a handful of benchmarks with the same compressibility settings as the endurance test, that’s what we’ll use to probe performance. We’re just sticking to the basics: 4MB sequential reads and writes, and 4KB random reads and writes. (We’re using Anvil’s QD16 random I/O tests and testing all the drives on the same 6Gbps SATA port on one of the test systems.)

Because we’ve limited performance benchmarking to a single application and a handful of tests, I wouldn’t draw any conclusions from the results below. Our latest SSD reviews explore the performance of most of these drives in much greater detail—and across a much broader range of real-world tests. We’re using Anvil’s benchmarks for convenience.

These numbers have only limited usefulness by themselves. Things should get more interesting as we add data points after tens and hundreds of terabytes have been written to the drives.

Note the differences between the HyperX configurations, though. The compressed config scores higher than the standard one in the sequential tests but not in the random ones. The differences in the sequential tests are much smaller than I expected from the “46% incompressible” setting, too.

That’s all the time we need to spend on performance for now. Our next set of benchmarks will be run after 22TB of data has been written, matching the endurance specification of the Intel 335 Series. I wouldn’t expect different results from those tests. However, we should see performance suffer as we get deeper into our endurance testing. Bad blocks will slowly eat away at the spare area that SSDs use to speed write performance, and reads may be slowed by the additional error correction required as wear weakens the integrity of the individual flash cells.

On your marks, get set…

If you’ve read our latest SSD reviews, you’ll know that most modern solid-state drives offer comparable all-around performance. Any halfway decent SSD should be fast enough for most users. This rough performance parity has made factors like pricing and endurance more important, which is part of the reason we’re undertaking this experiment in the first place.

Also, we couldn’t resist the urge to test six SSDs to failure. That may sound a bit morbid, but we’ve long known about flash memory’s limited write endurance, and we’ve often wondered what sort of ceiling that imposes on SSD life—and how it affects performance in the long run. The data produced by this experiment should provide some insight.

We’re just getting started with endurance testing, and there are opportunities for further exploration if this initial experiment goes well. Flash wear isn’t going away. In fact, it’s likely to become a more prominent issue as NAND makers pursue finer fabrication techniques that squeeze more bits into each cell. This smaller lithography will drive down the per-gigabyte cost, bringing SSDs to even more PC users. As solid-state drives become more popular, it will become even more important to understand how they age.

We have lots of data to write to this initial batch of drives, so it’s time to stop talking and start testing. We’ve outlined our plans, configured our test rigs, and taken our initial SMART readings. Let the onslaught of writes begin! We’ll see you in 22TB.

Update: The 22TB results are in. So far, so good.

Update: After 200TB, we’re starting to see the first signs of weakness.

Update: The drives have passed the 300TB mark, and we’ve added an unpowered retention test to see how well they retain data when unplugged.

Update: Our subjects have crossed the half-petabyte threshold, and they’re still going strong.

Update: All is well after 600TB of writes—and after a longer-term data retention test.

Update: We’ve now written one petabyte of data, and half the drives are dead.

Update: The SSDs are now up to 1.5PB—or two of them are, anyway. The last 500TB claimed another victim.

Update: The experiment has reached two freaking petabytes of writes. Amazingly, our remaining survivors are still standing.

Update: They’re all dead! Read the experiment’s final chapter right here.

Comments closed
    • gamoniac
    • 6 years ago

    It would be convenient if you could accompany the endurance results with the SSDs’ NAND type and fab technology in a tabular format. Thanks for the test; this is very interesting.

    • Cloef
    • 6 years ago

    I think it’s really great that we are seeing different tests around the Internet. Some are writing pure sequential data and will result in high TiB. Others are focusing more on real-life write patterns.

    Here’s my take on live [url=http://ssdendurancetest.com/]ssd endurance testing[/url] with visualized SMART data.

    • iycgtptyarvg
    • 6 years ago

    I specifically registered to this website after hearing about this article on the Twich podcast to ask when we can expect a follow-up or update article explaining the current state of the SSDs. Could you give us an indication?

    • adamrussell
    • 6 years ago

    I think anyone worried about SSD longevity should consider that HDs with moving parts don’t last forever either. In fact, it might not be a bad idea to include one in this test.

    • wes123
    • 6 years ago

    I think this test is a best case scenario. Suppose the SSD is 90% full of unchanging files. Then writes get concentrated in the remaining 10% of the SSD and that 10% will wear out 10x faster than if you had no static files on the SSD. Is my suspicion right?

      • Wirko
      • 6 years ago

      There’s a secret sauce to prevent this and it’s called wear leveling. Even static data is occasionally relocated. This way, new data can be written to new locations across the whole SSD.

      These relocations, however, cause additional writes, and I wonder if these are included in the TBW data that can be retrieved from the drives.

    • MDPlatts
    • 6 years ago

    Are we nearly there yet ?

      • Wirko
      • 6 years ago

      +1. We should be at 35 TB per drive by now, calculated for the very worst case – the lowest speed of all (150 MB/s) and only one drive being read from/written to at a time.

    • thomass31
    • 6 years ago

    I would add some regular system restarts, or power down/power up cycles, into the testing as well.

    On the XtremeSystems endurance test, if I remember correctly, the Samsung 830 drive failed during a restart and not under the continuous loop.

    I mean at least one daily or weekly…

    • indeego
    • 6 years ago

    New Intel Solid State Drive Toolbox released:
    [url]https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=18455[/url]

    Of particular importance: This release of the Intel SSD Toolbox includes a firmware release which contains the following:

    - Intel® SSD 530 Series firmware update to DC12 which addresses an occasional drive hang after resume from Link Power Management slumber state.

    Other tool related items are:

    - Added new descriptions for SMART attributes C2, C5, EA, and F9
    - Corrected calculations for E1, F1, and F2 SMART attributes.

    Might be useful... Though I don't think you are testing the 530

    • Anodynic
    • 6 years ago

    I applaud this endeavor. However, in my profession I am hypersensitive to studies with a single data point.

    Folks, when these results come out, please treat them appropriately. The results will state how each individual drive performed in this study, and that will have some value because it will be public.

    I just hope we can collectively be smart enough to limit the conclusions we draw.

    The limitations on cost and time are non-trivial (that’s why my uncle’s dog isn’t making hard drives). I get that. But I would be more likely to treat these as a single population and call them “modern SSDs” rather than consider them representative of any one manufacturer.

    • BIF
    • 6 years ago

    Geoff, what is the plan for handling firmware and management software updates that may be released by the vendors during this long-term test?

    Will the drives be updated, or will they be kept in a static condition with regards to firmware updates?

    One potential issue I see is that of a firmware update halfway through the test…and either applying it gives the drive an advantage but could make results change halfway through… or not applying it could be considered unfair.

    I’m not saying one or the other is wrong, I am merely curious…what is TR’s position on this?

    • gamoniac
    • 6 years ago

    Is TR going to stream video of these SSDs chugging along for weeks or months? It would be awesome to capture the last moments when they are nearing end of life. Simply BSOD? Restarting in an endless loop? Smoking? Or sparking?

      • BIF
      • 6 years ago

      Yeah, there’s going to be so much motion and movement, I think it should be filmed in high speed digital. Maybe George Lucas still has a couple of his cameras laying about.

      And then we should have a scanning electron microscope trained on each memory cell to watch what happens at a sub-molecular level.

      I can’t wait to get the RSS feed on this and watch it from my smartphone in my favorite italian restaurant!

      • indeego
      • 6 years ago

      Delayed write failed in eventvwr and taskbar, I imagine, as the OS attempts to flush data from memory to drive (repeatedly, as it should) and fails. The operating system will keep running fine.

    • Chrispy_
    • 6 years ago

    Geoff, what data rate is Anvil writing to the drives – surely 22TB has been passed if this was started on 20th August?

    At 200MB/second that would have taken about one day.
    I’m just eager to see if there is any performance degradation after that much data!

    • Questar
    • 6 years ago

    Wow guys, this is awesome. As the owner of a couple of 335’s – one of which has 12TB of writes on it – I’m really looking forward to this!

      • indeego
      • 6 years ago

      How did you manage that?

        • Questar
        • 6 years ago

        I do everything on my SSD drives except long term storage. They are my download drives, work drives, temp files, etc.

        Download a 10GB video, unrar it and convert it to a mobile device? All done on an SSD.

        Get a new video card? Of course you need to redownload 60GB of games from Steam to burn it in.

        Ripping video? Why rip one stream at a time to a hard disk when you can rip three at a time to an SSD?

        Edit: Now that I think about it, it’s actually a 330 that has the 12TB of writes. I don’t know how much has been written to the 335.

    • Jim552
    • 6 years ago

    While this test is greatly appreciated as some form of “starting point” it is too bad that you were not able to perform a “real-world simulation”….

    How the SSDs react during and after failure, with an OS present and user activity (or a program emulating user activity), is also of great importance. (I am sure not only to me.)

    I know the 2 times that “flash media”, SD cards, have failed on me there was no indication from the device, a camera in this case, that there was a problem. Pictures kept being saved, no issues. Then when I attempted to transfer the pictures, the files were corrupt.

    It “seems like SSD’s” do have built in protection from this, but I have no confidence that their schemes actually are functional.

    Tests like this one, and more real-world tests, would be great to read through.

    Thanks for all of your efforts, and a great “First Step”,
    James…

    • Forge
    • 6 years ago

    I feel pretty good, having only put about 6.5TB of writes onto my 830 Pro in about a year. My desktop has the same SSD, with about half that in writes.

    I expect SSD longevity will outstrip HDD longevity for about 99% of users, and being all but shock-proof will end up being the biggest advantage. I’m shocked how many laptop HDDs I replace at work because folks drive nails with running laptops (at least I assume that’s what they are doing).

      • Aranarth
      • 6 years ago

      LOL Laptops are a true multi-purpose tool!

    • Fighterpilot
    • 6 years ago

    Reading that article on how memory cells work….ugh…who the hell thinks up this stuff?
    I bow to the giant brains that come up with these nano science inventions.(0-0)

    • holophrastic
    • 6 years ago

    I’ve actually been avoiding SSDs for this very reason. Storage speed means nothing in my business, but reliability means everything.

    Thanks in advance. I’m really looking forward to these results. If they wind up being reliable, in terms of failure rates, I can easily put a meter on each drive, and chuck it before it fails.

      • Chrispy_
      • 6 years ago

      This is not a reliability test, it is an endurance test.

      A steel cable designed to hold 10000 lbs of load is not unreliable if it fails after 15000 lbs.
      Similarly, an SSD with a 50TB write endurance is not unreliable if it fails after 75TB.
      ALL OF THESE DRIVES WILL FAIL.

      If you want a high-endurance drive, most manufacturers publish these values – buy the one with the highest endurance capacity.

        • holophrastic
        • 6 years ago

        I said:
        “I’m really looking forward to these results. If THEY wind up being reliable…”

        The “THEY” in that sentence refers to “these results”.

        So I’ll say it again, with more words.

        If these results wind up being reliable, and a given drive actually does reliably operate for a set number of TB, then I can simply meter my writes and replace the drive before it fails.

        Is that better for you?

          • indeego
          • 6 years ago

          Nope, because the results of this test have zero application to the real world. Do you think that if no drives fail, Joe user may get the same results? Or that if they all fail, Joe user won’t have double the endurance?

          Too many variables. Don’t draw a *single* conclusion from this test. Period.

          • BIF
          • 6 years ago

          [quote]"Is that better for you?"[/quote]

          No. Your use of the pronoun was fuzzy and unclear. And invalid. Just who was going to be the one to assess reliability of the results?

          And if you did indeed intend to refer to the reliability of the results and not the reliability of the drives, then why not just say "the results" instead of "they"? Why not be more clear from the outset?

          Unless you just wanted a reason to argue. 🙂

      • indeego
      • 6 years ago

      If you wanted reliability you would focus on RAID, the controller, the firmware and pairing of your drives within recommended specs. Assume the drives themselves will fail at any moment, no matter the manufacturer or type.

        • holophrastic
        • 6 years ago

        I’ve been down the RAID road multiple times. RAID is only good for reliability before a drive starts to fail. Then, RAID quickly increases the likelihood of losing everything. It’s really a terrible technique when data loss is life-threatening.

          • indeego
          • 6 years ago

          This is why I parrot [b]avoid consumer RAID[/b]. It's junk. It doesn't take into account the firmware requirements, matching of drives with controller. If you build on servers, you already know this, or should. You don't just use x brand with y controller, they [b]must[/b] be tested and verified together.

          I'll get a bunch of enthusiasts downvoting me and saying they have never had an issue, but trust me, I've seen consumer/onboard RAID fail miserably, probably what you are talking about. I've never had a server RAID fail me (OK, Dell's PERC did once in 2001 on a non-public published tech note. We dumped Dell that day for good.)

          • travbrad
          • 6 years ago

          Well I certainly wouldn’t rely on RAID as a backup, but it does at least offer some redundancy in your main “drive”. I would always want to have regular backups to other servers/drives/cloud in addition to the RAID for any important data.

          It also depends which RAID you use. I wouldn’t touch RAID5 with a ten foot pole.

          • Waco
          • 6 years ago

          RAID is not a backup technique, it’s an uptime technique for the vast majority of uses.

      • travbrad
      • 6 years ago

      I hate to burst your bubble, but for typical use cases SSDs will outlast traditional spinning disks the vast majority of the time. Good quality SSDs tend to only fail when the flash has reached/exceeded its lifetime write cycle capacity, whereas mechanical HDDs fail more randomly (from physical/mechanical problems). I’ll take predictable failures over random lottery failures any day.

      If you are writing so much data that you are quickly wearing out your SSD’s flash, you should probably be looking at enterprise level drives (with SLC flash) anyway. For people using SSDs as a boot/application drive it really isn’t an issue.

      • Mad_Dane
      • 6 years ago

      You don’t know what you are missing, once you go SSD you never look back. You can still use your old girls for RAID backup, just move OS and apps to SSD.

    • kilkennycat
    • 6 years ago

    Geoff: I don’t see any SSD “bit-rot” “latency” tests in your proposed test suite.

    Unlike hard disks, all flash memory cells using “floating gate” structures have charge leakage which progressively worsens as the write count increases. A cell with an estimated bit-validity of 10 years on the first write cycle might only have a bit-validity of 1 day after 10,000 write cycles. So endurance tests involving many frequent writes to the SSD will potentially hide this problem until the cell bit-validity time falls below the cell refresh time (as determined by the external test-program stimulus in combination with the wear-leveling algorithms) and enough cells are in this perilously leaky state for the error correction to fail.

    Such “bit-rot” is a highly user-significant source of SSD failure, as it is likely to show up in normal use far earlier than failures from saturation write-testing. Charge leakage continues to occur when the SSD is entirely powered down (laptop usage…) or is left in a quiescent static powered state. So, if a laptop is powered down for (say) a week and there is a preponderance of cells with less than a week of “bit-validity” due to leakage, un-correctable errors will occur on power-up when the error correction is overwhelmed by the sheer number of invalid cells. This sort of problem does NOT happen with hard disks (unless subjected to very powerful external magnetic fields while powered down).

      • Chrispy_
      • 6 years ago

      The results of that test will be available ten years from now.

    • itachi
    • 6 years ago

    Epic idea, I look forward to see the results of this test !

    PS: hope the electric bill won’t hurt too much at techport basement ahha.

    • neahcrow
    • 6 years ago

    I found this article fascinating and look forward to seeing the results. I work for a company that sells a camera that uses SSDs for the storage media. We have to come up with a list of SSDs that are consistently fast enough to handle recording RAW video, but the other issue is the endurance and lifespan of a drive. A 240 GB SSD will be filled in about 15 minutes and these drives will be erased and used over and over again. We have tested SSDs extensively in the lab and I know our favorites for speed, reliability, and endurance and I’m interested to see how it compares to Tech Report’s test in workstations.

    • WillBach
    • 6 years ago

    Could we get an RSS feed that announces when drives die? Or a ticking JavaScript clock that shows which drives are (still) alive and stops when they die? Looking forward to the results 🙂

    • indeego
    • 6 years ago

    The more I think about this test, I feel like it’s unfair to the manufacturers or models represented here.

    So say [i]one[/i] of these fails. In a month, you publish that X failed, and now on a Google search, this manufacturer has its name possibly sullied. Now, your intentions may be valiant, and most intelligent people can possibly make the conclusion that one failure doesn't mean ANYTHING meaningful can be drawn from the results. But a lot of other people are just looking for "brand reliability" and TR's likely to come up, they see a past model name, and they punish that manufacturer. Seems unfair, somehow.

    The problem is [i]repeatability[/i], a mainstay of scientific analysis. TR seems to have tossed this out. If you could repeat the experiment over many iterations? Maybe you'd have a case, albeit a very small one given your environment. But the number of variables that are outside the control of the manufacturer for [i]one experiment[/i] is too great.

    So this is cute, and like the Extremesystems group experiments, that is also cute, but hardly scientific. And that's a shame.

      • uni-mitation
      • 6 years ago

      I couldn’t agree with you more. It doesn’t meet scientific muster.

      Quite simply put, the scientific method works as follows:

      1- You see natural phenomena.
      2- You make a hypothesis to predict or explain something about the phenomena.
      3- You conduct an easily repeatable experiment with controlled variables.
      4- You record the experiment and see if it meet your previous predictions.
      5- If your hypothesis stands correct in whole, then you conduct more experiments of the same kind to strengthen your hypothesis.
      6- If your hypothesis is incorrect in whole, it is discarded, and a new one made. Carry out the experiments anew.
      7- if part of hypothesis is wrong, discard that, keep the rest.
      8- After a long trail of experiments, and verifying your conclusions one and again do you start to form a strong body of theory regarding this natural phenomena.
      9- If new theory is accepted as proven then it is accepted until otherwise unproven. No theory is safe from perpetual study and criticism from scientific study.
      10- Previously accepted theories can be used in the work of new theories. The more vigorous the criticism of said scientific methods and data, the stronger our understanding of natural phenomena.

      Indeed, it could be argued that this method relies on the very axiom that “nothing can be proven until it is proven to be false”. Thus, we work all the possible ‘false” understandings, yet we never get to be certain that said theory will stand in perpetual reign of our understanding.

      • maxxcool
      • 6 years ago

      They are ALL going to fail, or become degraded enough to warrant a fail.

      • frumper15
      • 6 years ago

      I would assume that if a drive failed before the manufacturer stated limits they (TR) would take additional steps to evaluate the results – the most obvious being a retest of the same drive to determine if it is a design limit/flaw or just a bad drive. If a drive makes it past the manufacturer warrantied time period, I don’t think there is anything unfair to the manufacturer and in fact validates their own claims.
      That being said, this is TR doing a simple test that most of us don’t have the time or resources to perform ourselves and publishing the results on their site – this isn’t a government funded experiment being published in a scientific journal, so we can all apply the appropriate amount of salt to the results that are ultimately discovered without getting too upset about how scientific everything is or isn’t.

        • indeego
        • 6 years ago

        [i]"I would assume that if a drive failed before the manufacturer stated limits they (TR) would take additional steps to evaluate the results – the most obvious being a retest of the same drive to determine if it is a design limit/flaw or just a bad drive."[/i]

        And then? What have we exactly learned? How would we know if it's not just any other number of myriad reasons that might cause the failure in their environment?

        [i]"If a drive makes it past the manufacturer warrantied time period, I don't think there is anything unfair to the manufacturer and in fact validates their own claims."[/i]

        Oh, so this test will run for 3-5 years?

        [i]"That being said, this is TR doing a simple test that most of us don't have the time or resources to perform ourselves and publishing the results on their site – this isn't a government funded experiment being published in a scientific journal, so we can all apply the appropriate amount of salt to the results that are ultimately discovered without getting too upset about how scientific everything is or isn't."[/i]

        I'm not asking for anything of the sort, I'm just trying to figure out if we, TR, or the SSD manufacturers actually gain anything useful from this, versus a "better-run" experiment. This is just too narrow to provide anything useful for anyone, and it possibly causes harm through no fault of any particular manufacturer. Will TR ask for another sample drive if it fails? That is probably what they would do if a GPU failed mid-testing.

      • Sabresiberian
      • 6 years ago

      Well, no one should take a test like this and read the results as a condemnation of a particular drive. It isn’t designed to test all of a manufacturer’s reliability points, just give us a rough idea of what is going on.

      Consider the scale of a true scientific test; we would need to see the results measured over hundreds of each drive (at least), and that just isn’t feasible for most sites to fund. You just can’t get anywhere near that kind of formal data from anyone. Time and consumer experiences will give us more data, results from other testing sites will give us more, but in the end, no one should base their entire opinion on one site’s tests for anything, not even a cutting-edge, reliable site like Tech Report. That doesn’t mean what we see in one test is totally invalid though. It just gives us one more little clue about what we can expect. 🙂

        • Peldor
        • 6 years ago

        The common way to address this non-representative sample would be to just report the drives as A, B, C, D, etc rather than identify the manufacturer.

          • indeego
          • 6 years ago

          That would be a good compromise. I like that.

        • indeego
        • 6 years ago

        No, what you see in one test is [b]completely invalid[/b]. It has no relevance to anyone else's setup. It's not repeatable. No patterns could possibly emerge without upping the scale a LOT more, using a lot more variables.

        Again, what will the conclusion be when one of these drives fails? What if none fail? Does that mean we know how to base a technology or purchase decision? It's a cute time-waster, for everyone involved.

        And yes, there are already people in this very thread going gaga over the test's metrics, so you can bet they will take the results seriously.

          • Peldor
          • 6 years ago

          [quote]Again, what will the conclusion be when one of these drives fails? What if none fail? Does that mean we know how to base a technology or purchase decision?[/quote]

          The main conclusion that I think could legitimately be drawn given the low number of units and the varied controllers, manufacturers, etc. is something like: "Current SSDs in this size range using these flash lithographies are likely to survive X terabytes of continuous writes done in this specific manner." That is, you sort of homogenize the 5 current market samples and arrive at a rough lower bound on the assumption that Tech Report has a random sampling of drives (not 'golden' samples). There are a lot of conditionals in that statement if you read it carefully.

          Which brings me to your other point:

          [quote]And yes, there are already people in this very thread going gaga over the test's metrics, so you can bet they will take the results seriously.[/quote]

          I heartily agree that people will take the results as evidence that X > Y. Brains are pretty much pattern recognition engines, and most are happy to begin with one data point for recognizing patterns. You can draw error bars and write caveats until the cows come home; if a chart shows X > Y by any margin, by god, people will think "I want X, Y sucks." Marketing uses a lot of this sort of thin "proof". Find a test which makes you look good and run with it. I wouldn't be surprised to see a manufacturer actually point back to these results if one drive survives particularly longer than the rest (which seems almost certain to happen.)

      • f0d
      • 6 years ago

      It’s not a matter of if they will fail, but when.
      They will all fail eventually.

      • Chrispy_
      • 6 years ago

      You’re completely missing the point:

      If a drive is faulty, and gives up before the NAND wears out, then it ought to be replaced and the test restarted. “Testing these drives to failure” doesn’t mean doing a reliability test. All of these drives have a service life and the idea is to find out what the service life actually is. Failure is guaranteed.

      If it were an auto tire test with different manufacturers and different products, you wouldn’t be right to say that earlier-failing tires are inferior. They all have different mileage ratings, softer tires might offer higher performance, and eventually they’ll all fail when the rubber wears out, the carcass becomes the tread, and you get a blowout. The aim of those tests is to see if the actual mileage matches the rated mileage.

      MLC ought to outlast TLC, and other factors like how aggressive TRIM is and how low write amplification is will make a difference. Manufacturers make claims about the service life of their drives; it’s not [i]unfair[/i] to put those claims to the test.

        • indeego
        • 6 years ago

        You: [quote]"All of these drives have a service life and the idea is to find out what the service life actually is. Failure is guaranteed."[/quote]

        TR: "Unfortunately, Corsair doesn't list an official endurance specification for the Neutron GTX."

        TR: "Unfortunately, Samsung hasn't published official endurance specifications for the 840 family."

        -----

        "Manufacturers make claims about the service life of their drives; it's not unfair to put those claims to the test."

        No, they clearly do not.

          • Chrispy_
          • 6 years ago

          :rolleyes:

          Just because a manufacturer doesn’t list an official endurance spec doesn’t mean that they automatically claim that it has infinite endurance.

          Just because a wine bottle doesn’t list its alcohol content by volume doesn’t mean it’s alcohol-free.

          You’re just being deliberately obtuse and you seem to be losing the argument with most people in the thread for it.

            • indeego
            • 6 years ago

            Perhaps rephrase your statements to contain more clarity? I’m simply quoting your statements with the reality of the situation. If the job is to test endurance, and two of the drives don’t even have endurance quoted, how is this even a valid comparison?

      • LaChupacabra
      • 6 years ago

      I completely disagree. If a drive fails before the rest, even if that isn’t scientific, it is at minimum indicative of the quality controls of that vendor. TR going to the store and purchasing a drive is the exact behavior of any consumer.

      Consumer testing is not, by definition, a scientific endeavor. That part should be done by the manufacturer before the product reaches the shelves. Review sites/magazines/periodicals are responsible for recreating the habits of the consumers and reporting the results.

        • indeego
        • 6 years ago

        “it is at minimum indicative of the quality controls of that vendor.”

        No, it isn’t. It’s rather scary people actually believe this. One sample size isn’t indicative of anything AT ALL. It’s purely anecdotal.

          • LaChupacabra
          • 6 years ago

          That’s exactly the point. This is from the perspective of the consumer. One bad product getting into a consumers hand doesn’t even mean the product was bad when it left the manufacturer. The goal of a review like this is a real world experience of the entire chain of events that lead to these drives getting installed. Damaged in shipping? Use a better vendor. A single bad memory chip that reduces the drives effective life? Use a better vendor. Bad SATA cable causing data corruption? Include a better cable.

          If TR were doing a scientific analysis of a single part of these drives then this would be incredibly shoddy methodology. In no way shape or form is TR buying a product and then testing it in any way unfair to any manufacturer.

      • Yan
      • 6 years ago

      I really don’t understand this criticism.

      That’s how all reviews are done: you choose several cars, or several irons, or several SSDs. You measure them, and you weigh them, and you throw them against the wall and see if they stick.

      If you’re testing cars, you drive them at 40 km/h into a brick wall and see what happens. If you’re testing irons, you drop them 1,5 m onto the floor and see whether you get electrocuted. No manufacturer complains that it’s unfair, because the instructions explicitly say not to drive the car into a brick wall or not to drop your iron.

      And certainly nobody says that you have to use the scientific method, or use a statistically significant number of samples, or any nonsense like that.

      So in this case, you write 20 TB and then 100 TB and then 500 TB of data on six SSDs and see when they stop working. And then you say what happened.

      What’s unfair about that?

    • ClickClick5
    • 6 years ago

    This is really a great idea and test!

    • LukeCWM
    • 6 years ago

    Would it be possible, in a separate test, to track consistent performance speed?

    I don’t claim to be an expert, but how I understand it is that SSDs write crazy fast when they are empty, but then they fill up and the performance drops significantly because the drive has to do garbage collection in real-time. If you simply hammer a drive, from what I’ve heard, most all drives fall apart. You’re probably never going to see this in a consumer environment, but it’s a reality for database servers with drives that never get a rest.

    I know AnandTech has done some testing on this. Check out the interactive charts on this page (and note the logarithmic scale for the interactive charts): [url]http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/3[/url]

    In the words of Anand Lal Shimpi:

    [quote]I view the evolution of "affordable" SSDs as falling across three distinct eras. In the first era we saw most companies focusing on sequential IO performance. These drives gave us better-than-HDD read/write speeds but were often plagued by insane costs or horrible pausing/stuttering due to a lack of focus on random IO. In the second era, most controller vendors woke up to the fact that random IO mattered and built drives to deliver the highest possible IOPS. I believe Intel's SSD DC S3700 marks the beginning of the third era in SSD evolution, with a focus on consistent, predictable performance.[/quote]

    I would like to see TechReport confirm or contradict their findings, and to show data for more drives, including more popular and current drives. And, of course, no one is as thorough with the numbers and comprehensive with the data as Tech Report. =]

    • Peldor
    • 6 years ago

    How long does a TB of writes take in this sort of testing? Are they close to the synthetic test speeds?

    • DPete27
    • 6 years ago

    With regard to the “small sample” conundrum:
    In the world of research, you combine your own results with those of previous comparable studies to validate your conclusions. The trick is to keep some testing variables consistent with existing data, then add a unique “twist” so that you can extrapolate your data on top of what’s already available and make ground-breaking conclusions. Suddenly, the sample size has grown significantly without the need for additional effort on your part. Just because you didn’t do the tests, doesn’t mean you can’t use the data…as long as the data you’re borrowing is valid and you’re referencing the source.

    **Disclaimer: Even using these techniques, your sample size of 5 (6?) is still too small to be “scientifically” accurate, but you catch my drift…

      • Geonerd
      • 6 years ago

      Which is why TR should be going out of their way to explicitly reference the prior work of XtremeSystems and others (rather than pretend it doesn’t even exist.) While most of the XS tests have been performed on smaller drives, there will be useful overlap with regard to the specific chips, controllers, fab nodes, bits per cell, etc. By combining as much data as possible, it may be possible to make broad statements about a number of the these rather relevant parameters.

      • mattthemuppet
      • 6 years ago

      then you’re getting into meta-analysis which is a whole other kettle of fish.

      that approach might work for, say, 100s of human studies with small numbers of subjects (20-30), as the numbers are large enough that you can start teasing out small but significant differences. For this, the differences in testing methodologies between studies and the tiny, tiny sample sizes would in themselves result in more variance than you could resolve statistically. I.e., the results would still be largely meaningless.

        • JustAnEngineer
        • 6 years ago

        Since we’re talking about a continuous data variable (terabytes written before failure), we could perform a reasonable statistical analysis with a minimum experiment size of 30 drives (to get 30 measurements). If we wanted to compare drive type A to drive type B, we would normally run 30 tests on each of them.

        For count data (e.g.: number of SMART errors), you need many times more samples.

        For binary data (yes/no), we need an experiment with thousands of data points.

    • anotherengineer
    • 6 years ago

    Groovy, now I can see how long that Sammy 256GB 840 Pro will last.

    I would also be curious to see whether Toshiba Toggle NAND lasts longer or shorter than IMFT NAND on the same fab node.

    Edit – hey TR time to throw in a Plextor 256gb ssd with toshiba toggle nand 😉

    • Wirko
    • 6 years ago

    What if they refuse to die?

    [url<]http://en.wikipedia.org/wiki/Gustav_III_of_Sweden's_coffee_experiment[/url<]

      • danny e.
      • 6 years ago

      hilarious

      • danny e.
      • 6 years ago

      did you hear about the little indian boy who drank too much tea? drowned in his tea-pee.

      • Pholostan
      • 6 years ago

      Mmm, coffee.
      *glugg*

    • Lianna
    • 6 years ago

    Great to see SSD endurance test on TechReport.

    However, I really wish the test would include real ‘desktop use’ data like your 2-week trace, because a) it has realistic compressibility for SandForce, b) it relates to actual use scenarios, and c) garbage collection and write amplification would matter in the results. I guess c) would actually shorten the time to failure, so a test that’s ‘an order of magnitude slower’ would only be a little bit slower in practice. The current workload checks the ‘compressed video/pics/audio storage’ scenario, which is currently rare for SSDs. And if replaying the 2-week trace takes a while, that’s even better: it would mean that if, say, the trace takes one hour to complete, one week of testing would represent about 6.5 years of real desktop use.

    Second, I guess the Prodigy case used here is not exactly representative of real-world SSD usage conditions (i.e., it’s much better than typical). The test could be conducted at, say, 40°C (or 50°C), roughly corresponding to a cramped laptop, or to a position right above a hard drive in a desktop with little clearance and no fan. Elevated temperature could shorten the time to failure… but that’s the point of testing in a realistic (or worse-than-average but still common) scenario.

    Third, after each write pass of ‘real data’, I’d read the data back and check it for errors. Yeah, that would double the length of the test, but as stated earlier, if the ‘real desktop’ scenario is accelerated ~300 times, doubling the time would still mean two weeks of accelerated testing equaling roughly six years of real use. Checking the real, ‘after correction’ error rate would be extremely valuable, even with a small sample size.

    Finally, checking for a ‘readable’ failure mode would be of great importance. AFAIR, SSDs’ ‘read-only’ failure mode was touted as a ‘peace of mind’ feature, e.g. by Intel.
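
    A minimal sketch of the read-back check suggested above, in Python. The mount point and file size are hypothetical, and this is not the tool TR is actually using; a real test would also bypass or drop the OS page cache so the verification read hits the flash rather than RAM:

    import hashlib
    import os

    TEST_FILE = "/mnt/ssd_under_test/verify.bin"   # hypothetical mount point
    CHUNK = 1 << 20                                # 1 MiB
    TOTAL = 256 * CHUNK                            # 256 MiB per pass (arbitrary)

    def write_pass() -> str:
        """Write random (incompressible) data, fsync it, and return its SHA-256."""
        digest = hashlib.sha256()
        with open(TEST_FILE, "wb") as f:
            for _ in range(TOTAL // CHUNK):
                block = os.urandom(CHUNK)
                f.write(block)
                digest.update(block)
            f.flush()
            os.fsync(f.fileno())   # make sure the data actually reaches the drive
        return digest.hexdigest()

    def verify_pass(expected: str) -> bool:
        """Re-read the file and confirm it still matches what was written."""
        digest = hashlib.sha256()
        with open(TEST_FILE, "rb") as f:
            for block in iter(lambda: f.read(CHUNK), b""):
                digest.update(block)
        return digest.hexdigest() == expected

    if __name__ == "__main__":
        checksum = write_pass()
        print("data intact" if verify_pass(checksum) else "corruption detected")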

    • Kougar
    • 6 years ago

    Very much looking forward to seeing how the drives handle running out of flash. There’s been so much confusion regarding whether they’re supposed to turn into read-only memory, stop working outright, or just end up no longer being detected by Windows.

    • Vivaldi
    • 6 years ago

    I know I don’t represent the majority, but is there any chance you could comment on how to access SSD metrics/data via the command line, for your Unix/Linux readers? Is there something similar to smartctl that each of these vendors provides for us?

    Thanks. Long time fan of TR, I know I don’t always speak up, but I assure you I’m lurking!

    -Viv

      • jwilliams
      • 6 years ago

      Why can’t you just use `smartctl`?
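
      For what it’s worth, a rough sketch of pulling wear-related attributes with smartmontools’ smartctl on Linux (run as root). The attribute IDs, names, and raw-value scaling vary by vendor and model, so the entries below are examples rather than a standard:

      import re
      import subprocess

      DEVICE = "/dev/sda"   # adjust to the drive under test

      # Attributes commonly associated with wear/host writes; IDs differ by vendor.
      INTERESTING = {
          9:   "Power_On_Hours",
          177: "Wear_Leveling_Count (used by Samsung)",
          233: "Media_Wearout_Indicator (used by Intel)",
          241: "Total_LBAs_Written / host writes (varies by vendor)",
      }

      def read_attributes(device):
          """Return {attribute_id: (name, normalized_value, raw_value)} from smartctl -A."""
          out = subprocess.run(["smartctl", "-A", device],
                               capture_output=True, text=True, check=False).stdout
          attrs = {}
          for line in out.splitlines():
              # Attribute rows look like:
              # "241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 123456"
              m = re.match(r"\s*(\d+)\s+(\S+)\s+\S+\s+(\d+)\s+\d+\s+\d+\s+.*\s(\S+)$", line)
              if m:
                  attrs[int(m.group(1))] = (m.group(2), int(m.group(3)), m.group(4))
          return attrs

      if __name__ == "__main__":
          for attr_id, (name, value, raw) in sorted(read_attributes(DEVICE).items()):
              if attr_id in INTERESTING:
                  print(f"{attr_id:3d} {name:30s} normalized={value:3d} raw={raw}")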

        • Vivaldi
        • 6 years ago

        I’m aware smartctl exists (having mentioned it), but I’m curious whether each vendor has a console, proprietary or otherwise, that is unique to them for accessing the disk’s data. I’ve always found smartctl to be a rather poor source of reliable data.

        I believe, for example, Intel has such a console and/or CLI tool for accessing its drives’ data that goes beyond simply pushing firmware updates, although I’m not 100% sure, which is why I’m asking TR.

        After all, who better to ask than Geoff who has all these devices in his possession? Cheers!

        -Viv

    • MarkG509
    • 6 years ago

    Potential “small sample” problem?

    Testing several of each would help that, but may become unwieldy.

    I would also hope to see a post mortem on any failures…controller failure, power plane, poor solder joint, etc. If not, let’s hope they only die from flash wear.

      • allreadydead
      • 6 years ago

      Yes, I second this. The failure can be observed with one drive, but the results must be repeatable under the same conditions to call it a viable result. And the results should be shown as a median score, just like you do with your video card FPS and frame rate tests.

      Maybe you could get three of the same SSD, connect them all at once, and test them together by running three copies of that tool? Would we hit a write bottleneck that way? Dunno, just brainstorming. Running the same test three times with only one drive at a time could take a while…

      • stdRaichu
      • 6 years ago

      Testing several of each would be helpful but expensive, especially since the manufacturers aren’t supplying these. What I think would be good, given that the test is essentially looping and easily repeatable, is that when each drive dies, you send it back, put its replacement into the test rig, and resume thrashing. Repeat ad infinitum until the SATA cables wear out.

      That way, as well as giving a nice tally of dead drives, it’ll also indicate what warranty replacement is like.

    • panthal001
    • 6 years ago

    This quoted comment below, from the XtremeSystems forum thread, is of extreme concern, and TR needs to contact SSD vendors for some feedback about why SSDs are not failing into read-only mode once they’ve reached the end of their P/E cycles, as they are supposed to do. Please, please, please get the manufacturers to comment on this, as the read-only fail mode is supposed to work, so say the manufacturers themselves.
    [url<]http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm/page219[/url<] [quote<]Originally Posted by devsk View Post This is troubling, very troubling. None of the SSDs so far have gone into read-only mode upon failure. Not even the latest incarnation as of 2013. This was touted as one of the main features of SSD. In fact, the failure mode is hard. There is no way to get the data back and programs hang hard. Nostalgically looking, HDDs were way better in this area. I would gladly buy even a slower and slightly more expensive SSD drive which always fails into a read-only mode, leaving the data accessible.[/quote<]

      • jwilliams
      • 6 years ago

      I’m not sure you can blame that on the SSD manufacturers or the flash manufacturers. The flash is clearly rated for only 3000 (or 1000 for TLC) erase cycles. And I have never seen an SSD manufacturer claim endurance beyond a number of TB written that corresponds to the rated number of erase cycles (it’s usually less).

      The XtremeSystems failures had written far past the rated number of erase cycles. If they had instead stopped at the rated number, then the SSDs should be good for read-only use for at least a year, according to the applicable JEDEC spec for consumer flash.

      Now, if they were not good for 1 year of data retention after reaching the rated number of erase cycles, then that would be important news to know. If only techreport were testing that.

        • panthal001
        • 6 years ago

        I guess what I was getting at was that once SSDs can’t write any more, they are supposed to become read-only. Isn’t that the expected behavior, or am I wrong?

          • jwilliams
          • 6 years ago

          I don’t recall seeing any major manufacturers make such a claim. Anyway, all the SSD tests I have seen that do write to failure show that the SSDs do NOT become read only once they can no longer write.

            • panthal001
            • 6 years ago

            I also saw an article where cutting the power to an SSD usually results in corruption. I’ll find the article and post it. People should always have their system on a UPS, but…
            Personally, I have four SSDs in my system and they are amazing.

            In exposing 15 SSDs from five different vendors to power loss, researchers found that 13 suffered such failures as bit corruption, metadata corruption, and total device failure

            [url<]https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault[/url<] Listen to the audio or check out the PDF at the link.

            • jwilliams
            • 6 years ago

            It does not bother me very much, since I do not intend to use any of my SSDs past the rated number of erase cycles, and I am not even close to the rated number of erase cycles anyway.

            As for SSDs losing data when power is cut unexpectedly, I haven’t seen any credible studies on it, but it would not surprise me. There must be a good reason that some SSDs have capacitors to use in case of power loss. But again, this does not concern me greatly, since all the computers I use regularly have battery backup (UPS or laptop battery). The only way the SSD could lose power unexpectedly is if the computer PSU died or a power cable came loose. And if such an unlikely thing were to happen, well, that is why I backup my important data.

            • jwilliams
            • 6 years ago

            By the way, thanks for the link to the OSU study. I have to say, though, that study is more frustrating than illuminating, since they seem to have some strange reluctance to name the models of the SSDs they tested.

            • panthal001
            • 6 years ago

            Yes, I agree. I am guessing they don’t want to sour relationships. Even though the sample size is small, having 13 out of 15 show data issues is disturbing, lol. I can’t imagine adding super-caps to drives would add that much cost. At this point I think it should just be a built-in feature for all SSDs.

            But maybe the cost is more than I assume.

            • jwilliams
            • 6 years ago

            It looks like 4 of them had power-loss capacitors, so since only 2 of them had no issues, that means that power-loss capacitors are not sufficient reason to choose an SSD if you want to avoid the issues they tested in that study. That is why I find it so frustrating that they did not name the model numbers.

            Since the issues they saw were repeatable, I don’t think it is a serious problem that they only tested one of each model.

            • panthal001
            • 6 years ago

            There was another test of this nature, and they found that of all the drives tested, only two Intel models, both with super-caps, did not experience data corruption. I will see if I can find that as well. For some reason I’m thinking it was a leak out of this study, but I’m not 100 percent positive on that.

            If it was indeed Intel, I would not be surprised, considering the amount of validation they do on their drives. Take this with a grain of salt unless I can find where I saw that info.

    • HisDivineOrder
    • 6 years ago

    Smart not using any OCZ drives despite recommending them a lot. All those OCZ deaths might cause heads to explode and really those poor people who own those drives with the ill-fated O, C, Z letters on them really don’t need more evidence that when they were at Amazon or Newegg or wherever and they had to choose and the old man reminded them, “Remember… choose wisely…” and then they picked the OCZ drives because TR recommended them…

    Well, they must have been befuddled when the old man looked at them with sad, tired eyes and barely had the strength to sigh, “You have chosen… poorly…”

      • fantastic
      • 6 years ago

      I don’t think OCZ would do very well. I’d be much more concerned about the legal backlash. Which companies are most likely to send lawyers first and ask questions later? I’m not a fan of their products.

      The test is a great idea and I don’t remember ever seeing a similar test on mechanical hard drives. I don’t doubt that there is one, but I haven’t seen it.

      I love the Raiders reference and it sticks in my mind quite a bit.

    • muontrack
    • 6 years ago

    I just had two honest questions from the article:

    “All the SSDs we’re testing have attributes that tally host writes and provide general health estimates.”

    What are “general health estimates”? Does that mean current read/write/etc capabilities?

    “…we’ll stop periodically to benchmark the SSDs in four areas: sequential reads, sequential writes, random reads, and random writes.”

    How often is periodically? Given that you said you’re trying to accelerate their degradation, does that mean you’ll be performing these measurements daily?

    I think this is a great idea and would love to know the results. Good work TR!

      • Dissonance
      • 6 years ago

      The general health estimates are usually values that count down from 100 to 0. The drive doesn’t necessarily stop working when the ticker bottoms out, though. The estimate is related to NAND wear rather than performance.

      We won’t be testing performance daily or even weekly. We have our initial numbers, and we’ll be getting more when 22TB has been written to the drives. After that, I anticipate stopping at longer intervals, perhaps every 100TB. We may test more or less frequently depending on what the SMART data tells us about bad block accumulation.

    • Aliasundercover
    • 6 years ago

    Don’t try this at home! 🙂

    I look forward to the results. It will be great to have independent test results to temper the stuff vendors send our way.

    Please consider a test of powered off data endurance. That is how long the data stays good or how quickly it fades when the machine is unplugged or the SSD sits on a shelf.

    Don’t SSDs go into a read-only state as their storage cells fail quality checks during programming? Perhaps a way to test would be to take the fastest-failing SSDs as their bad sectors pile up, load them with well-known data, and stick them on a shelf. Test how well they read in a week, a month, a quarter, and a year. Yes, it’s an awful drag to have a test that needs to sit dormant for so long. Maybe scratch the year, as few would use an SSD for archival storage.

    I guess the way a normal person could hit this is moving or long foreign travel. The PC goes in storage while life throws curves. Will it still remember when there is time for the computer again?

      • bcronce
      • 6 years ago

      ^^ What Aliasundercover said, or something like it.

      • jwilliams
      • 6 years ago

      You are correct that a data-retention test is much more useful than a write-to-failure test.

      The write-to-failure tests have already been done multiple times by various people and sites. They all find the same thing: you can far exceed the stated 3000 erase cycles before the SSDs actually fail. Also, in every case I am aware of, the SSD does NOT become read-only, but rather fails to work at all after you have powered it off (for a few minutes) and back on. Apparently, when the flash gets erased so many times beyond the rated number, it fails in such a way that the SSD controller cannot even initialize the device properly.

      Much more interesting, and useful, to the average consumer would be a test of data retention. According to the JEDEC spec, consumer flash should retain its data for one year after reaching its rated number of erase cycles. This is obviously a long test, but it should not be very difficult to take a few SSDs rated for 3000 (or 1000 if TLC) erase cycles, use up the erase cycles, then put them in a drawer for a year to see whether the manufacturers are pulling our leg.

      Another possibility for an interesting data retention test that does not take a year would be to alternate weeks of writing and then power-off: one week of write torture, one week of power off, one week of write torture, one week of power off….until the static data fails to pass the checksum test.

        • bcronce
        • 6 years ago

        I would rather it fail into read-only mode once it starts seeing errors. I am not sure how difficult this would be, but I’m sure they have some smarty-pants engineers who could give us a way.

      • liquidsquid
      • 6 years ago

      You can accelerate the retention tests by storing the drives at their highest rated temperature, such as in a test oven. That’s commonly done in the industry, and there are standard formulas relating temperature to time for stress testing.

      These are the points I worry about most in moving from mechanical drives to SSDs for storing things like pictures. Either way, you still need regular backups; you cannot rely on a single storage mechanism.
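
      For reference, a sketch of the standard Arrhenius acceleration model behind the temperature-vs.-time formulas mentioned above. The activation energy here (~1.1 eV) is a commonly cited figure for flash data retention, but treat it as an assumption; real qualification work uses values characterized for the specific NAND:

      import math

      K_BOLTZMANN = 8.617e-5   # eV/K

      def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
          """How much faster retention loss proceeds at t_stress_c than at t_use_c."""
          t_use = t_use_c + 273.15
          t_stress = t_stress_c + 273.15
          return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_use - 1.0 / t_stress))

      if __name__ == "__main__":
          af = acceleration_factor(t_use_c=30.0, t_stress_c=85.0)
          print(f"85C bake vs. 30C shelf: roughly {af:.0f}x acceleration")
          print(f"So about {365 * 24 / af:.0f} hours in the oven would stand in for a year on the shelf")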

      • just brew it!
      • 6 years ago

      [quote<]Please consider a test of powered off data endurance. That is how long the data stays good or how quickly it fades when the machine is unplugged or the SSD sits on a shelf.[/quote<]

      This. As the cells wear, leakage currents increase; this reduces the length of time that stored data will be accurately retained. The drive may very well still be willing to accept new data long after it has passed its theoretical write limit; but it is possible that it will lose that data within a few weeks or months.

      Unfortunately, looking for this will (obviously) add a lot of time to the testing. But it might be interesting to take any drives that haven't failed after (say) 2 months, power them down for a week, then check whether all of the data can still be read back.

      Edit: I see that others have made similar suggestions.

    • odizzido
    • 6 years ago

    Looking forward to seeing some dead SSDs hehe

    • tinkerboy
    • 6 years ago

    As always TR delivers 😉
    Consider testing the Crucial M4 & Samsung 830 in the future – these were, I think, the most popular.

    • sjl
    • 6 years ago

    So, how’re the uninterruptible power supplies looking? Well tested, I hope? Fresh batteries?

    I can just imagine the howls of agony if the power died ten minutes before the test completed.

    Seriously, though, glad to see this sort of test performed – looking forward to seeing how things progress over the next few months and beyond.

      • UnfriendlyFire
      • 6 years ago

      Actually, I would be interested in seeing how much punishment SSDs can take from a power fault. I recall in the past, some users lost all of their data after the power cut out.

      About a decade ago, you could trigger an HDD head crash if you cut the power and moved the laptop.

        • indeego
        • 6 years ago

        THIS. This test needs power off/on as part of its procedures.

      • Scrotos
      • 6 years ago

      If it crapped out, just continue where you left off. It’s total written, not total written without a reboot. It’s not like the drives will magically un-wear themselves if the system is powered off. 🙂

        • sjl
        • 6 years ago

        You’d think. But that depends on the testing software logging where it’s up to, in a way that won’t be lost if the power unexpectedly dies. If you don’t know where you were up to in the test, it makes it hard to simply pick up and carry on – for one thing, you can’t report just how much was written (and hence, how many write cycles were put through.)

        Or, in other words, when you’re testing something, you want to remove or control as many variables as possible.

    • oldDummy
    • 6 years ago

    Thanks.
    This data, once interpreted, will be great for consumers.
    Should separate the workers from the pretenders.
    Interesting, in a nerdy sort of way.

      • Fishface
      • 6 years ago

      This data will provide NO useful information for consumers:

      Semiconductor devices are not perfectly identical. They can vary from fab to fab, from wafer to wafer and even from device to device on the same wafer.

      TR cannot know if the devices in these SSDs come from the lower end, middle or upper end of the distribution of devices.
      Consumers cannot know if the SSD they plan to buy will come from the lower end, middle, or upper end of the distribution.

      Neither TR nor the consumer can predict the shape of the distribution of devices – perhaps the majority of devices coming off the production line fall into the lower end of the distribution; perhaps they fall into the upper end.

      This test cannot predict anything about any SSD a consumer might buy. That SSD could perform far better or far worse than the one tested by TR.

        • jihadjoe
        • 6 years ago

        You could still use it to show which drives are consistently bad. In the XS SSD endurance thread, the drives that were consistently good were from Intel, Samsung, and Crucial; every drive from those three outlasted its MWI, and some even exceeded the rated NAND endurance.

        OTOH, every single OCZ drive up to the Vertex 4 failed before its MWI ran out.

        • oldDummy
        • 6 years ago

        Much can be inferred about random drives from random stores; who will do it?
        Anyone who considers the possibility of cherry-picked drives going to hardware review sites is still in a quandary.
        What can we expect a site to do?
        We could just rely on the manufacturers’ boilerplate… or not.
        My belief is that what TR is doing is great.
        Not perfect, but life rarely is.
        Anyhow, thanks again TR.

    • JohnC
    • 6 years ago

    You need to use a wider variety of models… At least one model from Crucial would be nice.

      • internetsandman
      • 6 years ago

      SSDs are still rather expensive, and for a trial run, the important thing is to get a sampling of controller and NAND varieties along with a handful of different lifespan-extending tricks.

        • JohnC
        • 6 years ago

        This is exactly why I mentioned Crucial products. They have drives of various sizes, are not expensive (compared to drives of similar size) and are, in fact, more popular than the rest of the non-Samsung garbage used in this “test”:
        [url<]http://www.amazon.com/gp/bestsellers/electronics/1292116011/ref=pd_zg_hrsr_e_1_4_last[/url<]

          • Dissonance
          • 6 years ago

          We considered including the Crucial M500, but some of its SMART attributes are obfuscated, making it difficult to track wear reliably. When we last checked, Crucial had no plans to support write tracking via SMART attributes or utility software. That’s an essential component of the experiment, so we left the M500 on the sidelines.

            • peartart
            • 6 years ago

            shh, let him whine in peace.

            • JohnC
            • 6 years ago

            I see. It would still be interesting to see how long they last before complete failure, though.

            • davidbowser
            • 6 years ago

            I wondered why I could not see the SMART counters that everyone else was seeing. I have various Crucial SSDs, so I would really like to understand how they match up, but I get that comparing SMART metrics is the way this experiment has to work.

    • Deanjo
    • 6 years ago

    Missing a few things, IMHO. I would like to see a few mechanical drives thrown into the mix, in both 2.5" and 3.5" form factors, from the two big vendors, Seagate and WD.

      • continuum
      • 6 years ago

      Since mechanical drives don’t have NAND wearout limits to worry about, I don’t think that would be all that useful. You’re not going to wear out the bits on a spinning platter and testing the entire drive for reliability with a single sample would not be statistically useful.

        • UnfriendlyFire
        • 6 years ago

        If you want to break HDDs, head over to server rooms that have racks of them and count up how many HDDs bite the dust during a set time duration.

          • Chrispy_
          • 6 years ago

          This.

          Mechanical failures don’t seem to have anything to do with workload; usually it’s environmental factors that kill a mechanical disk, or just a manufacturing defect.

          My sample size is woefully small but based on 250-300 15K drives in servers and SANs I’ve managed over the years, I’d say that 2-3% random failure rate within three years is about right, and heavily-hammered database drives outlasted identical models doing benign file-server duty.

            • Scrotos
            • 6 years ago

            To wit, links for Deanjo:

            [url<]http://storagemojo.com/2007/02/19/googles-disk-failure-experience/[/url<] [url<]http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf[/url<]

            • Deanjo
            • 6 years ago

            I could probably recite that article to you verbatim from memory; however, that study is fairly old now, and drives have changed a lot, with higher densities and spindle speeds.

        • Deanjo
        • 6 years ago

        Having a “worst case” load run on a mechanical drive would testify to its reliability and put some perspective on mechanical vs. SSD.

      • indeego
      • 6 years ago

      I’d like the next batch of mechanicals to be tested to be violently shaken while tested, and then throw in some SSDs for comparison.

    • frumper15
    • 6 years ago

    I’m glad to see TR doing something like this – I’m imagining there will be some point in the future where a number of the drives have failed in one way or another while a few just keep soldiering on to the point that you could say they’re “good enough” as drives much faster, larger, and cheaper have been available for a while.

    If you haven’t seen it before, this thread follows that trend, with a little 40GB Intel-based SSD still holding on (by the author of the tool TR is using, no less): [url<]http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm[/url<]

      • UnfriendlyFire
      • 6 years ago

      There was a YouTube video of the world’s most expensive HDD teardown. It was an early-1980s IBM 20 MB hard drive that weighed more than 10 pounds, and the banks in his area used them until only a few years ago because of the HDDs’ high purchase cost. In fact, the HDDs even had gas inlets to pump in fire-extinguishing gas in the event of a fire.

      Technically, they could still run. You would need an absurd number of them to even install Windows 8, though.

        • cygnus1
        • 6 years ago

        Ha, no way to gang enough of those together and/or connect them to a modern system to run Win8. Wouldn’t be surprised if they were MFM or RLL drives, stuff that predates even the old IDE. I’m kind of curious what sort of software was running on the systems with those drives at that bank.

          • UnfriendlyFire
          • 6 years ago

          He mentioned that the banks stored users’ account info in the HDDs, and had specialized racks to roll out the HDDs for maintenance or replacement. The HDDs used an ATX 6-pin power connector or something that resembled it.

      • mno
      • 6 years ago

      These tests (and TR’s too) are good, but they all ignore the issue of data retention time. JEDEC specifies that for NAND flash endurance ratings data retention must be at least 1 year at maximum P/E cycle count. (Naturally given the impracticality of actually conducting such tests, manufacturers use higher temperatures instead and extrapolate from there.) Note also that these standards apply only to individual NAND chips themselves and not whole SSDs.

      I’m quite curious what the results would be like when doing at least a relatively short (say 24 hours or a few days) data retention test reasonably often.

    • south side sammy
    • 6 years ago

    somebody already did an endurance test. TR isn’t cutting any new ground here.

      • derFunkenstein
      • 6 years ago

      Thanks for the useful links!

        • south side sammy
        • 6 years ago

        hardware.info was the first one I ever saw. there are others. google is your friend……….. the NSA’s friend too.

          • mcnabney
          • 6 years ago

          Hardware.info did an endurance test on ONE DRIVE.

      • UnfriendlyFire
      • 6 years ago

      There is no such thing as benchmark overkill. Lots of sites do an under-kill. “Let’s run SYSmark 2012, Starcraft 2, and BF3, and call it a day!”

      • internetsandman
      • 6 years ago

      Care to post a link or quote any results? Given the limited data on this subject, from a scientific mindset, the more studies conducted, the better. Just because one study was done doesn’t mean it was perfect or that it was representative of the average.

      • Geonerd
      • 6 years ago

      Why the down-votes, people?

      Sammy’s right, this is hardly cutting edge. I’m a little disappointed that TR didn’t even mention the Xtreme Systems tests that are already underway. If I didn’t know better, the tone of this article might convince me that TR is blazing a path of discovery.

      That said, I DO look forward to the findings!

        • south side sammy
        • 6 years ago

        Doesn’t matter on this site. No matter what my name goes on, right, wrong, or indifferent, these so-called intelligent people who troll this site don’t like it if you make waves or don’t fall in line. Screw them all. I can take it. I just wish they would grow up a little, or at least that the site owner would do away with the idiotic thumbs system altogether. That in itself is disingenuous.

          • jss21382
          • 6 years ago

          ….It’s not you, it’s that your opinion is unpopular. I’ve gotten massively downvoted anytime I’ve pointed out positive things about iOS or Win 8, truth is irrelevant, opinion is what counts.

          • theonespork
          • 6 years ago

          Good god…put on your big boy pants.

          Perhaps the down vote was simply due to the fact your comment added nothing of use to anyone. Or, at least, that is why I down voted you. First, you state other people have done these tests, but you provide no links. Yes Google is an acquaintance, but that does not mean everyone cares to go searching for links just because you say they might exist.

          Then, you snarkily comment about not “cutting any new ground” when it was not stated new ground was being cut. You then, in the above comment, call into question the intelligence and maturity of all the readers of this site, and tell them (me) to screw off. How pleasant. You chose to be uncritically critical and then criticize others for what, in essence, is the same thing. Headache. Growing. Much. Fast.

          Let me fix your initial comment:

          “Several sites have done testing like this, such as __________ and ___________. They were quite interesting. I am curious what kind of spin TR will put on the testing to differentiate themselves. I am sure it will be thorough and I look forward to comparing their results to these other sites. Enjoy the links guys.”

          I doubt people would have been so critical had you put some real effort into your initial post.

          You are quite welcome, cheers…

            • south side sammy
            • 6 years ago

            idiot!

            • CasbahBoy
            • 6 years ago

            Please, continue making a fantastic case for the quality of your posting.

            • maxxcool
            • 6 years ago

            Idiot!

            • StainlessSteelMan
            • 6 years ago

            You told him!

          • cphite
          • 6 years ago

          Just so you know – since you’ve obviously put at least some thought into why it’s happening, even if what you came up with is kind of silly – the reason *I* down-voted your first post was because it contained no useful information. Simply proclaiming that “others” have done this type of testing and that therefore it’s not new ground was pointless. Nowhere does the author claim that this is ground-breaking; he simply pointed out that TR is doing this.

          I down-voted your following posts (including this one) because they amounted to little more than childish foot-stomping.

          Further, if you poke around a little, you’ll find that plenty of people who post on this page are fans of iOS and Windows 8, and yet somehow avoid being down-voted. Have you considered that maybe – just maybe – the problem is you?

          • travbrad
          • 6 years ago

          Have you seen a doctor lately? I think you may be coming down with a persecution complex.

        • Dissonance
        • 6 years ago

        Endurance testing is new to us, but I don’t think the tone implies it’s revolutionary. We talk at length about an existing tool designed explicitly for the task, complete with a link to its XtremeSystems origins. Also, the first page makes two mentions of reports of SSDs lasting well beyond their stated limits.

        I didn’t single out XtremeSystems’ results because they’re not the only ones who have done endurance testing. Numerous sites have tested endurance in different ways, and there are lots of independent user reports on top of that. Our particular combination of endurance and long-term performance testing may be unique, but I haven’t sifted through all the reports online to even be able to say that with certainty.

        • jss21382
        • 6 years ago

        If we needed “not impressed,” we’d call Krogoth.

      • Sargent Duck
      • 6 years ago

      So…using your logic, the next time that a new video card/processor is reviewed and *insert hardware site here* is the first to review and post it, everybody else should just quit?

      Scott: “Oh look, [H]ardOCP was the first to review the new Radeon 8950. Well, I guess that’s it for TR guys, we’re throwing out our review because we’re not cutting any new ground here.”

      Oh, wait, even better.

      Scott: “we’re not going to review the new Ivy Bridge E because we already reviewed the Sandy Bridge E. We’re not cutting any new ground here!”

      Last One, I promise
      Jeremy Clarkson: “We’re not going to take any more cars around our track because we’re not cutting any new ground here”

      Wait, sorry, I lied.
      Every reviewer in the world: “Everything that can be reviewed has been reviewed, so we might as well all just throw in the towel since we’re not cutting any new ground”

        • JohnC
        • 6 years ago

        There is no “logic” – it was just a troll post, dude.

        • cphite
        • 6 years ago

        Whoa, whoa… the first example was enough. You aren’t cutting any new ground with those additional examples…

        • BIF
        • 6 years ago

        That’s funny. But I laughed already in my life, so it really didn’t cut any new ground.

        But I +1’d you anyway. 😉

      • allreadydead
      • 6 years ago

      SSDs have been around long enough that other people have already done this kind of test. And the reason behind the test, SSDs wearing out over time, has been known since day one, when they hit the market. So neither the cause nor the test is new, and TR said as much on the first page of the article.

      However, I really want to see TR’s take on this, done their own way, with their own results. IMHO, being the first to do a test doesn’t really matter. For me, what matters is the opinions and thoughts of the editor on the topic.

        • indeego
        • 6 years ago

        There could be many causes of SSDs failing, NAND wearout is just one of them.

        Race conditions.
        Faulty firmware.
        Power short.
        Initialization failure.
        TRIM ***up.
        Etc.

        Now combine all the above with NAND wearout and you have too many variables to nail down a cause, period.

        This is a cute test, but by no means does it represent anything close to the real world. Nor do the XtremeSystems tests, or any other benchmarking. [i<]Real world[/i<] is just too variable.

          • BIF
          • 6 years ago

          Somebody always has to make it about race conditions! I’m OUTRAGED!

          LOL, not really. +1. 😀

      • maxxcool
      • 6 years ago

      Nice link!

      • oomjcv
      • 6 years ago

      Can we get some ‘hall of fame’ or something for the most disliked/hated comments ever?

      • travbrad
      • 6 years ago

      Why do they have to be “cutting new ground”? I don’t see anyone complaining when they review CPUs even though there are probably hundreds of websites that review and benchmark CPUs. That seems like a vastly more saturated area of PC hardware testing than SSD write endurance.

    • albundy
    • 6 years ago

    just what i’m looking for! cant wait for the results!

    • drfish
    • 6 years ago

    Woohoo! This is exciting! Just another example of why TR is the best.

      • Saber Cherry
      • 6 years ago

      Yep – SSD endurance testing is something that needed to be done, and I’m glad someone(s) is finally doing it!

      Next, we just need some sort of compilation of motherboard/video card/HDD life expectancies by manufacturer…
