
Introducing the SSD Endurance Experiment

Geoff Gasior

SSDs are pretty awesome. They’re fast enough to provide a palpable improvement in overall system responsiveness and affordable enough that even budget rigs can get in on the action. Without moving parts, SSDs also tolerate rough handling much better than mechanical drives, making them particularly appealing for mobile devices. That’s a pretty good all-around combination.

Despite the perks, SSDs have a dirty little secret. Their flash memory may shrug off bumps and drops, but it's fundamentally fragile in another way: writing data erodes the nano-scale structure of the individual memory cells, imposing a ceiling on drive life that can be measured in terabytes. Solid-state drives are living on borrowed time. The question is: how much time do they really have?

Drive makers typically characterize lifespans in total bytes written. Their estimates usually range from 20-40GB per day for the length of the three- or five-year warranty. However, based on user accounts all over the web, those figures are fairly conservative. They don’t tell us what happens to SSDs as they approach the end of the road, either.

Being inquisitive types, we’ve decided to seek answers ourselves. We’ve concocted a long-term test that will track a handful of modern SSDs—the Corsair Neutron Series GTX, Intel 335 Series, Kingston HyperX 3K, and Samsung 840 and 840 Pro Series—as they’re hammered with an unrelenting torrent of data over the coming weeks and months. And we won’t stop until they’re all dead. Welcome to the SSD Endurance Experiment.

Why do SSDs die?
Before we dive into the specifics of our experiment, it’s important to understand why SSDs wear out. The problem lies within the very nature of flash memory. NAND is made up of individual cells that store data by trapping electrons inside an insulated floating gate. Applied voltages shuffle these electrons back and forth through the otherwise insulating oxide layer separating the gate from the silicon substrate. This two-way traffic slowly weakens the physical structure of the insulator, a layer that is only getting thinner as Moore’s Law drives the adoption of finer fabrication techniques.

Another side effect of this electron traffic—tunneling, as it’s called—is that some of the negatively charged particles get stuck in the insulator layer. As this negative charge accumulates over time, it narrows the range of voltages that can be used to represent data within the cell. This form of flash wear is especially troublesome for three-bit TLC NAND, which must differentiate between eight discrete values within that shrinking window. Two-bit MLC NAND has only four values to consider.

Flash cells are typically arranged in 4-16KB pages grouped into 512-8192KB blocks. SSDs can write to empty pages directly. However, they can only write to occupied pages through a multi-step process that involves reading, modifying, and then rewriting the entire block. To offset this block-rewrite penalty, the TRIM command and garbage collection routines combine to move data around in the flash, ensuring a fresh supply of empty pages for incoming writes. Meanwhile, wear-leveling routines distribute writes and relocate static data to spread destructive cycling more evenly across the flash cells. All of these factors conspire to inflate the number of flash writes associated with each host write, a phenomenon known as write amplification.
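To put a rough number on it, here's a minimal sketch of the write amplification calculation using hypothetical figures for a worst-case block rewrite. It isn't modeled on any particular controller's accounting:

```python
# Write amplification = bytes physically written to flash / bytes the host requested.
# The figures below are hypothetical, illustrating a worst-case block rewrite.

def write_amplification(host_bytes: int, flash_bytes: int) -> float:
    return flash_bytes / host_bytes

host_write = 4 * 1024           # the OS updates a single 4KB page
flash_write = 2 * 1024 * 1024   # but the controller must rewrite an occupied 2MB block

print(write_amplification(host_write, flash_write))  # 512.0
```

Real-world amplification factors are far lower than that worst case, of course; the whole point of TRIM, garbage collection, and wear leveling is to keep the ratio as close to 1 as possible.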

SSD makers tune their algorithms to minimize write amplification and to make the most efficient use of the flash’s limited endurance. They also lean on increasingly advanced signal processing and error correction to read the flash more reliably. Some SSD vendors devote more of the flash to overprovisioned spare area that’s inaccessible to the OS but can be used to replace blocks that have become unreliable and must be retired. SandForce goes even further, employing on-the-fly compression to minimize the flash footprint of host writes. Hopefully, this experiment will give us a sense of whether those techniques are winning the war against flash wear.

The experiment
Clearly, many factors affect SSD endurance. Perhaps that’s why drive makers are so conservative with their lifespan estimates. Intel’s 335 Series 240GB is rated for 20GB of writes per day for three years, which works out to just under 22TB of total writes. If we assume modest write amplification and a 3,000-cycle write/erase tolerance for the NAND, this class of drive should handle hundreds of terabytes of flash writes. With similarly wide discrepancies between the stated and theoretical limits of most SSDs, it’s no wonder users have reported much longer lifespans. Our experiment intends to find out just how long modern drives actually last.
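For the curious, the back-of-the-envelope arithmetic looks something like this. The 3,000-cycle write/erase tolerance is our assumption, not an official Intel figure:

```python
# Back-of-the-envelope endurance math for a 240GB drive.
# The 3,000-cycle P/E tolerance is an assumption, not an official spec.

capacity_gb = 240
rated_gb_per_day = 20
warranty_years = 3
pe_cycles = 3000

rated_writes_tb = rated_gb_per_day * 365 * warranty_years / 1000
flash_ceiling_tb = capacity_gb * pe_cycles / 1000

print(f"Intel's rated limit: ~{rated_writes_tb:.1f}TB of host writes")    # ~21.9TB
print(f"Theoretical ceiling: ~{flash_ceiling_tb:.0f}TB of flash writes")  # ~720TB
```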

The ideal workload for endurance testing would be a trace of real-world I/O like our DriveBench 2.0 benchmark, which comprises nearly two weeks of typical desktop activity. There’s just one problem: it’s too darned slow. Reaching the 335 Series’ stated limit would take more than a month, and we’d have to wait substantially longer to approach the theoretical limits of the NAND.

We can push SSD endurance limits much faster with synthetic benchmarks. There are myriad options, but the best one is Anvil’s imaginatively named Storage Utilities.

Developed by a frequenter of the XtremeSystems forums, this handy little app includes a dedicated endurance test that fills drives with files of varying sizes before deleting them and starting the process anew. We can tweak the payload of each loop to write the same amount of data to each drive. There’s an integrated MD5 hash check that verifies data integrity, and the write speed is more than an order of magnitude faster than DriveBench 2.0’s effective write rate.

Anvil’s endurance test writes files sequentially, so it’s not an ideal real-world simulation. However, it’s the best tool we have, and it allows us to load drives with a portion of static data to challenge wear-leveling routines. We’re using 10GB of static data, including a copy of the Windows 7 installation folder, a handful of application files, and a few movies.
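The gist of the endurance test is easy to express in code. Here's a heavily simplified sketch of the write-verify-delete loop; the target path, file sizes, and per-loop payload are placeholders of our own, not Anvil's actual parameters:

```python
# Simplified sketch of an endurance-test loop in the spirit of Anvil's utility.
# The target path, file sizes, and per-loop payload are illustrative only.
import hashlib
import os

TARGET = "D:/endurance"          # hypothetical mount point of the drive under test
LOOP_PAYLOAD = 64 * 1024**3      # write ~64GB per loop, then delete and repeat
FILE_SIZES = [4 * 1024, 64 * 1024, 1024**2, 16 * 1024**2]  # mix of file sizes

def run_loop() -> int:
    written, index = 0, 0
    while written < LOOP_PAYLOAD:
        size = FILE_SIZES[index % len(FILE_SIZES)]
        data = os.urandom(size)                      # incompressible payload
        path = os.path.join(TARGET, f"file_{index:06d}.bin")
        with open(path, "wb") as f:
            f.write(data)
        # Verify integrity with an MD5 hash check, as Anvil's utility does.
        with open(path, "rb") as f:
            assert hashlib.md5(f.read()).digest() == hashlib.md5(data).digest()
        written += size
        index += 1
    # Delete everything and start the next loop with a fresh batch of files.
    for name in os.listdir(TARGET):
        os.remove(os.path.join(TARGET, name))
    return written
```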

The Anvil utility also has an adjustable incompressibility scale that can be set to 0, 8, 25, 46, 67, or 100%. Among our test subjects, only the SandForce-based Intel 335 Series and Kingston HyperX 3K SSD can compress incoming data on the fly. We’ll be testing all the SSDs with incompressible data to even the playing field. To assess the impact of SandForce’s DuraWrite tech, we’ll also be testing a second HyperX drive with Anvil’s 46% “applications” compression setting.
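To give a sense of what a partially compressible payload means in practice, here's one crude way to approximate it by mixing random bytes with easily compressed filler. This is just an illustration of the concept, not how Anvil generates its test data:

```python
# Crude approximation of a partially compressible buffer: random bytes are
# essentially incompressible, while long runs of zeros compress to almost nothing.
# This is an illustration of the idea, not Anvil's actual data generator.
import os

def make_payload(size: int, incompressible_fraction: float) -> bytes:
    random_bytes = int(size * incompressible_fraction)
    return os.urandom(random_bytes) + bytes(size - random_bytes)

payload = make_payload(1024**2, 0.46)   # roughly Anvil's 46% "applications" setting
```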

Since the endurance benchmark tracks the number of gigabytes written to the drive, we can easily keep tabs on how the SSDs are progressing. We can also monitor the total bytes written by reading each drive’s SMART attributes. All the SSDs we’re testing have attributes that tally host writes and provide general health estimates.
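The vendor utilities described below expose those attributes through their own interfaces, but the same numbers can be pulled with a generic tool. As a rough illustration, here's how one might poll them with smartmontools' smartctl; the device path is an example, and the attribute IDs vary from drive to drive, as we'll detail shortly:

```python
# Rough sketch of polling SMART attributes with smartmontools' smartctl.
# The device path is an example; attribute IDs differ between drives.
import json
import subprocess

def read_smart_attributes(device: str) -> dict:
    """Return a mapping of SMART attribute ID -> raw value for an ATA device."""
    out = subprocess.run(["smartctl", "-A", "--json", device],
                         capture_output=True, text=True, check=True).stdout
    table = json.loads(out)["ata_smart_attributes"]["table"]
    return {attr["id"]: attr["raw"]["value"] for attr in table}

attrs = read_smart_attributes("/dev/sdb")
print(attrs.get(241))   # total LBAs written on most of our drives
print(attrs.get(5))     # reallocated (bad) block count
```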

There’s also a SMART attribute that counts bad blocks, giving us a sort of body count we can attribute to flash wear. As mounting cell failures compromise entire blocks, replacements will be pulled from overprovisioned spare area, reducing the amount of flash available to accelerate performance. To measure how this spare area shrinkage slows down our drives, we’ll stop periodically to benchmark the SSDs in four areas: sequential reads, sequential writes, random reads, and random writes. The drives will be secure-erased before each test session, ensuring a full slate of available flash pages. (The static data will be copied back after each endurance test.)

We’re not that interested in the performance differences between our guinea pigs; our reviews of each drive cover that subject in much greater detail. Instead, we want to observe how flash wear takes its toll on each drive. Some SSDs may age more gracefully than others.

To make testing practical, we’ve limited ourselves to one example of each SSD, plus the extra HyperX. Our sample size is too small to provide definitive answers about reliability, but testing six drives will give us a decent sense of the endurance of modern SSDs. Now, let’s meet our subjects.

Five SSD flavors
Our endurance experiment covers five distinctly different SSD configurations in the 240-256GB range. We’ll start with the latest version of Corsair’s Neutron Series GTX. We reviewed an earlier variant of this drive last year, and the Link_A_Media Devices controller hasn’t changed. However, Corsair has since upgraded the flash from 26-nm Toshiba MLC NAND to smaller 19-nm chips.

The Neutron’s new NAND comes with an accompanying price cut, bringing the GTX down to $220. That’s pretty affordable considering the five-year warranty; most SSDs in this price range are covered for only three years. Unfortunately, Corsair doesn’t list an official endurance specification for the Neutron GTX.

Given the 240GB storage capacity, one might assume Corsair has dedicated additional spare area to replace bad blocks. As far as we’re aware, though, the drive has the same ~7% overprovisioning as 256GB drives. In this case, another ~7% of the raw flash capacity is dedicated to parity data associated with the controller’s RAID-like redundancy scheme, which provides an extra layer of protection against physical flash failures.

Users can monitor the Neutron GTX’s health using Corsair’s SSD Toolbox software. The application is relatively new, and the interface could use a little more polish. It’ll do for our purposes, though. The information section displays the total host writes, and there’s a SMART section that reads the drive’s attributes. The host writes measure is linked to SMART attribute 241, which keeps tabs on the number of LBAs written. Attribute 231 is the generic wear indicator, while attribute 5 tallies bad blocks.

The next SSD on our list is Intel’s 335 Series. Behold its stark metal body:

The 335 Series pairs SandForce’s SF-2281 controller with 20-nm MLC NAND produced by IMFT, Intel’s joint flash venture with Micron. Like the Neutron GTX, the 335 Series derives 240GB of storage from 256GB of NAND. Part of the “missing” capacity is devoted to RAISE, the RAID-like redundancy feature built into the SandForce controller.

Intel says the 335 Series can endure 20GB of writes per day for the length of its three-year warranty. That rating applies to typical client workloads, and it adds up to 22TB overall. Our endurance test will be able to push past the specified limit in short order.

At $220 online, the 335 Series 240GB is a tad expensive in light of its pedestrian warranty coverage. You’re paying a premium for the Intel badge—and for the excellent SSD Toolbox software.

Despite bearing the same name as Corsair’s utility, Intel’s software is much nicer. The main screen doesn’t list host writes, but it does characterize drive health, and it estimates how much life is remaining. Again, clicking the SMART button brings up an attribute tracking panel.

The Intel 335 Series tabulates writes in several ways. Attribute 225 measures host writes, 241 tracks the number of LBAs written, and 249 reports NAND writes in 1GB increments. There’s also a media wear indicator, attribute 233, that ticks down from 100 as the NAND erodes. Once again, the number of retired blocks is covered by attribute 5, a.k.a. the reallocated sector count.

Like the Intel 335 Series, the Kingston HyperX 3K is based on second-gen SandForce controller technology. Both drives are equipped with MLC NAND fabbed by IMFT, but Kingston uses older 25-nm chips. That difference gives us an opportunity to compare the endurance of similar drives based on subsequent flash generations.

There’s no comparison when it comes to aesthetics, though. The HyperX series is the best-looking SSD family around.

Remember that we have a pair of identical HyperX drives to test. One will be run through the wringer with the same incompressible data as the other SSDs, while the other will be given a chance to flex SandForce’s write compression tech. The HyperX will be at the center of a couple of interesting subplots.

With a $185 price tag, the HyperX 3K is a pretty sweet deal right now. The three-year warranty is standard fare, but the 192TB endurance rating is very impressive. Crossing that threshold will take some time.

Kingston’s software looks pretty dated, and it doesn’t play nicely with some versions of Intel’s RST storage drivers—including the ones installed on our test rigs. Bummer. When drives are detected correctly, the app offers the basics: a general health indicator, a firmware update feature, a secure erase tool, and access to SMART data. Kingston tells us a new version of the Toolbox app is in the works, and I hope it has broader driver support.

As a consolation, perhaps, Kingston provides a handy PDF detailing all of the SSD’s SMART attributes. We’ll be concentrating on attributes 5, 231, and 241, which cover bad blocks, overall drive health, and host writes, respectively.

Last, but not least, we have a couple of Samsung SSDs: the 840 Series and the 840 Pro. They look identical, and they’re based on the same in-house MDX controller. Their NAND is built by Samsung on the same 21-nm fabrication process, too. But the 840 Series packs three bits per cell into its TLC NAND, while the 840 Pro has two-bit MLC chips.

To account for the lower endurance of its TLC NAND, the 840 Series allocates more flash capacity to overprovisioned spare area that can be used to replace bad blocks. That’s why the drive advertises 250GB instead of the 256GB available in the 840 Pro. For what it’s worth, Samsung says it was overly conservative when defining the 840 Series’ spare area. The firm claims its first-gen TLC chips were more resilient than expected, which is why the newer, TLC-based 840 EVO uses that extra 6GB as a fancy write cache, instead.

As its $175 price tag attests, the 840 Series 250GB is a value-oriented model. You’ll have to shell out $240 for the 840 Pro 256GB, but you’ll get a longer five-year warranty in return. The 840 Series’ coverage runs out after three years. Unfortunately, Samsung hasn’t published official endurance specifications for the 840 family.

All the 840-series drives work with Samsung’s Magician utility. The application has an attractive interface that tracks total bytes written and overall drive health right there on the main screen.

Clicking the SMART button in the upper-right corner brings up the list of available attributes, and we’ll be watching a few of them. Attribute 241 tracks the total number of LBAs written, from which we can determine the number of bytes. We can also see how many write/erase cycles are consumed by watching the wear-leveling count, otherwise known as attribute 177. The number of bad blocks is tracked by attribute 5.
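Turning that raw LBA count into bytes is simple arithmetic, assuming the 512-byte logical sectors these drives report:

```python
# Convert SMART attribute 241 (total LBAs written) into terabytes written,
# assuming 512-byte logical sectors. The raw value below is an example, not a reading.
lbas_written = 42_970_000_000
bytes_written = lbas_written * 512
print(f"{bytes_written / 1000**4:.1f}TB written")   # ~22.0TB
```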

Although all of the vendor SSD utilities can read SMART attributes, we’ll also be monitoring those values with third-party software. Hard Disk Sentinel lets us dump SMART values to CSV files that can be saved and digested easily.

Now, let’s look at the systems that will serve as test rigs for the experiment.

Custom test rigs
Our endurance experiment will likely be running for many months, so we need dedicated systems to power the endeavor. We’ve assembled two identical rigs for the task. Each one lives in a closet with three test subjects inside.

Our test machines are built inside twin BitFenix Prodigy enclosures. We could have gone with smaller cases, especially since we’re using Mini-ITX motherboards. The Prodigy has room to grow, though. The thing boasts nine 2.5″ drive mounts—more than most mid-tower ATX enclosures. We certainly have room to expand our sample size if this initial experiment goes well. The Prodigy also supports full-sized CPU coolers and PSUs, which lets us keep the systems relatively quiet without too much effort.

Gigabyte’s H77N-WiFi motherboard sits inside our Prodigy chassis. This is one of our favorite mini Ivy Bridge boards. Apart from the platform hub, it’s identical to the Z77N-WiFi we reviewed earlier this year. The H77N-WiFi serves up dual 6Gbps SATA ports in addition to two 3Gbps ones—enough I/O connectivity for our first round of testing. It also has built-in 802.11n Wi-Fi that enables us to manage the systems while they’re stuffed in the closet.

We selected an Intel platform because we’ve found the firm’s SATA controllers to be faster than those in AMD chipsets. Our testing doesn’t require a lot of CPU horsepower, so we chose a pair of older Core i3-2100 processors from the Sandy Bridge generation. At 3.1GHz, the dual-core chips have more than enough oomph to swamp our SSDs. The Core i3’s integrated GPU eliminates the need for discrete graphics cards, as well.

A pair of Thermaltake NiC F3 air towers is tasked with cooling our CPUs. These puppies combine slim radiators with three heatpipes that make direct contact with the CPU. The mounting bracket is easy to use, and the four-pin PWM fan is relatively quiet. Truthfully, we don’t need anything fancy to keep the Core i3-2100’s temperatures in check.

Unlike a lot of aftermarket coolers, the NiC F3 leaves enough clearance for taller memory modules. We decided to take advantage by using some Corsair Dominator Platinum DIMMs left over from our PC build guide. The modules have monster heat spreaders, and they were a tight fit on one of the boards, whose DIMM slots are angled toward the CPU slightly. Doh! We ended up swapping the CPU fan over to the other side of the radiator to give the memory more room to breathe.

Our endurance testing is being conducted with the target drives connected as secondary storage. That means we need a separate system drive, and why not use another SSD? They’re silent and power-efficient, and I have a growing stack of ’em in the Benchmarking Sweatshop. To match the red CPU fans, I grabbed a couple of 60GB Corsair Force GTs that have been on the shelf since our look at SSD performance scaling.

Admittedly, the Rosewill Fortress 550W PSUs are overkill. We wanted something power-efficient, though, and these are 80 Plus Platinum-certified. They’re also very quiet, and they nicely match our system’s largely blacked-out theme. The PCIe power connectors even have a splash of red.

The Fortress is technically too large for the Prodigy, but we managed to marry the two with some careful cable routing. At least the case provides plenty of places to cram excess cabling. We ended up with pretty clean systems overall.

Setting the baseline
Before we start hammering our subjects with writes, we need to establish a performance baseline. We’ll use these factory-fresh results as a point of reference when looking at how flash wear changes each drive’s performance characteristics. Since Anvil’s Storage Utilities includes a handful of benchmarks with the same compressibility settings as the endurance test, that’s what we’ll use to probe performance. We’re just sticking to the basics: 4MB sequential reads and writes, and 4KB random reads and writes. (We’re using Anvil’s QD16 random I/O tests and testing all the drives on the same 6Gbps SATA port on one of the test systems.)

Because we’ve limited performance benchmarking to a single application and a handful of tests, I wouldn’t draw any conclusions from the results below. Our latest SSD reviews explore the performance of most of these drives in much greater detail—and across a much broader range of real-world tests. We’re using Anvil’s benchmarks for convenience.

These numbers have only limited usefulness by themselves. Things should get more interesting as we add data points after tens and hundreds of terabytes have been written to the drives.

Note the differences between the HyperX configurations, though. The compressed config scores higher than the standard one in the sequential tests but not in the random ones. The differences in the sequential tests are much smaller than I expected from the “46% incompressible” setting, too.

That’s all the time we need to spend on performance for now. Our next set of benchmarks will be run after 22TB of data has been written, matching the endurance specification of the Intel 335 Series. I wouldn’t expect those results to differ much from the baseline numbers. However, we should see performance suffer as we get deeper into our endurance testing. Bad blocks will slowly eat into the spare area that SSDs use to speed write performance, and reads may be slowed by the additional error correction required as wear weakens the integrity of the individual flash cells.

On your marks, get set…
If you’ve read our latest SSD reviews, you’ll know that most modern solid-state drives offer comparable all-around performance. Any halfway decent SSD should be fast enough for most users. This rough performance parity has made factors like pricing and endurance more important, which is part of the reason we’re undertaking this experiment in the first place.

Also, we couldn’t resist the urge to test six SSDs to failure. That may sound a bit morbid, but we’ve long known about flash memory’s limited write endurance, and we’ve often wondered what sort of ceiling that imposes on SSD life—and how it affects performance in the long run. The data produced by this experiment should provide some insight.

We’re just getting started with endurance testing, and there are opportunities for further exploration if this initial experiment goes well. Flash wear isn’t going away. In fact, it’s likely to become a more prominent issue as NAND makers pursue finer fabrication techniques that squeeze more bits into each cell. This smaller lithography will drive down the per-gigabyte cost, bringing SSDs to even more PC users. As solid-state drives become more popular, it will become even more important to understand how they age.

We have lots of data to write to this initial batch of drives, so it’s time to stop talking and start testing. We’ve outlined our plans, configured our test rigs, and taken our initial SMART readings. Let the onslaught of writes begin! We’ll see you in 22TB.

Update: The 22TB results are in. So far, so good.

Update: After 200TB, we’re starting to see the first signs of weakness.

Update: The drives have passed the 300TB mark, and we’ve added an unpowered retention test to see how well they retain data when unplugged.

Update: Our subjects have crossed the half-petabyte threshold, and they’re still going strong.

Update: All is well after 600TB of writes—and after a longer-term data retention test.

Update: We’ve now written one petabyte of data, and half the drives are dead.

Update: The SSDs are now up to 1.5PB—or two of them are, anyway. The last 500TB claimed another victim.

Update: The experiment has reached two freaking petabytes of writes. Amazingly, our remaining survivors are still standing.

Update: They’re all dead! Read the experiment’s final chapter right here.
