Setting the baseline
Before we start hammering our subjects with writes, we need to establish a performance baseline. We'll use these factory-fresh results as a point of reference when looking at how flash wear changes each drive's performance characteristics. Since Anvil's Storage Utilities includes a handful of benchmarks with the same compressibility settings as the endurance test, that's what we'll use to probe performance. We're sticking to the basics: 4MB sequential reads and writes, and 4KB random reads and writes. (We're using Anvil's QD16 random I/O tests and testing all the drives on the same 6Gbps SATA port on one of the test systems.)
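For readers curious about what a 4KB random-write test actually does, here's a minimal Python sketch. To be clear, this is not Anvil's methodology: Anvil sustains a queue depth of 16 and controls data compressibility, while this single-threaded toy does neither, so its numbers aren't comparable to the results below. The file name and parameters are arbitrary.

```python
import os
import random
import time
import tempfile

def random_write_test(path, file_size=64 * 1024 * 1024, block=4096, ops=2000):
    """Issue 4KB writes at random aligned offsets and return IOPS.

    Illustrative only: real benchmarks bypass the page cache and keep
    many requests in flight; this sketch does neither.
    """
    data = os.urandom(block)  # incompressible payload
    with open(path, "wb") as f:
        f.truncate(file_size)  # preallocate the test file
        # Pick random 4KB-aligned offsets ahead of time.
        offsets = [random.randrange(0, file_size // block) * block
                   for _ in range(ops)]
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the writes to the device
        elapsed = time.perf_counter() - start
    return ops / elapsed  # I/O operations per second

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        print(f"{random_write_test(path):.0f} IOPS")
    finally:
        os.remove(path)
```

A purpose-built tool like Anvil (or fio on Linux) adds the pieces this sketch omits: direct I/O to dodge the OS cache, a deep queue, and tunable compressibility.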
Because we've limited performance benchmarking to a single application and a handful of tests, I wouldn't draw any conclusions from the results below. Our latest SSD reviews explore the performance of most of these drives in much greater detail—and across a much broader range of real-world tests. We're using Anvil's benchmarks for convenience.
These numbers have only limited usefulness by themselves. Things should get more interesting as we add data points after tens and hundreds of terabytes have been written to the drives.
Note the differences between the HyperX configurations, though. The compressed config scores higher than the standard one in the sequential tests but not in the random ones. The differences in the sequential tests are much smaller than I expected from the "46% incompressible" setting, too.
That's all the time we need to spend on performance for now. Our next set of benchmarks will be run after 22TB of data has been written, matching the endurance specification of the Intel 335 Series. I wouldn't expect those tests to produce different results. However, we should see performance suffer as we get deeper into our endurance testing. Bad blocks will slowly eat away at the spare area that SSDs use to speed write performance, and reads may be slowed by the additional error correction required as wear weakens the integrity of the individual flash cells.
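To put that 22TB figure in perspective, a quick back-of-the-envelope calculation is instructive. The P/E-cycle rating and write-amplification factor below are illustrative assumptions for a 240GB drive built on MLC NAND, not Intel's published specs; the point is simply that the raw flash can absorb far more writes than the conservative endurance rating implies.

```python
# Back-of-the-envelope SSD endurance math. The cycle count and write
# amplification are assumed values for illustration, not vendor specs.
capacity_gb = 240            # drive capacity
pe_cycles = 3000             # assumed program/erase cycles per cell
write_amplification = 1.1    # assumed WA for mostly-sequential data

raw_endurance_tb = capacity_gb * pe_cycles / write_amplification / 1000
print(f"~{raw_endurance_tb:.0f}TB of host writes before the NAND wears out")

# The 22TB rating works out to roughly 20GB of writes per day
# sustained over a three-year span:
spec_tb = 20 * 365 * 3 / 1000
print(f"20GB/day for three years ≈ {spec_tb:.1f}TB")
```

Under these assumptions, the flash itself should tolerate writes well beyond the official rating, which is exactly what this experiment sets out to test.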
On your marks, get set...
If you've read our latest SSD reviews, you'll know that most modern solid-state drives offer comparable all-around performance. Any halfway decent SSD should be fast enough for most users. This rough performance parity has made factors like pricing and endurance more important, which is part of the reason we're undertaking this experiment in the first place.
Also, we couldn't resist the urge to test six SSDs to failure. That may sound a bit morbid, but we've long known about flash memory's limited write endurance, and we've often wondered what sort of ceiling that imposes on SSD life—and how it affects performance in the long run. The data produced by this experiment should provide some insight.
We're just getting started with endurance testing, and there are opportunities for further exploration if this initial experiment goes well. Flash wear isn't going away. In fact, it's likely to become a more prominent issue as NAND makers pursue finer fabrication techniques that squeeze more bits into each cell. This smaller lithography will drive down the per-gigabyte cost, bringing SSDs to even more PC users. As solid-state drives become more popular, it will become even more important to understand how they age.
We have lots of data to write to this initial batch of drives, so it's time to stop talking and start testing. We've outlined our plans, configured our test rigs, and taken our initial SMART readings. Let the onslaught of writes begin! We'll see you in 22TB.
Update: The 22TB results are in. So far, so good.
Update: After 200TB, we're starting to see the first signs of weakness.
Update: The drives have passed the 300TB mark, and we've added an unpowered retention test to see how well they retain data when unplugged.
Update: Our subjects have crossed the half-petabyte threshold, and they're still going strong.
Update: All is well after 600TB of writes—and after a longer-term data retention test.
Update: We've now written one petabyte of data, and half the drives are dead.
Update: The SSDs are now up to 1.5PB—or two of them are, anyway. The last 500TB claimed another victim.
Update: The experiment has reached two freaking petabytes of writes. Amazingly, our remaining survivors are still standing.
Update: They're all dead! Read the experiment's final chapter right here.