Setting the baseline
Before we start hammering our subjects with writes, we need to establish a performance baseline. We'll use these factory-fresh results as a point of reference when looking at how flash wear changes each drive's performance characteristics. Since Anvil's Storage Utilities includes a handful of benchmarks with the same compressibility settings as the endurance test, that's what we'll use to probe performance. We're sticking to the basics: 4MB sequential reads and writes, plus 4KB random reads and writes. (We're using Anvil's QD16 random I/O tests and running all the drives on the same 6Gbps SATA port on one of the test systems.)
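To make the random I/O test concrete, here's a minimal Python sketch of what a 4KB random-read benchmark measures. This is an illustration only, not Anvil's code: a real benchmark would use queue depths greater than one (Anvil's test runs at QD16) and bypass the operating system's page cache.

```python
import os
import random
import time

def random_read_bench(path, block_size=4096, reads=1000):
    """Issue random 4KB reads across a file and return throughput
    in bytes per second. A toy, single-threaded (QD1) illustration
    of what a random-read benchmark does."""
    blocks = os.path.getsize(path) // block_size
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(reads):
            # Read one block at a random aligned offset.
            os.pread(fd, block_size, random.randrange(blocks) * block_size)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return reads * block_size / elapsed
```

Because this version reads through the page cache at queue depth 1, its numbers aren't comparable to Anvil's; it only shows the shape of the workload.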
Because we've limited performance benchmarking to a single application and a handful of tests, I wouldn't draw any conclusions from the results below. Our latest SSD reviews explore the performance of most of these drives in much greater detail—and across a much broader range of real-world tests. We're using Anvil's benchmarks for convenience.
These numbers have only limited usefulness by themselves. Things should get more interesting as we add data points after tens and hundreds of terabytes have been written to the drives.
Note the differences between the HyperX configurations, though. The compressed config scores higher than the standard one in the sequential tests but not in the random ones. The differences in the sequential tests are much smaller than I expected from the "46% incompressible" setting, too.
That's all the time we need to spend on performance for now. Our next set of benchmarks will be run after 22TB of data has been written, matching the endurance specification of the Intel 335 Series. I wouldn't expect different results from those tests. However, we should see performance suffer as we get deeper into our endurance testing. Bad blocks will slowly eat away at the spare area that SSDs use to speed write performance, and reads may be slowed by the additional error correction required as wear weakens the integrity of the individual flash cells.
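For tracking progress toward milestones like that 22TB mark, the SMART attribute many drives expose for host writes (often reported as Total_LBAs_Written) can be converted to terabytes with a one-liner. The sketch below assumes the drive counts 512-byte LBAs; some vendors report this attribute in other units (Intel drives, for example, often count 32MiB chunks), so check your drive's documentation.

```python
def lbas_to_terabytes(total_lbas, sector_bytes=512):
    """Convert a raw Total_LBAs_Written SMART value to decimal
    terabytes, assuming the drive counts 512-byte LBAs. Units
    vary by vendor, so verify before trusting the result."""
    return total_lbas * sector_bytes / 1e12
```

Under that 512-byte assumption, a raw count of about 43 billion LBAs works out to roughly 22TB of host writes.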
On your marks, get set...
If you've read our latest SSD reviews, you'll know that most modern solid-state drives offer comparable all-around performance. Any halfway decent SSD should be fast enough for most users. This rough performance parity has made factors like pricing and endurance more important, which is part of the reason we're undertaking this experiment in the first place.
Also, we couldn't resist the urge to test six SSDs to failure. That may sound a bit morbid, but we've long known about flash memory's limited write endurance, and we've often wondered what sort of ceiling that imposes on SSD life—and how it affects performance in the long run. The data produced by this experiment should provide some insight.
We're just getting started with endurance testing, and there are opportunities for further exploration if this initial experiment goes well. Flash wear isn't going away. In fact, it's likely to become a more prominent issue as NAND makers pursue finer fabrication techniques that squeeze more bits into each cell. This smaller lithography will drive down the per-gigabyte cost, bringing SSDs to even more PC users. As solid-state drives become more popular, it will become even more important to understand how they age.
We have lots of data to write to this initial batch of drives, so it's time to stop talking and start testing. We've outlined our plans, configured our test rigs, and taken our initial SMART readings. Let the onslaught of writes begin! We'll see you in 22TB.
Update: The 22TB results are in. So far, so good.
Update: After 200TB, we're starting to see the first signs of weakness.
Update: The drives have passed the 300TB mark, and we've added an unpowered retention test to see how well they retain data when unplugged.
Update: Our subjects have crossed the half-petabyte threshold, and they're still going strong.
Update: All is well after 600TB of writes—and after a longer-term data retention test.
Update: We've now written one petabyte of data, and half the drives are dead.
Update: The SSDs are now up to 1.5PB—or two of them are, anyway. The last 500TB claimed another victim.
Update: The experiment has reached two freaking petabytes of writes. Amazingly, our remaining survivors are still standing.
Update: They're all dead! Read the experiment's final chapter right here.