
A note on SSD performance degradation
Not long ago, the folks at PC Perspective uncovered an interesting problem with Intel's X25-M SSD: its performance dropped over time. AnandTech also dug into the issue, illustrating a similar performance drop with solid-state drives from other manufacturers. But why do SSDs slow down over time? To answer that question, we have to explore the nature of flash memory and how drives interact with modern operating systems.

Flash memory stores data in cells—two bits per cell, for MLC flash. These cells can't be addressed individually. Instead, they're organized into pages that are typically 4KB in size. Pages are in turn grouped into larger blocks of 128 pages each, for a total of 512KB per block.
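Putting those numbers together is straightforward; the quick Python sketch below uses the typical figures above and a hypothetical 80GB drive for scale (actual geometry varies from drive to drive):

# Typical MLC flash geometry, as described above; actual values vary by drive.
PAGE_SIZE = 4 * 1024                       # 4KB pages, the smallest unit that can be written
PAGES_PER_BLOCK = 128
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK   # 512KB blocks

DRIVE_SIZE = 80 * 1000**3                  # a hypothetical 80GB drive, for scale
print(BLOCK_SIZE // 1024)                  # 512, i.e. 512KB per block
print(DRIVE_SIZE // PAGE_SIZE)             # 19531250 pages
print(DRIVE_SIZE // BLOCK_SIZE)            # 152587 blocks of 512KB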

While data can be read from individual pages, it can only be written directly to empty ones. If a drive needs to write to a page that already holds data, it has to rewrite the entire block. During a block rewrite, the contents of a block must first be read into a drive's cache. The pages to be rewritten are then modified, and the entire block is rewritten. Adding these read and modify steps to the write process predictably causes a performance hit.
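To get a feel for where the extra work comes from, here's a simplified Python sketch of the two write paths. The costs are made-up relative numbers for illustration, not measurements from any of these drives:

# Made-up relative costs for the basic flash operations, purely for illustration.
READ_PAGE = 1
PROGRAM_PAGE = 10
ERASE_BLOCK = 100
PAGES_PER_BLOCK = 128

def write_to_empty_page():
    # Simple case: the target page is empty, so it can be programmed directly.
    return PROGRAM_PAGE

def rewrite_block_for_one_page():
    # Slow case: read the whole block into the drive's cache, modify the target
    # page, erase the block (flash must be erased before it can be reprogrammed),
    # then program every page back.
    return (READ_PAGE * PAGES_PER_BLOCK) + ERASE_BLOCK + (PROGRAM_PAGE * PAGES_PER_BLOCK)

print(write_to_empty_page())          # 10
print(rewrite_block_for_one_page())   # 1508, two orders of magnitude more work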

Eventually, though, an SSD is going to run out of fresh pages. That can happen sooner than one might expect, because deleted files leave behind pages that are merely marked as available rather than actually emptied. An SSD can have plenty of "free" storage capacity and yet no empty pages available for writing, bringing the block rewrite penalty into play for every subsequent write operation and slowing performance accordingly.
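The distinction between "free" capacity and empty pages is easy to model. In the toy example below, the filesystem sees 40% of the drive as free, yet every page has been written at some point, so there's nothing left to program directly:

# Toy model of page states on a well-used drive. 'valid' pages hold live data,
# 'stale' pages belong to deleted files but still hold old data, and 'empty'
# pages have never been written (or have been erased) and can be programmed directly.
from collections import Counter

pages = ['valid'] * 60_000 + ['stale'] * 40_000
counts = Counter(pages)

free_capacity = counts['stale'] + counts['empty']   # what the OS reports as free
empty_pages = counts['empty']                       # what the drive can write to directly

print(free_capacity)   # 40000 pages look free to the filesystem
print(empty_pages)     # 0, so every new write pays the block rewrite penalty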

So how much of a performance penalty does a block rewrite incur? To find out, we used a handy app called HDDerase to wipe the contents of each drive, returning it to a factory-fresh state with empty pages. (The Apex drive's funky RAID config didn't get along with HDDerase, so we emptied that drive by flashing its firmware instead.) Next, we put each drive through a 4KB random writes test with IOMeter and recorded the average response time.
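For those curious what that test boils down to, the rough Python stand-in below times synchronous 4KB writes at random offsets within a scratch file and reports the mean latency. It's only an approximation of the metric (and Unix-only, since it relies on os.pwrite), not a substitute for IOMeter itself:

# Rough stand-in for a 4KB random writes test: time synchronous 4KB writes at
# random offsets within a scratch file and report the average response time.
import os, random, time

PAGE = 4096
FILE_SIZE = 256 * 1024 * 1024      # 256MB scratch region; adjust to taste
SAMPLES = 1000

fd = os.open("scratch.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, FILE_SIZE)

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(FILE_SIZE // PAGE) * PAGE
    start = time.perf_counter()
    os.pwrite(fd, os.urandom(PAGE), offset)
    os.fsync(fd)                   # force the write out to the drive
    latencies.append(time.perf_counter() - start)

os.close(fd)
print(f"average response time: {sum(latencies) / len(latencies) * 1000:.2f} ms")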

The JMicron-based Apex and Transcend drives have a problem with IOMeter's default configuration, which uses a starting sector setting of 0 and a maximum disk size value of 0. This config yielded much lower performance than we expected from the JMicron drives, so we retested them using a starting sector of 512 and a maximum disk size of 2,000,000, as suggested by OCZ. These settings yielded results more in line with our expectations, so we've included them in the graphs below and in the IOMeter section later in the review. In both cases, we've marked the custom IOMeter configs with a (2) to separate them from the results obtained with the default settings. Incidentally, we also tried starting sectors of 1, 20, 50, and 100, but ran into the same abysmal performance as with a zeroed starting sector.
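For reference, IOMeter's maximum disk size value is expressed in sectors, so assuming standard 512-byte sectors, OCZ's suggested settings confine the test to a slice of the drive a little under 1GB, starting 256KB in:

# Size of the test region implied by OCZ's suggested IOMeter settings,
# assuming standard 512-byte sectors.
SECTOR_SIZE = 512
STARTING_SECTOR = 512
MAX_DISK_SIZE_SECTORS = 2_000_000

print(STARTING_SECTOR * SECTOR_SIZE / 1024)            # 256.0, a 256KB offset into the drive
print(MAX_DISK_SIZE_SECTORS * SECTOR_SIZE / 1024**3)   # ~0.95, roughly a 1GB test region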


With a clean slate, the Indilinx-based Vertex and UltraDrive have the lowest response times, followed by the X25-M and the Samsung-based PB22-J and P256. Even with their custom IOMeter config, the JMicron-based drives still lag behind the others. They're significantly slower with IOMeter's default config, though.

So how do the drives respond after they've been used thoroughly? We ran the same random writes test on each one after it'd been beaten and battered by our storage benchmark suite, and obtained the following results:


The Indilinx-based drives are still in the lead, but their response times have increased by more than an order of magnitude. The X25-M is also notably slower, but only by about 3.5 times. Interestingly, the Samsung-based drives suffer the most here; their used response times grow to 15 times their factory-fresh scores. The random write performance of the Apex and Transcend SSDs changes very little when the drives are used, regardless of the IOMeter config.

Wiping drives is easy enough with HDDerase, but that's hardly a practical solution to maintaining SSD performance levels. Most folks are going to end up using, er, used drives. Fortunately, our hard drive testing methodology puts drives in the practical equivalent of a used state right off the bat. HD Tach is the first benchmark we run, and its write speed test writes the full length of the disk, which should eliminate any empty pages. We run HD Tach three times, too, leaving little chance that any pages will remain untouched.
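If you ever need to dirty a drive by hand, the effect of HD Tach's full-span write can be approximated with a simple sequential fill. The Python sketch below is purely illustrative and destructive; /dev/sdX is a hypothetical placeholder for a dedicated test device, never a drive holding data you care about:

# Illustrative only: sequentially overwrite an entire test device so no empty
# pages remain, similar in effect to HD Tach's full-span write test.
# DESTRUCTIVE: /dev/sdX is a hypothetical placeholder for a dedicated test drive.
import os

DEVICE = "/dev/sdX"
CHUNK = 4 * 1024 * 1024            # write in 4MB chunks
buf = b"\xff" * CHUNK

fd = os.open(DEVICE, os.O_WRONLY)
try:
    while True:
        if os.write(fd, buf) < len(buf):
            break                  # short write: we've reached the end of the device
except OSError:
    pass                           # writing past the end fails with ENOSPC; we're done
finally:
    os.close(fd)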

Just to be sure, I checked a fresh X25-M's random write performance after three runs through HD Tach. The drive's 2.3-2.5 ms response times were a little off those we observed after the X25-M completed our full test suite, but they're close enough to give us confidence that HD Tach effectively rids SSDs of empty pages. Keep in mind that the test results on the following pages are not indicative of how the drives perform in a pristine, empty state.

Hopefully, we won't need to track fresh versus used SSD performance for too long. The storage industry is working on a proposed TRIM command to alleviate SSD performance degradation. Rather than simply marking pages as available when files are deleted, TRIM would require that a page's contents be emptied. This provision wouldn't avoid the block rewrite penalty, but it would shift the performance hit to the time of deletion, which makes more sense than hampering writes.
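Here's a toy model of that trade-off, using the same sort of made-up costs as the earlier sketch; TRIM doesn't make the block rewrite any cheaper, it just moves it from the write path to the delete path:

# Toy model: without TRIM, deleting a file just marks its pages as available,
# and the block rewrite penalty lands on the next write to those pages. With
# TRIM, the pages are emptied at deletion time, so the next write is a cheap
# direct program. Costs are made-up relative numbers.
PROGRAM_PAGE = 10
BLOCK_REWRITE = 1508

def delete_then_write(trim_enabled):
    delete_cost = BLOCK_REWRITE if trim_enabled else 0
    write_cost = PROGRAM_PAGE if trim_enabled else BLOCK_REWRITE
    return delete_cost, write_cost

print(delete_then_write(trim_enabled=True))    # (1508, 10): the hit comes at deletion time
print(delete_then_write(trim_enabled=False))   # (0, 1508): the hit comes on the next write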

TRIM requires a compatible operating system, and it looks like Windows 7 will support it. OCZ has also produced a TRIM application for its Vertex drive that clears any occupied pages marked as available. However, this app is very much in beta form, and it can only be run manually.

Although not related to the TRIM command, Intel recently updated the firmware of its X25-M series to improve the drive's long-term performance. We've flashed our X25-M with this latest 8820 release and retested the drive to see how it fares against the previous firmware revision.