SandForce showdown: Corsair’s Force F100 and OCZ’s Agility 2 and Vertex 2 SSDs

As we’ve learned over the past few years, a solid-state drive’s destiny is determined largely by its underlying storage controller. Some controllers, like JMicron’s JMF602, were doomed from the beginning. The JMicron was plagued by severe stuttering issues when it was first released, dragging down a whole wave of SSDs based on the chip. A new revision and updated firmware attempted to address the issue, but problems persisted, and the damage was done. Even today, SSDs based on new JMicron controllers are viewed with a healthy—and prudent—dose of skepticism.

JMicron might never have managed to fix the JMF602, but Indilinx had much better luck tuning its original Barefoot controller. First popularized by OCZ’s Vertex SSD, the Barefoot design had promise but suffered from poor used-state write performance, particularly under Windows XP. Then came a flurry of firmware updates that improved internal garbage collection schemes, added a manual wiper utility to clear erased flash pages, and eventually incorporated long-promised native support for the TRIM command built into Windows 7. The addition of TRIM support is probably most responsible for the excellent all-around performance offered by Indilinx-based drives in Microsoft’s latest operating system; it’s just a shame the update didn’t become available to end users until some eight months after drives like the Vertex hit shelves.

The controller that powers Intel’s X25-series drives has been the most resilient design we’ve seen thus far. Yes, the chip giant’s second-generation drives were plagued by a couple of embarrassing firmware issues. While those problems are certainly inexcusable, they don’t take away from the exceptional all-around performance that Intel SSDs have offered since before TRIM came into the picture. Intel SSDs have only gotten faster with TRIM, and they’re arguably the gold standard against which all newcomers are judged.

As you might have heard, the next big thing on the SSD front is a controller from a company called SandForce. We’ve been hearing bits and pieces about SandForce and its SF-1000-series storage controllers for a little while now, and consumer-oriented drives based on the SF-1200 have finally started to show up on store shelves. The chip’s specifications certainly look impressive: 260MB/s for sequential reads and writes, 30,000 IOps for 4KB random reads, and 10,000 IOps for random writes. No wonder there’s been a lot of hype.

But is it deserved? To find out, we've taken a closer look at SandForce's controller architecture and the unique approach it takes to prolonging drive life. We've also tested a handful of SF-1200-based SSDs, including Corsair's Force F100 and OCZ's Agility 2 and Vertex 2, to see how they measure up against the fastest drives on the market. Read on for everything you need to know about the SF-1200 and the new wave of SandForce-based SSDs it has spawned.

Introducing the SF-1000 family

SandForce emerged from a self-imposed “stealth mode” a little more than a year ago when it first introduced (PDF) the SF-1000 SSD controller family. There are two members of the family thus far: the SF-1200 and its enterprise-oriented twin, the SF-1500. The two models use the very same silicon, with the only differences between them being more extensive validation for the SF-1500 and a firmware-level cap on the SF-1200’s random-write performance. Without that cap in place, the SF-1500 can purportedly push 30,000 4KB random-write IOps—a threefold increase over the SF-1200.

The SF-1500’s impressive random-write capacity suggests the controller’s architecture was developed with enterprise-class workloads in mind. Enterprise-class SSDs tend to employ pricey single-level cell (SLC) flash, so it’s no surprise to see that the SF-1000 family supports SLC memory. However, SandForce very much wants to bring less expensive multi-level cell (MLC) flash into the enterprise world. SLC memory has proven popular for server applications because it has ten times the write-erase-cycle endurance of the MLC flash typically found in consumer-grade SSDs. Few desktop users write more than a few gigabytes in a given day, but enterprise applications can generate mountains of data very quickly and continuously, and that kind of workload will burn through an MLC-based drive much faster than an SLC one.

To combat MLC’s comparatively low write-erase ceiling, SandForce has focused on more efficiently using the limited cycles available. In addition to increasing the lifespan of consumer-grade drives based on the SF-1200, SandForce says its approach will enable drive makers to craft SSDs using the lower-grade flash memory usually reserved for thumb drives. This B-grade flash tops out at 2,000-5,000 write-erase cycles, while the fancier stuff generally lasts for 10,000 cycles.

Limiting the number of write-erase cycles the SF-1000 family consumes is handled by a collection of techniques shrouded in a mysterious black box dubbed DuraWrite. Some form of compression undoubtedly plays a role in DuraWrite, although it may be more along the lines of block-level deduplication than the equivalent of a real-time WinZip. When pressed for details, SandForce Senior Marketing Manager Jeremy Werner would only say that SandForce is using the “right feature set,” and that DuraWrite “in no way compromises data integrity.”

We can tell you that DuraWrite is handled by the storage processor in hardware. SandForce says the SF-1000 family has a lot of built-in hardware acceleration, and that all data written to the flash is encrypted on the fly using a 128-bit AES algorithm. On-the-fly encryption can’t be disabled, suggesting that it’s a key component of the whole DuraWrite scheme. Drives are configured with a blank password by default, allowing them to behave like unencrypted SSDs unless users choose to set their own passwords.

SandForce’s SF-1200 storage controller

Just how effective is DuraWrite? That depends on the nature of the data. SandForce says that program files are particularly vulnerable to its secret SSD shrink ray. Files that already contain compressed data, such as DivX movies, MP3s, or even JPEG images, will be less amenable to further belt-tightening.
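SandForce hasn't revealed how DuraWrite actually works, but the file-type sensitivity described above is easy to demonstrate with an ordinary compressor. Here's a minimal Python sketch using zlib purely as a stand-in for whatever the controller really does internally:

```python
import os
import zlib

def compressed_fraction(data: bytes) -> float:
    """Fraction of the original size that remains after zlib compression."""
    return len(zlib.compress(data)) / len(data)

# Program-file-like data is full of repeated patterns and shrinks well.
repetitive = b"mov eax, [ebp+8]\npush eax\ncall _func\n" * 2048
# Media files (MP3, JPEG, DivX) are already compressed, so their bytes
# look essentially random and resist further shrinking.
random_like = os.urandom(len(repetitive))

print(f"program-like: {compressed_fraction(repetitive):.2f}")
print(f"media-like:   {compressed_fraction(random_like):.2f}")
```

Repetitive, program-like data shrinks to a tiny fraction of its original size, while high-entropy data emerges at essentially full size, which is why SandForce singles out already-compressed media as a poor candidate for its belt-tightening.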

According to SandForce, a Windows Vista and Office 2007 install that totals 25GB invokes just 11GB worth of flash writes thanks to DuraWrite. That’s an impressive 56% reduction in writes, and close to what SandForce says you can expect with “typical” workloads. The company claims a write amplification factor of 0.5 for the controller, which means that a 1GB host write typically translates to only 500MB worth of flash writes. Windows will still see 1GB of data on the drive, but writing it will have consumed half the write-erase cycles that would otherwise have been necessary.

Covering host writes with half the number of flash writes is particularly impressive considering that all other SSDs have write-amplification factors greater than 1; that is, they use more write-erase cycles than would strictly be necessary to complete a host write. SSD controller makers generally don’t publish write-amplification specifications, but Intel says its X25 series has a factor of 1.1. SandForce suggests that a write amplification factor of 10 is typical for the industry.
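Those write-amplification figures feed directly into lifespan estimates. The sketch below runs the arithmetic for a hypothetical drive; the 5,000-cycle endurance and 10GB/day workload are our illustrative assumptions, not SandForce's numbers:

```python
def flash_writes_gb(host_gb: float, write_amplification: float) -> float:
    """Flash actually written to satisfy a given amount of host writes."""
    return host_gb * write_amplification

def lifetime_years(capacity_gb: float, pe_cycles: int,
                   host_gb_per_day: float, write_amplification: float) -> float:
    """Naive endurance estimate: total writes the cells can absorb,
    divided by the flash writes the workload consumes each day."""
    total_flash_gb = capacity_gb * pe_cycles
    daily_flash_gb = flash_writes_gb(host_gb_per_day, write_amplification)
    return total_flash_gb / daily_flash_gb / 365

# Hypothetical 100GB drive of 5,000-cycle MLC, 10GB of host writes/day
for label, wa in (("SF-1200 (0.5)", 0.5),
                  ("X25 series (1.1)", 1.1),
                  ("industry-typical per SandForce (10)", 10.0)):
    print(f"{label}: {lifetime_years(100, 5000, 10, wa):,.0f} years")
```

The absolute numbers are fanciful for a desktop workload, but the ratio is the point: a 0.5 factor stretches the same flash twenty times further than a factor of 10.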

In addition to the collection of techniques masked under DuraWrite, SandForce attempts to reduce flash wear further by being less aggressive with garbage collection. The SF-1000 series supports the TRIM command, which flags flash pages containing deleted data as available to be erased. A performance-oriented garbage-collection scheme might reclaim these deleted pages eagerly to avoid the effects of the block-rewrite penalty, sacrificing precious write-erase cycles in the process. SandForce’s desire to write as little as possible to the flash dictates a less aggressive garbage-collection routine for the SF-1000 line.

The SF-1000 series attempts to use the write-erase cycles at its disposal efficiently, but it’s considerably less efficient at providing storage capacity. SandForce reserves a whopping 28% of available flash capacity as “free area” that may be used only by the controller. Consumer-grade SSDs typically provision only 7% of their flash for free area, which is why 50 and 100GB SandForce drives are lining up against 60 and 120GB models based on competing controllers. Greater overprovisioning is par for the course in enterprise-class drives, though. Intel’s X25-E, for example, allocates 27% of its flash capacity for spare area; the X25-M only sections off 7%.
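The capacity arithmetic works out if you assume 128GiB of raw NAND behind each drive, since flash is manufactured in binary gigabytes while drive capacities are quoted in decimal ones. A quick sketch (the raw-capacity figure is our assumption, not a published spec):

```python
# Assumed: 128GiB of raw NAND per drive. The reserved fraction is
# measured against raw capacity; rounding explains the small gap
# between the computed 27% and the 28% figure SandForce quotes.
RAW_GB = 128 * 2**30 / 1e9  # ~137.4 decimal GB

for user_gb in (100, 128):
    reserved = 1 - user_gb / RAW_GB
    print(f"{user_gb}GB usable leaves {reserved:.0%} of the flash reserved")
```

Shipping 7%-overprovisioned SandForce drives expose 120GB rather than the full 128GB, presumably because RAISE still sets aside roughly one die's worth of flash.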

Like the Intel controller, the SF-1000 family will reclaim additional spare area if you create a partition smaller than the total capacity of the drive right after a secure erase. If you’d rather move in the other direction, SandForce has developed a firmware revision for the SF-1200 that only allocates 7% of the drive’s flash capacity as free space. This firmware is available to SandForce’s drive partners. They can release the update to end users, incorporate it into existing drives, or even use it to create higher-capacity models.

The SF-1000 family’s block diagram. Source: SandForce.

An SSD’s free area is generally used as a working space for garbage-collection and wear-leveling algorithms. The SF-1000 series also employs it as a part of RAISE, which stands for Redundant Array of Independent Silicon Elements. Sound familiar? If you’re thinking RAID, you’re on the right track. SandForce says that RAISE is similar to RAID 5, although Werner stopped short of confirming that actual parity bits are being calculated. Given the SF-1000 family’s built-in encryption, RAISE may use some sort of hash/parity hybrid to achieve a measure of redundancy within a drive.

According to SandForce, RAISE reserves the capacity of one flash die in an SSD to store its pseudo-parity information. Rather than being housed on a single chip, this redundancy data is spread across the entirety of the drive. Using it, RAISE can recover entire pages or blocks worth of lost or corrupted data.
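SandForce hasn't disclosed RAISE's internals, but the RAID-5-style recovery it describes can be illustrated with plain XOR parity: XORing the surviving dies with the parity die reconstructs whatever was lost. This toy model is our illustration, not the actual scheme:

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR across equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

dies = [bytes([i]) * 8 for i in range(1, 5)]  # four data "dies"
parity = xor_blocks(dies)                     # one die's worth of parity

lost = dies.pop(2)                            # pretend a die fails
rebuilt = xor_blocks(dies + [parity])         # XOR survivors with parity
assert rebuilt == lost
print("rebuilt die:", rebuilt.hex())          # → rebuilt die: 0303030303030303
```

The cost is exactly what the article describes: one die's worth of capacity is permanently spoken for, in exchange for surviving the loss of any single element.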

SandForce pairs RAISE with a robust error-correction engine and end-to-end CRC protection. This blend of elements is claimed to reduce the controller’s Uncorrectable Bit Error Rate (UBER) dramatically, which SandForce says is important to bringing MLC memory into enterprise environments. The SF-1000 family’s resiliency in this department is also key to bringing three-bit-per-cell and 2x-nano flash into the mainstream, the company says.

You’ll note that we haven’t made any mention of cache thus far. Unlike most SSD controllers, the SF-1000 family wasn’t designed to be paired with a DRAM sidekick. SandForce points out that cache memory chips add to a drive’s cost and increase its power consumption. Enterprise folks also prefer that cache be backed by a battery, which can further inflate costs. I suspect the controller has large internal caches to make up for the lack of a separate DRAM chip, but SandForce wouldn’t provide details.

Without a separate cache, there’s little need for the SF-1000 family to reach into 6Gbps Serial ATA territory. Remember that the controllers are only rated for maximum sequential read and write rates of 260MB/s, which nicely fit under the old SATA spec’s 300MB/s ceiling.

Three different takes on the SF-1200

SandForce has partnered with an impressive number of SSD vendors, including enthusiast heavyweights Corsair and OCZ. We have three drives in hand: Corsair’s Force F100 and OCZ’s Agility 2 and Vertex 2. The Force is a new SSD line for Corsair, but we’ve seen the Vertex and Agility names before from OCZ. The first generations of those drives were powered by Indilinx controllers and live on to this day using the new Barefoot ECO chip. Corsair does sell Indilinx-based drives, too, under its Nova line.

At first glance, all three of these SandForce-based SSDs look to be very similar. Each is wrapped in a nondescript metal casing that conforms to the 9.5-mm variant of the 2.5″ mobile hard drive form factor. All three offer the same 100GB capacity, as well. In the next couple of weeks, OCZ will release firmware updates that raise the Agility 2 and Vertex 2 to 120GB of capacity thanks to 7% overprovisioning. At the moment, Corsair has no plans to offer a similar update for its Force drives.

Even today, firmware is the biggest difference between these drives and also a source of some controversy. Remember the firmware cap that limits the SF-1200’s random-write performance to 10,000 IOps? It wasn’t included in a 3.0.1 firmware release candidate SandForce supplied to its partners earlier in the year. The cap returned in version 3.0.5 of the firmware, which is the first rev SandForce deemed fit for mass-production drives.

Rather than shipping Force SSDs with SandForce’s mass-production firmware, Corsair used the older 3.0.1 release candidate as the basis for its F100 firmware, which bears a 0.2 version number. You’re essentially getting an extra 20k random-write IOps for free, which sounds like a pretty good deal. But not for OCZ, which has worked closely with SandForce for some time, and whose Vertex 2 was supposed to have exclusive access to a “Max IOps” firmware without the random-write cap.

When asked about the situation, SandForce rep Jeremy Werner indicated the company can’t legally limit which firmware revisions are released by its partners. Corsair Technical Marketing Manager Robert Pearce told us his company has no plans to revise the Force’s firmware in ways that might artificially cripple drive performance, either.

Should prospective Force buyers be worried about Corsair’s 0.2 firmware revision? Probably not. Werner says not even a single failure has been reported on drives using mass-production firmware, but he could only point to one specific issue with the 3.0.1 firmware: a Mac hibernation problem that has since been fixed. I’m probably relying on Corsair’s reputation for producing reliable enthusiast-oriented products a little here, but I expect the company has sufficiently tested the F100’s 0.2 firmware to ensure that it won’t compromise data integrity or otherwise brick the drive.

Those who would prefer to use firmware officially endorsed by SandForce will take comfort in knowing that the OCZ drives are both based on the mass-production 3.0.5 rev. The Agility 2 is a standard SF-1200 model with the 10k random-write cap intact, while the Vertex 2 uses the same firmware with the limiter removed.

OCZ’s official pricing only separates the two drives by $10; the Vertex 2 costs $410, and the Agility 2 is slated to sell for $400. Already, though, Newegg has the Agility 2 discounted to $360. Newegg had the F100 listed for $410 a little while ago, which makes sense given the fact that it has the same performance ratings as the Vertex 2. Force SSDs have since disappeared from Newegg, but Amazon looks to have knocked their price down to $378.

Intel flash on the Agility 2…

And Micron chips on the Force (and Vertex 2)

I’m curious to see how pricing plays out for these drives, because they’re very similar under the hood. Corsair’s PCB layout looks a little different than OCZ’s, but each populates the board with eight flash modules per side. On the F100 and Vertex 2, the chips are Micron-branded models carrying a 29F64G08CFABA part number. The Agility, on the other hand, uses Intel flash chips with a nearly identical 29F64G08CAMDB part number. Intel and Micron have a joint flash venture called IM Flash Technologies, so it’s not surprising to see similar part numbers on chips from the two providers. The specifications for these chips aren’t listed online, but like most of the flash modules popping up in new SSDs, they’ve been fabricated with 34-nm process technology.

Corsair and OCZ differ a little on the warranty front, with the former offering two years of coverage to the latter’s three. The OCZ drives are also available at a lower capacity point than the Corsair. All three drives are offered in 100 and 200GB capacities, but only the OCZs are available in 50GB flavors.

Since I know the especially nerdy among us like to gawk at silicon just as much as silicone, here are a couple of extra nudies of the drives. Additional (and higher-resolution) images are also available in the gallery associated with this article.

The Agility 2’s PCB looks exactly like the Vertex 2’s

Corsair uses a slightly different design for the Force

Our testing methods

If you’re unfamiliar with The Twins, our new duo of storage test platforms, I recommend checking out this page from our recent VelociRaptor VR200M review. These systems pack potent hardware and have been furiously testing hard drives and SSDs for weeks now. Unfortunately, Intel still hasn’t resolved the performance scaling issue we found in its latest storage controller drivers for the P55 chipset. As a result, The Twins are still running the Microsoft AHCI driver built into Windows 7.

Before dipping into pages of benchmark graphs, let’s set the stage with a quick look at the players we’ve assembled for comparison today. Below is a chart highlighting some of the key attributes that can affect drive performance.

Drive                Flash controller       Interface speed  Spindle speed  Cache size  Platter capacity  Total capacity
Agility 2            SandForce SF-1200      3Gbps            NA             NA          NA                100GB
Caviar Black 2TB     NA                     3Gbps            7,200 RPM      64MB        500GB             2TB
Force F100           SandForce SF-1200      3Gbps            NA             NA          NA                100GB
Nova V128            Indilinx Barefoot ECO  3Gbps            NA             64MB        NA                128GB
PX-128M1S            Marvell Da Vinci       3Gbps            NA             128MB       NA                128GB
SiliconEdge Blue     JMicron JMF612         3Gbps            NA             64MB        NA                256GB
SSDNow V+            Toshiba T6UG1XBG       3Gbps            NA             128MB       NA                128GB
VelociRaptor VR150M  NA                     3Gbps            10,000 RPM     16MB        150GB             300GB
VelociRaptor VR200M  NA                     6Gbps            10,000 RPM     32MB        200GB             600GB
Vertex 2             SandForce SF-1200      3Gbps            NA             NA          NA                100GB
X25-M G2             Intel PC29AS21BA0      3Gbps            NA             32MB        NA                160GB
X25-V                Intel PC29AS21BA0      3Gbps            NA             32MB        NA                40GB

The SandForce SSDs will take on a slew of direct competitors, including drives based on Indilinx, Intel, JMicron, Marvell, and Toshiba controllers. We’ve covered all of the bases, with the exception of Crucial’s RealSSD C300, which is powered by a new Marvell controller with a 6Gbps SATA interface. Crucial just recently released a firmware update to address serious performance issues, and now that it’s out, we’ll have a full review of the RealSSD soon.

Although it might not seem like a fair fight, we’ve thrown in results for a striped RAID 0 array built using a pair of Intel’s X25-V SSDs. You can actually pick up three 40GB X25-Vs for the cost of a single SandForce-based drive, although the TRIM command can’t currently be passed through to SSDs acting as members of a RAID array. Our X25-V array was configured using Intel’s P55 storage controller, the default 128KB stripe size, and the company’s latest 9.6.0.1014 Rapid Storage Technology drivers.

We’ve also included performance data from a trio of mechanical hard drives. Western Digital’s 10k-RPM VelociRaptor VR200M is the fastest mechanical hard drive that plugs into a Serial ATA interface, so it’s the most appropriate foil for these SSDs. We’ve also included the VR200M’s predecessor, the VR150M, which still has quicker access times than most desktop drives. Finally, we’ve thrown in a two-terabyte Caviar Black to represent the best performance 7,200-RPM mechanical drives have to offer.

The block-rewrite penalty inherent to SSDs and the TRIM command designed to offset it both complicate our testing somewhat, so I should explain our SSD testing methods in greater detail. Before testing the drives, each was returned to a factory-fresh state with a secure erase, which empties all the flash pages on a drive. Next, we fired up HD Tune and ran full-disk read and write speed tests. The TRIM command requires that drives have a file system in place, but since HD Tune requires an unpartitioned drive, TRIM won’t be a factor in those tests.

After HD Tune, we partitioned the drives and kicked off our usual IOMeter scripts, which are now aligned to 4KB sectors. When running on a partitioned drive, IOMeter first fills it with a single file, firmly putting SSDs into a used state in which all of their flash pages have been occupied. We deleted that file before moving on to our file copy tests, after which we restored an image to each drive for some application testing. Incidentally, creating and deleting IOMeter’s full-disk file and the associated partition didn’t affect HD Tune transfer rates or access times.

Our methods should ensure that each SSD is tested on an even, used-state playing field. However, differences in how eagerly an SSD elects to erase trimmed flash pages could affect performance in our tests and in the real world. Testing drives in a used state may put the TRIM-less Plextor SSD at a disadvantage, but I’m not inclined to indulge the drive just because it’s using a dated controller chip.

With few exceptions, all tests were run at least three times, and we reported the median of the scores produced. We used the following system configuration for testing:

Processor Intel Core i5-750 2.66GHz
Motherboard Gigabyte GA-P55A-UD7
Bios revision F4
Chipset Intel P55 Express
Chipset drivers Chipset 9.1.1.1015
Memory size 4GB (2 DIMMs)
Memory type OCZ Platinum DDR3-1333 at 1333MHz
Memory timings 7-7-7-20-1T
Audio Realtek ALC889A with 2.42 drivers
Graphics Gigabyte Radeon HD 4850 1GB with Catalyst 10.2 drivers
Hard drives Western Digital VelociRaptor VR200M 600GB
Western Digital Caviar Black 2TB
Western Digital VelociRaptor VR150M 300GB
Corsair Nova V128 128GB with 1.0 firmware
Intel X25-M G2 160GB with 02HD firmware
Intel X25-V 40GB with 02HD firmware
Kingston SSDNow V+ 128GB with AGYA0201 firmware
Plextor PX-128M1S 128GB with 1.0 firmware
Western Digital SiliconEdge Blue 256GB with 5.12 firmware
OCZ Agility 2 100GB with 1.0 firmware
OCZ Vertex 2 100GB with 1.0 firmware
Corsair Force F100 100GB with 0.2 firmware
Power supply OCZ Z-Series 550W
OS Windows 7 Ultimate x64

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune

We’ll kick things off with HD Tune, which replaces HD Tach as our synthetic benchmark of choice. Although not necessarily representative of real-world workloads, HD Tune’s targeted tests give us a glimpse of each drive’s raw capabilities. From there, we can explore which drives live up to their potential.

The SandForce drives fall well short of their 260MB/s read-speed ratings. However, they’re hardly slouches in this sequential throughput test. All three manage an average read speed of at least 190MB/s, which only puts them a little behind the single-drive leaders. Read speeds are pretty consistent from start to finish, as well.

Remarkably, the SandForce drives maintain higher transfer rates in the write-speed test than they did with reads. It’s hard to say how DuraWrite handles this kind of synthetic test, or whether the data written to the disk is easily compressed. In any case, the SandForce drives easily outclass their competition, delivering average write speeds that are 37MB/s quicker than anything else we’ve tested.

As you can see, there’s little difference in performance between the Force, Agility, and Vertex. The F100’s write speeds oscillate a little more than those of the OCZ drives across the length of the test, but with a smaller amplitude than we’ve seen from other SSDs.

Next up: some burst-rate tests that should test the cache speed of each drive. We’ve omitted the X25-V RAID array from the following results because it uses a slice of system memory as a drive cache.

Among the single-drive configs, the SandForce units look to be very competitive. Each bursts at about 200MB/s with both reads and writes, although the Force is a smidgen slower than the Agility and Vertex in both tests.

Our HD Tune tests conclude with a look at random access times, which the app separates into 512-byte, 4KB, 64KB, and 1MB transfer sizes.

The SandForce drives are just a wee bit behind the pack-leading X25-M through all four transfer sizes. All three offer close to identical access times, and they manage to stay nicely ahead of SSDs based on competing controllers from Indilinx, JMicron, Marvell, and Toshiba.

Although the X25-M continues to perform well when we switch to random writes, the SF-1200-based drives take over the lead with the 64KB transfer size and hold it through the 1MB test. Interestingly, the Agility’s random-write IOps cap doesn’t seem to be hindering its performance here. The Force, Agility, and Vertex are essentially tied throughout.

File Copy Test

Since we’ve tested theoretical transfer rates, it’s only fitting that we follow up with a look at how each drive handles a more realistic set of sequential transfers. File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. We’ve converted those completion times to MB/s to make the results easier to interpret.
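That conversion is simple division; for clarity, here it is with made-up numbers rather than our measured results:

```python
def throughput_mb_s(pattern_size_mb: float, completion_seconds: float) -> float:
    """Convert an FC-Test completion time into an average transfer rate."""
    return pattern_size_mb / completion_seconds

# Hypothetical example: a 10,000MB test pattern handled in 160 seconds
print(f"{throughput_mb_s(10_000, 160):.1f} MB/s")  # → 62.5 MB/s
```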

Windows 7’s intelligent caching schemes make obtaining consistent and repeatable performance results rather difficult with FC-Test. To get reliable results, we had to drop back to an older 0.3 revision of the application and create our own custom test patterns. During our initial testing, we noticed that larger test patterns tended to generate more consistent file creation, read, and copy times. That makes sense, because with 4GB of system memory, our test rig has plenty of free RAM available to be filled by Windows 7’s caching and pre-fetching mojo.

For our tests, we created custom MP3, video, and program files test patterns weighing in at roughly 10GB each. The MP3 test pattern was created from a chunk of my own archive of ultra-high-quality MP3s, while the video test pattern was built from a mix of video files ranging from 360MB to 1.4GB in size. The program files test pattern was derived from, you guessed it, the contents of our test system’s Program Files directory.

Even with these changes, we noticed obviously erroneous results pop up every so often. Additional test runs were performed to replace those scores.

So much for DuraWrite? The SandForce SSDs turn in abysmally slow file creation speeds in FC-Test, failing to match the write performance of Intel’s budget X25-V.

Curiously, though, the drives offer substantially better performance when reading the very same files. They’re not the quickest of the lot by any stretch, but the SF-1200 doesn’t stumble with any of the three file sets. The Indilinx-based Nova and X25-M both struggle reading the MP3 file set, and the Nova is slow with our collection of program files, as well.

Our copy tests push the SandForce drives to the back of the class once more. There are small differences in performance between the three, but they’re all much slower than we’d expected given the sequential write speeds we just saw in HD Tune.

We contacted SandForce about the issue, and the company suggested performance might be suffering because FC-Test only uses a queue depth of one. The SF-1200 supports Native Command Queuing, which can be used to, er, queue up to 32 concurrent requests. FC-Test doesn’t take advantage of NCQ, but that doesn’t explain why the SandForce drives seem to be the only ones so adversely affected. The X25-M also makes extensive use of command queuing to improve performance, and it’s not suffering nearly as much as the SandForce-based drives in the file creation and copy tests.

Dig deeper, I must. First, I wanted to see whether the files generated by FC-Test are somehow difficult for DuraWrite to handle. I generated the MP3 file set using FC-Test, but instead of copying those files using the app, I did a hand-timed file copy in Windows. The files were copied from the SSD to another folder on the same drive, just like in FC-Test. My hand-timed results were much different, however. The SandForce drives managed around 62MB/s, while the X25-M and Indilinx-based Nova hit 68 and 77MB/s, respectively. A standard Windows 7 file copy isn’t as slow on the SandForce drives as our FC-Test results suggest.

The fact that our test suite tackles FC-Test directly after battering subjects with four IOMeter workloads, which fill the drives with a single file that plunges the SSDs into a used state, left me wondering whether SandForce’s more relaxed approach to garbage collection might be hindering write performance. To test that theory, I secure-erased the X25-M, Nova, and Vertex 2, wiping the slate clean for a bonus round of testing. After the secure erase, I performed a hand-timed Windows file copy test with 7GB worth of documents, digital pictures, MP3s, movies, and program files. Next, I ran our IOMeter workstation access pattern with 256 concurrent I/O requests for 30 minutes to put drives into a tortured used state, after which I ran the copy test again.

Interesting. The Indilinx and Intel drives offer identical copy speeds in factory-fresh and used states, but the SandForce drive sees its copy speed drop by 13MB/s. SandForce’s controller isn’t reclaiming trimmed flash pages as quickly as it could, and that transforms the SF-1200’s 7MB/s lead into a 6MB/s deficit in this real-world test.

Oddly, the SandForce drive was quicker in our first couple of used-state test runs but slowed in consecutive ones. The Indilinx and Intel drives were consistent in both the fresh- and used-state tests.

That’s not the end of it, either. On SandForce’s suggestion, I ran through FC-Test once more, this time using Intel’s current 9.6.0.1014 drivers rather than the Microsoft ones built into Windows 7. I only used the Vertex, which was secure erased and then run through our suite from the beginning, just like it was with the Microsoft drivers. The results? About a two-to-threefold increase in file creation and copy speeds.

There are numerous factors at play here, but it looks like poor optimization for Microsoft’s drivers, a comparatively lazy garbage-collection scheme, and a reliance on queue depths greater than one are all conspiring to hinder SF-1200 write speeds in FC-Test.

Application performance

We’ve long used WorldBench to test performance across a broad range of common desktop applications. The problem is that few of those tests are bound by storage subsystem performance—a faster hard drive isn’t going to improve your web browsing or 3ds Max rendering speeds. A few of WorldBench’s component tests have shown favor to faster hard drives in the past, though, so we’ve included them here.

The SandForce drives are competitive in WorldBench’s Photoshop test, with the OCZ models leading the Corsair by about 10 seconds. In Nero, the pack spreads out a little more, and the SandForce SSDs find themselves trailing the top four contenders by larger margins. This time around, it’s the Force that gets the better of the Agility and Vertex.

Although source-code compiling isn’t a part of the WorldBench suite, we’ve often been asked to add a compile test to our storage reviews. And so we have. For this test, we compiled a March 23, 2010 snapshot of the Firefox source code using Visual Studio 2008. This process writes over 22,000 files totaling about 840MB, so there’s plenty of disk activity. However, we had to restrict compiling to a single thread because using multiple threads proved to be unstable under Windows 7; Mozilla itself recommends that Firefox be compiled with a single thread.

All of the drives we’ve tested turn in roughly similar compile times, although the SandForce SSDs are the slowest of the bunch, if only by about 30 seconds at most. That’s not a whole lot of time given that the compile takes just over half an hour to complete, but it’s an interesting result considering the rather large volume of files this test writes.

Clearly, though, performance in this test is being limited more by the CPU than the storage subsystem. If you have any suggestions for a multithreaded compiling test that will run in Windows 7, won’t be bound by our CPU, and preferably uses open-source code available to the general public, please shoot me an email.

Boot and load times

Our trusty stopwatch makes a return for some hand-timed boot and load tests. When looking at the boot time results, keep in mind that our system must initialize multiple storage controllers, each of which looks for connected devices, before Windows starts to load. You’ll want to focus on the differences between boot times rather than the absolute values.

This boot test starts the moment the power button is hit and stops when the mouse cursor turns into a pointer on the Windows 7 desktop. For what it’s worth, I experimented with some boot tests that included launching multiple applications from the startup folder, but those apps wouldn’t load reliably in the same order, making precise timing difficult. We’ll take a look at this scenario from a slightly different angle in a moment.

I wouldn’t make too much of small gaps given the hand-timed nature of this test. The SandForce drives are certainly among the fastest-booting SSDs we’ve ever tested.

A faster hard drive is not going to improve frame rates in your favorite game (not if you’re running a reasonable amount of memory, anyway), but can it get you into the game quicker?

The SF-1200’s strong performance continues in our gaming tests. All three SandForce-based SSDs load our Crysis level in about the same time. However, the Agility 2 consistently loaded our Modern Warfare 2 special-ops mission a few seconds faster than the Vertex 2 and Force F100.

Disk-intensive multitasking

TR DriveBench is a new addition to our test suite that allows us to record the individual IO requests associated with a Windows session and then play those requests back on different drives. We’ve used this app to create a new set of multitasking workloads that should be representative of the sort of disk-intensive scenarios folks face on a regular basis.

Each workload is made up of two components: a disk-intensive background task and a series of foreground tasks. The background task is different for each workload, but we performed the same foreground tasks each time.

In the foreground, we started by loading up multiple pages in Firefox. Next, we opened, saved, and closed small and large documents in Word, spreadsheets in Excel, PDFs in Acrobat, and images in Photoshop. We then fired up Modern Warfare 2 and loaded two special-ops missions, playing each one for three minutes. TweetDeck, the Pidgin instant-messaging app, and AVG Anti-Virus were running throughout.

For background tasks, we used our Firefox compiling test; a file copy made up of a mix of movies, MP3s, and program files; a BitTorrent download pulling seven Linux ISOs from 800 connections at a combined 1.2MB/s; a video transcode converting a high-def 720p over-the-air recording from my home-theater PC to WMV format; and a full-disk AVG virus scan.

DriveBench produces a trace file for each workload that includes all IOs that made up the session. We can then measure performance by using DriveBench to play back each trace file. During playback, any idle time recorded in the original session is ignored—IOs are fed to the disk as fast as it can process them. This approach doesn’t give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system. We know the number of IOs in each workload, and armed with a completion time for each trace playback, we can score drives in IOs per second.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score in each multitasking workload.
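The scoring works out to simple arithmetic. Here’s a minimal sketch of the calculation, with hypothetical trace sizes and playback times rather than our actual numbers:

```python
# Sketch of DriveBench-style scoring. The trace's IO count is fixed;
# only the playback time varies from drive to drive.
def iops_score(total_ios, playback_seconds):
    """A drive's score on one workload: trace IO count over playback time."""
    return total_ios / playback_seconds

def overall_score(workload_scores):
    """The overall score: the mean of the per-workload IOps scores."""
    return sum(workload_scores) / len(workload_scores)

# Hypothetical example: a 500,000-IO trace replayed in 125 seconds
print(iops_score(500_000, 125))         # 4000.0 IOps
print(overall_score([4000.0, 3000.0]))  # 3500.0 IOps
```

Because idle time is stripped out, a drive that merely keeps pace with the original session’s request rate gains nothing; only raw completion speed moves the score.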

DriveBench doesn’t produce reliable results with Microsoft’s AHCI driver, forcing us to obtain the following performance results with Intel’s 9.6.0.1014 RST drivers. We couldn’t get DriveBench to play nicely with our X25-V RAID config, either, which is why it’s not listed in the graphs below. The app will only run on unpartitioned drives, so we tested drives after they’d completed the rest of the suite.

The SandForce SSDs finish just off the podium overall, but they’re not much slower than the SSDNow or Nova. Of course, they’re also not much faster than the SiliconEdge Blue. Let’s see how the results shake out with each workload.

Our virus-scanning and video-transcoding workloads prove fertile ground for the SandForce drives. However, the SF-1200 is weaker with our compiling and file copy workloads, where it pushes fewer IOps than the SiliconEdge Blue and SSDNow.

Curious to see whether removing the multitasking element of these tests would have any bearing on the standings, I recorded a control trace without a background task.

Taking multitasking out of the equation moves the SiliconEdge up into second place, pushing the SandForce SSDs down one notch in the standings. There’s a bigger gap between the SF-1200 drives and the leaders this time around, too, suggesting that the SandForce controller is at its best when presented with more demanding workloads.

DriveBench lets us start recording Windows sessions from the moment the storage driver loads during the boot process. We can use this capability to take another look at boot times, again assuming our infinitely fast system. For this boot test, I configured Windows to launch TweetDeck, Pidgin, AVG, Word, Excel, Acrobat, and Photoshop on startup.

This trace only took nine seconds to run on the SandForce drives, which share the lead with the X25-M. The next batch of SSDs was two seconds slower. As you can see, provided with a sufficiently fast host system, SSDs can dramatically improve boot times over mechanical hard drives. The VR200M completed this startup trace in a plodding 44 seconds.

IOMeter

Our IOMeter workloads are made up of randomized access patterns, presenting a good test case for both seek times and command queuing. The app’s ability to bombard drives with an escalating number of concurrent IO requests also does a nice job of simulating the sort of demanding multi-user environments that are common in enterprise applications.

The SF-1200 absolutely dominates three of four IOMeter workloads, offering transaction rates several times higher than the next closest contender. Our X25-M and X25-V RAID configs prove substantially quicker with the web server test pattern, but the SandForce drives still manage a rough tie for third place. The fact that the web server test pattern is made up exclusively of read operations, while the others have a mix of read and write ops, suggests the SF-1200 owes its strong showing in other patterns to superior random-write performance. The data generated by IOMeter may be particularly amenable to DuraWrite compression, as well.

I was expecting the Force F100 and Vertex 2 to surge ahead of the firmware-capped Agility 2 in our IOMeter tests, but there’s little difference in transaction rates between the three drives. We don’t have a test pattern made up exclusively of random writes. However, even with the web server test pattern, none of the SandForce drives managed much more than 8,000 IOps.

As one might expect, the SandForce drives consume more CPU cycles than the others with the file server, database, and workstation access patterns. These results are better presented in terms of efficiency, so we’ve graphed the number of IOps per percent CPU utilization below.
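The efficiency metric is just a ratio; a quick sketch, using made-up transaction rates and CPU loads for illustration:

```python
# IOps-per-percent-CPU efficiency metric: how much IO throughput a
# drive delivers for each percentage point of host CPU it consumes.
def iops_per_cpu_percent(iops, cpu_utilization_percent):
    return iops / cpu_utilization_percent

# Hypothetical: 8,000 IOps at 2.5% CPU utilization
print(iops_per_cpu_percent(8000, 2.5))  # 3200.0
```

A drive can therefore post a middling transaction rate yet still look good here if it barely touches the CPU, and vice versa.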

Obviously, our X25-V RAID array is the most efficient configuration here. The Intel SSDs have a higher efficiency than the SandForce drives with the web server test pattern, as well. Still, the single-drive setups are largely bunched together with the other test patterns.

Noise levels

Noise levels were measured with a TES-52 Digital Sound Level meter 1″ from the side of the drives at idle and under an HD Tune seek load. Drives were run with the PCB facing up.

Our noise level and power consumption tests were conducted with the drives connected to the motherboard’s P55 storage controller.

I’ve consolidated the solid-state drives here because they’re all completely silent. The SSD noise level depicted above is a reflection of the noise generated by the rest of the test system, which has a passively-cooled graphics card, a very quiet PSU, and a nearly silent CPU cooler.

The SandForce drives may be no quieter than the average SSD, but they can lower system noise levels by quite a bit versus high-performance mechanical hard drives. This fact is especially apparent under seek loads, which tend to make mechanical drives chatter audibly.

Power consumption

For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. We were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive. Drives were tested while idling and under an IOMeter load consisting of 256 outstanding I/O requests using the workstation access pattern.
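The math behind the measurement is Ohm’s law applied per rail. A sketch of the calculation, with hypothetical voltage-drop readings standing in for our meter measurements:

```python
# Power draw from a shunt-resistor measurement: the voltage drop across
# the 0.1-ohm resistor gives the current (I = V / R), and current times
# the rail voltage gives that rail's power draw (P = I * V).
SHUNT_OHMS = 0.1

def rail_power(v_drop, rail_voltage):
    current = v_drop / SHUNT_OHMS
    return current * rail_voltage

# Hypothetical readings: 4 mV drop on the 5V line, 1 mV on the 12V line
total_watts = rail_power(0.004, 5.0) + rail_power(0.001, 12.0)
print(round(total_watts, 3))  # 0.32
```

Summing the two rails gives the drive’s total draw; SSDs like these typically pull everything from the 5V line, so the 12V term is often near zero.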

The Agility 2, Force F100, and Vertex 2 sit in the middle of the pack in our power consumption tests. Whatever wattage SandForce saves by not using a DRAM cache is clearly being consumed elsewhere in these drives. With DuraWrite and RAISE accelerated in hardware, I suspect the SF-1200 pulls more juice than other SSD controllers.

Capacity per dollar

After spending pages rifling through a stack of performance graphs, it might seem odd to have just a single one set aside for capacity. After all, the amount of data that can be stored on a hard drive is no less important than how fast that data can be accessed. Yet one graph is really all we need to express how these drives stack up in terms of their capacity, and more specifically, how many bytes each of your hard-earned dollars might actually buy.

We took drive prices from Newegg to establish an even playing field for all the contenders. Mail-in rebates weren’t included in our calculations. Rather than relying on manufacturer-claimed capacities, we gauged each drive’s capacity by creating an actual Windows 7 partition and recording the total number of bytes reported by the OS. Having little interest in the GB/GiB debate, I simply took that byte total, divided it by a billion (10⁹), and then divided by the price. The result is a capacity-per-dollar figure that, at least literally, is expressed in gigabytes.
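As a worked example of that arithmetic, with a hypothetical formatted capacity and street price rather than any specific drive’s real numbers:

```python
# Capacity-per-dollar metric: OS-reported byte count divided by 10^9
# (sidestepping the GB/GiB debate), then divided by the street price.
def gb_per_dollar(formatted_bytes, price_dollars):
    return formatted_bytes / 1e9 / price_dollars

# Hypothetical drive: 93,000,000,000 bytes after formatting, $360
print(round(gb_per_dollar(93_000_000_000, 360), 3))  # 0.258
```

Note that overprovisioning hits this metric twice: the flash reserved for the controller never shows up in the OS-reported byte count, yet you pay for it all the same.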

With greater overprovisioning than is typical for consumer-grade SSDs, the SandForce drives score poorly on our storage-per-dollar scale. Of course, these drives are also very new, and some are still out of stock at a number of online retailers. Prices may fall as supply catches up with demand. Firmware updates with less overprovisioning will also help to improve the cost per gigabyte of SF-1200-based drives.

Conclusions

With the right workload, SandForce’s SF-1200 storage controller has enormous potential. All three of the SF-1200-based drives we tested offer solid sequential throughput and very fast random access times across a range of transfer sizes. The SF-1200 proved particularly adept at handling random writes with larger transfer sizes, and it absolutely crushed the field when fed IOMeter workloads that contained a mix of read and write operations. Throw in quick load times and a strong showing in our multitasking tests, and all the hype surrounding SandForce looks very much deserved.

That said, the SF-1200 isn’t well optimized for all situations. As our FC-Test file creation and copy results illustrate, the controller’s sequential write speed appears to be much more dependent on effective use of command queuing than other SSD architectures. Microsoft’s Windows 7 AHCI drivers appear to make matters worse, as does a garbage-collection algorithm that doesn’t reclaim erased flash pages as quickly as Indilinx- or Intel-based drives.

I’m encouraged that SandForce is looking into these issues, but it remains to be seen whether firmware updates can smooth them out. At the very least, I don’t expect these rough edges to draw blood with most users. Moving files around in Windows 7 is still plenty quick with the SF-1200, even when running Microsoft’s AHCI drivers.

I would, however, temper any expectations of improved performance with SandForce’s stated desire to minimize flash writes. The company’s decision to opt for less aggressive garbage collection was a deliberate one, and SandForce intends to pursue even greater endurance with future products. Drive longevity is something that’s all but impossible for us to measure in a timely way, but I do expect DuraWrite to allow SSDs based on the SF-1200 to consume write-erase cycles more sparingly than the competition.

SSD endurance is going to be much more important for enterprise customers and notebook users than it will be for enthusiasts looking to use a solid-state drive primarily to house their operating system and applications. The SF-1200’s architecture also seems to be better suited to enterprise-class workloads than to the demanding desktop multitasking scenarios we simulated with DriveBench. You need to come up with a pretty punishing workload to see this controller live up to its potential.

The fact that not even our IOMeter workloads were able to tease out a meaningful performance difference between the firmware-capped Agility 2 and the Force F100 and Vertex 2 tells me that there’s little point in spending extra on SandForce drives with higher random-write ratings. The Agility 2 is already the cheapest 100GB SandForce-based drive at $360 online. Based on current pricing and the performance parity we’ve observed, the Agility 2 is the one I’d recommend. Plus, OCZ gives you one more year of warranty coverage than Corsair and the promise of future firmware updates that will increase the drive’s capacity to 120GB.

Until that firmware update is released to the public, I’m going to reserve final judgment on the SF-1200. The higher cost per gigabyte that results from greater overprovisioning may not matter so much to potential enterprise customers, but it’s a big weakness in the increasingly competitive consumer SSD market. Destiny is still to be determined for the SF-1200, then. SandForce is definitely a contender, and it’s most certainly in the mix with Indilinx and Intel.

Update 5/26/2010 — Corsair has extended the warranty coverage for all its SSDs to three years. The company is now free to use SandForce’s mass-production “Max IOps” firmware, as well.

Comments closed
    • LoneWolf15
    • 9 years ago

    *sigh*….Still waiting for the Velociraptor WD6000HLHX 600GB to come back-in-stock with retailers. I’m regretting not having bought one in the first wave.

    I know they’re not as fast as an SSD, but they’re a lot more capacity for less money, and the longevity isn’t in question. Good warranty, too.

    • UberGerbil
    • 9 years ago

    Between the forthcoming OCZ reduced-provisioning firmware and an eventual Intel driver update, you’ve got a pretty major re-test looming. Couple that with a new RealSSD review and hopefully a test of the OCZ 50GB version, and I foresee a long hot summer in the storage test sweatshop. The demand for interns comes into better focus.

    And then there’s the whole question of revisiting AMD chipset SATA….

      • SomeOtherGeek
      • 9 years ago

      Heh heh! Damage labs in for good ride this summer, but then that means more goodies for us geeks!

    • flip-mode
    • 9 years ago

    The article I thank you for, Geoff.

      • bdwilcox
      • 9 years ago

      Did you just channel Yoda?

    • zzyxy
    • 9 years ago

    Something is fishy with the price/GB chart. X25M 80G/160G goes for $210/$420 on NewEgg. Those drives report pretty much exactly 80*10^9/160*10^9 bytes. If I do my math correctly, it’s ~ $2.65 per 10^9 bytes..

    Vertex2 drives, on the other hand seem to go for $200/50GB ($4/GB) and $410/100GB($4.1/GB) and $749/200GB($3.75/GB)

    Yet on the chart the situation is opposite. SF-based drives listed around $2.5/GB and intel at $3.5/GB. What gives? Did they mix up numbers for X25 and for SF-based controllers?

      • Dissonance
      • 9 years ago


        • zzyxy
        • 9 years ago

        Oops. You’re right.
        Note to self: don’t post before coffee.

    • indeego
    • 9 years ago

    My thoughts:

    -Seems odd to include the RAID for one drive, but none of the others. I realize you just did a review of Intel’s Value in a RAID, but why include it without RAID’ing the others?

    -A little birdie (friend of a friend at a major DB software shop) told me that SSD’s don’t support writing pending I/O to disk, (in order to preserve flash lifespan,) and therefore should not be considered seriously for transactional databases. It’s basically ignoring flushes. Thoughts? Yes you can work around this with Battery backup cache/UPS/ various other levels of redundancy, but the thought that SSD’s don’t support a critical aspect of disk I/O is somewhat of a niggling factor in the back of my mind… Might be interesting to put one of these in a real-world “cut the power and see how it handles it” situation.

    – The firmware reliability issues really needs to be taken care of. Crucial’s C300 drives were bricking earlier in the month after a bad update. I don’t think any drive should be getting recommended until 6 months+ of real-world usage. While I’ve had great luck with a myriad of SSD’s on desktops, I’ve heard others have had bricked drives soon after install. How common is this?

      • Firestarter
      • 9 years ago

      As for flushing write caches, I’d wager to guess that enterprise drives like the X25-E don’t violate any transactional rules.

        • draksia
        • 9 years ago

        Both the X25-M and X25-E currently don’t commit writes when they say they do.

        The upcoming supercap drives sill prevent this from being a problem but if data integrity is required I wouldn’t use any Intel drive.

          • indeego
          • 9 years ago

          Interesting. Now coincidentally on one of the systems I manage a SSD is giving me this every few hours of use: https://secure-communities.intel.com/message/69441;jsessionid=A1B4BFF86A2B782A9752AEB9BA4B9DED.node5COMS ruh roh.

    • 5150
    • 9 years ago

    Very interesting article, thanks for it! Look forward to more SSD reviews!

    • Mr Bill
    • 9 years ago

    Whatever happened to AMD’s lead in flash production a few years back? Why isn’t AMD offering an SSD?

    • Jigar
    • 9 years ago

    I think i am falling in love with Nova 128.

      • KamikaseRider
      • 9 years ago

      Exactly what I was thinking.

      Every time TR reviews a new SSD the nova remains a very good competitor and has decent write and read speeds.

    • luipugs
    • 9 years ago

    1. i don’t quite get how DuraWrite works. does it provide the SSD with double the capacity it advertises, or do the SSDs only have half as much actual capacity as advertised? for example, the write amplification factor for the SandForce controllers are 0.5, so a 1 GB write from the OS would only translate to 500MB worth of flash writes. does this mean that a 100 GB SSD would be able to store 200 GB of data or that it only really has 50 GB capacity, but enables 100 GB writes?

    2. i also don’t understand why the WinZip 10 test is always included if it only gives similar times for all drives. it is not a meaningful test anymore.

    3. since you included SSDs in a RAID configuration, i think it would be appropriate in the future to also include plain hard disks in a RAID configuration. their price per GB is so much better compared to SSDs, so the RAID configuration would make them more competitive without increasing cost too much.

      • Firestarter
      • 9 years ago

      AFAIK the 100GB Sandforce SSDs have 128GB worth of flash, so theoretically they could store more than 200GB of compressible data. It’s not technically feasible without the OS knowing about it though. You could go nuts and use NTFS compression on a regular SSD like an X25-M, it would be interesting to see what effect that would have on write amplification and performance in general.

      • eitje
      • 9 years ago

      If I had to guess, I would say that DuraWrite looks to lump as much data into a single 4KB block as it can.

      Consider 32 KB (4KB x 8) of storage, and 6 KB of data. It is conceivable that 1KB could be written to each of 6 blocks out of the 8. Instead, if you shuffle around data to ensure that you’re writing 4 KB + 2 KB, then you only need to write to 2 of the blocks, resulting in 1/3 as many writes taking place.

        • MadManOriginal
        • 9 years ago

        That’s probably part of it although I’d hope most SSD controllers do something like that by now. Sandforce claims to do more though: on the fly compression so that they actually don’t write as much data as the OS sends.

      • Waco
      • 9 years ago

      1. A gigabyte of writes (no matter what the actual size on disk) is reported to the OS as a gigabyte. It’s a completely transparent compression algorithm that you’d never know was there from simply talking back and forth to the drive.

      Honestly I’m pretty amazed that HDD manufacturers don’t do the same thing. On the fly compression (assuming the controller can keep up) doesn’t really have any drawbacks except when trying to physically recover data from the platters. Even that drawback can be ignored if the compression algorithm is well-known and can be compensated for.

        • 5150
        • 9 years ago

        Side note: I was looking for a name for my new puppy, saw your name, and immediately thought MONGO! Thanks for the help! Now my dog is only a pawn in the game of life.

    • Bensam123
    • 9 years ago

    Where are the SAS drives? 🙁

    It would seem like it’d be a big hassle to use them like SCSI used to be, but they’re not and SAS controllers can also be used with normal SATA HDs (such as SSDs). IF that was keeping you guys from adding them.

    • bdwilcox
    • 9 years ago

    I’ll just buy Intel. Does the $20 difference really mean that much? Not for me. With Intel, you’re getting guaranteed world-class performance. It truly is a known quantity, unlike the hope and change that a SandForce firmware update might bring.

      • Firestarter
      • 9 years ago

      I only hope TR gets to remove the storage drivers from the equation. Considering Intels golden reputation for its chipset drivers (other than graphics), I shudder to think what the various broken implementations of AMD make of this new-fangled SSD thing.

      I am a dissapoint that TR didn’t use an SSD in the 890FX review.

        • shank15217
        • 9 years ago

        Golden reputation.. please, they crippled their own driver and wont even give a reason.

          • Firestarter
          • 9 years ago

          And this instantly destroys decades of hard work? They do have a reputation, for better or worse.

      • shank15217
      • 9 years ago

      What are you talking about? What exactly wasn’t guaranteed with the Sandforce ssds? They are superior to the Intel ssds in performance, unless you read only one review on the interwebs.

        • bdwilcox
        • 9 years ago

        What isn’t guaranteed with the SandForce based drives? Exactly what I said: consistent performance. Intel drives a known quantity and an excellent one at that. SandForce has weird performance quirks and dependencies that we keep hearing firmware and chipset driver updates will fix. No thanks, I’ll stick with the known quantity. If SandForce can fix its controller like Indilinx seems to have done with its BareFoot controller, then I’ll take another look. Until then, it’s Intel for me and the people I support.

          • shank15217
          • 9 years ago

           Performance IS consistent, the review pointed out an anomaly with Intel drivers and the ssds. I am sure TR will dig deeper because some numbers don’t jive with other reviews. Case in point, the AT review shows sequential read performance at 260+ MBps, on a x58 chipset with the same drivers.

            • Freon
            • 9 years ago

            Great if you keep up with all this kind of stuff, pay special attention to drivers by scouring the web for these type of quirks, benchmark your drives to make sure they’re working right…

        • Trymor
        • 9 years ago


      • oldDummy
      • 9 years ago

      When the time comes to replace my 160G G2 all these problems and questions will have been worked out.
      Intel will most likely still be a contender; Sandforce…hmmm, I don’t know.

    • grantmeaname
    • 9 years ago

    Frist pots.

      • cygnus1
      • 9 years ago

      booo

      • Chrispy_
      • 9 years ago

      Get out.
