SandForce’s SF-1200 is easily one of the most intriguing solid-state disk controllers on the market. Not only is SandForce a new player in the field, but it also has a unique special sauce dubbed DuraClass. This secret blend of herbs and spices includes elements of compression, encryption, and even RAID-like redundancy, but the company has so far been reluctant to reveal the recipe. My inner nerd is desperately curious about how all the elements intermingle, especially since the SF-1200 has shown so much promise.
Although seemingly dependent on command queuing to achieve optimal performance, the SF-1200 boasts competitive sequential transfer rates, truly stunning random-write throughput, and the promise of greater flash longevity than other MLC-based designs. Unfortunately, the first SF-1200-based SSDs were also burdened with a rather high cost per gigabyte, straining their value proposition.
Drive makers weren’t jacking up prices to cash in on early adopters, though. The SF-1200 comes from a controller family that was architected with enterprise-class environments in mind. To deal with the high volume of random writes typical of high-performance servers and workstations, enterprise-oriented SSDs generally reserve a much greater portion of their flash capacity as free area that’s inaccessible to the user. This practice, called overprovisioning, can improve write performance and enhance a drive’s longevity. The initial SF-1200 SSDs used an overprovisioning percentage of 28%, causing drives with 128GB worth of flash to offer only 100GB of useful storage. Typical desktop SSDs only use 7% overprovisioning, so they can wring 120GB of storage space from the same amount of flash.
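Overprovisioning percentages are conventionally quoted as spare area relative to usable capacity, which makes the raw-to-usable conversion a one-liner. Here's a quick back-of-the-envelope sketch (the function name is ours, not SandForce's):

```python
def usable_capacity(raw_gb, op_percent):
    """User-visible capacity given raw flash and an overprovisioning setting.

    Overprovisioning is quoted as spare area relative to usable capacity:
    op% = (raw - usable) / usable, so usable = raw / (1 + op%).
    """
    return raw_gb / (1 + op_percent / 100)

# 128GB of flash at the enterprise-style 28% setting vs. the desktop 7% one
print(round(usable_capacity(128, 28)))  # 100
print(round(usable_capacity(128, 7)))   # 120
```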
To keep SF-1200-based drives from having to fight with extra capacity tied behind their backs, SandForce developed a new firmware revision for the SF-1200 that scales overprovisioning down to 7%. Solid-state drives sporting this firmware have already started popping up online, and Corsair’s Force F120 is the first one to arrive in the Benchmarking Sweatshop. The F120 comes on the heels of the 100GB Force F100, predictably offering a more conventional 120GB of usable capacity.
Since nothing has changed on the controller front, we won’t go into too much detail there. I suggest reading our three-way SandForce showdown for the skinny on the SF-1200 and what we know about the chip’s architecture. The design’s defining feature is DuraWrite, a term SandForce uses to describe a mix of compression and other techniques that conspire to achieve a write amplification factor of just 0.5. SSDs typically have write amplification factors of greater than one, but the SF-1200 actually writes less to the flash than the operating system thinks has been written to the disk, which should conserve precious write-erase cycles. The SF-1200’s on-the-fly encryption engine is no doubt tied closely to DuraWrite, and both are likely intertwined with the controller’s flash-level RAISE redundancy scheme.
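Write amplification is simply the ratio of bytes physically written to flash versus bytes the host asked to write; a factor below 1.0 means the controller commits less data than it receives, which is what DuraWrite's compression makes possible. A sketch with made-up byte counts:

```python
def write_amplification(flash_bytes_written, host_bytes_written):
    """Write amplification factor: physical flash writes per logical host write."""
    return flash_bytes_written / host_bytes_written

# Typical SSDs land above 1.0; SandForce claims compression gets the
# SF-1200 down to 0.5 on compressible data (hypothetical byte counts here)
print(write_amplification(50e9, 100e9))  # 0.5
```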
Most solid-state drives spread their flash chips across multi-channel arrays in a bid to improve performance. Think of a striped RAID 0 array, but with flash chips instead of hard drives. RAISE, which stands for Redundant Array of Independent Silicon Elements, looks more like a parity-infused RAID 5 array. To protect against data loss due to the failure of a flash die, RAISE reserves an area equal to the capacity of one die to store redundancy data. This redundancy data is likely a parity/hash hybrid, and it’s spread across the entirety of the drive rather than being sequestered on a single die.
I’m dwelling on RAISE a little because it complicates the overprovisioning picture. The pseudo-parity redundancy data used by RAISE is stored in an SSD’s spare area so that it doesn’t reduce the amount of storage capacity available to the user. Just how much of that free area does RAISE consume on the F120? To answer that question, we need to take a closer look at the drive’s flash chips.
The F120 has 16 Intel NAND chips that pack 8GB apiece. According to Intel, each chip houses two 4GB dies, so RAISE’s one-die reservation consumes 4GB. That leaves the F120 with about half as much usable spare area as other drives with 7% overprovisioning.
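The spare-area arithmetic works out as follows, using the figures from the paragraph above:

```python
chips, gb_per_chip, dies_per_chip = 16, 8, 2
raw = chips * gb_per_chip                    # 128GB of flash in total
usable = 120                                 # user-visible capacity at 7% overprovisioning
raise_reserve = gb_per_chip / dies_per_chip  # RAISE sets aside one 4GB die's worth
spare = raw - usable - raise_reserve         # spare area left for bad-block management
print(spare)  # 4.0GB, roughly half the ~8GB a RAISE-less 7% drive would keep
```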
Solid-state drives use their free area as reserve capacity to negate bad blocks that might crop up in the portion of flash available to the user. With less free area at its disposal, the F120 will have to make do with fewer backup blocks in reserve. SandForce has confirmed that a lower overprovisioning percentage can also increase the SF-1200’s write amplification factor in “worst case corners,” but it claims there’s no impact with typical usage scenarios, such as installing an operating system. In desktop environments, I suspect the F120’s life expectancy will be longer than that of typical SSDs due to the SF-1200’s low write amplification factor. Of course, if you’re going to be using an SSD as an OS and applications drive and aren’t writing tens of gigabytes of data per day, longevity shouldn’t be an issue for any mainstream SSD.
In addition to serving as a pool of backup blocks, an SSD’s spare area can be used to accelerate write performance by providing incoming writes with fresh flash pages. Less overprovisioning means fewer flash pages available for inbound writes, but SandForce says the SF-1200’s peak throughput for random writes is unaffected by a lower overprovisioning percentage.
Interestingly, Corsair’s web site lists a random-write rating of just 15,000 IOps for the F120, F60, and F240, all of which use 7% overprovisioning. The F100 uses SandForce’s “Max IOps” firmware to hit 30,000 random-write IOps, neatly doubling the F120’s peak theoretical throughput. SF-1200-based drives that don’t make use of SandForce’s juiced firmware have thus far been capped at 10k random-write IOps, suggesting that the IOps ceiling may have been raised for drives with 7% overprovisioning. We’ll get a better sense of things as we dig through the results of our extensive performance testing.
According to the specifications on Corsair’s web site, overprovisioning should have no impact on the SF-1200’s sustained read or write speeds. The Force F100 and F120 both boast 285MB/s read- and 275MB/s write-speed ratings. Corsair doesn’t list the random-read throughput of either drive, but SandForce says the SF-1200 is capable of crunching 30,000 4KB random-read IOps.
By now you must be wondering whether the F120 is just an F100 running different firmware. After all, changing the SF-1200’s overprovisioning doesn’t require new hardware. SandForce even allows SSD makers to release multiple firmware revisions for each drive, enabling end users to choose between overprovisioning percentages. As it turns out, though, the F120’s circuit board differs from what you’ll find under the skin of our F100. I’ve posed the two side-by-side in the picture below. The F100 is on the left, while the F120 sits to the right.
Both drives may use the same SF-1200 controller and basic layout, but Corsair has completely revamped the mix and position of other surface-mounted components. Micron flash memory chips are used on the F100, while Intel ones appear on the F120. The flash chips on each drive are fabricated using 34-nm process technology, and I suspect they’re very similar, if not identical apart from what’s silk-screened on the surface. After all, Intel does have a flash joint venture with Micron.
As with other SandForce-based solid-state drives, you won’t find a DRAM cache memory chip anywhere on the F120. The SF-1000 family is designed to be used without a traditional DRAM cache.
We appreciate good warranty coverage here at TR, so we’re pleased to note that Corsair recently bumped the coverage for all its SSDs up to three years. That matches the three-year warranties offered by most other SSD makers and attached to the vast majority of mechanical hard drives. However, it’s worth noting that flagship hard drives like Western Digital’s Black series and Seagate’s XTs come with five years of coverage.
Our testing methods
Before dipping into pages of benchmark graphs, let’s set the stage with a quick look at the players we’ve assembled for comparison. We’ve called up a wide range of competitors, including a selection of desktop hard drives, traditional notebook drives, Seagate’s Momentus XT hybrid, and a cubic assload of pure solid-state goodness. Below is a chart highlighting some of the key attributes of the contenders we’ve lined up.
| Drive | Flash controller | Interface speed | Spindle speed | Cache size | Platter capacity | Total capacity |
| --- | --- | --- | --- | --- | --- | --- |
| Agility 2 | SandForce SF-1200 | 3Gbps | NA | NA | NA | 100GB |
| Caviar Black 2TB | NA | 3Gbps | 7,200 RPM | 64MB | 500GB | 2TB |
| Force F100 | SandForce SF-1200 | 3Gbps | NA | NA | NA | 100GB |
| Force F120 | SandForce SF-1200 | 3Gbps | NA | NA | NA | 120GB |
| Momentus 7200.4 | NA | 3Gbps | 7,200 RPM | 16MB | 250GB | 500GB |
| Momentus XT | NA | 3Gbps | 7,200 RPM | 32MB | 250GB | 500GB |
| Nova V128 | Indilinx Barefoot ECO | 3Gbps | NA | 64MB | NA | 128GB |
| RealSSD C300 | Marvell 88SS9174 | 6Gbps | NA | 256MB | NA | 256GB |
| Scorpio Black | NA | 3Gbps | 7,200 RPM | 16MB | 160GB | 320GB |
| Scorpio Blue | NA | 3Gbps | 5,400 RPM | 8MB | 375GB | 750GB |
| SiliconEdge Blue | JMicron JMF612 | 3Gbps | NA | 64MB | NA | 256GB |
| SSDNow V+ | Toshiba T6UG1XBG | 3Gbps | NA | 128MB | NA | 128GB |
| VelociRaptor VR150M | NA | 3Gbps | 10,000 RPM | 16MB | 150GB | 300GB |
| VelociRaptor VR200M | NA | 6Gbps | 10,000 RPM | 32MB | 200GB | 600GB |
| Vertex 2 | SandForce SF-1200 | 3Gbps | NA | NA | NA | 100GB |
| X25-M G2 | Intel PC29AS21BA0 | 3Gbps | NA | 32MB | NA | 160GB |
On the SSD front, we’ve pitted the Force F120 against its F100 brother and a couple of other SandForce-based drives with 28% overprovisioning. We’ve collected all the other relevant players, including drives based on Indilinx, Intel, JMicron, Marvell, and Toshiba controllers. Although it might not seem like a fair fight, we’ve also thrown in results for a striped RAID 0 array built using a pair of Intel’s X25-V SSDs. The X25-V only runs a little more than $100 online, making multi-drive RAID arrays affordable enough to be tempting for desktop users. Our X25-V array was configured using Intel’s P55 storage controller, the default 128KB stripe size, and the company’s latest Rapid Storage Technology drivers.
The block-rewrite penalty inherent to SSDs and the TRIM command designed to offset it both complicate our testing somewhat, so I should explain our SSD testing methods in greater detail. Before testing, each drive was returned to a factory-fresh state with a secure erase, which empties all the flash pages on the drive. Next, we fired up HD Tune and ran full-disk read and write speed tests. The TRIM command requires that drives have a file system in place, but since HD Tune requires an unpartitioned drive, TRIM won’t be a factor in those tests.
After HD Tune, we partitioned the drives and kicked off our usual IOMeter scripts, which are now aligned to 4KB sectors. When running on a partitioned drive, IOMeter first fills it with a single file, firmly putting SSDs into a used state in which all of their flash pages have been occupied. We deleted that file before moving onto our file copy tests, after which we restored an image to each drive for some application testing. Incidentally, creating and deleting IOMeter’s full-disk file and the associated partition didn’t affect HD Tune transfer rates or access times.
Our methods should ensure that each SSD is tested on an even, used-state playing field. However, differences in how eagerly an SSD elects to erase trimmed flash pages could affect performance in our tests and in the real world. Testing drives in a used state may put the TRIM-less Plextor SSD at a disadvantage, but I’m not inclined to indulge the drive just because it’s using a dated controller chip.
With few exceptions, all tests were run at least three times, and we reported the median of the scores produced. We used the following system configuration for testing:
You can read more about the hardware that makes up our twin storage test systems on this page of our VelociRaptor VR200M review. Thanks to Gigabyte for providing the twins’ motherboards and graphics cards, OCZ for the memory and PSUs, Western Digital for the system drives, and Thermaltake for SpinQ heatsinks that keep the Core i5s cool.
We used the following versions of our test applications:
- WorldBench 6
- Intel IOMeter 2006.07.27
- Xbit Labs File Copy Test 0.3
- HD Tune 4.01
- Visual Studio 2008 with 03-23-2010 Firefox source
- Call of Duty: Modern Warfare 2
- Crysis Warhead
The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
We’ll kick things off with HD Tune, which replaces HD Tach as our synthetic benchmark of choice. Although not necessarily representative of real-world workloads, HD Tune’s targeted tests give us a glimpse of a drive’s raw capabilities. From there, we can explore whether the drives live up to their potential.
The F120 offers read performance essentially identical to the F100 and other SandForce-based drives in HD Tune. Quite a few competing SSDs hit higher average and minimum read speeds, though.
They start a little slow, but the F100, Agility, and Vertex almost immediately hit their peak sustained write speeds, which average out to 216MB/s. The F120 averages a much slower 162MB/s, landing four places behind the rest of the SandForce pack.
More troubling than the F120’s lower average write speed is the cause: wild oscillations that are clearly visible in the line graph above. The F120 alternates between lows under 150MB/s and highs of nearly 220MB/s, peaking only briefly and spending most of its time in those deep valleys. Corsair may list the same 275MB/s sustained write speed rating for the F100 and F120, but the latter is clearly slower.
Next up: some burst-rate tests that should test the cache speed of each drive. We’ve omitted the X25-V RAID array from the following results because it uses a slice of system memory as a drive cache.
Differences in overprovisioning don’t affect the SF-1200’s performance in HD Tune’s burst speed tests. The F120’s burst speeds are impressively quick given the drive’s lack of a DRAM cache.
Our HD Tune tests conclude with a look at random access times, which the app separates into 512-byte, 4KB, 64KB, and 1MB transfer sizes.
All of the SandForce-based SSDs have very low access times with random reads. The F120’s access times are marginally quicker than those of the F100, Agility, and Vertex, but it still can’t catch the X25-M.
Random writes prove to be fertile ground for the F120, which is in the mix with the leaders across all four transfer sizes. Like the other SandForce-based drives, the F120 fares comparatively better with the larger 64KB and 1MB transfer sizes. The lower overprovisioning percentage certainly isn’t slowing the 120GB Force here.
File Copy Test
Since we’ve tested theoretical transfer rates, it’s only fitting that we follow up with a look at how each drive handles a more realistic set of sequential transfers. File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. We’ve converted those completion times to MB/s to make the results easier to interpret.
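Converting FC-Test's completion times into throughput is simple division; the 60-second figure below is hypothetical:

```python
def fc_test_mbps(total_bytes, seconds):
    """Turn a timed create/read/copy pass into MB/s (decimal megabytes)."""
    return total_bytes / 1e6 / seconds

# a roughly 10GB test pattern finished in a hypothetical 60 seconds
print(round(fc_test_mbps(10e9, 60), 1))  # 166.7
```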
Windows 7’s intelligent caching schemes make obtaining consistent and repeatable performance results rather difficult with FC-Test. To get reliable results, we had to drop back to an older 0.3 revision of the application and create our own custom test patterns. During our initial testing, we noticed that larger test patterns tended to generate more consistent file creation, read, and copy times. That makes sense, because with 4GB of system memory, our test rig has plenty of free RAM available to be filled by Windows 7’s caching and pre-fetching mojo.
For our tests, we created custom MP3, video, and program files test patterns weighing in at roughly 10GB each. The MP3 test pattern was created from a chunk of my own archive of ultra-high-quality MP3s, while the video test pattern was built from a mix of video files ranging from 360MB to 1.4GB in size. The program files test pattern was derived from, you guessed it, the contents of our test system’s Program Files directory.
Even with these changes, we noticed obviously erroneous results pop up every so often. Additional test runs were performed to replace those scores.
According to SandForce, the SF-1200’s poor file creation performance in FC-Test is due to the application’s failure to take advantage of command queuing. That’s a fair point to make, but given the far superior performances of the other SSDs here, one could also argue that the SF-1200 is too reliant on command queuing to accelerate writes. Based on the results of some previous testing, our use of Microsoft’s standard Windows 7 AHCI drivers also appears to be holding back the SandForce drives; they perform better with Intel’s own AHCI drivers.
Unfortunately, the F120 doesn’t solve any of those problems. In fact, the drive’s file creation speeds are even slower than those of the other SF-1200-based SSDs, relegating this latest Force to the back of the pack.
The SF-1200’s dependence on command queuing doesn’t seem to hamper the F120’s read performance. All of the SandForce drives are much more competitive when reading the very same files they had so much trouble creating. Unlike some of the other SSDs, whose read speeds drop precipitously with the MP3 and program file sets, the SandForce drives offer solid performance across all three file sets.
Overprovisioning doesn’t appear to have much impact here. The F120 is about as fast as the F100, whose performance matches OCZ’s SandForce-based SSDs.
The SF-1200’s slow file creation speeds tank the F120’s chances in the copy tests. With write performance the likely bottleneck for the SandForce drives, it’s no surprise to see the F120 lagging behind the F100, Agility, and Vertex once more.
File copy speed
Although FC-Test does a good job of highlighting how quickly drives read, write, and copy different types of files, the app is antiquated enough to completely ignore the command queuing logic built into modern hard drives and SSDs. FC-Test only uses a queue depth of one, while Native Command Queuing can stack up to 32 I/O requests when asked. To get a better sense of how these drives react when moving files around in Windows 7, we performed a set of hand-timed copy tests with 7GB worth of documents, digital pictures, MP3s, movies, and program files. These files were copied from the drive to itself to eliminate any other bottlenecks.
Before conducting our first wave of tests, I secure-erased each of the SSDs to put them into a pristine, fresh state. Curious to see how the SSDs would handle the same test when in a used state, I ran our IOMeter workstation access pattern with 256 concurrent I/O requests for 30 minutes before launching into a second batch of copy tests.
IOMeter creates a massive test file that spans the entirety of a drive’s capacity, and deleting it to make room for our second salvo of copy tests should give us a glimpse at each SSD’s TRIM recovery strategy. What we’ve essentially done here is filled all of an SSD’s flash pages, subjected the drive to a punishing workload with a highly-randomized access pattern, and then marked all of the flash pages as available to be reclaimed by garbage-collection or wear-leveling routines.
Mechanical hard drives aren’t subject to the block-rewrite penalty that causes SSD performance degradation as flash pages become occupied, so you won’t see any difference between their fresh- and used-state performances below. We tested the mechanical drives in both states just to be sure, though.
This is a recent addition to our test suite, and since we had to return our PX-128M1S sample to Plextor after reviewing the drive, we were unable to include it here. You’re probably not missing out, though. The Plextor SSD doesn’t support TRIM, so it wouldn’t have fared well.
Now that’s a relief! The SF-1200 was flummoxed by FC-Test, yet Windows 7 appears to make effective use of command queuing when copying files. The F120 fares much better in this real-world copy test, and it’s every bit as quick as the other SandForce drives.
Where the SF-1200 falls in the standings depends on whether the drive is running in a fresh or used state. In keeping with its desire to minimize flash writes and extend drive life spans, SandForce uses a less aggressive garbage collection algorithm that doesn’t reclaim trimmed flash pages as quickly as some others. The consequence of that approach is a drop in performance between fresh and used states that moves the F120 from ahead of the Nova and X25-M to behind them.
We’ve long used WorldBench to test performance across a broad range of common desktop applications. The problem is that few of those tests are bound by storage subsystem performance; a faster hard drive isn’t going to improve your web browsing or 3ds Max rendering speeds. A few of WorldBench’s component tests have shown favor to faster hard drives in the past, though, so we’ve included them here.
WorldBench’s Nero and Photoshop tests both show the F120 trailing its 100GB Force counterpart by substantial margins. The F120 is more than a minute and a half slower in Photoshop and just about 40 seconds off in Nero, leaving the drive well behind the fastest SSDs in those tests.
Although source-code compiling isn’t a part of the WorldBench suite, we’ve often been asked to add a compile test to our storage reviews. And so we have. For this test, we built a dump of the Firefox source code from March 23, 2010 using Visual Studio 2008. This process writes over 22,000 files totaling about 840MB, so there’s plenty of disk activity. However, we had to restrict compiling to a single thread because using multiple threads in Windows 7 proved to be unstable. Mozilla recommends that Firefox be compiled with a single thread.
Given how close all the drives are in this test, it should come as no surprise that the F120 closely shadows the other SandForce SSDs. We’re currently looking into alternatives for this test, so if you have a suggestion for a multithreaded compiling test that will run in Windows 7, won’t be bound by our CPU, preferably uses open-source code available to the general public, and isn’t OpenOffice (which we’re exploring already), please shoot me an email.
Boot and load times
Our trusty stopwatch makes a return for some hand-timed boot and load tests. When looking at the boot time results, keep in mind that our system must initialize multiple storage controllers, each of which looks for connected devices, before Windows starts to load. You’ll want to focus on the differences between boot times rather than the absolute values.
This boot test starts the moment the power button is hit and stops when the mouse cursor turns into a pointer on the Windows 7 desktop. For what it’s worth, I experimented with some boot tests that included launching multiple applications from the startup folder, but those apps wouldn’t load reliably in the same order, making precise timing difficult. We’ll take a look at this scenario from a slightly different angle in a moment.
The F120 pulls to within a second of the fastest boot time we’ve ever recorded with these new test systems. Once more, the F120 matches the performance of the F100, Agility, and Vertex drives.
A faster hard drive is not going to improve frame rates in your favorite game (not if you’re running a reasonable amount of memory, anyway), but can it get you into the game quicker?
Although not quite the fastest SSD of the lot, the F120 performs well in our level load tests. The 120GB Force is really only a few seconds shy of the leaders, making it considerably quicker than mechanical hard drives and even Seagate’s Momentus XT hybrid.
TR DriveBench is a new addition to our test suite that allows us to record the individual IO requests associated with a Windows session and then play those results back on different drives. We’ve used this app to create a new set of multitasking workloads that should be representative of the sort of disk-intensive scenarios folks face on a regular basis.
Each workload is made up of two components: a disk-intensive background task and a series of foreground tasks. The background task is different for each workload, but we performed the same foreground tasks each time.
In the foreground, we started by loading up multiple pages in Firefox. Next, we opened, saved, and closed small and large documents in Word, spreadsheets in Excel, PDFs in Acrobat, and images in Photoshop. We then fired up Modern Warfare 2 and loaded two special-ops missions, playing each one for three minutes. TweetDeck, the Pidgin instant-messaging app, and AVG Anti-Virus were running throughout.
For background tasks, we used our Firefox compiling test; a file copy made up of a mix of movies, MP3s, and program files; a BitTorrent download pulling seven Linux ISOs from 800 connections at a combined 1.2MB/s; a video transcode converting a high-def 720p over-the-air recording from my home-theater PC to WMV format; and a full-disk AVG virus scan.
DriveBench produces a trace file for each workload that includes all IOs that made up the session. We can then measure performance by using DriveBench to play back each trace file. During playback, any idle time recorded in the original session is ignored; IOs are fed to the disk as fast as it can process them. This approach doesn’t give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system. We know the number of IOs in each workload, and armed with a completion time for each trace playback, we can score drives in IOs per second.
Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score in each multitasking workload.
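The IOps scoring described above is straightforward arithmetic; here's a sketch (the trace size and playback times are made-up numbers):

```python
def drivebench_score(io_count, playback_seconds):
    """DriveBench score for one workload: IOs per second of trace playback."""
    return io_count / playback_seconds

def overall_score(workload_scores):
    """Overall DriveBench score: the mean of the per-workload IOps results."""
    return sum(workload_scores) / len(workload_scores)

# hypothetical numbers: a 500,000-IO workload trace replayed in 40 seconds
print(drivebench_score(500_000, 40))          # 12500.0 IOps
print(overall_score([10000, 12000, 14000]))   # 12000.0 IOps overall
```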
DriveBench doesn’t produce reliable results with Microsoft’s AHCI driver, forcing us to obtain the following performance results with Intel’s RST drivers. We couldn’t get DriveBench to play nicely with our X25-V RAID config, either, which is why it’s not listed in the graphs below. The app will only run on unpartitioned drives, so we tested drives after they’d completed the rest of the suite.
The Force’s DriveBench throughput drops by more than 17% when we move from the F100 to the F120. That’s quite a step down, one that allows the SiliconEdge Blue to edge ahead of the 120GB model.
Let’s break down the overall average into individual test results to see if anything stands out.
The file copy and BitTorrent workloads seem to be particularly challenging for the F120, at least when compared to its 100GB sibling. Interestingly, the F120 doesn’t match the performance of the other SandForce drives in any of our multitasking workloads.
Curious to see whether removing the multitasking element of these tests would have any bearing on the standings, I recorded a control trace without a background task.
Yup, the F120’s still slower. And this time, it’s beaten by Intel’s budget X25-V.
DriveBench lets us start recording Windows sessions from the moment the storage driver loads during the boot process. We can use this capability to take another look at boot times, again assuming our infinitely fast system. For this boot test, I configured Windows to launch TweetDeck, Pidgin, AVG, Word, Excel, Acrobat, and Photoshop on startup.
Before reading too much into these results, note that our startup trace runs very quickly on the fastest SSDs. The difference in trace playback times between the top five SSDs and the F120 only amounts to a single second. This test does provide an excellent example of why solid-state drives can offer much quicker boot times than their mechanical counterparts, though.
Our IOMeter workloads are made up of randomized access patterns, presenting a good test case for both seek times and command queuing. The app’s ability to bombard drives with an escalating number of concurrent IO requests also does a nice job of simulating the sort of demanding multi-user environments that are common in enterprise applications.
Throughout our testing, we’ve seen the F120 offer slower write performance than the F100. That trend continues in IOMeter, as the 120GB Force falls a little behind the other SandForce drives with the file server, database, and workstation access patterns. The F120 is just as quick as the other SF-1200 models with the web server test, which is made up exclusively of read operations. The others have a mix of random reads and writes.
Of course, the SF-1200 can afford to give up a little performance with those three mixed workloads. When writes are a part of the equation, the F120 still offers substantially higher transaction rates than its closest competition. The SandForce SSDs don’t fare quite so well with the web server access pattern, but they still push just as many IOps as the Nova and more than the SiliconEdge Blue, SSDNow V+, and PX-128M1S.
For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. We were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive. Drives were tested while idling and under an IOMeter load consisting of 256 outstanding I/O requests using the workstation access pattern.
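The current-sense math behind those measurements works out like so; the millivolt readings below are hypothetical:

```python
SENSE_OHMS = 0.1  # in-line current-sense resistor on each rail

def rail_power(v_drop, rail_volts):
    """Power drawn from one rail, given the drop across the sense resistor."""
    current = v_drop / SENSE_OHMS  # Ohm's law: I = V / R
    return current * rail_volts    # P = I * V

def drive_power(drop_5v, drop_12v):
    """Total drive draw: 5V and 12V rail contributions summed."""
    return rail_power(drop_5v, 5.0) + rail_power(drop_12v, 12.0)

# hypothetical meter readings: 20mV across the 5V sense resistor, 5mV on 12V
print(round(drive_power(0.020, 0.005), 2))  # 1.6W
```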
The F120’s power consumption closely matches that of the other SandForce-based drives. Even without cache memory chips onboard, the SF-1200-based SSDs draw more juice than the SSDNow V+ and Nova V128.
Noise levels were measured with a TES-52 Digital Sound Level meter 1″ from the side of the drives at idle and under an HD Tune seek load. Drives were run with the PCB facing up.
Our noise level and power consumption tests were conducted with the drives connected to the motherboard’s P55 storage controller.
I’ve consolidated the solid-state drives here because they’re all completely silent. The SSD noise level depicted below is a reflection of the noise generated by the rest of the test system, which has a passively-cooled graphics card, a very quiet PSU, and a nearly silent CPU cooler.
Solid-state drives have no impact on system noise levels. If you’re starting off with a quiet rig, adding an SSD isn’t going to make the system any louder. A mechanical hard drive will, especially when it’s seeking.
Capacity per dollar
After spending pages rifling through a stack of performance graphs, it might seem odd to have just a single one set aside for capacity. After all, the amount of data that can be stored on a hard drive is no less important than how fast that data can be accessed. Yet one graph is really all we need to express how these drives stack up in terms of their capacity, and more specifically, how many bytes each of your hard-earned dollars might actually buy.
We took drive prices from Newegg to establish an even playing field for all the contenders. Mail-in rebates weren’t included in our calculations. Rather than relying on manufacturer-claimed capacities, we gauged each drive’s capacity by creating an actual Windows 7 partition and recording the total number of bytes reported by the OS. Having little interest in the GB/GiB debate, I simply took that byte total, divided it by 10^9, and then by the price. The result is capacity per dollar that, at least literally, is reflected in gigabytes.
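That calculation can be sketched in a few lines; the formatted byte count below is a hypothetical figure, not a measured one:

```python
def gb_per_dollar(reported_bytes, price_usd):
    """Capacity per dollar: formatted-partition bytes / 10^9 / street price."""
    return reported_bytes / 1e9 / price_usd

# hypothetical: a drive that formats to 119.5 billion bytes selling for $339
print(round(gb_per_dollar(119.5e9, 339), 3))  # 0.353 GB per dollar
```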
The F120 isn’t available just yet, but Corsair expects the drive to sell for $339 when it arrives. At that price, the 120GB Force offers quite a bit more capacity per dollar than SandForce-based drives with higher overprovisioning. However, the F120’s cost per gigabyte is only about average for a consumer-grade SSD. Corsair’s Nova V128 also serves up 120GB at close to the same price point, while SSDs from Plextor, Kingston, WD, Crucial, and Intel provide a little more capacity per dollar.
When I concluded our SandForce showdown, I deferred final judgment on the SF-1200 until SandForce released firmware for the controller with 7% overprovisioning. Now that the F120 has arrived, it’s time to settle up. I’m glad I waited.
As it turns out, the Force is not as strong with the F120 as it is with the original F100. The 120GB drive does offer near-identical read performance to its 100GB sibling, but it’s slower in both sequential and random writes. That fact doesn’t affect program load times or even copying files in Windows 7. However, it does hamper performance with multitasking and multi-user workloads, as evidenced by our DriveBench and IOMeter results.
In IOMeter, the SF-1200 has a commanding enough lead over the rest of the field that losing a few IOps isn’t a big deal; it’s still an absolute beast. But in our multitasking tests, where the F100 already trailed a number of its competitors, SandForce can ill afford to give up ground.
Then there’s the matter of the SF-1200’s continued struggles with FC-Test. I’d be inclined to cut SandForce some slack if other SSDs exhibited similar issues, but the SF-1200’s reliance on command queuing to maximize sequential throughput seems to be unique. Fortunately, the drive was quick to boot Windows and load levels in our gaming tests. It can copy files very quickly in the real world, too, although not quite as fast as some of its competitors, especially in a used state.
SandForce’s less aggressive garbage collection algorithm is yet another reminder that the SF-1000 family was designed with an eye toward extending SSD endurance in workstation and server environments with a high volume of writes. There’s little doubt in my mind that this focus, which birthed DuraWrite, RAISE, and everything else inside the black box at the heart of the SF-1000 series, will allow SandForce-based drives to use their flash’s limited write-erase cycles more sparingly than other SSDs. That should be a boon to users who might be eyeing the SF-1200 as a low-cost alternative to enterprise-specific SSDs. The F120 might work well as a scratch disk in a workstation where space is at a premium, but we’d still feel more comfortable recommending the SandForce drives with 28% overprovisioning for most server applications.
For notebook users, the F120 makes a lot more sense than the F100. The extra capacity is going to be particularly valuable in the absence of a secondary hard drive for mass storage, and you should still get plenty of longevity thanks to the SF-1200’s low write amplification factor.
So, what about desktops? I think the F120 is better suited for desktop use than SF-1200-based drives with 28% overprovisioning. Users will welcome the extra 20GB of breathing room. Most mainstream SSDs have sufficient longevity for desktop computers, and thanks to DuraWrite, the F120 should still last longer than most.
The move to 7% overprovisioning allows SF-1200-based drives like the F120 to compete more directly with other desktop SSDs on price. Unfortunately, the direct competition is stiff, even within Corsair’s own stable. The Indilinx-based Nova V128 has a nearly identical cost per gigabyte, but faster used-state copy speeds, higher multitasking throughput, more consistent write speeds, and even better performance in our handful of application tests. For a desktop OS and applications drive, I’d choose the Nova over the Force. The Force F120 is only more attractive than rivals like the Nova if you’re willing to give up some performance for greater longevity and higher peak throughput with random writes. I can see specific cases where those attributes might be desirable enough to tip the balance, but not in a typical enthusiast’s desktop.