Let’s be honest. In the PC world, size matters. This is true not only for the height of ATX towers, but also for the thickness of ultrabooks. The size of one’s SSD is important, too. In addition to defining how many applications, games, and data can enjoy the speedy access times of solid-state storage, an SSD’s capacity plays a large role in determining its overall performance.
Drive makers admit as much on their datasheets, which routinely list slower performance specifications for lower rungs on the capacity ladder. Writes are affected more than reads, the MB/s and IO/s ratings say, and models around 256GB are typically the fastest of each respective breed. As one might expect, it’s these higher capacity points that are first sampled to the press. They’re not the capacities most folks end up buying, though.
The price of flash memory has fallen in recent years, yet high-capacity SSDs remain expensive luxuries at $300 and up. Drives in the 120-128GB range are much more attainable, with street prices comfortably below $200. 64GB variants are easily affordable at around $100, so they’re especially tempting as system drives for desktops and streamlined notebooks.
We’ve already explored how contemporary 120-128GB SSDs compare and how their performance scales up to higher-capacity models. Today, we’re moving in the opposite direction with a stack of 64GB and smaller SSDs. We’ve tested these Blue Light Specials to complete the performance scaling picture. Our test results now run the capacity gamut, from 64GB or less to 256GB or more. We’ve also thrown pairs of 64GB drives into RAID 0 arrays to see whether it’s worth doubling up on cheaper SSDs or splurging on a single, higher-capacity model.
So, yeah, we have a mountain of performance data spread across pages of pretty graphs, plus the usual value analysis to help make sense of it all. And thousands of dollars worth of SSDs photographed in pretentious poses. This article is the culmination of quite literally months of testing in the Benchmarking Sweatshop, and I’d really like to get on with it. Shall we?
And then there were four
There were five drive families represented in our first SSD scaling article, but we’ve had to cut back to four because the Intel 510 Series is only available in 120GB and 250GB capacities. Since it relies on older 34-nm flash memory, the 510 Series is overdue for a 25-nm replacement, anyway. That’s about all I can say on that subject for now—but stay tuned.
| Drive | Interface | Controller | NAND | Cache | Warranty |
|---|---|---|---|---|---|
| Corsair Force Series 3 | 6Gbps | SandForce SF-2281 | 25-nm Micron async | NA | 3 years |
| Corsair Force Series GT | 6Gbps | SandForce SF-2281 | 25-nm Intel sync | NA | 3 years |
| Crucial m4 | 6Gbps | Marvell 88SS9174 | 25-nm Micron sync | 128MB | 3 years |
| Intel 320 Series | 3Gbps | Intel PC29AS21BA0 | 25-nm Intel | 64MB | 5 years |
Without the Intel 510 Series, we’re still left with a range of drives covering the most popular configurations available today. Two of these configs are exclusive: the 320 Series is only available from Intel, and the m4 uses a unique mix of chips offered by Crucial alone. Corsair’s Force SSDs are a little different. They represent a couple of SandForce-based configurations also offered by a number of other drive makers, including OCZ and Kingston.
The Force Series 3 is the slower of the two SandForce configs due to its asynchronous NAND, which isn’t as exotic as the synchronous stuff found in the GT model. Both drives rely on the same SandForce SF-2281 controller. We examined this chip in depth in our early look at the OCZ Vertex 3, so I won’t burden you with all of the details here. It’s worth noting that SandForce uses an on-the-fly compression scheme to speed write performance and reduce NAND wear. Unlike other SSD controllers on the market, the SF-2281 doesn’t make use of separate DRAM cache memory. Otherwise, the chip has a 6Gbps Serial ATA interface and eight memory channels.
Marvell provides the controller in Crucial’s m4. The 88SS9174 chip is familiar from the Intel 510 Series and the old Crucial C300, but in the Crucial m4, it’s paired with the latest 25-nm flash. This memory comes from the same synchronous class of NAND found in the Force GT, making the m4 a similarly premium solution. As we explained when we first looked at the m4, the Marvell chip matches the SandForce controller with a 6Gbps interface and eight memory channels. There’s no compression voodoo at work in it, though.
The Crucial m4 has double the cache memory of Intel’s 320 Series, whose 3Gbps SATA interface hails from the previous generation. Indeed, the origins of the Intel PC29AS21BA0 controller at the heart of the 320 Series can be traced all the way back to the original X25-M, which came out more than three years ago. The chip has ten memory channels, although due to their vintage, each one is likely slower than a modern equivalent. Nevertheless, the drive is outfitted with new 25-nm NAND that we suspect is of the asynchronous variety. (Intel keeps certain details about the 320 Series to itself.)
Since our storage test rigs feature 6Gbps SATA controllers, the Intel 320 Series has a pretty big handicap right out of the gate. Intel isn’t selling the drive as a performance leader, instead focusing on reliability. This is the only drive of the lot with five years of warranty coverage—two years more than the industry norm.
With 8-10 memory channels each, the controllers behind our stack of SSDs have plenty of internal parallelism. Saturating those parallel data pathways is the key to exploiting each controller’s performance potential. We couldn’t get SSD makers to go into too many specifics about what’s required to keep each controller’s memory channels at full utilization, but the number of NAND dies is an integral component of the equation.
Solid-state drives split their NAND dies between multiple physical packages. The size of the NAND dies can vary, as can the number of dies per package. To help you get a sense of how the various SSDs and capacity points stack up, we’ve whipped up a handy chart detailing each model’s die configuration.
| Drive | Capacity | NAND config | Dies per package | Price |
|---|---|---|---|---|
| Corsair Force Series 3 | 60GB | 8 x 64Gb | 1 | $95 |
| | 120GB | 16 x 64Gb | 1 | $170 |
| | 240GB | 32 x 64Gb | 1 or 2 | $315 |
| Corsair Force Series GT | 60GB | 8 x 64Gb | 1 | $110 |
| | 120GB | 16 x 64Gb | 1 | $190 |
| | 240GB | 32 x 64Gb | 1 or 2 | $355 |
| Crucial m4 | 64GB | 16 x 32Gb | 2 | $105 |
| | 128GB | 32 x 32Gb | 2 | $180 |
| | 256GB | 32 x 64Gb | 2 | $345 |
| Intel 320 Series | 40GB | 6 x 64Gb | 1 | $93 |
| | 120GB | 16 x 64Gb | 1 or 2 | $200 |
| | 300GB | 40 x 64Gb | 2 | $530 |
Let’s start with the easy ones: the Corsair Force 3 and Force GT, which use the same die configurations at each capacity point. All of the dies weigh in at 64Gb, so the number of them doubles with each step up the ladder. We’ll be making stops at 60GB, 120GB, and 240GB. Both of these 240GB drives come in two configurations: one with 32 dies spread across the same number of physical packages, and another with two dies per package. Corsair assures us the performance of these die configs is identical. For what it’s worth, our Force 3 240GB sample has one die per package, while the Force GT we tested has two.
Despite slight differences in packaging, the Corsair Force SSDs should give us a good sense of how the SandForce controller’s performance scales up with the number of NAND dies. Clearly, there’s something to be gained from having more than one die per memory channel. The 60GB Force SSD has enough NAND dies to match the eight channels in the SandForce controller, but it’s tagged with lower performance ratings than the 120GB and 240GB drives.
The scaling picture will be a little more complicated with the Crucial m4. This drive uses 32Gb NAND dies to serve the 64GB and 128GB capacity points, but the 256GB unit is equipped with 64Gb dies. As a result, the 64GB drive has 16 dies, while its higher-capacity brothers have 32 dies each. Any performance deltas between the 128GB and 256GB versions of the m4 will be due to differences in the NAND dies themselves rather than their number. All of the m4s squeeze two dies per package, so those higher-capacity models also have the same package counts.
Admittedly, our selection of Intel 320 Series SSDs doesn’t map perfectly to the capacity points we’ve collected for the others. We’ve reached all the way down to a 40GB model at the low end and up to a 300GB monster at the high end. The 40GB drive is only marginally cheaper than its 60GB and 64GB competition, though. While the 300GB model costs considerably more than our 240GB and 256GB examples, it’s the only 320 Series north of 180GB.
Like the Force drives, the Intel 320 Series uses 64Gb NAND dies throughout. The fact that the 40GB model sports four fewer NAND dies than the controller has memory channels probably won’t help performance. The 120GB version has an additional 10 NAND dies and a slightly unconventional configuration. There are 10 NAND packages on the chip but 16 flash dies, so some of the packages have one die, while others pack two.
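The die-per-channel arithmetic behind these observations is simple. Here's a rough sketch; the die counts come from the table above and the channel counts from the controller descriptions earlier, while the notion that under-filled channels limit performance is the article's observation rather than anything this code proves:

```python
# Back-of-the-envelope dies-per-channel math. Die counts are from the
# die-configuration table; channel counts from the controller specs
# (SandForce SF-2281 and Marvell 88SS9174: eight, Intel PC29AS21BA0: ten).
def dies_per_channel(dies, channels):
    return dies / channels

configs = {
    "Force 3/GT 60GB":  (8, 8),
    "Force 3/GT 120GB": (16, 8),
    "Force 3/GT 240GB": (32, 8),
    "m4 64GB":          (16, 8),
    "m4 128GB":         (32, 8),
    "320 Series 40GB":  (6, 10),   # fewer dies than channels
    "320 Series 120GB": (16, 10),
    "320 Series 300GB": (40, 10),
}

for name, (dies, channels) in configs.items():
    print(f"{name}: {dies_per_channel(dies, channels):.1f} dies/channel")
```

Only the 40GB Intel drive falls below one die per channel, which lines up with its notably lower performance ratings.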
We’d feel worse about throwing the Intel 320 Series into the cage with a bunch of 6Gbps rivals if Intel were offering its drives at substantial discounts. Despite being based on an older controller architecture that uses a dated SATA interface, the 320 Series 120GB costs more than the competition.
Because of the capacity differences involved, it’s better to look at each drive’s cost per gigabyte. In the chart below, we’ve combined Newegg prices with the amount of storage capacity available to end users. We’ve included Western Digital’s Caviar Black 1TB, one of our favorite 7,200-RPM desktop drives, for reference.
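The math behind that chart is straightforward division. A quick sketch using the street prices and marketed capacities from the die-configuration table; usable capacity is somewhat lower than the marketed figure, so treat these as slightly optimistic approximations:

```python
# Cost per gigabyte from street price and marketed capacity. Usable
# capacity is a bit lower in practice, so real-world $/GB runs a
# little higher; the trend is what matters.
def cost_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

drives = [
    ("Corsair Force 3 240GB", 315, 240),
    ("Crucial m4 256GB",      345, 256),
    ("Intel 320 Series 40GB",  93, 40),
]

for name, price, cap in drives:
    print(f"{name}: ${cost_per_gb(price, cap):.2f}/GB")
```

The Force 3 240GB works out to roughly $1.31/GB and the m4 256GB to about $1.35/GB, while the 40GB Intel drive sits well above two dollars per gigabyte.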
From a cost-per-gigabyte perspective, the Intel 320 Series is a pretty lousy deal. The 40GB drive is the most expensive of the bunch by a fair margin over the next-closest alternative, a 60GB Force GT that should be quite a bit faster. The higher-capacity flavors of the Intel 320 Series don’t look all that good on this scale, either.
Surprisingly, the highest-capacity Force 3 and Crucial m4 models offer the most storage per dollar. The Corsair SSD is just 31 cents shy of the elusive dollar-per-gigabyte threshold, while the Crucial m4 runs four cents more. In both cases, the 120-128GB variants will set you back about $1.40 per gigabyte.
Although the Intel 320 Series and the Force GT don't follow that pattern, they aren't exempt from the other trend our cost-per-gigabyte analysis reveals. Across the board, the lowest-capacity SSDs cost more per gigabyte than their higher-capacity counterparts. Budget SSDs may offer lower costs of entry, but their value proposition isn't quite as strong, at least from a capacity perspective. To see how the cheaper drives shake out overall, we'll now move on to performance.
Test notes and methods
Before dropping you into a deluge of graphs, we’ll take a moment to highlight our testing methods. If you’re already familiar with how we do things around here, feel free to skip ahead to the performance analysis.
We used the same testing methods here as in other recent storage reviews, so the results on the following pages are comparable to the larger data set on display in our OCZ Octane 512GB review. To focus on performance scaling across multiple capacities, we've trimmed the field of comparison drives down to our 3.5″ desktop reference, the Caviar Black 1TB.
In addition to testing the lower-capacity drives on their own, we combined two of each into RAID 0 arrays using the RAID feature of the P67 storage controllers on our test systems. The arrays were configured with 128KB stripe sizes, which is the default for the Intel RAID controller.
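RAID 0 simply interleaves fixed-size stripes across its member drives. Here's a minimal sketch of the address mapping with the 128KB stripe size used in our arrays; it's illustrative only, not a model of how Intel's driver is actually implemented:

```python
STRIPE_SIZE = 128 * 1024  # 128KB, the Intel RAID controller's default
NUM_DRIVES = 2            # our two-drive RAID 0 arrays

def raid0_map(logical_offset):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe = logical_offset // STRIPE_SIZE  # which stripe holds this byte
    drive = stripe % NUM_DRIVES             # stripes alternate across drives
    local = (stripe // NUM_DRIVES) * STRIPE_SIZE + logical_offset % STRIPE_SIZE
    return drive, local

# The first 128KB lands on drive 0, the next 128KB on drive 1, and so
# on, which is why large sequential transfers can hit both drives at once.
```

Transfers smaller than the stripe size land on a single member drive, so RAID 0 offers little help with small random I/O.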
We should note that the TRIM command used to combat the block-rewrite penalty associated with flash memory doesn’t work with RAID arrays—at least not yet. Intel is working on a driver update that will bring TRIM support to SSD RAID configurations, but a timeline for its release hasn’t been made public. Fortunately, RAID doesn’t affect the garbage collection routines inherent to each SSD controller.
We used the following system configuration for testing:
| Component | Details |
|---|---|
| Processor | Intel Core i5-2500K 3.3GHz |
| Motherboard | Asus P8P67 Deluxe |
| Platform hub | Intel P67 Express |
| Platform drivers | INF update 188.8.131.520 |
| Memory size | 8GB (2 DIMMs) |
| Memory type | Corsair Vengeance DDR3 SDRAM at 1333MHz |
| Audio | Realtek ALC892 with 2.62 drivers |
| Graphics | Asus EAH6670/DIS/1GD5 1GB with Catalyst 11.7 drivers |
| Hard drives | Corsair Force Series 3 60GB with 1.3.2 firmware |
| | Corsair Force Series 3 120GB with 1.3 firmware |
| | Corsair Force Series 3 240GB with 1.3.2 firmware |
| | Corsair Force Series 3 120GB RAID with 1.3.2 firmware |
| | Corsair Force Series GT 60GB with 1.3.2 firmware |
| | Corsair Force Series GT 120GB with 1.3 firmware |
| | Corsair Force Series GT 240GB with 1.3.2 firmware |
| | Corsair Force Series GT 120GB RAID with 1.3.2 firmware |
| | Crucial m4 64GB with 0009 firmware |
| | Crucial m4 128GB with 0009 firmware |
| | Crucial m4 256GB with 0009 firmware |
| | Crucial m4 128GB RAID with 0009 firmware |
| | Intel 320 Series 40GB with 4PC10362 firmware |
| | Intel 320 Series 120GB with 4PC10362 firmware |
| | Intel 320 Series 300GB with 4PC10362 firmware |
| | Intel 320 Series 80GB RAID with 4PC10362 firmware |
| | WD Caviar Black 1TB with 05.01D05 firmware |
| Power supply | Corsair Professional Series Gold AX650W |
| OS | Windows 7 Ultimate x64 |
Thanks to Asus for providing the systems’ motherboards and graphics cards, Intel for the CPUs, Corsair for the memory and PSUs, Thermaltake for the CPU coolers, and Western Digital for the Caviar Black 1TB system drives.
We used the following versions of our test applications:
- Intel IOMeter 1.1.0 RC1
- HD Tune 4.61
- TR DriveBench 1.0
- TR DriveBench 2.0
- TR FileBench 0.2
- Qt SDK 2010.05
- MinGW GCC 4.4.0
- Duke Nukem Forever
- Portal 2
Some further notes on our test methods:
- To ensure consistent and repeatable results, the SSDs were secure-erased before almost every component of our test suite. Some tests put the SSDs into a used state as part of their workload, which better exposes each drive's long-term performance characteristics; for others, like DriveBench and FileBench, we induce a used state before testing begins. In all cases, the SSDs were in the same state at the start of each test, ensuring an even playing field. Mechanical hard drives perform far more consistently between factory-fresh and used states, so we skipped wiping the Caviar before each test; secure-erasing a mechanical drive takes forever.
- We run all our tests at least three times and report the median of the results. We’ve found IOMeter performance can fall off with SSDs after the first couple of runs, so we use five runs for solid-state drives and throw out the first two.
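That run-selection policy is easy to express in code. A short sketch; the policy comes from the text, while the function name and example numbers are ours:

```python
from statistics import median

def reported_result(runs, is_ssd=False):
    """Reduce raw benchmark runs to the number we report: the median,
    after discarding an SSD's first two runs, since IOMeter results
    can fall off after the first couple of runs."""
    kept = runs[2:] if is_ssd else runs
    if len(kept) < 3:
        raise ValueError("need at least three usable runs")
    return median(kept)

print(reported_result([410, 395, 401]))                         # -> 401
print(reported_result([520, 480, 455, 450, 452], is_ssd=True))  # -> 452
```

Note how the SSD's inflated first two runs never touch the reported figure.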
- Steps have been taken to ensure that Sandy Bridge’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the 2500K at 3.3GHz. Transitioning in and out of different power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
HD Tune — Transfer rates
HD Tune lets us present transfer rates in a couple of different ways. Using the benchmark’s “full test” setting gives us a good look at performance across the entire drive rather than extrapolating based on a handful of sample points. The data created by the full test also gives us fodder for line graphs. To make those more readable, we’ve busted out separate graphs for each SSD family.
All the results in our graphs are color-coded by SSD type. The lone mechanical drive has been greyed out to set it apart on the few occasions it crawls up from the bottom of the standings.
Funky. The line graphs paint a picture of consistency for the single-drive configurations. There’s very little flutter in the transfer rate across the extent of each SSD’s total capacity—and next to no difference in read speeds between the various capacity points in each family.
The RAID configs exhibit read speeds that oscillate within a span of about 75MB/s. This occurs at a higher frequency in the Corsair Force and Crucial m4 SSDs, while the Intel 320 Series’ peaks and valleys are more widely spaced.
Those saw-tooth patterns for the RAID configs average out to about the same read speeds as the single drives. The top three SSD groups are pretty evenly matched here. The Crucial m4 tops the standings, but it’s only marginally quicker than the Force SSDs.
The Intel 320 Series has much lower read speeds than its 6Gbps competition. At least the pack of Intel drives enjoys a commanding lead over our mechanical reference.
When we switch to writes, the Force SSDs are all over the map. Regardless of the configuration, their write speeds spike violently and regularly. The higher the capacity, the higher the peaks—and the shallower the valleys. We’ve seen this behavior from two generations of SandForce controllers, so it’s nothing new.
The write-speed profiles of the Crucial m4 and Intel 320 Series SSDs look much more sedate. Here, we get our first clear taste of the slower write performance offered at lower capacity points.
At least in HD Tune's write speed test, the Crucial m4 and Intel 320 Series are more affected by capacity differences than the Corsair Force SSDs. The Force 3's average write speed increases by 28% when you step up from 60GB to 240GB, while the Force GT's jumps by 33% over the same span. Those numbers stand in stark contrast to the Crucial m4, whose write speed more than doubles going from the 64GB model to the 256GB one. Even that gap pales next to the colossal gulf between the extremes of the Intel 320 Series, though.
The SandForce-based Force SSDs are the fastest overall, so their lower capacity points have substantial advantages over the direct competition. The Crucial m4s lag behind the Force SSDs, while the Intel 320 Series brings up the rear. This time, the Intel 40GB variant is even slower than the Caviar Black.
So far, the RAID configs haven’t had much to offer. In this write speed test, all four of ’em are slower than single-drive setups that offer the same capacity.
HD Tune — Burst speeds
HD Tune’s burst speed tests are meant to isolate a drive’s cache memory.
With only a few exceptions, there’s little difference in burst performance between the different SSD capacity points. The Force drives have the edge overall and a sizable advantage in the write speed test.
HD Tune — Random access times
In addition to letting us test transfer rates, HD Tune can measure random access times.
I debated pulling the mechanical drive from these results, but it does a really good job of illustrating the differences in access times between mechanical and solid-state storage. Compared to the gap between HDDs and SSDs, the differences in access times between the solid-state solutions are negligible.
In HD Tune's 4KB random read test, the SSDs all fall between 0.03 and 0.07 ms. The drives are grouped by family, and there's no difference in access times between the different capacity points.
Things change in the 1MB test, which has all the single-drive configs but the Intel 320 Series locked in a tie. The Intel SSDs are a little bit slower, while the RAID configs (the non-Intel ones, anyway) are a little bit faster.
What’s true for random reads is also true for writes, at least at the 4KB transfer size. In the 1MB test, the Force SSDs dominate but offer very little differentiation within their ranks. Only the RAID setups distance themselves from the single-drive configs.
The 1MB test is enough to coax some performance scaling out of the Crucial m4, whose access time drops with each step up the capacity ladder. The delta between the 64GB and 128GB drives is particularly substantial. The same is true for the transition between the 40GB and 120GB Intel 320 Series SSDs. That family, too, enjoys quicker access times as capacity increases.
TR FileBench — Real-world copy speeds
Concocted by our resident developer, Bruno “morphine” Ferreira, FileBench runs through a series of file copy operations using Windows 7’s xcopy command. Using xcopy produces nearly identical copy speeds to dragging and dropping files using the Windows GUI, so our results should be representative of typical real-world performance. We tested using the following five file sets—note the differences in average file sizes:
|Number of files||Total size||Average file size|
The names of most of the file sets are self-explanatory. The Mozilla set is made up of all the files necessary to compile the browser, while the TR set includes years worth of the images, HTML files, and spreadsheets behind my reviews.
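FileBench drives xcopy under the hood, but the measurement itself boils down to a timed recursive copy. A rough, platform-neutral sketch using Python's standard library as a stand-in for xcopy; this is our illustration, not FileBench's actual code:

```python
import os
import shutil
import time

def copy_speed_mb_s(src, dst):
    """Time a recursive copy of src into dst and return MB/s, the
    metric FileBench reports. Average file size matters: many small
    files incur far more per-file overhead than a few large ones."""
    total_bytes = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, files in os.walk(src)
        for name in files
    )
    start = time.perf_counter()
    shutil.copytree(src, dst)
    elapsed = time.perf_counter() - start
    return total_bytes / (1024 * 1024) / elapsed
```

Running this over a folder of movies versus a folder of source code makes the small-file penalty obvious, which is exactly the spread our five file sets are designed to capture.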
To get a sense of how aggressively each SSD reclaims flash pages tagged by the TRIM command, we’ve run FileBench with the solid-state drives in two states. We first tested them in a fresh state after a secure erase. The SSDs were then subjected to a 30-minute IOMeter workload, generating a “tortured used” state ahead of another batch of copy tests. We haven’t found a substantial difference in the performance of mechanical drives between these states. However, because they don’t support TRIM, our RAID configs will be particularly challenged by the used-state tests.
Carnage ensues for the used-state RAID configs, which offer as little as one eighth the performance of their fresh states. The delta between fresh and used-state RAID configs is particularly wide with the larger files of the movie, RAW, and MP3 sets. Those gaps narrow considerably with the smaller and more numerous files of the TR and Mozilla sets, though.
If we just look at the single-drive results, it’s clear that capacity plays a major role in copy performance. For every SSD family in virtually every file set, the higher-capacity models post faster copy speeds. As in the RAID results, the magnitudes of the gaps track roughly with the sizes of the files.
In general, the Crucial m4 does well with larger files, while the Corsair Force SSDs dominate with smaller ones. The Force GT is consistently faster than its sibling. Another consistent trend is the sluggish performance of the Intel 320 Series, especially in the TR and Mozilla sets.
TR DriveBench 1.0 — Disk-intensive multitasking
TR DriveBench allows us to record the individual IO requests associated with a Windows session and then play those results back as fast as possible on different drives. We’ve used this app to create a set of multitasking workloads that combine common desktop tasks with disk-intensive background operations like compiling code, copying files, downloading via BitTorrent, transcoding video, and scanning for viruses. The individual workloads are explained in more detail here.
Below, you'll find an overall average followed by scores for each of our individual workloads. The overall score averages the mean performance scores from all of the multitasking workloads. DriveBench doesn't play nicely with RAID configurations, so our arrays will have to sit out this round and the next. They'll be back.
SSD capacity has a big impact on overall DriveBench performance. The only real exceptions are the Crucial m4 128GB and 256GB drives, which are very closely matched. Remember, those two models both have 32 NAND dies. The other SSDs hit 120GB with fewer NAND dies than they use to serve higher capacity points.
Although the performance drop associated with the Crucial m4’s step down to 64GB is quite large, the budget Crucial drive is still faster than the 60GB Force SSDs. Overall, the 60-64GB drives aren’t that much slower than their 120-128GB counterparts. They certainly don’t suffer as much as the Intel 320 Series 40GB, which manages only about a third the performance of the 120GB drive.
Let’s break down DriveBench’s overall score into individual test results to see if we can find any interesting subplots.
The file copy test seems to be the most dependent on SSD capacity, which should come as no surprise after scrolling through our FileBench results. There’s virtually no difference between the 128GB and 256GB Crucial m4 drives in that test, and in the compile test, the 128GB model is the faster of the two.
All of the other results track closely with the overall averages. Note that in most cases, the 60-64GB Corsair Force and Crucial m4 SSDs are competitive with the 120GB Intel 320 Series.
TR DriveBench 2.0 — More disk-intensive multitasking
As much as we like DriveBench 1.0’s individual workloads, the traces cover only slices of disk activity. Because we fire the recorded I/Os at the drives as fast as possible, the SSDs also have no downtime during which to engage background garbage collection or other optimization algorithms. DriveBench 2.0 addresses both of those issues with a much larger trace that spans two weeks of typical desktop activity peppered with multitasking loads similar to those in DriveBench 1.0. We’ve also adjusted our testing methods to give solid-state drives enough idle time to tidy up after themselves. More details on DriveBench 2.0 are available on this page.
Instead of looking at a raw IOps rate, we’re going to switch gears and explore service times—the amount of time it takes drives to complete an I/O request. We’ll start with an overall mean service time before slicing and dicing the results.
Across the board, the higher-capacity SSDs offer lower mean service times in DriveBench 2.0. Within each drive family, the performance delta between the two highest capacities is much smaller than it is between the two lowest. Case in point: the Force GT, whose 240GB variant enjoys a 20-ms edge over the 120GB drive, which in turn sits a full 45 ms ahead of the 60GB model.
The Corsair Force SSDs come out on top overall, with the synchronous Force GT configuration leading the async Force 3 throughout. The Intel 320 Series surprisingly ekes out a victory over the Crucial m4 at every capacity point. Those two look especially undesirable in their lowest capacities, whose service times are more than twice as long as the 60GB Force SSDs'.
The higher-capacity SSDs aren’t just quicker with writes; they also have shorter read service times, although the differences there aren’t quite as large as the gaps in write service times. Dropping from a 240-300GB SSD to something in the 120-128GB sweet spot isn’t going to cost you as much performance as stepping from one of those mid-range drives down to a budget model.
The Crucial m4's write service times are particularly slow, causing the 128GB drive to lag behind the 60GB Force SSDs and the 40GB Intel 320 Series. The 64GB m4's read service times aren't all that hot, either, but the higher-capacity models boast read service times second only to those of the pack-leading Force GT.
How’s this for a shocker? Despite having a much higher read service time than any of the SSDs, the Caviar Black turns in quicker write service times than all of the low-capacity SSDs but the Force GT.
There are millions of I/O requests in this trace, so we can’t easily graph service times to look at the variance. However, our analysis tools do report the standard deviation, which can give us a sense of how much service times vary from the mean.
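Mean and standard deviation are computed the usual way over per-request service times. A toy sketch; the values here are invented for illustration, since the real trace holds millions of requests:

```python
from statistics import mean, pstdev

# Invented per-request service times in milliseconds; the lone slow
# entry stands in for the occasional laggardly I/O request.
service_times_ms = [0.8, 1.1, 0.9, 12.0, 1.0, 0.7, 1.2]

avg = mean(service_times_ms)
spread = pstdev(service_times_ms)  # population standard deviation

print(f"mean {avg:.2f} ms, standard deviation {spread:.2f} ms")
# A single outlier inflates the deviation far more than the mean,
# which is why reporting both gives a fuller picture of consistency.
```

A drive with a low mean but a high standard deviation can still feel sluggish, because the outliers are what users notice.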
Our overall scaling trend continues. The higher-capacity SSDs offer more consistent service times than their smaller siblings, which are particularly prone to variance in their 40-64GB flavors. The write requests in this two-week trace really flummox the Crucial m4, whose comparatively slow service times are accompanied by wider variance than the competition.
IOMeter
Our IOMeter workloads feature a ramping number of concurrent I/O requests. Most desktop systems will only have a few requests in flight at any given time (87% of DriveBench 2.0 requests have a queue depth of four or fewer). We've extended our scaling up to 32 concurrent requests to reach the depth of the Native Command Queuing pipeline associated with the Serial ATA specification. Ramping up the number of requests gives us a sense of how the drives might perform in more demanding enterprise environments.
Things can get a little crowded with this many results, so we've split the SSD families into separate graphs and will look at each one in turn.
The Corsair Force 3 Series offers higher transaction rates as capacity increases. Surprisingly, there’s a bigger difference in performance between the 120GB and 240GB models than there is between the two smaller sizes. The gaps are the widest in the file server, workstation, and database tests, all of which offer a mix of reads and writes. The web server access pattern is made up exclusively of read requests.
IOMeter also marks the return of our RAID configs, which fare well overall. The 120GB Force 3 array boasts higher transaction rates than a single-drive config with the same capacity. In the web server test, the RAID array even tops the 240GB Force 3.
Curiously, the synchronous Force GT behaves differently in a similar RAID configuration. It challenges and even beats the 240GB drive across most of the load levels in the file server, database, and workstation tests. However, in the web server test, the RAID array has slightly lower transaction rates than the 120GB model.
Among single-drive configurations, the Force GT’s transaction rates increase as we’d expect them to. The 60GB and 120GB variants are closely matched when writes are a part of the workload, but the drives are more evenly spaced in the read-only web server test.
As we’ve seen a few times now, the Crucial m4’s 256GB and 128GB flavors offer similar performance—and a definitive edge over the 64GB model. That’s true throughout our IOMeter testing with one exception: the web server access pattern. When tasked only with random reads, the 64GB m4 actually edges out its higher-capacity counterparts.
Teaming 64GB drives in a RAID 0 array results in higher transaction rates in IOMeter’s web server test. The RAID results aren’t as impressive in the other tests, however. Although two 64GB drives offer higher transaction rates than one, they can’t match a single 128GB drive in the file server, database, and workstation tests.
When we asked Intel about how the number of NAND dies might influence the 320 Series’ performance, we were told that higher-capacity versions of the drive might actually be slower with random I/O due to the greater number of addresses required by their capacities. That appears to be true for the 300GB drive, which scores lower than the 120GB unit in three of four tests. The 120GB and 300GB models are both consistently faster than the 40GB model, though.
Combining a couple of the 40GB drives in a RAID 0 array will deliver higher transaction rates. Unless you’re only concerned with read performance, you’re still better off with a single 120GB drive.
Boot duration
Before timing a couple of real-world applications, we first have to load the OS. We can measure how long that takes by checking the Windows 7 boot duration using the operating system’s performance-monitoring tools. This is actually the first time we’re booting Windows 7 off each drive; up until this point, our testing has been hosted by an OS housed on a separate system drive.
The capacity of one’s SSD doesn’t appear to have much of an impact on the speed of the boot process. At their widest, the gaps between single-drive configurations amount to tiny fractions of a second.
The Intel 320 Series is an exception, of course. The 40GB drive loads Windows more than a second slower than the 120GB drive. However, the Intel 320 Series RAID config is the only array that’s not slower than its single-drive siblings.
Level load times
Modern games lack built-in timing tests to measure level loads, so we busted out a stopwatch with a couple of reasonably recent titles.
Given the hand-timed nature of these tests, I don't want to draw too many conclusions from what are ultimately very close results. Within each SSD family, much less than a second typically separates the fastest capacity from the slowest. The higher-capacity models are rarely the fastest, but they're not exactly slow, either.
The Intel 320 Series largely trails the other SSDs. The 40GB drive is particularly slow loading Portal 2; it takes at least two seconds longer than the closest budget alternative.
We tested power consumption under load with IOMeter’s workstation access pattern chewing through 32 concurrent I/O requests. Idle power consumption was probed one minute after processing Windows 7’s idle tasks on an empty desktop.
For the most part, the lower-capacity SSDs consume less power. That’s especially true under load; at idle, the differences in power draw between the capacity points narrow.
The lower power draw of our 40-64GB drives isn’t enough to make the RAID configs more power-efficient than their like-sized counterparts. The arrays draw roughly double the power of one of their member drives, while the 120-128GB SSDs consume only slightly more wattage than their budget brethren.
The value perspective
Welcome to our famous value analysis, which adds capacity and pricing to the performance data we’ve explored over the preceding pages. We used Newegg prices to level the playing field, and we didn’t take mail-in rebates into account when performing our calculations.
Our remaining value calculations use a single performance score that we’ve derived by comparing how each drive stacks up against a common baseline provided by the Momentus 5400.4, a 2.5″ notebook drive with a painfully slow 5,400-RPM spindle speed. This index uses a subset of our performance data described on this page of an earlier SSD round-up. Some of the drives were actually slower than our baseline in a couple of the included tests, so we’ve fudged the numbers a little to prevent those results from messing up the overall picture.
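For those wondering how such an index can be built, here’s one plausible construction. Nothing below comes from our actual spreadsheet: the test names and numbers are invented, and clamping sub-baseline ratios to 1.0 is one way to implement the “fudge” described above.

```python
from math import prod  # Python 3.8+

# Hypothetical baseline results for the Momentus 5400.4; the real
# figures live in the earlier round-up linked above.
BASELINE = {"read_MBps": 60.0, "write_MBps": 55.0, "iops": 120.0}

def overall_score(drive, baseline=BASELINE):
    """Normalize each result against the baseline drive, clamp anything
    below 1.0 so a sub-baseline result can't drag the index down, and
    take the geometric mean (baseline = 100)."""
    ratios = [max(drive[k] / baseline[k], 1.0) for k in baseline]
    return prod(ratios) ** (1 / len(ratios)) * 100

# A made-up SSD handily beats the mechanical baseline in every test.
ssd = {"read_MBps": 250.0, "write_MBps": 200.0, "iops": 8000.0}
```

A geometric mean keeps one lopsided result (like the huge IOps advantage) from dominating the composite, which is why it’s a common choice for indexes like this.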
Since the RAID configs couldn’t participate in DriveBench, a major component of our overall performance score, they’ll be forced to the sidelines for our value analysis. You’re not missing much. If we take all the DriveBench results out of our overall score, the picture looks like so:
Not pretty. Only the Force GT gets an overall boost out of RAID; doubling down on the other budget SSDs actually results in lower overall scores.
The solid-state arrays look worse here than they have in most of our individual tests thanks to FileBench. We use FileBench’s used-state copy times in our overall score because we think they represent long-term performance better than factory fresh results. Without TRIM support, the RAID arrays offer painfully slow used-state copy times, dragging down their overall scores.
We’re not inclined to completely revamp how we calculate our overall performance index just because RAID arrays can’t deal with TRIM, which is sort of a big deal. Instead, we’ll drop the RAID results to focus on the overall performance of our single-drive configs, DriveBench included.
Our overall score confirms the notion that size indeed matters when it comes to SSD performance. We knew as much from the manufacturer spec sheets, but it’s interesting to see how the different drive families and capacities compare across a wide range of tests.
The Force 3 has the most even gaps between the capacity points we tested. The Force GT and Crucial m4 have similar separation between their 120-128GB and 240-256GB variants, but their lower-capacity models are comparatively slower. So is the Intel 320 Series 40GB, which offers only half the performance of the 120GB drive. That’s still enough to come out ahead of the Caviar Black overall, though.
Now, for the real magic. We can plot this overall score on one axis and each drive’s cost per gigabyte on the other to create a scatter plot of performance per dollar per gigabyte.
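The plot itself boils down to a simple transformation of the data. The sketch below uses hypothetical drive names, prices, and scores purely to show the arithmetic behind each point.

```python
# Hypothetical prices, capacities, and overall scores for illustration only.
drives = {
    "Budget 64GB": {"price": 100.0, "gb": 64,  "score": 320},
    "Mid 128GB":   {"price": 190.0, "gb": 128, "score": 450},
    "Big 256GB":   {"price": 360.0, "gb": 256, "score": 520},
}

def scatter_points(drives):
    """Return (cost_per_GB, overall_score) pairs, one per drive."""
    return {name: (d["price"] / d["gb"], d["score"])
            for name, d in drives.items()}
```

With numbers shaped like these, the higher-capacity point lands up and to the left: faster overall, and cheaper per gigabyte.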
Want to make the case for splurging on a high-capacity SSD? In addition to offering the best performance, they tend to have the lowest cost per gigabyte. That justification is sound for all but the Intel 320 Series, whose mid-range entry costs less per gigabyte than the high-end model.
At every capacity point, the Intel SSDs are simply too expensive to offer good value. The Corsair Force GT’s commanding performance certainly warrants the attached price premium, while the Force 3 looks like a better deal overall than the Crucial m4 at each and every capacity point.
Even with today’s flood-inflated prices, the Caviar Black offers a much lower cost per gigabyte than any SSD. It’s slower than the solid-state drives, of course, and by huge margins.
Although this analysis is helpful when evaluating drives on their own, what happens when we consider their cost in the context of a complete system? To find out, we’ve divided our overall performance score by the total cost of our test system’s components. Those parts total around $800, which also happens to be a reasonable price for a modern notebook.
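The whole-system calculation is a one-liner. Again, the $800 figure comes from the article, but the drive price and score plugged in below are hypothetical.

```python
SYSTEM_COST = 800.0  # approximate cost of the test rig's other components

def system_value(drive_price, overall_score, system_cost=SYSTEM_COST):
    """Overall performance divided by the cost of the whole build,
    drive included."""
    return overall_score / (system_cost + drive_price)
```

Because the drive is now a small slice of the denominator, modest price differences between drives barely move the result, which is exactly why the budget and mid-range capacities stack up vertically in the plot.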
Despite their lower costs per gigabyte, the high-capacity SSDs still add more to the total cost of a complete system. If you connect the dots for each drive family, you’ll see they now drift off to the right rather than to the left. Even the Intel 320 Series is in step this time around.
As part of the cost of a full build, the minor price differences between the SSD families shrink to near-irrelevance, at least for the 40-64GB and 120-128GB models. Those capacities all line up vertically, making it easy to crown the Force GT at the top of each stack. The picture is a little more muddied for the higher-capacity drives, but the Force 3 and GT are particularly well placed.
Looking at the value equation from this angle puts mechanical storage in a different light. If capacity isn’t a concern, you can get a good SSD for less than the cost of a mid-sized hard drive.
While there are exceptions here and there, the overall results couldn’t be clearer: SSDs get faster as their capacities rise. Solid-state drives are essentially parallel arrays of NAND memory, so that outcome is to be expected. The more NAND dies in the array, the more parallelism the controller can exploit, and the faster the drive. This dynamic is especially true for writes and random I/O; sequential reads don’t see much benefit from higher capacities.
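As a back-of-the-envelope illustration of that parallelism (not a model of any real controller), picture pages being striped round-robin across dies that program independently. The page size and program time below are ballpark figures typical of MLC NAND, not measurements from any drive in this round-up.

```python
import math

def write_time_ms(total_kb, dies, page_kb=8, program_ms=1.3):
    """Toy model: pages are striped round-robin across NAND dies that
    program in parallel, so total time is set by how many pages the
    busiest die must program."""
    pages = math.ceil(total_kb / page_kb)
    pages_per_die = math.ceil(pages / dies)
    return pages_per_die * program_ms
```

Double the die count and, in this simplified model, the write time for a large transfer is cut in half, which is the basic shape of the scaling our benchmarks show for writes.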
Based on our results with low-capacity SSDs, it’s better to have more parallelism in the drive than it is to have two drives in RAID. Although the striped arrays were all faster than one of their component SSDs, they didn’t impress versus higher-capacity drives that offer the same amount of total storage. FileBench nicely illustrated how the lack of TRIM support for RAID arrays can have a detrimental impact on performance.
I’m doubtful that the RAID situation would improve with pairs of mid-range SSDs. RAID configs do have huge potential for read-dominated workloads, but increasing capacities won’t change the TRIM situation. The fact that higher-capacity drives tend to cost less per gigabyte makes the RAID proposition tenuous from a value perspective, too.
We observed the starkest differences in performance between single-drive capacity points in the Intel 320 Series, which is no surprise given the range of sizes we tested. The 40GB drive looks particularly underpowered versus other members of the family, and it’s largely trounced by the other budget drives. Although the 320 Series’ performance scales up dramatically from the 40GB starting point, this isn’t the drive family you want if performance—or cost—is of primary importance.
The Crucial m4 is more competitive overall, but its 64GB iteration is quite a bit slower than its direct rivals or the 128GB m4. There’s less of a performance gap between 128GB and 256GB versions of the m4, likely because the two have the same total number of NAND dies.
There’s more even spacing between members of the Corsair Force SSD family, perhaps because each capacity point represents a doubling in the number of 64Gb NAND dies. These are the fastest drives overall, and they’re the best values, too. With multiple vendors offering essentially the same synchronous and asynchronous SandForce configurations, drive makers have chosen to compete aggressively on price. The end result is a Force 3 line that’s incredibly affordable and a faster Force GT that’s still well priced.
At the end of the day, our results confirm that 120-128GB SSDs really do sit in the sweet spot. The lower-capacity drives can be much slower, and they’re just cheaper rather than being truly better values. You don’t have quite as much to gain stepping up to 240-300GB, either. That said, higher-capacity SSDs are both faster and cheaper per gigabyte. My advice? Buy the biggest one you can afford.