Value comparisons have become a bit of a tradition here at TR. We rolled out our first CPU value article way back in 2007 and followed up with a second look a year later. A quantitative assessment of the value proposition has since become an integral part of our CPU reviews, and we’ve even considered the value of graphics cards. What can I say? We loves us some scatter plots.
So should enthusiasts. A quest for the best value is deeply ingrained within the community’s collective psyche. We’re interested in getting the best bang for our buck: finding the proverbial sweet spot that delivers the fastest possible performance at a reasonable, or at least justifiable, cost. Our value scatter plots tackle performance and price at the same time, neatly mapping a landscape that we can then scour for the best combination of those elements.
If you’ve been keeping up with TR over the last few months, you’ll know that I’ve been hunkered down in the Benchmarking Sweatshop testing a torrent of solid-state drives on our brand-new pair of storage test rigs. The Twins have churned through a dozen SSD configurations in the last little while, neatly bringing us up to date with the latest drives, controllers, and firmware from all the usual suspects.
The SSD market doesn’t stagnate, so we’re going to make use of this data set while it’s still fresh, bringing our value perspective into the storage realm for the very first time. Join us as we pore over pages of scatter plots to determine which solid-state drives have the most convincing value proposition.
Before getting our hands dirty, let’s familiarize ourselves with the collection of SSDs that we’ll be considering today. The chart below neatly summarizes the key specifications of all of the contenders we’ve assembled. Since retail and e-tail pricing tends to change with great frequency, we’ve tried to simplify things some by relying on Newegg for all of our pricing data. That should give us a good idea of the price differences between the drives. For the Force F100, which is no longer being stocked and may be discontinued, we’ve gone with the drive’s last selling price.
| Drive | Flash controller | Cache size | Total capacity | Price |
|---|---|---|---|---|
| Corsair Force F100 | SandForce SF-1200 | NA | 100GB | $410 |
| Corsair Force F120 | SandForce SF-1200 | NA | 120GB | $349 |
| Corsair Nova V128 | Indilinx Barefoot ECO | 64MB | 128GB | $349 |
| Crucial RealSSD C300 | Marvell 88SS9174 | 256MB | 256GB | $660 |
| Intel X25-M G2 | Intel PC29AS21BA0 | 32MB | 160GB | $405 |
| Intel X25-V | Intel PC29AS21BA0 | 32MB | 40GB | $110 |
| Kingston SSDNow V+ | Toshiba T6UG1XBG | 128MB | 128GB | $319 |
| OCZ Agility 2 | SandForce SF-1200 | NA | 100GB | $310 |
| OCZ Vertex 2 | SandForce SF-1200 | NA | 100GB | $325 |
| Plextor PX-128M1S | Marvell 88SS8014 | 128MB | 128GB | $389 |
| WD SiliconEdge Blue | JMicron JMF612 | 64MB | 256GB | $668 |
The newest kids on the block are based on SandForce’s SF-1200 controller, which impressively makes do without an onboard DRAM cache. We have examples from both OCZ and Corsair covering the 100GB and 120GB capacity points. All four have 128GB of NAND flash onboard, but firmware with a lower overprovisioning percentage allows the F120 to offer 120GB of user capacity, while the others only serve up 100GB of usable storage. We’ve found the F120 to be slower than the F100, likely because of the difference in overprovisioning.
Indilinx’s Barefoot ECO controller is a slightly revised version of the original Barefoot design popularized by early enthusiast-targeted SSDs like the first-generation OCZ Vertex. Corsair’s Nova is a typical example of what has become quite a common and popular breed.
We’ve only seen Marvell’s 88SS8014 controller in Plextor’s PX-128M1S, and that’s probably a good thing. This older controller lacks TRIM support and is thus rather undesirable for anyone running Windows 7 or recent versions of Linux that support the command. Fortunately, the Marvell 88SS9174 in Crucial’s RealSSD C300 implements TRIM. The C300 also has a massive 256MB cache and support for 6Gbps SATA connectivity. For the sake of apples-to-apples comparisons, we’re using test results with the C300 running on the same 3Gbps SATA controller as all the other SSDs.
JMicron got a bad rap in the early days of consumer SSDs, but the SiliconEdge Blue’s JMF612 storage controller is much more competent than the company’s first efforts. Western Digital co-developed the Blue’s firmware with JMicron, and the enhancements it made aren’t being shared with other drive makers. As a result, the SiliconEdge may perform differently than other SSDs based on the JMF612. We’ve yet to see another SSD using the Toshiba controller inside Kingston’s SSDNow V+. Be careful not to confuse that drive with other members of the V series, which lack the + and use different controllers.
Finally, we have Intel’s entries in our value sweepstakes. The X25-M and X25-V are both powered by a second-gen controller that crucially adds TRIM support. Interestingly, the Intel drives have much less cache memory than the others. Well, with the exception of the cache-less SandForce SSDs, anyway. The X25-V is also by far the cheapest of the bunch, albeit with only 40GB of total capacity.
Although we don’t have an SSD from each and every drive maker, all of the contemporary controllers are represented. That’s important because a solid-state drive’s controller and associated flash memory largely determine its overall performance, much like a graphics card’s GPU and memory dictate its frame rates more than the sticker on the cooler.
We can only speak to the performance data we’ve collected ourselves, so we haven’t generalized the results to encompass every manufacturer’s implementation of each controller. Do keep in mind, though, that SSDs based on the same controller architecture should offer largely equivalent performance, regardless of which company’s name appears on the outside of the drive.
I should also note that storage capacity can play a role in SSD performance. Drives with fewer gigabytes on offer don’t always have enough flash memory chips to take advantage of all the parallelism available in multi-channel controllers. Lower-capacity flash chips can also be slower than higher-density ones that pack more dies per package. Again, we can only speak to the results that we’ve gathered; trying to extrapolate the performance of lower-capacity models based on differences in manufacturer specifications would be dodgy at best. However, we have attempted to compensate somewhat by considering value not only in terms of a drive’s absolute price, but also its associated cost per gigabyte. Lower rungs on the capacity ladder tend to have a comparable cost per gigabyte to their higher-capacity counterparts.
| Drive | Spindle speed | Cache size | Platter capacity | Total capacity | Price |
|---|---|---|---|---|---|
| Seagate Momentus 7200.4 | 7,200 RPM | 16MB | 250GB | 500GB | $75 |
| Seagate Momentus XT | 7,200 RPM | 32MB | 250GB | 500GB | $130 |
| WD Caviar Black 2TB | 7,200 RPM | 64MB | 500GB | 2TB | $190 |
| WD Scorpio Black | 7,200 RPM | 16MB | 160GB | 320GB | $60 |
| WD Scorpio Blue | 5,400 RPM | 8MB | 375GB | 750GB | $115 |
| WD VelociRaptor VR150M | 10,000 RPM | 16MB | 150GB | 300GB | $190 |
| WD VelociRaptor VR200M | 10,000 RPM | 32MB | 200GB | 600GB | $280 |
Speaking of cost per gigabyte, it’s no secret that SSDs offer rather poor value on the capacity front compared with mechanical hard drives. We have test results for a collection of traditional hard drives, including Western Digital’s flagship 7,200-RPM Caviar Black 2TB, the latest 10K-RPM VelociRaptor, and a handful of notebook drives that share the same 2.5″ form factor as our SSDs. All of those drives will appear in our value analysis to provide some additional context. Seagate’s mechanical/flash Momentus XT hybrid is coming along for the ride, as well.
The elephant in the room
We’ll be leaning on scatter plots when assessing value on the performance side of things, but that’s not necessary to convey a simple truth: SSDs have a much higher cost per gigabyte than their mechanical brethren. In some cases, the difference is well over an order of magnitude.
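The gap is easy to quantify with a quick back-of-the-envelope calculation using prices and capacities from the spec tables above (Python here serves purely as a calculator):

```python
# Price and capacity figures taken from the spec tables above.
drives = {
    "Intel X25-M G2":      (405, 160),    # ($, GB)
    "Corsair Force F100":  (410, 100),
    "WD Caviar Black 2TB": (190, 2000),
}
for name, (price, gigabytes) in drives.items():
    print(f"{name}: ${price / gigabytes:.2f}/GB")
```

The Caviar Black lands around a dime per gigabyte, while the SSDs sit in the $2.50-to-$4 range.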
Yeah, there’s just no touching mechanical drives if you’re looking for cost-efficient storage capacity. But then there’s nothing that says you have to house all your data on a single drive. For desktop PCs, we think the combination of a relatively low-capacity SSD, used as an OS and applications drive, and a high-capacity mechanical drive is the best approach. The minimum size of that system drive will depend on just how many applications you intend to keep on the SSD. By applications, I mainly mean games, which tend to have a gluttonous appetite for gigabytes.
As you can see, some SSDs have a more attractive cost per gigabyte than others. The Kingston SSDNow V+ leads the SSD pack, trailed closely by the X25-M, RealSSD, and SiliconEdge. Just behind those is the Nova V128, which is followed by Intel’s budget X25-V and the Force F120.
SandForce-based drives with the higher 28% overprovisioning percentage are the worst values of the lot, costing at least fifty cents more per gigabyte than our lead group of SSDs. The F100 is particularly expensive, or at least it was when the drive was still selling online. The F100 has since gone out of stock, and Corsair may not bring it back given higher demand for the F120.
We have some RAID results for the X25-V that have been included in the mix, but that’s the only SSD array we’ve tested on our new rigs. Since we used the RAID controller built into our motherboard’s core-logic chipset, there’s no additional cost associated with the array outside of the second drive. With RAID 0, the cost per gigabyte remains unchanged as drives are added to the array. Keep in mind that you can pair any of these SSDs in their own RAID array. Doing so will cost you TRIM and thus impact write performance. As far as we know, TRIM isn’t currently supported by any RAID controllers.
A question of value
Just how do we represent value? It all starts with an ancient 4,200-RPM notebook hard drive that’s old enough to bear the IBM name. This geriatric 30GB Travelstar took its sweet time trudging through our benchmark suite, but it gives us a nice performance baseline against which to judge our contenders. Each drive’s score in a given benchmark is converted to a percentage using the Travelstar as the baseline. We can then quantify performance per dollar, with each drive’s cost per gigabyte in the denominator. Using cost per gigabyte rather than total cost blunts the impact of a drive’s asking price, so the calculation won’t penalize higher-capacity SSDs for having a higher sticker price, nor will it reward low-capacity models just for being cheaper than everything else.
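The scoring can be sketched in a few lines of Python. The drive figures below are hypothetical rather than actual test results; the point is the shape of the calculation:

```python
def value_score(result, baseline_result, price_dollars, capacity_gb):
    """Performance as a percentage of the baseline Travelstar's result,
    divided by the drive's cost per gigabyte."""
    performance_pct = 100.0 * result / baseline_result
    cost_per_gb = price_dollars / capacity_gb
    return performance_pct / cost_per_gb

# Hypothetical sustained reads against a 25 MB/s baseline drive:
ssd = value_score(200.0, 25.0, 325.0, 100)   # 800% of baseline at $3.25/GB
hdd = value_score(90.0, 25.0, 75.0, 500)     # 360% of baseline at $0.15/GB
print(ssd, hdd)
```

Note how the mechanical drive’s rock-bottom cost per gigabyte lets it post a much higher value score despite being far slower, which is exactly the pattern you’ll see in the bar graphs to come.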
Performance per dollar figures are easy to line up with bar graphs. Our scatter plots take a slightly different look at the same data. Instead of calculating a performance-per-dollar score, we simply map price on the X axis and the performance percentage on the Y axis, like so:
In theory, the sweetest spot is in the top left corner, which denotes the highest performance at the lowest cost. The worst place to be is the exact opposite: the bottom right-hand corner.
Most of the drives will likely fall between those two extremes, although mixing mechanical and solid-state offerings with vastly different access times could make things a little more interesting. Or a complete mess. Either way, the scatter plots will reveal the most noteworthy intersections of performance and price. The performance-per-dollar bar graphs are meant to complement these scatter plots; we believe the scatter plots paint a richer picture of the value vista.
There are far too many components to our exhaustive suite of storage benchmarks to include every test, so we’ve whittled things down to the most relevant ones. We’ve settled on a mix of targeted and real-world performance tests that covers sequential transfers, highly randomized access, and usage patterns that combine both elements.
Honestly, I hate leaving performance data on the table. Some level of simplification is necessary for these sorts of value comparisons, though. You can always refer back to individual reviews for a more detailed analysis of the drives we’re considering today. Those reviews also take value into consideration, although in a more general sense within the context of the drives’ features and differences, rather than relying on quantitative performance-per-dollar measures and exotic scatter plots. This value comparison isn’t meant to replace our in-depth reviews, but to provide a different look at the data we’ve collected.
Boiling value down to a couple of performance-per-dollar calculations has its share of problems. There are other factors that can affect a drive’s perceived worth, such as warranty coverage and end-user support. SSDs are usually a little short on differentiating features, but SandForce drives do have on-the-fly encryption built in, and their lower write-amplification factor should ensure greater longevity than contemporary rivals. These are the sorts of things that our SSD reviews will cover in greater detail.
Our testing methods
Due to performance scaling issues with Intel’s latest storage controller drivers, we’ve done our testing with the Microsoft AHCI drivers built into Windows 7. These drivers won’t work with our X25-V RAID config, so we’ve used Intel’s 126.96.36.1994 Rapid Storage Technology drivers with the default 128KB stripe size for that particular setup.
The block-rewrite penalty inherent to SSDs and the TRIM command designed to offset it both complicate our testing somewhat, so I should explain our SSD testing methods in greater detail. Before testing the drives, each was returned to a factory-fresh state with a secure erase, which empties all the flash pages on a drive. Next, we fired up HD Tune and ran full-disk read and write speed tests. HD Tune runs on an unpartitioned drive, so TRIM won’t be a factor. TRIM is invoked when files are deleted, but with no file system in place, there are no files to delete on an unpartitioned drive.
After HD Tune, we partitioned the drives and kicked off our usual IOMeter scripts, which are now aligned to 4KB sectors. When running on a partitioned drive, IOMeter first fills it with a single file, firmly putting SSDs into a used state in which all of their flash pages have been occupied. We deleted that file before moving onto our file copy tests, after which we restored an image to each drive for some application testing. Incidentally, creating and deleting IOMeter’s full-disk file and the associated partition didn’t affect HD Tune transfer rates or access times.
Our methods should ensure that each SSD is tested on an even, used-state playing field. However, differences in how eagerly an SSD elects to erase trimmed flash pages could affect performance in our tests and in the real world. Testing drives in a used state may put the TRIM-less Plextor SSD at a disadvantage, but I’m not inclined to indulge the drive just because it’s using a dated controller chip.
With few exceptions, all tests were run at least three times, and we reported the median of the scores produced. We used the following system configuration for testing:
You can read more about the hardware that makes up our twin storage test systems on this page of our VelociRaptor VR200M review. Thanks to Gigabyte for providing the twins’ motherboards and graphics cards, OCZ for the memory and PSUs, Western Digital for the system drives, and Thermaltake for SpinQ heatsinks that keep the Core i5s cool.
We used the following versions of our test applications:
- WorldBench 6
- Intel IOMeter 2006.07.27
- Xbit Labs File Copy Test 0.3
- HD Tune 4.01
- Visual Studio 2008 with 03-23-2010 Firefox source
- Call of Duty: Modern Warfare 2
- Crysis Warhead
The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
HD Tune Average transfer rates
We’ll kick things off with some sequential transfer rate tests courtesy of HD Tune. First up: the average read speed across the entirety of each drive’s storage capacity.
The X25-V array easily has the fastest read speed of the lot, but its cost per gigabyte is too high to take the performance-per-dollar crown. Still, the scatter plot nicely illustrates that the RAID array gives you a whole lot more performance at a price that sits right in the middle of the SSD pack.
Most of the SSDs are clustered in the middle of the scatter plot, offering similar performance at slightly different prices. The X25-M, Nova, RealSSD, SSDNow, and SiliconEdge look pretty good here. Unfortunately, the 100GB SandForce drives aren’t quite as appealing; they cost more per gigabyte than the other SSDs but aren’t any faster.
Obviously, there’s a huge jump up in cost per gigabyte when moving from one of the mechanical drives to any of our SSDs. The corresponding increase in performance isn’t as substantial, as a percentage, but you’re looking at roughly doubling your sequential read speed over a mechanical drive in most cases, and even more with the X25-V RAID array.
Like its read-speed test, HD Tune’s write-speed component measures the average speed across the entire length of a drive.
Once again, the mechanical drives dominate our performance-per-dollar-per-gigabyte scale. The SSDs aren’t even close to the VelociRaptors, let alone the Caviar Black.
Among the SSDs, the SSDNow and RealSSD move to the front of the pack, trailed by a couple of OCZ’s SandForce drives. The Intel SSDs fare particularly poorly here due to their comparatively sluggish write performance.
Our scatter plot puts the X25 series’ write performance nicely into perspective. The drives occupy the same price band as most of the other SSDs, but they’re not nearly as speedy. In fact, the X25-V turns in a slower sequential write performance than the mechanical notebook drives.
As they did in the bar graph, the RealSSD and SSDNow look particularly attractive when compared to their direct rivals. The RealSSD gets you a little more performance at a slightly higher cost per gigabyte. If you’re willing to kick in a little extra per gig, there’s even more performance to be had with the 100GB Agility, Vertex, and Force SSDs. The Agility has the lowest price of that trio, making it the best option among the SandForce drives.
HD Tune 4KB random access times
HD Tune can probe random access times with a number of different transfer sizes. Today, we’re going to focus on the 4KB transfer size, which matches the capacity of a typical flash page. First, some read tests.
The hybrid drive dominates in this one, folks, at least when the cost per gigabyte is in the denominator of our value calculations. The X25-M is in a very distant second place, but it’s still way ahead of the Nova and the rest of the pack.
Again, the scatter plot proves much more illuminating. The Momentus XT may have a huge edge over the purely mechanical drives thanks to the low access times of its 4GB NAND read cache, but it still has a ways to go to catch even the slowest SSD. That means it’s miles and miles behind the X25-M, whose ultra-low access times are the class of the field.
A random access time test with 4KB writes is up next. The Scorpio Blue actually had a higher access time than our performance baseline in this test by a couple of milliseconds. Since it’s not the focus of today’s value round-up, we fudged the Blue’s score to match the baseline for this test.
The near-instantaneous access times offered by flash memory propel the SSDs ahead of the mechanical drives even when we’re looking at the performance per gigabyte per dollar. Intel’s way out ahead here, and even the X25-V is getting in on the action. None of the other SSDs come close, and neither does the Momentus XT hybrid. Of course, with its flash used exclusively as a read cache, the XT behaves like a standard mechanical drive when servicing write requests.
One look at the scatter plot makes things crystal clear. The X25 series is much faster than its competition at roughly the same price per gigabyte. The other SSDs are still orders of magnitude faster than the mechanical drives.
File Copy Test
Since we’ve tested theoretical transfer rates, it’s only fitting that we follow up with a look at how each drive handles a more realistic set of sequential transfers. File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. We’re going to narrow our focus on FC-Test’s copy component because it stresses read and write performance at the same time.
Windows 7’s intelligent caching schemes make obtaining consistent and repeatable performance results rather difficult with FC-Test. To get reliable results, we had to drop back to an older 0.3 revision of the application and create our own custom test patterns. During our initial testing, we noticed that larger test patterns tended to generate more consistent file creation, read, and copy times. That makes sense, because with 4GB of system memory, our test rig has plenty of free RAM available to be filled by Windows 7’s caching and pre-fetching mojo.
For our tests, we created custom MP3, video, and program files test patterns weighing in at roughly 10GB each. The MP3 test pattern was created from a chunk of my own archive of ultra-high-quality MP3s, while the video test pattern was built from a mix of video files ranging from 360MB to 1.4GB in size. The program files test pattern was derived from, you guessed it, the contents of our test system’s Program Files directory.
The copy speed you see below is an average of all three test patterns. You didn’t want separate scatter plots for each, did you?
We’re back in sequential transfer territory, and the mechanical drives are out in front in our performance-per-dollar-per-gigabyte graph again. The difference in sequential transfer rates between mechanical and solid-state drives isn’t nearly as large as the cost-per-gigabyte gap between the two camps.
The RealSSD, SSDNow, SiliconEdge, and X25-M all have a similar enough cost per gigabyte to almost be able to draw a vertical line through the group in our scatter plot. At the top of that line sits the RealSSD, which is a good deal faster than its next-closest rival, the SSDNow. The SiliconEdge Blue isn’t much slower than the Kingston drive, and neither is Corsair’s Nova V128. However, the Nova does cost more per gigabyte, making it somewhat less attractive here.
SandForce-based drives have particular problems with this test, which is why the Agility, Vertex, and F100 are creeping towards the bottom right-hand corner. The F120 and Agility 2 at least manage to stay closer to the middle of the X axis, but they’re still slower than the mechanical drives.
File copy speed
Although FC-Test does a good job of highlighting how quickly drives read, write, and copy different types of files, the app is antiquated enough to completely ignore the command queuing logic built into modern hard drives and SSDs. FC-Test only uses a queue depth of one, while Native Command Queuing can stack up to 32 I/O requests when asked. To get a better sense of how these drives react when moving files around in Windows 7, we performed a set of hand-timed copy tests with 7GB worth of documents, digital pictures, MP3s, movies, and program files. These files were copied from the drive to itself to eliminate any other bottlenecks.
These tests were run with the drives in a tortured used state. To put them in that condition, I ran our IOMeter workstation access pattern with 256 concurrent I/O requests for 30 minutes before the copy tests.
IOMeter creates a massive test file that spans the entirety of a drive’s capacity, and deleting it to make room for our copy tests should give us a glimpse at each SSD’s TRIM recovery strategy. What we’ve essentially done here is filled all of an SSD’s flash pages, subjected the drive to a punishing workload with a highly-randomized access pattern, and then marked all of the flash pages as available to be reclaimed by garbage-collection or wear-leveling routines.
Mechanical hard drives aren’t subject to the block-rewrite penalty that causes SSD performance degradation as flash pages become occupied, so you won’t see any difference between their fresh- and used-state performances below. We tested the mechanical drives in both states just to be sure, though.
This is a recent addition to our test suite, and since we had to return our PX-128M1S sample to Plextor after reviewing the drive, we were unable to include it here. You’re probably not missing out, though. The Plextor SSD doesn’t support TRIM, so it wouldn’t have fared well.
Hands up if you suspected the mechanical drives would pull out another performance-per-cost-per-gigabyte win in a sequential transfer rate test. This is becoming a bit of a trend, but it’s one we expected given the relative differences in price and performance between the solid-state and mechanical drives.
I prefer the scatter plots, and this one has some interesting stories to tell. The RealSSD offers a much better value proposition than any of the other SSDs, delivering higher transfer rates without the burden of an oppressive cost per gigabyte. Of course, “oppressive” pricing is relative in the SSD realm.
This is another strong performance from the SSDNow, confirming the drive’s prowess with sequential transfers. The X25-M costs about the same per gigabyte, but it doesn’t have the performance to keep up, and neither does the Nova. At least they’re doing better than the Agility, Vertex, and F100, which are even slower and more expensive. The F120 exists in a sort of limbo between the two; it’s a little cheaper per gigabyte than the Agility but not really fast enough to be in contention.
That’s nothing compared to the problems facing the SiliconEdge and X25-V configs, though. The SiliconEdge doesn’t recover particularly well from our used-state torture test, and the X25-V’s slow write speeds hurt its copy performance. Even worse is the X25-V RAID array, whose lack of TRIM support yields incredibly poor copy performance in our simulated used state.
Our storage reviews usually tap three components of the WorldBench suite. However, we’ve dropped the WinZip test here because it’s not bottlenecked by the storage subsystem, at least on the systems we use for testing. The Photoshop test will take advantage of faster drives, but it has some consistency issues that give us pause. Photoshop is already included in our multitasking workloads, which are up in a moment, so we’re sticking with just WorldBench’s Nero test here.
The Nero test writes a DVD image file to disk, so it’s basically one big sequential transfer. Seeing the mechanical drives at the front of the pack in our value-per-dollar bar graph should come as no surprise, then.
We’ve gotten used to seeing the SSDNow and RealSSD fare well with sequential transfers, and here they’re at the head of the SSD class once more. The scatter plot makes a convincing argument for the C300, which costs only a little bit more per gigabyte than the V+ despite having a sizable performance advantage. Of course, the SSDNow isn’t alone; it’s closely shadowed by the SiliconEdge Blue, while the Nova and X25-M stalk from a distance.
These Nero results don’t paint a pretty picture for the SandForce drives, which are either too slow or too expensive to be more attractive than the competition. The X25-V’s slow write speeds hurt its value proposition, too.
System boot time
SSD users have long trumpeted boot times as an area where performance easily outstrips mechanical drives. We’ve not found that to be the case, perhaps because we measure boot times from the moment the power button is pressed. Motherboards take a while to initialize various storage controllers and other devices before the OS begins loading, and there’s nothing an SSD can do to speed up that process.
Because the SSDs don’t have much of a performance advantage in our boot time test, they have little hope of capturing the value crown. Our scatter plot looks decidedly different as a result, with the VelociRaptor inching dangerously close to that perfect sweet spot in the top-left corner. None of the SSDs look like good value here, although among them, it’s between the X25-M, the RealSSD, and the SiliconEdge Blue.
Game load times
Although we haven’t seen SSDs boot systems substantially faster than their mechanical competition, Modern Warfare 2 game levels load much quicker. In this test, we loaded the “O Cristo Redentor” special-ops mission with a stopwatch in hand.
The mechanical drives are way out ahead in the bar graph again. We’re looking at reasonable gaps in performance, but the difference in cost-per-gigabyte between the mechanical and solid-state camps remains daunting.
In our scatter plot, the RealSSD again finds itself atop the heap. The SSDNow is hot on its heels, but after that, you’re into a cluster of drives that offer less performance at higher cost per gigabyte.
By far the biggest loser in this test is the X25-M, whose relatively slow load times kill its chances. The SandForce SSDs don’t look so hot, either; they’re still on the pricey side and don’t have the performance to justify the premium.
Our second gaming test loads a save game from the very beginning of Crysis Warhead. There’s a little less separation between the SSDs here, and the mechanical drives aren’t that far off the pace.
By now you know the score with the bar graphs. I’m much more interested in the scatter plots, which nicely illustrate the costly step up to what really isn’t vastly superior performance in this test.
Most of the SSDs are arranged in a loose horizontal line that denotes a similar performance level. The most attractive points on that line are over to the left with the SSDNow, RealSSD, and SiliconEdge Blue. The Nova doesn’t look too bad, either, but it’s no faster than the cheaper solid-state options.
As it did in our first gaming test, the X25-M looks a little out of place. The Intel drive is slower than the quickest mechanical drives, yet it costs several times more per gigabyte. That’s not a good place to be, and the X25-V doesn’t look much better.
TR DriveBench is a new addition to our test suite that allows us to record the individual IO requests associated with a Windows session and then play those results back on different drives. We’ve used this app to create a new set of multitasking workloads that should be representative of the sort of disk-intensive scenarios folks face on a regular basis.
Each workload is made up of two components: a disk-intensive background task and a series of foreground tasks. The background task is different for each workload, but we performed the same foreground tasks each time.
In the foreground, we started by loading up multiple pages in Firefox. Next, we opened, saved, and closed small and large documents in Word, spreadsheets in Excel, PDFs in Acrobat, and images in Photoshop. We then fired up Modern Warfare 2 and loaded two special-ops missions, playing each one for three minutes. TweetDeck, the Pidgin instant-messaging app, and AVG Anti-Virus were running throughout.
For background tasks, we used our Firefox compiling test; a file copy made up of a mix of movies, MP3s, and program files; a BitTorrent download pulling seven Linux ISOs from 800 connections at a combined 1.2MB/s; a video transcode converting a high-def 720p over-the-air recording from my home-theater PC to WMV format; and a full-disk AVG virus scan.
DriveBench produces a trace file for each workload that includes all IOs that made up the session. We can then measure performance by using DriveBench to play back each trace file. During playback, any idle time recorded in the original session is ignored; IOs are fed to the disk as fast as it can process them. This approach doesn’t give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system. We know the number of IOs in each workload, and armed with a completion time for each trace playback, we can score drives in IOs per second.
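The scoring itself is simple division; a minimal sketch, with made-up trace sizes and playback times for illustration:

```python
def drivebench_score(total_ios, playback_seconds):
    """DriveBench score: IOs per second, computed from the number of IOs
    in the recorded trace and the time a drive took to replay it."""
    return total_ios / playback_seconds

# A hypothetical 1.2-million-IO trace replayed in 400 seconds:
print(drivebench_score(1_200_000, 400))  # 3000.0 IOs per second
```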
DriveBench doesn’t produce reliable results with Microsoft’s AHCI driver, forcing us to obtain the following performance results with Intel’s RST drivers. We couldn’t get DriveBench to play nicely with our X25-V RAID config, either, which is why it’s not listed in the graphs below. The app will only run on unpartitioned drives, so we tested drives after they’d completed the rest of the suite.
Rather than busting out a series of value graphs for each DriveBench workload, we’re just going to use the overall score, which is an average of the mean performance score in each multitasking workload.
The RealSSD and X25-M manage to pull ahead of the last-gen VelociRaptor in our value bar graph. They’re the only SSDs to eclipse a mechanical drive here, though.
Moving to the scatter plot keeps the RealSSD in a positive light. The Crucial drive is quite a bit faster than most of the SSDs and doesn’t command a hefty cost-per-gigabyte premium. The X25-M isn’t too far behind, and it, too, has a sizable lead over the other contenders. You might want to avoid those 100GB SandForce drives, though. Their performance is at best good enough for fifth place, and even with the Agility, you’re still paying more per gigabyte than the other SSDs.
Notice how all the mechanical drives have moved to the lower left-hand corner? They might be cheaper, but they’re also a heck of a lot slower in our multitasking test.
Our IOMeter workloads are made up of randomized access patterns, presenting a good test case for both seek times and command queuing. The app’s ability to bombard drives with an escalating number of concurrent IO requests also does a nice job of simulating the sort of demanding multi-user environments that are common in enterprise applications.
Since you’re probably just about sick of graphs by now, we’ve consolidated our IOMeter scores with a single overall average. This score is the mean transaction rate across all load levels for the file server, web server, workstation, and database access patterns.
Finally, some redemption for the SandForce drives. The Agility, Vertex, and Force SSDs fare so well in IOMeter that even the 100GB drives find themselves near the front of the pack in our bar graph. Quelle surprise!
The SandForce SSDs are still more expensive on a cost-per-gigabyte basis, as the scatter plot nicely illustrates. However, the 100GB models offer higher transaction rates than the closest competition, which comes from the RealSSD C300. Interestingly, the F120 ends up getting the short end of the stick here. It’s not fast enough to keep up with the 100GB SandForce drives and costs too much per gigabyte to be more attractive than the C300.
Making a case for anything but the Agility 2 or the RealSSD is pretty difficult here. They’re just so much more appealing than the rest of the drives, including the X25-V array. And the mechanical drives? Well, they’re back in the cheap-but-slow corner.
Performance counts for a lot, but power consumption and efficiency are also important metrics to consider. In addition to looking at value purely in terms of performance, we’ve come up with a power efficiency rating based on the power consumption of each drive. We test each drive’s power consumption under an IOMeter load consisting of 256 outstanding I/O requests with the workstation access pattern. Put that power draw in the denominator under each drive’s transaction rate during that test, and you have a representation of power-efficient performance in IOps per watt. That value can then be treated like our other performance measures.
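The efficiency rating described above is just the transaction rate over the power draw. A quick sketch, using made-up numbers rather than any drive's actual results:

```python
# Hedged sketch of the efficiency rating: the drive's IOMeter
# transaction rate (workstation access pattern, 256 outstanding IOs)
# divided by its power draw during that same test.
def iops_per_watt(transaction_rate: float, power_draw_watts: float) -> float:
    """Express power-efficient performance in IOps per watt."""
    return transaction_rate / power_draw_watts

# e.g. a hypothetical drive pushing 9,000 IOps at 2.5 W
# works out to 3,600 IOps per watt
print(iops_per_watt(9_000, 2.5))
```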
This measure may not be the most valuable one for desktop users, or even for folks considering running an SSD in their notebooks (there isn’t a huge difference in power draw between most SSDs), but it’s an important consideration for the sort of multi-drive arrays one might find in high-performance servers and workstations. Power consumption adds up when you’re running multiple drives in RAID, and you’ve gotta dissipate the heat generated by every watt consumed.
Because our performance baseline is a 4,200-RPM mobile drive, the baseline power consumption is quite low. Even with dismal performance, the ancient Travelstar still scores higher on the IOps-per-watt scale than the Caviar Black, which consumes several times more wattage. Rather than switching baselines, we’ve simply left the Black out of these results.
We shouldn’t be surprised by the strong showing of the SandForce drives here. After all, they do offer the highest IOMeter transaction rates. Their power consumption is also quite competitive with that of the other SSDs.
Interestingly, a number of SSDs score quite poorly here. The Scorpio Blue and SSDNow have painfully low IOMeter transaction rates to begin with, so their low power consumption is of little help. Conversely, it’s the relatively high power consumption of the Intel SSDs that hurts their standing on our efficiency scale.
If you discount the SandForce SSDs, the RealSSD and Nova are the next two in line. The C300 pushes more IOps at a lower cost per gigabyte.
For those who have diligently sifted through each page of scatter plots and hopefully not-too-tedious analysis, congratulations. Seriously, you deserve a medal or something. Those who skipped ahead to this page will receive no such praise. They will, however, get a nice summary of performance with a single overall score.
Using a single number to represent a drive’s performance across a range of different benchmark tests can be tricky business. After reading through numerous papers on the subject, we’ve settled on calculating a harmonic mean of all the results you’ve seen today. A harmonic mean can be useful for quantifying overall performance for a benchmark suite when individual test results can be compared to a reference baseline, and it’s not prone to being skewed by the fact that we have performance differences of several orders of magnitude in some cases. We just happen to have a full suite of results normalized to our ancient Travelstar baseline, and as you’ll see in a moment, the harmonic mean generates an overall score that nicely tracks with the value propositions we’ve observed thus far.
I should note that we originally intended to use an arithmetic average to calculate our overall score. However, this simple mean was skewed by some of the enormous performance gaps in IOMeter and HD Tune’s random access time tests, which are several orders of magnitude larger than the performance deltas in the other tests. The resulting overall score didn’t track with expectations based on the value we’ve already quantified in individual tests. Weighting the average to account for those orders-of-magnitude differences would have been arbitrary at best, so we’ve settled on a harmonic mean, which seems to provide useful results.
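The skew described above is easy to demonstrate. In this sketch, the scores are invented, baseline-normalized results: a couple of outliers several orders of magnitude above the rest (think IOMeter and random access times) drag the arithmetic mean far from the typical result, while the harmonic mean stays representative:

```python
from statistics import harmonic_mean, mean

# Illustrative, made-up scores normalized to a slow baseline drive.
# The two large values stand in for tests with orders-of-magnitude
# performance gaps over the baseline.
scores = [1.8, 2.1, 2.4, 1.5, 120.0, 95.0]

# The arithmetic mean is dominated by the two outliers...
print(mean(scores))
# ...while the harmonic mean stays close to the typical result.
print(harmonic_mean(scores))
```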
Our overall score includes individual results for DriveBench and IOMeter rather than the averages we presented in the first set of value graphs. There are five DriveBench multitasking loads and four IOMeter access patterns, giving us a total of 19 test results from which to calculate the harmonic mean. This collection of tests is a little biased towards random access patterns rather than sequential transfers, but we think that makes perfect sense for those contemplating an SSD for an OS and applications drive. The power-efficiency results have been left out to keep this a strictly performance-per-dollar affair.
Because they had to sit out at least one of the tests that make up our overall average, the PX-128M1S and X25-V RAID array haven’t been included in the graphs below. We wouldn’t recommend the former, anyway, and with two drives at its disposal, the RAID config would’ve had an unfair advantage; you know, like it’s had all day already.
The RealSSD easily offers the highest overall performance of the lot. Corsair’s Nova V128 sneaks into second place ahead of a trio of SandForce-based offerings, followed by the X25-M and the Force F120. How do things shake out when this overall score is combined with our value calculations?
Quite well, at least for the RealSSD. On a cost-per-gigabyte basis, the Crucial drive clearly offers better value than any of the other SSDs. It’s one of the least expensive drives on that scale, and its overall performance is unmatched by even the priciest options.
The X25-M looks pretty good here, as well. It’s not as fast as the RealSSD, but you’ll pay a little bit less per gigabyte. The Nova is also an interesting option, offering better performance for a little extra scratch.
Paying more per gigabyte for the SandForce drives doesn’t make a whole lot of sense in this context. They lie between the Nova and X25-M on the performance scale but cost more per gigabyte than both. Even the F120 has a higher cost-per-gigabyte than the Nova and X25-M, and it’s slower than both of them.
Despite strong showings in quite a few tests, the SSDNow doesn’t look all that attractive overall. Neither does the X25-V or the SiliconEdge Blue.
Next, we’ll mix things up with a look at performance per dollar in the context of total system cost. The aim here is to determine whether spending a little (or a lot) more on an SSD makes sense when the price premium is absorbed as part of the cost of a complete system. The step up from a $300 drive to a $400 one is daunting if that’s all you’re buying, but it’s relatively less imposing if you’re simply nudging a $1300 build up to $1400.
Unfortunately, the higher-capacity drives will unavoidably be at a disadvantage with these calculations due to their higher prices. With a street price dangerously close to $100, the X25-V won’t be adding much to the bill for our complete system, giving the Intel drive a distinct edge over the competition. Just remember that with only 40GB, the X25-V isn’t adding much to the system’s storage capacity, either.
For our system price calculations, we’ve used our test rig as the inspiration for a base config, to which the price of each drive will be added. Our example system includes a Core i5-750, a P55-based ASUS P7P55D-E motherboard, 4GB of DDR3-1333 memory, a Caviar Black 1TB drive for mass storage, a passively-cooled Radeon HD 4850, Antec’s Sonata III enclosure, and Windows 7. Its base price is $939.93.
Here’s the big payoff, folks. In the context of a larger system purchase, one can indeed justify stepping up to an SSD. In fact, the SSDs dominate this metric, which admittedly focuses on storage performance rather than overall system speed. The numbers speak for themselves, though: going with an SSD can make sense if you’re already plunking down a sizeable chunk of change on a system.
Having said that, one should note that using the total system price in our value calculations penalizes the RealSSD and the SiliconEdge because they are more expensive drives due to their higher capacities. The SiliconEdge’s mediocre overall performance is much more of a hindrance, though. At least the C300 has chart-topping performance to fall back on.
The sweetest spot on this particular scale appears to be occupied by the Nova V128. This Indilinx-based drive has the second-best overall performance and a lower price than the X25-M. The Agility and Vertex are cheaper still, although they’re both a little bit slower overall.
One should keep differences in capacity in mind when comparing the X25-M to the Agility, Vertex, and Force F100. The SandForce drives score a little higher on our overall performance scale, and the OCZ models cost less than the Intel drive. However, the X25-M packs 160GB, while that lead group of SandForce offerings only serves up 100GB of storage capacity each.
In one more nefarious twist, we can attempt to correct for the capacity differences by factoring in cost per gigabyte, as we’ve done all along. By doing so in the context of system prices, we get a complicated metric (performance per dollar per gigabyte, using the total system cost) that gives us a rather different indicator of the value propositions offered by these SSDs.
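Both system-level value metrics can be sketched with a few lines of arithmetic. The base price comes from our example config; the drive price, capacity, and overall score below are purely illustrative:

```python
# Base price of the example system described earlier.
BASE_SYSTEM = 939.93

def perf_per_system_dollar(overall_score: float, drive_price: float) -> float:
    """Overall performance divided by the total system cost."""
    return overall_score / (BASE_SYSTEM + drive_price)

def perf_per_system_dollar_per_gb(overall_score: float,
                                  drive_price: float,
                                  capacity_gb: float) -> float:
    """Dividing total system cost by the drive's capacity attempts to
    correct for capacity differences between, say, 40GB and 256GB SSDs."""
    return overall_score / ((BASE_SYSTEM + drive_price) / capacity_gb)

# e.g. a hypothetical 128GB drive at $350 with an overall score of 12
print(perf_per_system_dollar(12.0, 350.0))
print(perf_per_system_dollar_per_gb(12.0, 350.0, 128))
```

Note how the second metric rewards capacity: at a fixed system price, doubling the gigabytes doubles the score, which is exactly why the 40GB X25-V fares so poorly on this scale.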
Nothing has changed on the performance scale, but the scatter plot looks very different thanks to movement along the X axis. The RealSSD is very close to the ideal upper-left corner, offering easily the best performance at a lower price per gigabyte than any other SSD. None of the other SSDs even come close.
Behind the C300, the X25-M and Nova can both make a case for second best. The Nova is the faster of the two, while the Intel drive is the cheaper option. Both are preferable to the SandForce SSDs, which sit between the Nova and the X25-M on the performance scale but cost quite a bit more than both of them.
Way off to the right is the X25-V, whose paltry 40GB capacity does the drive no favors. The higher-capacity SSDs do have an advantage with this particular value calculation because we’re dividing the total system cost by the capacity of each drive.
As you’ve no doubt discovered by wading through pages and pages of scatter plots, value can be a tricky thing to evaluate. We may be able to quantify capacity or performance per dollar with some simple math, but that doesn’t tell the whole story. Truthfully, neither do our scatter plots. We’re bound by the data set that we have, and although it’s dominated by comparable drives between 100 and 160GB, our handful of lower- and higher-capacity entries creates some problems when we consider the cost of a complete system.
We can, however, draw some general conclusions about the value proposition offered by today’s solid-state drives. Because SSDs aren’t substantially faster than their mechanical counterparts with sequential transfers, it’s hard to make a convincing argument for them on that front. For the most part, SSDs do have a huge performance advantage with randomized access patterns, and it’s there that solid-state drives are able to overcome their comparatively high cost per gigabyte. As our multitasking workloads in particular show, there’s a lot of performance to be gained by moving to an SSD. That’s why we’ve been recommending them as OS and application drives for some time.
Entry into the next tier of storage performance definitely isn’t cheap. Expect to pay more than $300 for a decent 128GB SSD, and you’re probably going to need a mechanical drive on top of that to provide additional storage capacity for the applications and files that don’t require fast access or simply won’t fit on the SSD. Three bills is a lot to drop on a system drive for a budget build, but the more you spend on your rig, the more an SSD starts to make sense. Once you push into the thousand-dollar range for a basic build, you should probably be at least considering an SSD if you care about storage performance. Remember that, beyond our price-performance numbers, SSDs have some nice advantages in terms of power efficiency, as we’ve seen, and they’re completely silent, too.
The value picture for SSDs in laptops is even more complicated than our scatter plots can map. On the plus side are the many virtues of solid-state storage: low power draw, shock tolerance, high performance, effective silence, and a common 2.5″ form factor suited for most mobile systems. On the negative side is a stark reality: few laptops have room for a second drive, so the SSD’s total capacity (and thus its cost per gigabyte) may be even more of a constraint than in a desktop. We still like SSDs for laptops, but you’ll want some form of networked or external storage to assist with the heavy lifting.
Several of the SSDs we’ve considered stand out as offering the most compelling value propositions. Crucial’s SSD is our most recent Editor’s Choice award winner, and though the drive’s 256GB capacity and correspondingly high price did it no favors when we looked at total system cost, the C300 is extremely fast and a solid deal overall thanks to a competitive price per gigabyte. I would expect the 128GB model to be similarly attractive, although its write performance will almost certainly be slower than that of the 256GB drive we tested.
Corsair’s Nova V128 didn’t top the field in any of our performance tests, but it didn’t stumble, either. As evidenced by its impressive overall score, the Nova offers excellent all-around performance (as should equivalent Indilinx-based drives from other brands). Couple that with a reasonable cost per gigabyte, and the V128 has a very appealing value proposition.
The Intel X25-M also looks good, thanks largely to its quick access times and strong showings in DriveBench and IOMeter. Intel has steadily lowered the X25-M’s price over time, allowing the drive to maintain competitive per-gigabyte pricing while offering a small step up in capacity to 160GB. Folks who don’t want to shell out for a 256GB drive but who doubt whether 128GB will be enough capacity would do well to consider the X25-M.
I’m a little torn on the SandForce drives because I think the technology behind the controller has real promise. However, this controller’s performance is only average in the tests we think are most indicative of desktop usage, making the higher cost per gigabyte associated with SandForce-based drives tough to swallow. If you’re looking for an SSD for server or workstation applications, our IOMeter results suggest any of the SandForce drives would be an excellent choice. The Agility 2 is clearly the best value of that lot thanks to its comparatively lower price.
Ultimately, I think the best SSD values are provided by the Nova V128 and other Indilinx-based drives, the RealSSD C300, and the X25-M. Personally, I’d go with the C300, not because it offers the most performance per dollar, but because it’s the fastest and most advanced of the three. A desire for value might be deeply ingrained within the enthusiast community, but that’s certainly not the only consideration that should guide your purchasing decisions.