SSD performance scaling across the spectrum

Let’s be honest. In the PC world, size matters. This is true not only for the height of ATX towers, but also for the thickness of ultrabooks. The size of one’s SSD is important, too. In addition to defining how many applications, games, and files can enjoy the speedy access times of solid-state storage, an SSD’s capacity plays a large role in determining its overall performance.

Drive makers admit as much on their datasheets, which routinely list slower performance specifications for lower rungs on the capacity ladder. Writes are affected more than reads, the MB/s and IO/s ratings say, and models around 256GB are typically the fastest of each respective breed. As one might expect, it’s these higher capacity points that are first sampled to the press. They’re not the capacities most folks end up buying, though.

The price of flash memory has fallen in recent years, yet high-capacity SSDs remain expensive luxuries at $300 and up. Drives in the 120-128GB range are much more attainable, with street prices comfortably below $200. 64GB variants are easily affordable at around $100, so they’re especially tempting as system drives for desktops and streamlined notebooks.

We’ve already explored how contemporary 120-128GB SSDs compare and how their performance scales up to higher-capacity models. Today, we’re moving in the opposite direction with a stack of 64GB and smaller SSDs. We’ve tested these Blue Light Specials to complete the performance scaling picture. Our test results now run the capacity gamut, from 64GB or less to 256GB or more. We’ve also thrown pairs of 64GB drives into RAID 0 arrays to see whether it’s worth doubling up on cheaper SSDs or splurging on a single, higher-capacity model.

So, yeah, we have a mountain of performance data spread across pages of pretty graphs, plus the usual value analysis to help make sense of it all. And thousands of dollars worth of SSDs photographed in pretentious poses. This article is the culmination of quite literally months of testing in the Benchmarking Sweatshop, and I’d really like to get on with it. Shall we?

And then there were four

There were five drive families represented in our first SSD scaling article, but we’ve had to cut back to four because the Intel 510 Series is only available in 120GB and 250GB capacities. Since it relies on older 34-nm flash memory, the 510 Series is overdue for a 25-nm replacement, anyway. That’s about all I can say on that subject for now—but stay tuned.

                        | Interface | Controller        | NAND               | Cache | Warranty
Corsair Force Series 3  | 6Gbps     | SandForce SF-2281 | 25-nm Micron async | NA    | 3 years
Corsair Force Series GT | 6Gbps     | SandForce SF-2281 | 25-nm Intel sync   | NA    | 3 years
Crucial m4              | 6Gbps     | Marvell 88SS9174  | 25-nm Micron sync  | 128MB | 3 years
Intel 320 Series        | 3Gbps     | Intel PC29AS21BA0 | 25-nm Intel        | 64MB  | 5 years

Without the Intel 510 Series, we’re still left with a range of drives covering the most popular configurations available today. Two of these configs are exclusive: the 320 Series is only available from Intel, and the m4 uses a unique mix of chips offered by Crucial alone. Corsair’s Force SSDs are a little different. They represent a couple of SandForce-based configurations also offered by a number of other drive makers, including OCZ and Kingston.

The Force Series 3 is the slower of the two SandForce configs due to its asynchronous NAND, which isn’t as exotic as the synchronous stuff found in the GT model. Both drives rely on the same SandForce SF-2281 controller. We examined this chip in depth in our early look at the OCZ Vertex 3, so I won’t burden you with all of the details here. It’s worth noting that SandForce uses an on-the-fly compression scheme to speed write performance and reduce NAND wear. Unlike other SSD controllers on the market, the SF-2281 doesn’t make use of separate DRAM cache memory. Otherwise, the chip has a 6Gbps Serial ATA interface and eight memory channels.
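
Because the SF-2281 compresses data on the fly, its write performance depends partly on what you feed it; payloads full of zeroes flatter the controller, while random data gives it nothing to squeeze. As a rough illustration (our own sketch, not SandForce’s algorithm), the snippet below builds one buffer of each type and uses zlib as a stand-in to show how much room a compressing controller has to work with.

```python
import os
import zlib

BUF_SIZE = 4 * 1024 * 1024  # 4MB test payload

# Highly compressible data is SandForce's best case; random data is its worst.
compressible = b"\x00" * BUF_SIZE
incompressible = os.urandom(BUF_SIZE)

for name, buf in (("compressible", compressible), ("incompressible", incompressible)):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: shrinks to {ratio:.1%} of its original size")
```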

Marvell provides the controller in Crucial’s m4. The 88SS9174 chip is familiar from the Intel 510 Series and the old Crucial C300, but in the Crucial m4, it’s paired with the latest 25-nm flash. This memory comes from the same synchronous class of NAND found in the Force GT, making the m4 a similarly premium solution. As we explained when we first looked at the m4, the Marvell chip matches the SandForce controller with a 6Gbps interface and eight memory channels. There’s no compression voodoo at work in it, though.

The Crucial m4 has double the cache memory of Intel’s 320 Series, whose 3Gbps SATA interface hails from the previous generation. Indeed, the origins of the Intel PC29AS21BA0 controller at the heart of the 320 Series can be traced all the way back to the original X25-M, which came out more than three years ago. The chip has ten memory channels, although due to their vintage, each one is likely slower than a modern equivalent. Nevertheless, the drive is outfitted with new 25-nm NAND that we suspect is of the asynchronous variety. (Intel keeps certain details about the 320 Series to itself.)

Since our storage test rigs feature 6Gbps SATA controllers, the Intel 320 Series has a pretty big handicap right out of the gate. Intel isn’t selling the drive as a performance leader, instead focusing on reliability. This is the only drive of the lot with five years of warranty coverage—two years more than the industry norm.

Countin’ dies

With 8-10 memory channels each, the controllers behind our stack of SSDs have plenty of internal parallelism. Saturating those parallel data pathways is the key to exploiting each controller’s performance potential. We couldn’t get SSD makers to go into too many specifics about what’s required to keep each controller’s memory channels at full utilization, but the number of NAND dies is an integral component of the equation.

Solid-state drives split their NAND dies between multiple physical packages. The size of the NAND dies can vary, as can the number of dies per package. To help you get a sense of how the various SSDs and capacity points stack up, we’ve whipped up a handy chart detailing each model’s die configuration.

                        | Size  | NAND dies | Dies/package | Price
Corsair Force Series 3  | 60GB  | 8 x 64Gb  | 1            | $95
                        | 120GB | 16 x 64Gb | 1            | $170
                        | 240GB | 32 x 64Gb | 1 or 2       | $315
Corsair Force Series GT | 60GB  | 8 x 64Gb  | 1            | $110
                        | 120GB | 16 x 64Gb | 1            | $190
                        | 240GB | 32 x 64Gb | 1 or 2       | $355
Crucial m4              | 64GB  | 16 x 32Gb | 2            | $105
                        | 128GB | 32 x 32Gb | 2            | $180
                        | 256GB | 32 x 64Gb | 2            | $345
Intel 320 Series        | 40GB  | 6 x 64Gb  | 1            | $93
                        | 120GB | 16 x 64Gb | 1 or 2       | $200
                        | 300GB | 40 x 64Gb | 2            | $530

Let’s start with the easy ones: the Corsair Force 3 and Force GT, which use the same die configurations at each capacity point. All of the dies weigh in at 64Gb, so the number of them doubles with each step up the ladder. We’ll be making stops at 60GB, 120GB, and 240GB. Both of these 240GB drives come in two configurations: one with 32 dies spread across the same number of physical packages, and another with two dies per package. Corsair assures us the performance of these die configs is identical. For what it’s worth, our Force 3 240GB sample has one die per package, while the Force GT we tested has two.

Despite slight differences in packaging, the Corsair Force SSDs should give us a good sense of how the SandForce controller’s performance scales up with the number of NAND dies. Clearly, there’s something to be gained from having more than one die per memory channel. The 60GB Force SSD has enough NAND dies to match the eight channels in the SandForce controller, but it’s tagged with lower performance ratings than the 120GB and 240GB drives.
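
To frame the table above in terms of controller parallelism, here’s a quick sketch that divides each configuration’s die count by its controller’s channel count (eight for the SandForce and Marvell chips, ten for Intel’s). The die and channel figures come straight from this article; only the division is ours.

```python
# (drive family, capacity, NAND dies, controller memory channels)
configs = [
    ("Force 3/GT", "60GB",  8,  8),
    ("Force 3/GT", "120GB", 16, 8),
    ("Force 3/GT", "240GB", 32, 8),
    ("Crucial m4", "64GB",  16, 8),
    ("Crucial m4", "128GB", 32, 8),
    ("Crucial m4", "256GB", 32, 8),
    ("Intel 320",  "40GB",  6,  10),
    ("Intel 320",  "120GB", 16, 10),
    ("Intel 320",  "300GB", 40, 10),
]

for drive, cap, dies, channels in configs:
    print(f"{drive} {cap}: {dies} dies / {channels} channels = {dies / channels:.1f} dies per channel")
```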

The scaling picture will be a little more complicated with the Crucial m4. This drive uses 32Gb NAND dies to serve the 64GB and 128GB capacity points, but the 256GB unit is equipped with 64Gb dies. As a result, the 64GB drive has 16 dies, while its higher-capacity brothers have 32 dies each. Any performance deltas between the 128GB and 256GB versions of the m4 will be due to differences in the NAND dies themselves rather than their number. All of the m4s squeeze two dies per package, so those higher-capacity models also have the same package counts.

Admittedly, our selection of Intel 320 Series SSDs doesn’t map perfectly to the capacity points we’ve collected for the others. We’ve reached all the way down to a 40GB model at the low end and up to a 300GB monster at the high end. The 40GB drive is only marginally cheaper than its 60GB and 64GB competition, though. While the 300GB model costs considerably more than our 240GB and 256GB examples, it’s the only 320 Series north of 180GB.

Like the Force drives, the Intel 320 Series uses 64Gb NAND dies throughout. The fact that the 40GB model sports four fewer NAND dies than the controller has memory channels probably won’t help performance. The 120GB version has an additional 10 NAND dies and a slightly unconventional configuration. There are 10 NAND packages on the drive but 16 flash dies, so some of the packages have one die, while others pack two.

We’d feel worse about throwing the Intel 320 Series into the cage with a bunch of 6Gbps rivals if Intel were offering its drives at substantial discounts. Despite being based on an older controller architecture that uses a dated SATA interface, the 320 Series 120GB costs more than the competition.

Because of the capacity differences involved, it’s better to look at each drive’s cost per gigabyte. In the chart below, we’ve combined Newegg prices with the amount of storage capacity available to end users. We’ve included Western Digital’s Caviar Black 1TB, one of our favorite 7,200-RPM desktop drives, for reference.
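
The math behind the chart is simple division. The sketch below uses the Newegg prices from the die-count table on the previous page; note that the chart itself divides by the capacity available to end users after formatting and overprovisioning, so its figures sit slightly higher than these nominal-capacity numbers.

```python
# (Newegg price in dollars, nominal capacity in GB) from the die-count table
drives = {
    "Corsair Force 3 60GB":   (95, 60),
    "Corsair Force 3 240GB":  (315, 240),
    "Corsair Force GT 60GB":  (110, 60),
    "Crucial m4 64GB":        (105, 64),
    "Crucial m4 256GB":       (345, 256),
    "Intel 320 Series 40GB":  (93, 40),
    "Intel 320 Series 300GB": (530, 300),
}

for name, (price, gigabytes) in drives.items():
    print(f"{name}: ${price / gigabytes:.2f} per GB")
```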

From a cost-per-gigabyte perspective, the Intel 320 Series is a pretty lousy deal. The 40GB drive is the most expensive of the bunch by a fair margin over the next-closest alternative, a 60GB Force GT that should be quite a bit faster. The higher-capacity flavors of the Intel 320 Series don’t look all that good on this scale, either.

Surprisingly, the highest-capacity Force 3 and Crucial m4 models offer the most storage per dollar. The Corsair SSD is just 31 cents shy of the elusive dollar-per-gigabyte threshold, while the Crucial m4 runs four cents more. In both cases, the 120-128GB variants will set you back about $1.40 per gigabyte.

Although the Intel 320 Series and the Force GT don’t follow the same behavior, they aren’t excepted from the other trend our cost-per-gigabyte analysis reveals. Across the board, the lowest-capacity SSDs cost more per gigabyte than their higher-capacity counterparts. Budget SSDs may offer lower costs of entry, but their value proposition isn’t quite as strong, at least from a capacity perspective. To see how the cheaper drives shake out overall, we’ll now move on to performance.

Test notes and methods

Before dropping you into a deluge of graphs, we’ll take a moment to highlight our testing methods. If you’re already familiar with how we do things around here, feel free to skip ahead to the performance analysis.

We used the same testing methods here as in other recent storage reviews, so the results on the following pages are comparable to the larger data set on display in our OCZ Octane 512GB review. To keep the focus on performance scaling across multiple capacities, we’ve trimmed the field of comparison drives down to a single 3.5″ desktop reference, the Caviar Black 1TB.

In addition to testing the lower-capacity drives on their own, we combined two of each into RAID 0 arrays using the RAID feature of the P67 storage controllers on our test systems. The arrays were configured with 128KB stripe sizes, which is the default for the Intel RAID controller.
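
For reference, here’s a toy illustration of how a two-drive RAID 0 array with 128KB stripes maps a logical offset to a member drive. It isn’t Intel’s implementation, just the striping arithmetic; note how a 4KB request always lands on a single drive, while a large sequential transfer alternates between both members.

```python
STRIPE_SIZE = 128 * 1024  # bytes; the Intel RAID controller's default stripe size
NUM_DRIVES = 2            # our arrays pair two identical SSDs

def locate(logical_offset: int) -> tuple[int, int]:
    """Map a logical byte offset to (member drive index, offset within that drive)."""
    stripe_index = logical_offset // STRIPE_SIZE
    drive = stripe_index % NUM_DRIVES
    drive_offset = (stripe_index // NUM_DRIVES) * STRIPE_SIZE + logical_offset % STRIPE_SIZE
    return drive, drive_offset

for offset in (0, 4096, 128 * 1024, 1024 * 1024):
    drive, drive_offset = locate(offset)
    print(f"logical offset {offset:>8}: drive {drive}, drive offset {drive_offset}")
```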

We should note that the TRIM command used to combat the block-rewrite penalty associated with flash memory doesn’t work with RAID arrays—at least not yet. Intel is working on a driver update that will bring TRIM support to SSD RAID configurations, but a timeline for its release hasn’t been made public. Fortunately, RAID doesn’t affect the garbage collection routines inherent to each SSD controller.

We used the following system configuration for testing:

Processor        | Intel Core i5-2500K 3.3GHz
Motherboard      | Asus P8P67 Deluxe
BIOS revision    | 1850
Platform hub     | Intel P67 Express
Platform drivers | INF update 9.2.0.1030, RST 10.6.0.1022
Memory size      | 8GB (2 DIMMs)
Memory type      | Corsair Vengeance DDR3 SDRAM at 1333MHz
Memory timings   | 9-9-9-24-1T
Audio            | Realtek ALC892 with 2.62 drivers
Graphics         | Asus EAH6670/DIS/1GD5 1GB with Catalyst 11.7 drivers
Hard drives      | Corsair Force Series 3 60GB with 1.3.2 firmware
                 | Corsair Force Series 3 120GB with 1.3 firmware
                 | Corsair Force Series 3 240GB with 1.3.2 firmware
                 | Corsair Force Series 3 120GB RAID with 1.3.2 firmware
                 | Corsair Force Series GT 60GB with 1.3.2 firmware
                 | Corsair Force Series GT 120GB with 1.3 firmware
                 | Corsair Force Series GT 240GB with 1.3.2 firmware
                 | Corsair Force Series GT 120GB RAID with 1.3.2 firmware
                 | Crucial m4 64GB with 0009 firmware
                 | Crucial m4 128GB with 0009 firmware
                 | Crucial m4 256GB with 0009 firmware
                 | Crucial m4 128GB RAID with 0009 firmware
                 | Intel 320 Series 40GB with 4PC10362 firmware
                 | Intel 320 Series 120GB with 4PC10362 firmware
                 | Intel 320 Series 300GB with 4PC10362 firmware
                 | Intel 320 Series 80GB RAID with 4PC10362 firmware
                 | WD Caviar Black 1TB with 05.01D05 firmware
Power supply     | Corsair Professional Series Gold AX650W
OS               | Windows 7 Ultimate x64

Thanks to Asus for providing the systems’ motherboards and graphics cards, Intel for the CPUs, Corsair for the memory and PSUs, Thermaltake for the CPU coolers, and Western Digital for the Caviar Black 1TB system drives.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before almost every component of our test suite. Some of our tests then put the SSDs into a used state before the workload begins, which better exposes each drive’s long-term performance characteristics. In other tests, like DriveBench and FileBench, we induce a used state before testing. In all cases, the SSDs were in the same state before each test, ensuring an even playing field. The performance of mechanical hard drives is much more consistent between factory fresh and used states, so we skipped wiping the Caviar before each test—mechanical drives take forever to secure erase.

  • We run all our tests at least three times and report the median of the results. We’ve found IOMeter performance can fall off with SSDs after the first couple of runs, so we use five runs for solid-state drives and throw out the first two; a short sketch of that reduction follows this list.

  • Steps have been taken to ensure that Sandy Bridge’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the 2500K at 3.3GHz. Transitioning in and out of different power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
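
As promised above, here’s a minimal sketch of how the runs from each test boil down to a single reported number: the median of the kept runs, with the first two discarded for solid-state drives. The scores fed in below are invented purely to show the math.

```python
import statistics

def reported_score(runs: list[float], is_ssd: bool) -> float:
    """Median of the kept runs; SSDs get five runs with the first two thrown out."""
    kept = runs[2:] if is_ssd else runs
    return statistics.median(kept)

print(reported_score([510.0, 495.0, 481.0, 483.0, 479.0], is_ssd=True))   # 481.0
print(reported_score([121.0, 118.0, 119.0], is_ssd=False))                # 119.0
```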

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune — Transfer rates

HD Tune lets us present transfer rates in a couple of different ways. Using the benchmark’s “full test” setting gives us a good look at performance across the entire drive rather than extrapolating based on a handful of sample points. The data created by the full test also gives us fodder for line graphs. To make those more readable, we’ve busted out separate graphs for each SSD family.

All the results in our graphs are color-coded by SSD type. The lone mechanical drive has been greyed out to set it apart those few times it might crawl up from the bottom of the standings.

Funky. The line graphs paint a picture of consistency for the single-drive configurations. There’s very little flutter in the transfer rate across the extent of each SSD’s total capacity—and next to no difference in read speeds between the various capacity points in each family.

The RAID configs exhibit read speeds that oscillate within a span of about 75MB/s. This occurs at a higher frequency in the Corsair Force and Crucial m4 SSDs, while the Intel 320 Series’ peaks and valleys are more widely spaced.

Those saw-tooth patterns for the RAID configs average out to about the same read speeds as the single drives. The top three SSD groups are pretty evenly matched here. The Crucial m4 tops the standings, but it’s only marginally quicker than the Force SSDs.

The Intel 320 Series has much lower read speeds than its 6Gbps competition. At least the pack of Intel drives enjoys a commanding lead over our mechanical reference.

When we switch to writes, the Force SSDs are all over the map. Regardless of the configuration, their write speeds spike violently and regularly. The higher the capacity, the higher the peaks—and the shallower the valleys. We’ve seen this behavior from two generations of SandForce controllers, so it’s nothing new.

The write-speed profiles of the Crucial m4 and Intel 320 Series SSDs look much more sedate. Here, we get our first clear taste of the slower write performance offered at lower capacity points.

At least in HD Tune’s write speed test, the Crucial m4 and Intel 320 Series are more affected by capacity differences than the Corsair Force SSDs. The Force 3’s average write speed increases by 28% when you step up from 60GB to 240GB. Over the same span, the Force GT’s write speed jumps by 33%. Those numbers stand in stark contrast to the Crucial m4, whose write speed more than doubles going from the 64GB model to the 256GB one. Even that gap pales next to the colossal gulf between the extremes of the Intel 320 Series, though.

The SandForce-based Force SSDs are the fastest overall, so their lower capacity points have substantial advantages over the direct competition. The Crucial m4s lag behind the Force SSDs, while the Intel 320 Series brings up the rear. This time, the Intel 40GB variant is even slower than the Caviar Black.

So far, the RAID configs haven’t had much to offer. In this write speed test, all four of ’em are slower than single-drive setups that offer the same capacity.

HD Tune — Burst speeds

HD Tune’s burst speed tests are meant to isolate a drive’s cache memory.

With only a few exceptions, there’s little difference in burst performance between the different SSD capacity points. The Force drives have the edge overall and a sizable advantage in the write speed test.

HD Tune — Random access times

In addition to letting us test transfer rates, HD Tune can measure random access times.

I debated pulling the mechanical drive from these results, but it does a really good job of illustrating the differences in access times between mechanical and solid-state storage. Compared to the gap between HDDs and SSDs, the differences in access times between the solid-state solutions are negligible.

In HD Tune’s 4KB random read test, the SSDs all fall between 0.03 and 0.07 ms. The drives are grouped by family, and there’s no difference in access times between the different capacity points.

Things change in the 1MB test, which has all the single-drive configs but the Intel 320 Series locked in a tie. The Intel SSDs are a little bit slower, while the RAID configs (the non-Intel ones, anyway) are a little bit faster.

What’s true for random reads is also true for writes, at least at the 4KB transfer size. In the 1MB test, the Force SSDs dominate but offer very little differentiation within their ranks. Only the RAID setups distance themselves from the single-drive configs.

The 1MB test is enough to coax some performance scaling out of the Crucial m4, whose access time drops with each step up the capacity ladder. The delta between the 64GB and 128GB drives is particularly substantial. The same is true for the transition between the 40GB and 120GB Intel 320 Series SSDs. That family, too, enjoys quicker access times as capacity increases.

TR FileBench — Real-world copy speeds

Concocted by our resident developer, Bruno “morphine” Ferreira, FileBench runs through a series of file copy operations using Windows 7’s xcopy command. Using xcopy produces nearly identical copy speeds to dragging and dropping files using the Windows GUI, so our results should be representative of typical real-world performance. We tested using the following five file sets—note the differences in average file sizes:

        | Number of files | Total size | Average file size
Movie   | 6               | 4.1GB      | 701MB
RAW     | 101             | 2.32GB     | 23.6MB
MP3     | 549             | 3.47GB     | 6.48MB
TR      | 26,767          | 1.7GB      | 64.6KB
Mozilla | 22,696          | 923MB      | 39.4KB

The names of most of the file sets are self-explanatory. The Mozilla set is made up of all the files necessary to compile the browser, while the TR set includes years worth of the images, HTML files, and spreadsheets behind my reviews.
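
For the curious, driving and timing a copy along these lines is straightforward. The sketch below is a simplified stand-in for FileBench rather than the tool itself; the source and destination paths are hypothetical placeholders, and the MP3 set’s size comes from the table above.

```python
import subprocess
import time

SOURCE = r"D:\filesets\mp3"     # hypothetical staging location for the file set
DEST = r"E:\copytest\mp3"       # hypothetical destination on the drive under test
SET_BYTES = 3.47 * 1024 ** 3    # total size of the MP3 set per the table above

start = time.perf_counter()
# /s /e copy the directory tree, /i treats the destination as a folder,
# /q /y keep xcopy quiet and suppress overwrite prompts
subprocess.run(["xcopy", SOURCE, DEST, "/s", "/e", "/i", "/q", "/y"], check=True)
elapsed = time.perf_counter() - start
print(f"copied in {elapsed:.1f} s, roughly {SET_BYTES / elapsed / 1024 ** 2:.0f} MB/s")
```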

To get a sense of how aggressively each SSD reclaims flash pages tagged by the TRIM command, we’ve run FileBench with the solid-state drives in two states. We first tested them in a fresh state after a secure erase. The SSDs were then subjected to a 30-minute IOMeter workload, generating a “tortured used” state ahead of another batch of copy tests. We haven’t found a substantial difference in the performance of mechanical drives between these states. However, because they don’t support TRIM, our RAID configs will be particularly challenged by the used-state tests.

Carnage ensues for the used-state RAID configs, which offer as little as one eighth the performance of their fresh states. The delta between fresh and used-state RAID configs is particularly wide with the larger files of the movie, RAW, and MP3 sets. Those gaps narrow considerably with the smaller and more numerous files of the TR and Mozilla sets, though.

If we just look at the single-drive results, it’s clear that capacity plays a major role in copy performance. For every SSD family in virtually every file set, the higher-capacity models post faster copy speeds. As in the RAID results, the magnitudes of the gaps track roughly with the sizes of the files.

In general, the Crucial m4 does well with larger files, while the Corsair Force SSDs dominate with smaller ones. The Force GT is consistently faster than its sibling. Another consistent trend is the sluggish performance of the Intel 320 Series, especially in the TR and Mozilla sets.

TR DriveBench 1.0 — Disk-intensive multitasking

TR DriveBench allows us to record the individual IO requests associated with a Windows session and then play them back as fast as possible on different drives. We’ve used this app to create a set of multitasking workloads that combine common desktop tasks with disk-intensive background operations like compiling code, copying files, downloading via BitTorrent, transcoding video, and scanning for viruses. The individual workloads are explained in more detail here.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score with each multitasking workload. DriveBench doesn’t play nicely with RAID configurations, so our arrays will have to sit out this and the next round. They’ll be back.

SSD capacity has a big impact on overall DriveBench performance. The only real exceptions are the Crucial m4 128GB and 256GB drives, which are very closely matched. Remember, those two models both have 32 NAND dies. The other SSDs hit 120GB with fewer NAND dies than they use to serve higher capacity points.

Although the performance drop associated with the Crucial m4’s step down to 64GB is quite large, the budget Crucial drive is still faster than the 60GB Force SSDs. Overall, the 60-64GB drives aren’t that much slower than their 120-128GB counterparts. They certainly don’t suffer as much as the Intel 320 Series 40GB, which manages only about a third the performance of the 120GB drive.

Let’s break down DriveBench’s overall score into individual test results to see if we can find any interesting subplots.

The file copy test seems to be the most dependent on SSD capacity, which should come as no surprise after scrolling through our FileBench results. There’s virtually no difference between the 128GB and 256GB Crucial m4 drives in that test, and in the compile test, the 128GB model is the faster of the two.

All of the other results track closely with the overall averages. Note that in most cases, the 60-64GB Corsair Force and Crucial m4 SSDs are competitive with the 120GB Intel 320 Series.

TR DriveBench 2.0 — More disk-intensive multitasking

As much as we like DriveBench 1.0’s individual workloads, the traces cover only slices of disk activity. Because we fire the recorded I/Os at the drives as fast as possible, the SSDs also have no downtime during which to engage background garbage collection or other optimization algorithms. DriveBench 2.0 addresses both of those issues with a much larger trace that spans two weeks of typical desktop activity peppered with multitasking loads similar to those in DriveBench 1.0. We’ve also adjusted our testing methods to give solid-state drives enough idle time to tidy up after themselves. More details on DriveBench 2.0 are available on this page.

Instead of looking at a raw IOps rate, we’re going to switch gears and explore service times—the amount of time it takes drives to complete an I/O request. We’ll start with an overall mean service time before slicing and dicing the results.

Across the board, the higher-capacity SSDs offer lower mean service times in DriveBench 2.0. Within each drive family, the performance delta between the two highest capacities is much smaller than it is between the two lowest capacities. Case in point: the Force GT, whose 240GB variant enjoys a 20-ms edge over the 120GB drive, which in turn sits a full 45 ms ahead of the 60GB model.

The Corsair Force SSDs come out on top overall, with the synchronous Force GT configuration leading the async Force 3 throughout. The Intel 320 Series surprisingly ekes out a victory over the Crucial m4 at every capacity point. Those two look especially undesirable in their lowest capacities, which have service times more than two times slower than the 60GB Force SSDs.

The higher-capacity SSDs aren’t just quicker with writes; they also have shorter read service times, although the differences there aren’t quite as large as the gaps in write service times. Dropping from a 240-300GB SSD to something in the 120-128GB sweet spot isn’t going to cost you as much performance as stepping from one of those mid-range drives down to a budget model.

The Crucial m4’s write service times are particularly slow, causing the 128GB drive to lag behind the 60GB Force SSDs and the 40GB Intel 320 Series. The 64GB m4’s read service times aren’t all that hot, either, but the higher-capacity models boast read service times second only to those of the pack-leading Force GT.

How’s this for a shocker? Despite having a much higher read service time than any of the SSDs, the Caviar Black turns in quicker write service times than all of the low-capacity SSDs but the Force GT.

There are millions of I/O requests in this trace, so we can’t easily graph service times to look at the variance. However, our analysis tools do report the standard deviation, which can give us a sense of how much service times vary from the mean.
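
The reduction itself is simple: given per-request service times pulled from the trace, the mean and standard deviation fall out as below. The values here are made up for illustration; a real trace holds millions of them.

```python
import statistics

# Illustrative per-request service times in milliseconds (not actual trace data)
service_times_ms = [0.21, 0.19, 0.25, 4.80, 0.22, 0.18, 0.31, 0.20]

mean = statistics.fmean(service_times_ms)
spread = statistics.pstdev(service_times_ms)  # how far service times stray from the mean
print(f"mean service time: {mean:.2f} ms, standard deviation: {spread:.2f} ms")
```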

Our overall scaling trend continues. The higher-capacity SSDs offer more consistent service times than their smaller siblings, which are particularly prone to variance in their 40-64GB flavors. The write requests in this two-week trace really flummox the Crucial m4, whose comparatively slow service times are accompanied by wider variance than the competition.

IOMeter

Our IOMeter workloads feature a ramping number of concurrent I/O requests. Most desktop systems will only have a few requests in flight at any given time (87% of DriveBench 2.0 requests have a queue depth of four or less). We’ve extended our scaling up to 32 concurrent requests to reach the depth of the Native Command Queuing pipeline associated with the Serial ATA specification. Ramping up the number of requests gives us a sense of how the drives might perform in more demanding enterprise environments.
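
To make the queue-depth ramp concrete, here’s a rough sketch (not IOMeter itself) that fires random 4KB reads at a pre-made test file through a thread pool of increasing size. The file path is a placeholder, and a real benchmark would bypass the OS cache, which this toy does not.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "testfile.bin"   # hypothetical large file on the drive under test
BLOCK = 4096                 # 4KB transfers, the common random I/O size
REQUESTS = 20000

def random_read(path: str, blocks: int) -> None:
    # Each worker opens its own handle so concurrent seeks don't trample each other.
    with open(path, "rb", buffering=0) as f:
        f.seek(random.randrange(blocks) * BLOCK)
        f.read(BLOCK)

blocks = os.path.getsize(TEST_FILE) // BLOCK
for depth in (1, 2, 4, 8, 16, 32):   # ramp toward the 32-deep NCQ limit
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=depth) as pool:
        for _ in range(REQUESTS):
            pool.submit(random_read, TEST_FILE, blocks)
    print(f"queue depth {depth:2d}: about {REQUESTS / (time.perf_counter() - start):.0f} IOps")
```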

Things can get a little crowded with this many results, so rather than mashing everything together, we’ve sorted the SSD families into individual graphs and will look at each one in turn.

The Corsair Force 3 Series offers higher transaction rates as capacity increases. Surprisingly, there’s a bigger difference in performance between the 120GB and 240GB models than there is between the two smaller sizes. The gaps are the widest in the file server, workstation, and database tests, all of which offer a mix of reads and writes. The web server access pattern is made up exclusively of read requests.

IOMeter marks the return of our RAID configs, which fare well in IOMeter overall. The 120GB Force 3 array boasts higher transaction rates than a single-drive config with the same capacity. In the web server test, the RAID array even tops the 240GB Force 3.

Curiously, the synchronous Force GT behaves differently in a similar RAID configuration. It challenges and even beats the 240GB drive across most of the load levels in the file server, database, and workstation tests. However, in the web server test, the RAID array has slightly lower transaction rates than the 120GB model.

Among single-drive configurations, the Force GT’s transaction rates increase as we’d expect them to. The 60GB and 120GB variants are closely matched when writes are a part of the workload, but the drives are more evenly spaced in the read-only web server test.

As we’ve seen a few times now, the Crucial m4’s 256GB and 128GB flavors offer similar performance—and a definitive edge over the 64GB model. That’s true throughout our IOMeter testing with one exception: the web server access pattern. When tasked only with random reads, the 64GB m4 actually edges out its higher-capacity counterparts.

Teaming 64GB drives in a RAID 0 array results in higher transaction rates in IOMeter’s web server test. The RAID results aren’t as impressive in the other tests, however. Although two 64GB drives offer higher transaction rates than one, they can’t match a single 128GB drive in the file server, database, and workstation tests.

When we asked Intel about how the number of NAND dies might influence the 320 Series’ performance, we were told that higher-capacity versions of the drive might actually be slower with random I/O due to the greater number of addresses required by their capacities. That appears to be true for the 300GB drive, which scores lower than the 120GB unit in three of four tests. The 120GB and 300GB models are both consistently faster than the 40GB model, though.

Combining a couple of the 40GB drives in a RAID 0 array will deliver higher transaction rates. Unless you’re only concerned with read performance, you’re still better off with a single 120GB drive.

Boot duration

Before timing a couple of real-world applications, we first have to load the OS. We can measure how long that takes by checking the Windows 7 boot duration using the operating system’s performance-monitoring tools. This is actually the first time we’re booting Windows 7 off each drive; up until this point, our testing has been hosted by an OS housed on a separate system drive.

The capacity of one’s SSD doesn’t appear to have much of an impact on the speed of the boot process. At their widest, the gaps between single-drive configurations amount to tiny fractions of a second.

The Intel 320 Series is an exception, of course. The 40GB drive loads Windows more than a second slower than the 120GB drive. However, the Intel 320 Series RAID config is the only array that’s not slower than its single-drive siblings.

Level load times

Modern games lack built-in timing tests to measure level loads, so we busted out a stopwatch with a couple of reasonably recent titles.

Given the hand-timed nature of these tests, I don’t want to draw too many conclusions from what are ultimately very close results. Within each SSD family, much less than a second typically separates the fastest capacity from the slowest. The higher-capacity models are rarely the fastest, but they’re not exactly slow, either.

The Intel 320 Series largely trails the other SSDs. The 40GB drive is particularly slow loading Portal 2; it takes at least two seconds longer than the closest budget alternative.

Power consumption

We tested power consumption under load with IOMeter’s workstation access pattern chewing through 32 concurrent I/O requests. Idle power consumption was probed one minute after processing Windows 7’s idle tasks on an empty desktop.

For the most part, the lower-capacity SSDs consume less power. That’s especially true under load; at idle, the differences in power draw between the capacity points narrow.

The lower power draw of our 40-64GB drives isn’t enough to make the RAID configs more power-efficient than their like-sized counterparts. The arrays draw roughly double the power of one of their member drives, while the 120-128GB SSDs consume only slightly more wattage than their budget brethren.

The value perspective

Welcome to our famous value analysis, which adds capacity and pricing to the performance data we’ve explored over the preceding pages. We used Newegg prices to even the playing field, and we didn’t take mail-in rebates into account when performing our calculations.

Our remaining value calculations use a single performance score that we’ve derived by comparing how each drive stacks up against a common baseline provided by the Momentus 5400.4, a 2.5″ notebook drive with a painfully slow 5,400-RPM spindle speed. This index uses a subset of our performance data described on this page of an earlier SSD round-up. Some of the drives were actually slower than our baseline in a couple of the included tests, so we’ve fudged the numbers a little to prevent those results from messing up the overall picture.
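
In other words, each result becomes a ratio against the Momentus 5400.4’s score, the per-test ratios are averaged, and anything that falls below the baseline gets clamped so it can’t skew the average (one plausible reading of the fudging described above, not necessarily the exact adjustment we made). The numbers in the sketch are invented to show the shape of the calculation.

```python
def overall_index(results: dict[str, float], baseline: dict[str, float]) -> float:
    """Average of per-test ratios against the baseline drive, clamped at 1.0 on the low side."""
    ratios = [max(score / baseline[test], 1.0) for test, score in results.items()]
    return sum(ratios) / len(ratios)

# Invented higher-is-better scores purely to illustrate the math
momentus_baseline = {"sequential read": 60.0, "sequential write": 45.0, "random IOps": 90.0}
hypothetical_ssd = {"sequential read": 510.0, "sequential write": 480.0, "random IOps": 4200.0}
print(f"{overall_index(hypothetical_ssd, momentus_baseline):.1f}x the baseline")
```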

Since the RAID configs couldn’t participate in DriveBench, a major component of our overall performance score, they’ll be forced to the sidelines for our value analysis. You’re not missing much. If we take all the DriveBench results out of our overall score, the picture looks like so:

Not pretty. Only the Force GT gets an overall boost out of RAID; doubling down on the other budget SSDs actually results in lower overall scores.

The solid-state arrays look worse here than they have in most of our individual tests thanks to FileBench. We use FileBench’s used-state copy times in our overall score because we think they represent long-term performance better than factory fresh results. Without TRIM support, the RAID arrays offer painfully slow used-state copy times, dragging down their overall scores.

We’re not inclined to completely revamp how we calculate our overall performance index just because RAID arrays can’t deal with TRIM, which is sort of a big deal. Instead, we’ll drop the RAID results to focus on the overall performance of our single-drive configs, DriveBench included.

Our overall score confirms the notion that size indeed matters when it comes to SSD performance. We knew as much from the manufacturer spec sheets, but it’s interesting to see how the different drive families and capacities compare across a wide range of tests.

The Force 3 has the most even gaps between the capacity points we tested. The Force GT and Crucial m4 have similar separation between their 120-128GB and 240-256GB variants, but their lower-capacity models are comparatively slower. So is the Intel 320 Series 40GB, which offers only half the performance of the 120GB drive. That’s still enough to come out ahead of the Caviar Black overall, though.

Now, for the real magic. We can plot this overall score on one axis and each drive’s cost per gigabyte on the other to create a scatter plot of performance per dollar per gigabyte.
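
Reproducing that kind of chart takes only a few lines of matplotlib. The coordinates below are placeholders rather than our measured scores, and which value goes on which axis is simply a presentation choice.

```python
import matplotlib.pyplot as plt

# Placeholder (cost per GB in dollars, overall performance score) pairs, not our measured data
points = {
    "Force GT 240GB":   (1.48, 9.5),
    "Force 3 240GB":    (1.31, 8.9),
    "Crucial m4 64GB":  (1.64, 5.2),
    "Intel 320 40GB":   (2.33, 3.1),
    "Caviar Black 1TB": (0.10, 1.4),
}

fig, ax = plt.subplots()
for name, (cost_per_gb, score) in points.items():
    ax.scatter(cost_per_gb, score)
    ax.annotate(name, (cost_per_gb, score), textcoords="offset points", xytext=(4, 4))
ax.set_xlabel("Cost per gigabyte ($)")
ax.set_ylabel("Overall performance score")
plt.show()
```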

Want to make the case for splurging on a high-capacity SSD? In addition to offering the best performance, they tend to have the lowest cost per gigabyte. That justification is sound for all but the Intel 320 Series, whose mid-range entry costs less per gigabyte than the high-end model.

At every capacity point, the Intel SSDs are simply too expensive to offer good value. The Corsair Force GT’s commanding performance certainly warrants the attached price premium, while the Force 3 looks like a better deal overall than the Crucial m4 at each and every capacity point.

Even with today’s flood-inflated prices, the Caviar Black offers a much lower cost per gigabyte than any SSD. It’s slower than the solid-state drives, of course, and by huge margins.

Although this analysis is helpful when evaluating drives on their own, what happens when we consider their cost in the context of a complete system? To find out, we’ve divided our overall performance score by the total cost of our test system’s components. Those parts total around $800, which also happens to be a reasonable price for a modern notebook.

Despite their lower costs per gigabyte, the high-capacity SSDs still add more to the total cost of a complete system. If you connect the dots for each drive family, you’ll see they now drift off to the right rather than to the left. Even the Intel 320 Series is in step this time around.

As part of the cost of a full build, the minor price differences between the SSD families shrink to near-irrelevance, at least for the 40-64GB and 120-128GB models. Those capacities all line up vertically, making it easy to crown the Force GT at the top of each stack. The picture is a little more muddied for the higher-capacity drives, but the Force 3 and GT are particularly well placed.

Looking at the value equation from this angle puts mechanical storage in a different light. If capacity isn’t a concern, you can get a good SSD for less than the cost of a mid-sized hard drive.

Conclusions

While there are exceptions here and there, the overall results couldn’t be clearer: SSDs get faster as their capacities rise. Solid-state drives are essentially parallel arrays of NAND memory, so that outcome is to be expected. The more NAND dies in the array, the more parallelism the controller can exploit, and the faster the drive. This dynamic is especially true for writes and random I/O, but sequential reads don’t see much benefit from higher capacities.

Based on our results with low-capacity SSDs, it’s better to have more parallelism in the drive than it is to have two drives in RAID. Although the striped arrays were all faster than one of their component SSDs, they didn’t impress versus higher-capacity drives that offer the same amount of total storage. FileBench nicely illustrated how the lack of TRIM support for RAID arrays can have a detrimental impact on performance.

I’m dubious whether the RAID situation would be improved with pairs of mid-range SSDs. RAID configs do have huge potential for read-dominated workloads, but increasing capacities won’t change the TRIM situation. The fact that higher-capacity drives tend to cost less per gigabyte makes the RAID proposition tenuous from a value perspective, too.

We observed the starkest differences in performance between single-drive capacity points in the Intel 320 Series, which is no surprise given the range of sizes we tested. The 40GB drive looks particularly underpowered versus other members of the family, and it’s largely trounced by the other budget drives. Although the 320 Series’ performance scales up dramatically from the 40GB starting point, this isn’t the drive family you want if performance—or cost—is of primary importance.

The Crucial m4 is more competitive overall, but its 64GB iteration is quite a bit slower than its direct rivals or the 128GB m4. There’s less of a performance gap between 128GB and 256GB versions of the m4, likely because the two have the same total number of NAND dies.

There’s more even spacing between members of the Corsair Force SSD family, perhaps because each capacity point represents a doubling in the number of 64Gb NAND dies. These are the fastest drives overall, and they’re the best values, too. With multiple vendors offering essentially the same synchronous and asynchronous SandForce configurations, drive makers have chosen to compete aggressively on price. The end result is a Force 3 line that’s incredibly affordable and a faster Force GT that’s still well priced.

At the end of the day, our results confirm that 120-128GB SSDs really do sit in the sweet spot. The lower-capacity drives can be much slower, and they’re just cheaper rather than being truly better values. You don’t have quite as much to gain stepping up to 240-300GB, either. That said, higher-capacity SSDs are both faster and cheaper per gigabyte. My advice? Buy the biggest one you can afford.

Comments closed
    • kamikaziechameleon
    • 8 years ago

    I wonder if we’ll ever see trim support for raid and or PCI E SSDs.

    • moriz
    • 8 years ago

    i think you screwed up the RAID setups, since my RAID0 intel 320 120GB array hit around 500 MB/s read in pretty much every benchmark, and a burst rate of 3732.8 MB/s.

    there are a ton of benchmarks out there, all of them concluding that SSDs in RAID0 see very large improvements in most areas. sequential speeds effectively double. the only thing that doesn’t get improved are the 4K transfers, because they fall under the stripe size, so those are being done on one drive effectively.

      • indeego
      • 8 years ago

      About matches TR's review: http://www.storagereview.com/intel_ssd_320_raid_review

      Your drive array is outclassed by basically top drive SSDs on 6 Gbps controllers, RAID or not. The only reason would be the "reliability" angle of the 320 series, but you reduce any benefit by RAID0'ing that. In mid-2011 your setup made sense. Today it doesn't. The 320 series also hasn't been the most reliable of Intel's drives to date.

        • moriz
        • 8 years ago

        that’s interesting. your link matches my results, but does not match TR’s. your link shows the 320 series reaching 500MB/s sequential read, whereas the TR review barely reached 400MB/s. unless the 80GB version is THAT much slower than the 120/160GB versions, something’s wrong with TR’s review.

        this isn’t about which setup is more worthwhile; it’s about the accuracy of the review. right now, TR’s RAID0 setup is looking very suspect.

    • Ryszard
    • 8 years ago

    Am I the only one who tripped over the use of ‘dies’ as the plural, instead of dice?

      • Firestarter
      • 8 years ago

      “There are three commonly used plural forms: dice, dies, and die.”
      http://en.wikipedia.org/wiki/Die_(integrated_circuit)

      So, apparently not, but I disagree with you anyway :p

    • sschaem
    • 8 years ago

    Any point to show IO vs time to compile for fire fox?

    TR showed previously that using an SSD or an HHD, compile time are exactly the same.

    https://techreport.com/articles.x/21672/11

    Also 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s drive are at $169, so 8 cents per GB. Thats ~16 time less per GB then the average 128GB SSD.

    One thing that would have been neat is to show SSD/HDD hybrid, and solution like Intel SRT as reference.

    I personally plan to get a 128GB SSD if I can create 2 partition to be used as so: ~100GB for boot/app and a ~28GB for cache over a classic HHD raid1. Anyone done this and care to share results?

    • donkeycrock
    • 8 years ago

    I think a interesting article would be to use different CPU’s(atom, core2, sandy, AMD, fusion, so forth) with one SSD and see if that has much to do with speed.

    • Arclight
    • 8 years ago

    A song to go together with the article:
    http://www.youtube.com/watch?v=kVLiw8kvBRg

    • OneArmedScissor
    • 8 years ago

    Thanks for doing this, especially the power test. I’d been wondering for a long time if putting a high capacity SSD in a laptop was detrimental or not.

    Now can someone please find the SSD that gets you the most battery life? 😉

      • indeego
      • 8 years ago

      None of them contribute significantly to battery life one way or another, they are such an insignificant portion of total system power compared to GPU/CPU/display.

        • Mourmain
        • 8 years ago

        They would if they were installed on a laptop, which is attractive for other reasons too (shock resistance, noise).

          • indeego
          • 8 years ago

          We replace all mechanicals with SSDs and there is very little difference in battery time under real-world use.

        • UberGerbil
        • 8 years ago

        …and the delta compared to the power draw of modern 2.5" HDs (https://techreport.com/r.x/intel-320-series/controller.jpg) is minimal (and in some cases favors the HDs, especially among the 5400rpm drives).

      • Firestarter
      • 8 years ago

      Crucial M4 looks pretty good, but make sure that it’s not too thick!

      • Lans
      • 8 years ago

      Well, depending on what you have and which drive you are swapping to… it could actually go both ways…

      I am not sure if there are even lower power consumption for SSDs but to answer your question directly, if all you are about is power than I found this:

      https://techreport.com/articles.x/16848/13

      Corsair P256: .23 W idle / .87 W load
      Samsung PB22-J: .25 W idle / .92 W load

      Of course those sell for over $400 (it seems)... I haven't been paying attention to every development of SSDs so maybe others might be able to find something that is power efficient and not as expensive... The others in the review also look pretty good but to lazy to list them all.

      From current corp, idle of .5 to 1.4 W and 1.3 to 4.5 W load, you may or may not do better than mechanical drives:

      power usage for some 5400 RPM drives: https://techreport.com/articles.x/17010/13
      power usage for some 7200 RPM drives: https://techreport.com/articles.x/20037/9

      As others pointed out, the difference isn't that large anyhow. For example, 0.5W difference might push 5.5 hours of battery life on 55 Wh battery to 5.8hrs... I would just play around with: (battery size: Wh = voltage * mAh / 1000) / (average battery life) = average power (W) used per hour, so you can see how much lowering average power usage per hour affects battery life to see if upgrade is worth it (being battery life = battery size / average power usage per hour).

    • WhatMeWorry
    • 8 years ago

    Sexy…Sexy SSDs. Whose the tramp in red?

      • entropy13
      • 8 years ago

      Obviously the Corsair Force GT’s. lol

    • Bensam123
    • 8 years ago

    Weird as far as raid is concerned. It would almost be better to put two or three small mechanical hard drives in raid then use a SSD when it comes to that, although they can’t compete on random access and random I/Os.

    To do a bit of conspiracy theorizing, how much you want to bet Intel is holding off on transferring trim commands to raid arrays so they can continue to segment their drives? They’ve known about the issue for quite a bit of time. When they see a problem, they’re usually right on top of it. All their drivers are usually extremely well polished, yet they seem to have omitted this.

    That is strange in itself though… Does anyone know if there are some HBAs that pass trim commands like from 3Ware, LSI, or Adaptec?

    • Krogoth
    • 8 years ago

    I never understood why you need to RAID SSDs.

    RAIDs were engineered around HDDs not SSDs.

    I don’t believe current generation of RAID controllers and software RAID solutions can properly handle most SSD failures, since typically will die gracefully as the silicon starts to burn-in. Where as RAIDs were meant to address complete hardware failure.

    Performance-wise it is simple why SSDs aren’t so hot with being in a RAID. Again RAID controllers and solutions were built around HDDs. HDD RAIDs typically use on-disk cache and if available on-controller memory buffer cache as buffer to help reduce the lag/performance loss due to synchronization between the disks in the array. SSDs don’t have on-disk cache and the TR bench use a software RAID with motherboard disk controller which didn’t have any memory buffer that is common on dedicated RAID controllers.

    I suspect as SSDs become commonplace in the enterprise market. The major RAID vendor will begin to develop RAID solutions that are better suited to deal with SSDs in terms of fault tolerance and performance.

      • Bensam123
      • 8 years ago

      Hardware raid controllers have on board memory for caching as do software solutions. Intels own onboard raid uses system memory.

      This may seem quite the opposite of your logic, but from what I’ve heard and experienced it’s the complete opposite. Mechanical HDs show many signs before they die. They start making noises, data access becomes sporadic, sometimes they’re blue screen you… lots of different kinds of noises… before they finally give up. SSDs on the other hand just simply cease working without any sort of sign as they have no part to signify their longevity. You simply wake up and can’t access your data anymore.

        • stdRaichu
        • 8 years ago

        Most SSD’s have a metric to tell you how many writes the cells have done, and how many cycles it expects to have left. Intel’s SSD toolbox even has a handy “life expectancy” graph:

         http://media.bestofmicro.com/Intel-Intel_SSD-Intel-Solid-State-Drive-Toolbox,1-Z-313703-13.png

         SSD's don't simply cease working. Occasionally you'll get one that's just DOA (or dies due to electrical failure), but none of our SSD's at work have had anywhere near enough data written to them to run out of write cells yet.

          • Bensam123
          • 8 years ago

          Aye… I understand that that’s the way they’re supposed to work. In practice they just die well before their write limit. I’m not talking about them dying due to write limits either. That’s similar to HDs dying after their mean time failure, which usually doesn’t happen. Go read some reviews on Newegg.

            • stdRaichu
            • 8 years ago

            Reviews are always biased to report the failures from the first slope of the bathtub curve. I’ve got about 15-20 SSD’s at home and a couple of hundred at work; so far their failure rate during burn-in is about equal (about 1% DOA), but the operational failure rate for our current crop of SSD’s is about 0.5% over three years, whereas platter drives are about 8%.

            Sometimes electronics just die, and that’s true of SSD’s and platter drives both.

            • Waco
            • 8 years ago

            In practice I have yet to see an SSD do anything but fail catastrophically and without warning. Not once has a drive ever died and gone to a read-only state as intended.

            Either you’re extremely lucky or everyone I know is extremely unlucky.

            • Bensam123
            • 8 years ago

            I’m sure you have 15-20 SSDs at home for the last four years to make a statistically informed decision on too!

            That aside, I agree. They spontaneously stop working. Mechanicals almost always give signs to their ending. Not just that, but you can sometimes revive mechanicals long enough to get data off of them (freezer method). Once SSDs go, they’re gone.

            • Waco
            • 8 years ago

            15-20 at home? No, just 10. But even enterprise level drives die in this manner…and I’ve got plenty of experience with those second-hand.

            • Bensam123
            • 8 years ago

            ‘Twas sarcasm in relation to the guy saying he has 15-20 SSDs spinning at home.

            • Bensam123
            • 8 years ago

            I bet you have 15-20 SSDs at home…

            Of course reviews are biased, but they would be just as biased across the board. So it isn’t just SSDs that get biased reviews, but all HDs. As long as you take that into account you can compare their failures.

          • chuckula
          • 8 years ago

          I’ve never heard of any modern SSD that died due to the NAND cells actually reaching the end of their useful lives. It may happen in some synthetic test done on purpose to wear out the cells, but not in real-world use, even in heavily used systems. I *have* heard more stories than I can count about SSDs dying though… it happens all the time. The reason they are dying has nothing to do with the NAND cells being worn out. Usually the deaths happen very suddenly as well too.

        • Krogoth
        • 8 years ago

        Software-based RAID does not have any memory buffer. It relies entirely on disk cache and the CPU has to drive everything.

        Software RAID advantage has never been performance. It always has been portability, since the array’s configuration is not tied to the hardware. In other words, you can move the array around from system to system as long as the platform can support software RAID. The array can survive from disk controller failure Where in a hardware RAID you are forced to replace the controller, if it fails.

          • Bensam123
          • 8 years ago

          I don’t think that’s entirely true on a multitude of levels. Depending on the software implementation. Intel raid for instance uses a memory buffer and it can easily be seen through synthetic benchmarks, where you get around 2.5GB/s burst through HD Tach.

          • Waco
          • 8 years ago

          Software RAID is where performance is at these days. Hardware RAID is dying, and quick.

            • Bensam123
            • 8 years ago

            To clarify; Raid on Nix. Unless windows server has made some serious leaps I wasn’t aware of.

          • indeego
          • 8 years ago

          Most mid to high end Hardware RAID stores controller information in EFI/BIOS, on the Disks themselves or can store it on a USB key, another Disk Controler of the same model, or elsewhere; so that makes it fully portable. This would be a massive oversight were they not to include this, since RAID controllers are a SPOF in themselves.

            • Bensam123
            • 8 years ago

            I don’t think portability has as much to do with configuration settings as it does with the arrays themselves. There isn’t a standardized format for making arrays on drives. A controller of the same type and usually the same brand can almost always import arrays that it has already made. The problem comes from when you have to transfer the array to a different brand that does things slightly differently.

            A keychain wont help in this case unless there is some sort of standard, which there unfortunately isn’t.

    • Duck
    • 8 years ago

    Wait… so basically raid does nothing to increase performance. When did that happen?

    EDIT: So why does a 120GB Force GT loose to the 60GB version in Portal 2 load times. The data is not reliable?

    Also it’s not clear looking at just the results what drives are used in raid. When it says raid 80GB, that could be read as 2x 40GB or 2x 80GB.

      • stdRaichu
      • 8 years ago

      RAID adds overhead, and therefore latency – especially when it’s being used on chipset-based RAID rather than a “proper” RAID controller. That said, there are lots and lots of “proper” RAID controllers that can’t cope with the data rates offered by SSD’s and you generally need to spend serious spondoolix. One of the not-here-any-more idiots at work set up a bunch of SSD’s in the SANs as RAID5 and wondered why access times and throughput were only marginally improved from the platter-based SAS.

        • Krogoth
        • 8 years ago

        The problem is that RAID controllers were engineer around HDD ecology.

        SSDs are a different beast from ground up.

        You can develop RAID solutions that better suited for SSDs in terms of performance and fault-tolerance. I suspect the major RAID controller vendors are already working on it.

          • stdRaichu
          • 8 years ago

          Dunno why you were downvoted, you’re right on all counts. There’s lots of hackery involved in both operating systems and RAID controllers to take the rotational latency of platter-based storage into account; that’s all essentially moot with SSD’s where access times are orders of magnitude lower.

          Most of the flash SAN’s already have specialised hardware/ASICs for their “RAID” functions deigned from the ground up to do SSD’s, which allows them to reach colossally high I/O; the tech will trickle down to RAID cards in a year or so.

            • Kurlon
            • 8 years ago

            Eh, the best RAID setups are software, ZFS on a cluster of spinning rust with a couple SSDs for ARC and ZIL is the way to fly these days. If you need faster, toss the SSDs for dedicated flash cards.

            • Krogoth
            • 8 years ago

            Software RAID is best for overall fault tolerance, portability and flexibility.

            Hardware RAID solutions still trump in terms of pure performance.

            • Kurlon
            • 8 years ago

            That hasn’t been true for a while. There are three ‘tiers’ of RAID:

            1) Crappy software RAID – This is what everyone thinks of when you say the term SW RAID. If you can boot Windows directly on it, it’s likely a pretty cheesy setup that’s biased to consume minimal resources so as not to impact apps on the box.

            2) HW RAID – A CPU and RAM on a stick that abstracts storage to the local OS. Usually the primary speed boost comes from a dedicated XOR engine.

            3) Real SW RAID – A dedicated box feeding storage to a separate system. Think SANs. If you want crazy IOPS, you ditch the restrictive embedded CPUs on most traditional slot-based RAID cards and instead use the full power of a BIG CPU a la Xeon, along with the BIIIIIG RAM addressing available (would sir like 192GB of cache?). When you’re talking about that scale, RAID on a stick, even an x16 PCIe stick, can’t keep up.

            • Krogoth
            • 8 years ago

            Both of those are “software” RAID.

            The only difference is that the latter version is scaled up to 100+ devices over several arrays.

            Basically, you’ve turned a server-grade CPU and motherboard into a souped-up hardware RAID controller. 😉

            Besides, Windows isn’t where software RAID shines. It only shines in the *nix world.

        • Bensam123
        • 8 years ago

        You can get PERC 5/i and PERC 6/i cards on eBay for around $150. The 6/i, from what I’ve heard, is capable of throughput up to 1.4GB/s.

    • Mourmain
    • 8 years ago

    Pleeease add radial lines (emanating from the origin) to the performance vs. $/GB graphs. Radial lines are lines of equal value: two points on the same line offer equal value, with price and performance increasing in proportion.
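
    For instance, something like this rough matplotlib sketch is what I have in mind; the drive names and numbers below are made up purely for illustration, not taken from the article:

    [code]
    # Performance vs. $/GB scatter with radial "equal value" lines from the origin.
    # All drive names and numbers here are hypothetical placeholders.
    import numpy as np
    import matplotlib.pyplot as plt

    drives = {                # ($ per GB, overall performance score)
        "Drive A 64GB":  (1.45, 62),
        "Drive B 128GB": (1.30, 78),
        "Drive C 256GB": (1.25, 85),
    }

    fig, ax = plt.subplots()
    x_max, y_max = 2.0, 100

    # Radial lines: every point on a given line has the same score per dollar
    # per gigabyte, i.e. the same value. Drawn as thin grey lines so they read
    # as a background grid rather than tracking any particular data point.
    for slope in np.linspace(10, 90, 9):
        ax.plot([0, x_max], [0, slope * x_max], color="0.85", lw=0.8, zorder=0)

    for name, (price_per_gb, score) in drives.items():
        ax.scatter(price_per_gb, score, zorder=2)
        ax.annotate(name, (price_per_gb, score), xytext=(4, 4),
                    textcoords="offset points", fontsize=8)

    ax.set_xlim(0, x_max)
    ax.set_ylim(0, y_max)
    ax.set_xlabel("Price per gigabyte ($)")
    ax.set_ylabel("Overall performance score")
    plt.show()
    [/code]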

      • yogibbear
      • 8 years ago

      Get a ruler… draw a line from (0,0) to every data point… pretty easy to visualize, but yeah I have been thinking about this myself… however it starts to get a bit messy…. they could just have standard gradients that get plotted as thin grey lines (i.e. that don’t intentionally track to any specific data point) allowing you to interpolate yourself.

        • Mourmain
        • 8 years ago

        Yes, a background radial grid is what I’m thinking of too. The Cartesian grid is only partially helpful and doesn’t guide the reader to read the graph properly.

        • ew
        • 8 years ago

        I use a piece of paper. 🙂

      • ew
      • 8 years ago

      Came here to say the same thing. I can see the lines making the graphs look messy but that can probably be countered by making them lightly colored or partially transparent.

    • yogibbear
    • 8 years ago

    Looks at recently purchased and installed Corsair Force GT 120 GB…. cause I’m sexy and I know it. Wiggle wiggle wiggle wiggle wiggle….

    • integer
    • 8 years ago

    The Crucial m4 (128GB) appears to be quite close to the Corsair Force GT (120GB), but it lags in write operations, especially in DriveBench 2.0. Would this difference be noticeable in real-world use?

      • swampfox
      • 8 years ago

      My question as well, since otherwise I think the m4 128 is a pretty attractive option.

    • MadManOriginal
    • 8 years ago

    I think this article should be titled ‘How many different ways can Geoff pose SSDs to take cool pictures’ 🙂

    I think this is one of the most useful SSD articles I’ve read in a while. It always kind of bothered me that hard drive reviews were often of the highest-capacity drives in a family, and that trend carried over to SSDs because that’s what the manufacturers send out. An article that compares different sizes to each other, including the ones people are more likely to buy, is very revealing. It also shows that different SSDs maintain their relative rankings at comparable capacities, another useful piece of information, which means one can actually use even single-capacity reviews and comparisons to decide which controller type to buy. (The undersized Intel 40GB is the exception; it’s not quite comparable in useful capacity to even a 64GB drive.)

      • stdRaichu
      • 8 years ago

      Personally, I’m quite upset that there was no attempt made at a black’n’red [i]V for Vendetta[/i] domino rally.

      As a more serious aside, I’ve been using SSDs at home and at work since the first-gen Vertex (Vertices?) came out, and unless you’re doing *serious* I/O work, there’s very little to differentiate between that and the high-end models, especially for generic desktop workloads. My general advice is that you’ll get a huge bump in performance just from switching to any mediocre SSD as a boot drive, and spending £$€ on an “extra-fast” model is a false economy.

      FWIW, I still think the Crucial m4s have the best balance of speed, reliability, and customer support, and they also have the advantage of being among the cheapest per gig.

        • ludi
        • 8 years ago

        Personally, I’m disappointed there was no Stonehenge mockup.

    • guardianl
    • 8 years ago

    I can’t be the only one disappointed by Intel’s choice to use SandForce for their next gen.

    Geoff, I know you can’t talk about Cherryville right now, but if possible, I’d love to see you dig at Intel a little to find out why they seem to have given up on the controller front.

    The only thing I can think of is that they pulled the hardware guys away to work on the Atom SoC…

      • Airmantharp
      • 8 years ago

      I’d love to see what Intel can do with SandForce.

      SandForce’s issues haven’t been centered on design or chipset manufacturing; they’ve been firmware issues, something Intel excels at.

      Given that the SandForce controllers are the fastest on the market (for good reason), I don’t think anyone will be able to best them until SATA4 comes out.

      Also remember that SandForce-based SSDs are some of the cheapest to manufacture due to their ability to function without a cache.

        • UberGerbil
        • 8 years ago

        The RAM cache doesn’t make much difference to the price of SSDs. That Hynix chip [url=https://techreport.com/r.x/intel-320-series/controller.jpg]shown[/url] in the 320 Review (H55S5162DFR I think?) is going to be less than $5 in the quantities ordered by the SSD mfrs. So it tacks on a few bucks to the BOM, but given the price range 120+GB currently sell in, it’s not a huge factor.

          • Palek
          • 8 years ago

          [quote]That Hynix chip shown in the 320 Review (H55S5162DFR I think?) is going to be less than $5 in the quantities ordered by the SSD mfrs.[/quote]

          A single 2Gb DDR3 SDRAM device trades for around $1 these days, so the impact is even smaller than what you suggest. Have a look: [url]http://www.dramexchange.com/[/url]

            • Ihmemies
            • 8 years ago

            And Samsung’s 830 Series uses Samsung’s own 256MB chips, which probably cost “next to nothing” relative to the drive’s price 🙂

            • UberGerbil
            • 8 years ago

            Well, $1 is less than $5, right? And while it’s true some SSDs use commodity DDR3 RAM, that Hynix chip in the Intel shot (if I’m reading the part number correctly) isn’t a commodity DDR3 chip; it’s a low-power part optimized for mobile applications. I’m not going to spend a lot of time looking for a quote on it in 10K quantities, but whatever; whether it’s closer to $1 or $5, the point remains: it’s not going to add much to the overall price of the drive.

            • Palek
            • 8 years ago

            I was trying to add weight to your point that the cost of adding SDRAM was negligible, that is all!

            EDIT: You are indeed right. The memory chip in the picture is the H55S5162EFR, which is a 512Mb mobile [b]SDR[/b] model. This is the Intel controller showing its age, I guess. Maybe nobody makes commodity SDR SDRAM anymore, so Intel is forced to use a mobile part?

    • Dizik
    • 8 years ago

    Any reason why the Samsung 830 Series drives weren’t included?

      • Dissonance
      • 8 years ago

      Because we only have the 256GB drive. For this article, we wanted to focus only on the drives we had at three capacity points. That said, I’m working on a separate 830 Series review right now. Stay tuned 😉

        • Dizik
        • 8 years ago

        I figured there was a good reason for it. The reason I brought it up is that I just received and installed my 128GB drive to replace my dead 120GB Vertex 2. Seeing as it’s a SATA 6Gbps drive versus the SATA 3Gbps Vertex 2, I’m obviously seeing a marked improvement. The main reason I bought it was that it doesn’t use a SandForce controller. I’m quite happy with my purchase, provided the drive lasts longer than the seven months I got out of OCZ, though I’m curious how it stacks up against other similarly specced drives.

        I await your review!

      • jwilliams
      • 8 years ago

      Omitting the Samsung 830 from this article really was a glaring error. It makes the article almost useless, since the 64GB Samsung 830 is arguably the best 64GB SSD you can buy.

      Another 64GB SSD that should have been included is the Plextor M3, which just got a 64GB model (still rather expensive at the moment; hopefully the price will come down):

      [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16820249019[/url]

        • ludi
        • 8 years ago

        No, it wasn’t a “glaring error”. A lot of work and a [i]lot[/i] of good data were provided using the hardware they had available. Peak Hyperbole will be upon us soon, so try to save some reserves for when you really need it.

    • Compton
    • 8 years ago

    Since everyone is still under NDA concerning Cherryville, let me just say that it’s nothing to get in a lather over. I actually canceled my order already, as they’ve been shipping in small amounts for a while.

    It’s not as awesome as I was hoping. Getting the SF processor whipped into shape must have been difficult and time consuming. I’ll buy one anyway, but it’s not so awesome that I have to have one now — especially not for the price it was going for over the past 10 days.

    • Deanjo
    • 8 years ago

    Something is really wonky with those WD Black copy speeds.

    I happen to have a couple right here in the system and copying from one to the other is around the 125 MB/s mark sustained.

      • Dissonance
      • 8 years ago

      FileBench uses a single drive as both the source and the target for each file copy. Try copying files from one drive to itself, and you’ll see things slow down.

        • Deanjo
        • 8 years ago

          Well, that’s a pretty useless benchmark if it’s supposed to reflect “real-world use”. Move operations are usually done on a single device; copy operations are usually done device to device. Who the hell makes multiple copies of an AVI on the same drive?

          • Firestarter
          • 8 years ago

          Video editing enthusiasts probably appreciate that benchmark.

            • Deanjo
            • 8 years ago

            Video editing enthusiasts usually use two drives, one for read, one for write.

          • Dissonance
          • 8 years ago

          We want the drives being tested to be the bottleneck for performance. If we copy from one drive to another, there’s two potential bottlenecks. So we’d either need two of each drive we test (so files can be copied between them) or an ultra-fast drive guaranteed to be quicker than everything else, so it wouldn’t slow down anything we’d be testing (difficult to pull off when your new test subjects are often faster than anything that’s been in the lab previously).

          We’re constantly looking at new ways to test. If I can get my hands on a couple of high-end PCIe SSDs to serve as source/target drives, I think you’ll probably see FileBench expand to include separate copy-to and copy-from components.
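
          For the curious, the same-drive approach boils down to something like the sketch below. This is just a simplified illustration of the idea, not our actual FileBench code, and the paths are placeholders:

          [code]
          # Simplified illustration of a same-drive copy test: the file is read from
          # and written back to the drive under test, so that drive is the only
          # storage bottleneck. Paths are placeholders; a real benchmark would also
          # need to defeat OS caching between passes.
          import os
          import shutil
          import time

          SOURCE = r"D:\filebench\movie.avi"        # lives on the drive under test
          TARGET = r"D:\filebench\movie_copy.avi"   # written back to the same drive

          size_mb = os.path.getsize(SOURCE) / (1024 * 1024)

          start = time.perf_counter()
          shutil.copyfile(SOURCE, TARGET)
          elapsed = time.perf_counter() - start

          print(f"Copied {size_mb:.0f} MB in {elapsed:.2f} s "
                f"({size_mb / elapsed:.1f} MB/s, reads and writes on one drive)")

          os.remove(TARGET)  # clean up so repeated runs start from the same state
          [/code]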

            • Deanjo
            • 8 years ago

            You really wouldn’t need to go to all that expense. Just pop enough RAM into the system to create a RAM disk of reasonable size that could serve as the source or target for read and write tests. Setting up an 8GB RAM disk is pretty cheap.
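
            Something along these lines, for example; a rough sketch assuming a Linux test rig (needs root, and the mount point and size are arbitrary examples):

            [code]
            # Rough sketch: an 8GB tmpfs RAM disk as an ultra-fast copy source/target,
            # so the SSD under test is the only storage bottleneck. Linux only.
            import subprocess

            RAMDISK = "/mnt/ramdisk"

            subprocess.run(["mkdir", "-p", RAMDISK], check=True)
            subprocess.run(["mount", "-t", "tmpfs", "-o", "size=8g", "tmpfs", RAMDISK],
                           check=True)

            # Stage the test files on the RAM disk, run copy-to/copy-from tests against
            # the SSD under review, then tear the RAM disk down afterwards:
            # subprocess.run(["umount", RAMDISK], check=True)
            [/code]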

            • indeego
            • 8 years ago

            Then you are exiting the realm of “common user-repeatable” scenarios.

            • Deanjo
            • 8 years ago

            Not really, as your limiting factor will be the actual performance of the drive/controller itself, and you can be assured that no bottleneck exists outside of the drive/controller.

            • flip-mode
            • 8 years ago

            As you would be by using the high-end PCIe SSDs Geoff was talking about. Deanjo’s RAM disk idea is a good one. The point here is not to be user-repeatable but, as Geoff said, to remove any bottlenecks that don’t belong to the drive that is the subject of the test.

            • indeego
            • 8 years ago

            [quote<]"If we copy from one drive to another, there's two potential bottlenecks. "[/quote<] Three or more if you count the storage controller(s).

            • Bensam123
            • 8 years ago

            You’ve designed systems to do such things before… I don’t think it would be that hard to get a PCIe RAID card and a couple of mechanical drives and hook them up together to raise the bar.

    • indeego
    • 8 years ago

    Poor man’s RAID it is then, image+file based to a RAIDed mechanical, then offsite somewhere in a rainstorm…

    • juampa_valve_rde
    • 8 years ago

    Excellent article for the average buyer, thanks Geoff!

      • guardianl
      • 8 years ago

      Agreed!
