We last published an SSD round-up a month and a half ago, squaring off a six-pack of new drives against Intel’s well-established X25-M. In the end, we recommended the Intel drive, due to its all-around performance leadership, along with the homegrown and Corsair-branded versions of Samsung’s PB22-J, on the strength of that design’s solid performance, low cost per gigabyte, and modest power consumption. We weren’t particular fans of the Super Talent and OCZ SSDs based on Indilinx’s “Barefoot” SSD controller, though. The drives didn’t fare well overall, and they were painfully slow in our real-world file creation and copy tests.
Those conclusions ruffled the feathers of more than a few forum fanboys, who sprang into action to defend the upstart Indilinx controller and, more importantly I suspect, the honor of the Vertex SSDs inside their very own systems. Insults were hurled, flames raged, and through all the noise, a few relevant concerns emerged. Exploring them has largely monopolized my benchmarking time since.
First, we started with the easy stuff: the Vertex’s firmware was updated, various drivers were freshened, and our test system’s operating system was brought up to date with the latest patches. But the Vertex’s performance didn’t improve. Curious to see whether our decision to test drives in a “used” state could shed some light on the issue, we next probed differences in fresh versus used performance with the Indilinx, Intel, and Samsung drives. The Indilinx controller proved plenty quick when the drive was in its factory-fresh state with plenty of empty flash pages available for writing, but its performance dropped by a much greater margin than the Intel and Samsung drives when tested in a used state. It looked like we’d found our culprit.
All this testing was still being conducted on an older system that, while fast enough to stress even an X25-E Extreme, wasn’t representative of the latest and greatest hardware. The system’s Windows XP operating system was perhaps the greatest concern, because its default 63-sector partition offset starts the first partition in the middle of an SSD flash page rather than at the beginning of one. This misalignment apparently creates problems for the Indilinx controller, although it’s not an issue for all SSDs. Intel says the X25-M isn’t picky about such offsets, for example.
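The misalignment is easy to see with a little arithmetic. Here's a quick sketch, assuming the standard 512-byte sector and 4KB flash page sizes discussed above, showing why XP's 63-sector offset lands a partition mid-page while Vista's 2,048-sector (1MB) default does not:

```python
# Sketch: why XP's default 63-sector partition offset misaligns with flash pages.
SECTOR_BYTES = 512
FLASH_PAGE_BYTES = 4096  # 4KB flash page

def is_page_aligned(start_sector):
    """A partition is page-aligned if its byte offset is a multiple of the page size."""
    return (start_sector * SECTOR_BYTES) % FLASH_PAGE_BYTES == 0

print(is_page_aligned(63))    # XP default: 63 * 512 = 32,256 bytes -> False (mid-page)
print(is_page_aligned(2048))  # Vista default: 2048 * 512 = 1MB -> True (aligned)
```

With a 63-sector offset, every 4KB filesystem cluster straddles two flash pages, which is exactly the sort of thing a picky controller can stumble over.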
Still, since Windows Vista abandons XP's 63-sector default in favor of a properly aligned partition offset, we decided to test with it next. We updated our test system’s hardware, as well, combining a Core 2 processor with 4GB of memory and Intel’s most recent ICH10R south bridge SATA controller. But does this drastic overhaul change the SSD performance picture any? We’ve subjected drives based on controllers from Indilinx, Intel, and Samsung to a battery of tests in an attempt to find out.
And then there were three
We’ve narrowed our focus to three drives today because the results of our last SSD round-up nicely illustrated the fact that storage controllers (and firmware revisions) largely define SSD performance. Only a handful of different storage controllers are available, with numerous drives based on each. Intel’s X25-M, for example, is available not only directly from the chip giant, but also through partner Kingston. Samsung makes its own drives, too, and its latest controller can be found in new models from Corsair and OCZ. Indilinx’s SSD silicon may be the most promiscuous of the bunch. The company doesn’t sell its own drives, but it has numerous partners, including OCZ, Super Talent, and Patriot, just to name a few.
Today, we’ll be focusing our attention on the Intel X25-M, an OCZ Summit based on Samsung’s latest controller, and the Indilinx-based OCZ Vertex. The performance of each drive should be representative of competing models and otherwise re-badged versions of the same underlying designs. However, we should note that SSD performance is somewhat dependent on overall drive capacity. A 30GB Indilinx-based design won’t necessarily be as quick as a 120GB one. Keep that in mind if you’re looking to save a little cash on a low-capacity unit.
|Drive||Capacity||Cache||Controller||Max reads||Max writes||Warranty||Street price|
|Intel X25-M||80GB||16MB||Intel PC29AS21AA0||250MB/s||70MB/s||3 years||
|OCZ Summit||120GB||128MB||Samsung S3C29RBB01-YK40||220MB/s||200MB/s||3 years||
|OCZ Vertex||120GB||64MB||Indilinx IDX110M00-LC||250MB/s||180MB/s||3 years||
The X25-M has been around longer than its direct competition, and it shows on the spec sheet. With only 16MB of cache, the drive has substantially less RAM onboard than either the Indilinx or Samsung designs, which pack 64 and 128MB, respectively. The X25-M’s write speed rating is quite a bit lower than that of the other drives, as well, despite the fact that all three drives use the same kind of multi-level cell (MLC) flash memory common in consumer-grade SSDs. Interestingly, Intel only claims a write speed of 170MB/s for the X25-E Extreme, which is based on pricier single-level cell (SLC) flash memory. Perhaps Intel is simply more conservative with performance ratings than its competition.
Of course, it’s important not to put too much stock into these theoretical peak ratings. We have a full suite of tests that will more clearly illustrate how these drives perform in the real world. Before getting into those results, however, I should note one important difference between these three SSD designs. While drives based on the Indilinx and Intel controllers can have their firmware upgraded by end users, SSDs based on Samsung silicon cannot. It’s unclear whether this is a hardware limitation specific to the Samsung design or whether the company simply prefers not to let users flash their own drives.
Support for user firmware flashing might not seem like an important consideration for a storage product. After all, apart from the last batch of broken Barracudas, I can’t remember the last time a mechanical hard drive maker even issued a public firmware update. New firmware releases aren’t uncommon in solid-state disks, though. We’ve already seen new firmware revisions dramatically improve the performance of some drives, and a few updates have also added new features. Enthusiasts won’t want to miss out on either.
Samsung’s lack of support for user firmware updates probably won’t bother users of Samsung-branded drives, which only seem to be available in pre-built systems from the likes of Dell and HP. However, PC enthusiasts buying versions of the drive sold by the likes of Corsair and OCZ will surely want to be able to flash their drives. What’s more, there’s currently no way for those shopping on sites like Newegg to determine which firmware revision a given drive is running. Samsung really needs to change its policy on firmware updates, if not for its own drive models, then at least for those of its partners.
The big, bad block-rewrite penalty
The block-rewrite penalty associated with flash memory is the scourge of SSD performance. This penalty arises from the very nature of flash-based memory, so it’s a tough one to avoid. Flash cells are typically arranged in 4KB pages organized into 512KB blocks. If a page is empty, it can be written to directly in 4KB chunks. Simple. If a page is occupied, however, a rewrite of the entire block must be performed, even if only that single page is being written.
Before a block can be rewritten, its contents must first be read and then modified, extra steps that need not be performed when dealing with empty pages. The block write that follows also weighs in at 512KB, or 128 times the size of a 4KB page write, so there’s more to actually write. The performance loss resulting from these factors is the block-rewrite penalty.
At first blush, one might assume that simply ensuring an SSD has plenty of free capacity should avoid this calamity. But that doesn’t work because of how Windows deals with deleted files. When a file is deleted, the flash pages it occupies are marked as available, but their contents aren’t actually emptied or otherwise cleared. As a result, a solid-state drive can show plenty of available storage capacity yet still have all of its flash pages occupied. In that case, the block-rewrite penalty will hamper each and every write request.
To try to combat the block rewrite penalty’s impact on long-term drive performance, Indilinx, Intel, and Samsung have all updated their SSD firmware in the last few months. Intel released an 8820 firmware revision for the X25-M that tweaked the drive’s self-cleaning mechanism and dramatically improved file copy performance. Indilinx has been busier, issuing firmware updates for its design seemingly every other month. The latest release, version 1370, apparently improves the drive’s internal “garbage collection” scheme. Few details on this feature are available, although OCZ is promising to release a white paper on the technology soon. What we do know is that garbage collection is targeted at improving long-term drive performance.
Garbage collection has also come to Samsung-based drives via a new 18C1 firmware revision that the company will begin shipping in drives starting July 1. We’re still awaiting specifics on how, and whether, Samsung’s approach differs from Indilinx’s. From what little has been revealed thus far, it appears the Indilinx and Samsung garbage collection schemes run automatically when the drive is idling. There’s no way to invoke these features manually, and no way to determine whether a drive has had its, er, garbage collected.
I expect there are many similarities between the self-cleaning schemes employed by the Indilinx, Intel, and Samsung SSD controllers, but we can’t be sure until more details are disclosed. There does appear to be one difference in Intel’s approach, though. The X25-M’s self-cleaning mechanism runs constantly, while the Indilinx and Samsung SSDs appear to fire up their garbage collection routines only when the drive is idle.
Systems don’t idle for long in the Benchmarking Sweatshop, making it difficult to test the garbage collection schemes on the Indilinx and Samsung drives. The fact that there’s no way to verify when or even if a self-cleaning routine has completed further complicates the matter. However, a potentially better solution to the block-rewrite problem looms just over the horizon in the form of the TRIM command.
Due to be supported by Windows 7, the TRIM command promises to deal with the block-rewrite penalty by emptying pages when data is deleted rather than simply marking them as available. The spec has yet to be finalized, though, and no drives currently support the command in the Windows 7 Release Candidate. Indilinx has pledged to add Windows 7-compliant TRIM support in a future firmware update. Samsung has also promised TRIM support, but Intel has been largely mum on the subject. We’d expect it to follow suit, either with entirely new drives or firmware updates for existing ones.
At the moment, those with Indilinx-powered SSDs can use a “wiper” utility to effectively TRIM their drives manually. The latest 0525 revision of this app is supposed to be more robust than an older version that didn’t work for us earlier, but problems persist. The wiper tool apparently works quickly with some configurations, taking a matter of minutes to freshen a drive, and extremely slowly with others, requiring more than a day to complete a run through a 120GB SSD. Unfortunately, our test system is one of those slow configs. More specifically, the wiper tool runs slowly when combined with Intel’s AHCI drivers.
OCZ says switching to Vista’s own AHCI drivers resolves the issue, and based on our own testing, that appears to be true. However, OCZ has been unable to tell us why the wiper utility has problems with Intel’s drivers. Lest you think Intel’s sandbagging the wiper to protect X25-M sales, OCZ notes that the app doesn’t play nicely with storage controller drivers from AMD or Nvidia, either. Indeed, it appears the only drivers that work properly with the wiper utility are the ones Microsoft built into Vista.
Switching storage controller drivers should be an easy task for any enthusiast. Still, the fact that the wiper tool doesn’t work with common, current, and WHQL-certified drivers is a major problem. This latest version may get along with very specific system configurations, but it’s not yet ready for mass consumption. As a result, we haven’t used it in our testing today.
Because the block-rewrite penalty can severely impact SSD performance, we’ve elected to test the drives in a simulated used state, with all their flash pages occupied. We don’t believe that testing SSDs in a factory-fresh state accurately represents their long-term performance, and we’re far more interested in seeing how drives handle a more typical scenario than chasing higher benchmark scores with SSDs that have been manually freshened with secure-erase tools that clear the contents of all flash pages.
Our testing methods
We’ve put together an all-new test platform for this latest SSD round-up. The most important changes here are the move to Intel’s latest ICH10R south bridge chip and Windows Vista x64 with Service Pack 2.
You’ll notice on the following pages that we’ve tested the Summit in two configurations. The first uses the 1801 firmware revision and is representative of what’s on store shelves today. We’ve also tested with the newer 18C1 firmware revision, which Samsung should be shipping to customers now.
|Processor||Intel Core 2 Duo E6700 2.66GHz|
|System bus||1066MHz (266MHz quad-pumped)|
|North bridge||Intel P45 Express|
|South bridge||Intel ICH10R|
|Memory||OCZ PC2-6400 Platinum Edition at 800MHz|
|CAS latency (CL)||
|RAS to CAS delay (tRCD)||4|
|RAS precharge (tRP)||4|
|Cycle time (tRAS)||15|
|Audio||Realtek ALC889A with 2.24 drivers|
|Graphics||Gigabyte GeForce 8600 GT 256MB with ForceWare 185.85 drivers|
|Hard drives||Intel X25-M 80GB with 8820 firmware|
|||OCZ Summit with 1801 and 18C1 firmware|
|||OCZ Vertex 120GB with 1370 firmware|
|OS||Windows Vista Ultimate x64|
|OS updates||Service Pack 2|
Our test system was powered by an OCZ GameXStream power supply unit.
We used the following versions of our test applications:
- WorldBench 6 Beta 2
- Intel IOMeter v2006.07.27
- Xbit Labs File Copy Test v0.3
- HD Tach v3.01
- Far Cry 2 v1.3
- Call of Duty 4 v1.4
The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Quantifying the block-rewrite penalty
Before digging into our benchmark results, it’s worth taking a moment to quantify the magnitude of the block rewrite penalty associated with each drive. As we’ve seen in the past, some SSDs deal with this issue better than others.
First, we used an IOMeter workload consisting exclusively of 4KB random write requests to measure the response time of each drive in its factory-fresh state, with no occupied flash pages. We then subjected each SSD to several runs through HD Tach’s “full” disk benchmark, whose write speed test fills drives with a single, contiguous file. This test neatly occupies all available flash pages, forcing a block rewrite for every subsequent write request.
With our SSDs now in a simulated used state, we ran our IOMeter random writes test once more to gather response time data.
All the drives have much quicker response times when fresh than in a simulated used state. The performance differences vary from one drive to another, though. The Vertex’s response time rises by nearly an order of magnitude, while the X25-M is only about five times slower. Neither compares to the Summit, whose used-state response times are roughly 20 times higher than with a fresh drive.
In addition to exhibiting the greatest disparity between fresh and used response times, the Summit is easily slower in each state than the Vertex and X25-M. Those two drives offer nearly identical used-state response times, although the fresh Vertex is quicker than the Intel drive in the same condition.
Based on these results, the Summit and its underlying Samsung controller seem to be the most adversely affected by the block-rewrite penalty, at least in this synthetic test. The Vertex and X25-M, on the other hand, look pretty evenly matched when in a used state. Keep in mind that the benchmark results on the following pages were all obtained with the drives running in our simulated used state.
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results.
Only four points separate the fastest drive from the slowest in WorldBench overall. The X25-M and latest Summit revision share the lead, with the Vertex trailing by three points. Samsung’s latest firmware improves the Summit’s performance by a full four points here. Let’s break down the individual test results to see which applications made the difference.
Among WorldBench’s multimedia editing and encoding tests, Photoshop shows the biggest spread. There, the Vertex turns in the quickest completion time, followed nearly half a minute later by the Summit with its latest firmware. The X25-M settles for third, just ahead of the Summit with Samsung’s initial firmware release.
The Vertex enjoys a lead in the Movie Creator test, as well, but it’s a slim one at best. Note that, again, Samsung’s most recent 18C1 firmware revision improves the Summit’s performance.
The completion times in WorldBench’s office and multitasking tests are too close to call. Only a few seconds separate the fastest drives from the slowest in all three tests.
This is where things get interesting. WorldBench’s Nero test proves problematic for the Vertex, which falls well behind the leaders. The X25-M sits in second place, the meat in a Summit sandwich. Once more, Samsung’s newest firmware revision yields a decent performance boost.
In WorldBench’s WinZip test, the Vertex just edges out the X25-M for top honors. The Summits fill out the back of the pack, with the latest 18C1 firmware again delivering a notable performance advantage over the initial release.
Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch.
There’s only a one-second difference between the fastest and slowest SSD in our system boot test. That’s too close to call.
There isn’t much to see in our level load tests, either. Far Cry 2 loads slower on the Vertex than on the other drives, but the difference is only about a second. Not even that separates the pack in Call of Duty 4. Given the manual nature of our stopwatch timing, I wouldn’t get too worked up about differences of a fraction of a second here or there.
File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. We’ve converted those completion times to MB/s to make the results easier to interpret.
Vista’s intelligent caching schemes make obtaining consistent and repeatable performance results rather difficult with FC-Test. To get reliable results, we had to not only drop back to an older 0.3 revision of the application but also create our own custom test patterns. During our initial testing, we noticed that larger test patterns tended to generate more consistent file creation, read, and copy times. That makes sense, because with 4GB of system memory, our test rig has plenty of free RAM available to be filled by Vista’s caching and pre-fetching mojo.
For our tests, we created custom MP3, video, and program files test patterns weighing in at roughly 10GB each. The MP3 test pattern was created from a chunk of my own archive of ultra-high-quality MP3s, while the video test pattern was built from a mix of video files ranging from 360MB to 1.4GB in size. The program files test pattern was derived from, you guessed it, the contents of our test system’s Program Files directory.
Even with these changes, we noticed a little more variability in FC-Test performance than we’d like to see. Normally we run tests three times and average the results, but for FC-Test, we’ve run each test five times before averaging. We also had to perform some additional test runs to replace obviously erroneous results that cropped up occasionally.
The Summit absolutely dominates our file creation tests. The underlying Samsung controller seems particularly well-suited to handling the larger files that make up our video test pattern, although the Summit’s performance is impressive with the program files and MP3 test patterns, as well.
Intel’s X25-M finishes a distant or, well, not quite so distant second place in all three test patterns. Only in the program files test pattern is the Intel drive threatened by the Indilinx-powered Vertex. With the video and MP3 test patterns, the Vertex trails by greater margins.
As we switch to read tests, the X25-M assumes the lead across all three test patterns. The Intel drive has a notable edge over the Summit, which in turn has an even greater advantage over the Vertex.
We’ve seen the newer 18C1 firmware improve the Summit’s performance in most tests thus far, but it doesn’t do much for the drive here. It seems the firmware’s special sauce helps with writes but not so much with reads.
Copy tests combine read and write operations. With this cocktail, the Summit leads in the program files and video test patterns. This isn’t a sweep for the Samsung controller, though. The X25-M fares quite well with the MP3 test pattern, delivering transfer rates that are 50% faster than its closest rival.
Although it eclipses the performance of both Summit configs in the MP3 test pattern, the Vertex falls to the back of the pack with the program files and video test patterns. Its transfer rates are half those of the X25-M with both test patterns, and several times slower than the leading Summit drives.
IOMeter presents a good test case for both seek times and command queuing.
Well, you certainly wouldn’t want to throw the Summit into a demanding multi-user environment. The Samsung controller doesn’t cope gracefully with our ramping IOMeter loads, as the Summit turns in much lower transaction rates than the X25-M and Vertex. Poor random write performance appears to be the culprit here, at least in part; the Summit is much worse off in the file server, database, and workstation test patterns, which are the only ones that contain write requests. With these test patterns, I’d expect mechanical hard drives to offer transaction rates in the same ballpark as the Summit.
It’s difficult to see in the graphs, but both firmware revisions for the Summit turn in nearly identical IOMeter performances.
The X25-M and Vertex are closely matched with the file server access pattern, but the Vertex pulls ahead when we switch to simulated database and workstation loads. Interestingly, the Intel drive dominates in the web server access pattern, which is made up exclusively of read operations.
The Vertex and X25-M consume more CPU cycles than the Summit, but they’re also completing more transactions. To get a clearer picture, let’s quantify IOMeter efficiency in terms of transactions per percent CPU utilization.
Although the results are certainly mixed overall, the web server results are fairly easy to interpret. With that access pattern, the X25-M delivers more transactions per CPU cycle than the Vertex and Summit.
We tested HD Tach with the benchmark’s full variable zone size setting.
These tests should give us a good sense of each drive’s maximum achievable sustained transfer rates. The X25-M proves the fastest in the read speed test, followed closely by the Vertex, as the Summit trails more than 20MB/s off the pace.
The X25-M falls to last place in the sustained write speed test, which shouldn’t come as a surprise given the drive’s relatively sluggish 70MB/s write speed rating. The Summit is nearly 100MB/s faster here, and the Vertex is a further 25MB/s quicker.
It’s interesting to see how much these transfer rates conflict with the results of our real-world file creation, read, and copy tests. The Vertex looks like the bee’s knees in HD Tach, but it doesn’t fare nearly as well when creating, reading, or copying actual files. This is why we don’t rely solely on synthetic benchmarks to evaluate drive performance.
The X25-M has a slight edge over the Summit in HD Tach’s burst speed test. There’s no difference in performance between Summit firmware revisions here, and there wasn’t with the sustained read and write speed tests, either.
Curiously, the Vertex only manages 186MB/s in this test. The drive’s 64MB cache should be able to keep up with a 300MB/s Serial ATA interface, suggesting that the Indilinx controller is slower than the others when handling burst transfers.
As far as HD Tach is concerned, all these drives deliver instantaneous seek times.
HD Tach’s CPU utilization scores are nearly within the app’s +/- 2% margin of error for this test. I’m going to call it a wash.
For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. We were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive. Drives were tested while idling and under an IOMeter load consisting of 256 outstanding I/O requests using the workstation access pattern.
At idle, the Summit draws half the power of the Vertex and about a third of the juice required by the X25-M. However, when subjected to a punishing IOMeter load, the OCZ drives must share the low-power limelight.
The X25-M proves to be the power-hungriest drive of the bunch, which is notable considering that it’s also packing fewer gigabytes (80 as opposed to 120) than the others. Keep that in mind if you’re looking for an SSD to extend notebook battery life.
In the last two months, I’ve tested more SSDs and firmware revisions than I’d care to remember. It’s been a grueling process, and I’ve no doubt sprouted a few grey hairs along the way. But this latest batch of performance results sheds new light on the SSD landscape, and we’ve learned some interesting things as a result.
Let’s start with the Vertex and its underlying Indilinx controller. This design isn’t at its best when faced with Windows XP’s default 63-sector partition offset, but the move to Windows Vista doesn’t appear to have helped the drive much. The Vertex is still competitive in IOMeter, again trumping the X25-M in the database and workstation test patterns. And it still boasts impressive sustained transfer rates in HD Tach’s synthetic benchmarks. However, those transfer rates don’t translate to quick file creation, read, or copy speeds in the real world. In FC-Test, the Vertex was the slowest drive in eight of nine tests, often by substantial margins.
Now, keep in mind that we tested drives in a used state, a condition that seems to be problematic for the Indilinx controller. Indilinx does have a wiper utility that can quickly restore drives to close to their factory-fresh form, provided your system’s running the right drivers. However, the wiper tool’s apparent compatibility issues with AMD, Intel, and Nvidia storage controller drivers feel sloppy for what is supposedly a finished product fit for public consumption. The Indilinx controller surely has potential, especially with TRIM support promised in the next firmware update, but it’s still very much a work in progress.
Intel’s X25-M is considerably more mature, which is to be expected from a drive that’s been selling for nearly 10 months now. The X25-M easily dominated its competition when it was launched last September, but it’s now facing considerably faster rivals. Across the range of tests we’ve explored today, I still think the X25-M has the best overall performance, but its grip on that crown is tenuous at best.
Of greater concern for prospective customers is the fact that Intel has not committed to adding TRIM support to the X25-M. Given the drive’s age, it’s entirely possible Intel will introduce an all-new SSD with TRIM under the hood rather than updating existing drives. Of course, it could also do both.
The X25-M’s toughest competition comes from the Summit and other drives based on the latest Samsung controller. The Samsung-based drive’s random write performance is more harshly affected by the block-rewrite penalty than the others’. In fact, the Summit is almost an order of magnitude slower than the fastest mechanical drives in this test. Still, with peak write times below 30 milliseconds, it’s an order of magnitude faster than SSDs based on the catastrophically poor JMicron controller (which we didn’t even consider here). In nearly all of our other real-world tests, the Summit fared quite well, even in a used state. In FC-Test, the Summit easily registered the fastest real-world file creation speeds, and it also performed well in the read and copy tests. What’s more, with the latest firmware, the Summit tied the X25-M for the lead in WorldBench. (Samsung’s latest firmware improved performance in most of our tests, so it’s a shame end users won’t be able to upgrade themselves.) Only in IOMeter did the Summit fall flat, which suggests it’s poorly suited for deployments in servers or multi-tasking-heavy workstations where multiple outstanding disk I/O requests are the norm.
The prospect of TRIM support is cramping my style a little here, because in just a few short months, Windows 7 looks set to change the SSD performance landscape, perhaps drastically. Indilinx should have a TRIM-capable firmware update soon, we’re told, but it’s not here yet. There’s no telling whether the X25-M will ever get a TRIM-capable firmware of its own. Samsung has promised to add TRIM support in another firmware update, presumably due before Windows 7 hits, but without user-applicable upgrades, TRIM support for existing drives seems doubtful at best.
If you absolutely must go out and buy an SSD today, I’d recommend the X25-M if you know you’re going to be dealing with the sort of random access patterns seen in multi-user or even heavy multitasking environments: servers, high-end workstations, and the like. Otherwise, the Summit looks like your best bet for desktops and notebooks; it delivers strong performance with typical desktop applications and real-world file operations, and it’s cheaper than the X25-M on a cost-per-gigabyte basis. The Samsung controller’s poor performance in our 4KB random write test is worrying, though. I’ll be putting a Samsung-based drive into my primary desktop to see whether the comparatively slow random writes affect real-world usage; look for an update in a week or two. That leaves us with the Vertex, which is an intriguing option if you’re willing to deal with the wiper utility’s spotty compatibility, but the sort of product I’d only recommend to seasoned enthusiasts who are looking to tinker.
Honestly, though, I wouldn’t recommend picking up any SSD until we have a clearer picture of which drives will support the TRIM command and how they’ll perform in Windows 7.