Intel’s X25-V solid-state drive on its own and in RAID

The rise of solid-state drives has been one of the most exciting developments in PC hardware over the last few years. With essentially instantaneous seek times, SSDs can access data an order of magnitude quicker than even the most exotic high-RPM hard drives. With no moving parts, SSDs are impervious to mechanical failure, able to withstand shocks that would destroy traditional hard drives. Their silicon roots also make SSDs completely silent and quite power-efficient compared to their mechanized brethren. And the prospects for SSDs look even brighter when one considers the performance potential of what are essentially parallel arrays of flash memory chips.

Really, it’s no wonder many of us are eyeing SSDs for our desktops and notebooks. In a high-performance PC, an SSD is perfect for an OS and applications drive, augmented by a larger pool of mechanical storage. Such a configuration offers excellent performance for the data you want to access quickly and loads of capacity for the rest. Those who opt for a low-power mechanical hard drive to serve as secondary storage will be treated to a virtually silent setup, as well.

Few notebooks can accommodate multiple hard drives, but rugged shock tolerance and the ability to prolong battery life make SSDs particularly tempting for portables. Solid-state drives have an even greater performance advantage over the 2.5″ mechanical drives typically found in notebooks, which are considerably slower than full-grown 3.5″ desktop models.

Of course, all this awesomeness comes at a cost—quite a high one, in fact. SSD prices have plunged dramatically in recent years, but drives are still far from cheap, especially when one considers the cost per gigabyte.

The relatively high price of solid-state storage is certainly not lost on Intel. At the Consumer Electronics Show back in January, the company said its SSD plans for 2010 would focus more on driving down costs and lowering prices than on increasing performance. Indeed, Intel kicked off the year with its very first value-oriented SSD, the X25-V 40GB. Low-cost, low-capacity SSDs have existed for some time, of course, but the X25-V marks Intel’s first foray into a market that’s sure to receive an increasing amount of attention. The drive launched at $130 and has already dropped to just $115 online.

Compared to a mechanical drive, 40GB for $115 may not seem like much of a value. But this 40GB isn’t saddled with the baggage associated with spinning platters; it’s spread across 34-nm flash memory chips hooked up to the same controller Intel uses in its standard-setting X25-M SSDs. Controller architectures tend to dictate SSD performance, and Intel’s design has already proven itself to be more robust than other solutions on the market.

The X25-V’s relatively low asking price opens the door to folks who might otherwise have been unable to afford a higher-capacity SSD. Given the fact that the X25-V’s cost per gigabyte is competitive with most other SSDs, enterprising users can also combine multiple drives in a RAID 0 array without exceeding the cost of single drives that offer similar capacities. With zero chance of mechanical failure, striping is a lot safer with SSDs than it is with mechanical hard drives.

Naturally, we had to test the X25-V for ourselves. And then Intel sent two, so we’ve run the X25-V through our new gauntlet of storage tests both on its own and in a striped RAID 0 array. Read on to see how Intel’s value SSD fared.


As you may have gathered, the X25-V looks to be a stripped down version of the X25-M, with the same storage controller and the same 34-nm flash memory chips. Because the X25-V only has five flash chips onboard, it can only exploit half of the controller’s ten memory channels. That translates to lower performance ratings, as summarized in the chart below.

                            X25-M G2              X25-V
Controller                  Intel PC29AS21BA0     Intel PC29AS21BA0
Flash fabrication process   34nm                  34nm
Capacity                    80, 160GB             40GB
Cache                       32MB                  32MB
Max sequential reads        250MB/s               170MB/s
Max sequential writes       70MB/s (80GB)         35MB/s
Read latency                65 µs                 65 µs
Write latency               85 µs                 110 µs
Max 4KB read IOPS           35,000                25,000
Max 4KB write IOPS          6,600 (80GB)          2,500
Active power consumption    150 mW                150 mW
Idle power consumption      75 mW                 75 mW
Warranty length             Three years           Three years

The X25-V’s maximum sustained read speed is only 32% slower than the X25-M’s, but Intel’s value SSD pulls up quite a bit shorter with writes. The X25-V’s 35MB/s sustained write speed rating is half what Intel quotes for the 80GB X25-M and a little more than a third the speed of a 160GB drive. Plus, 35MB/s just sounds, well, sluggish.
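Those ratios are easy to sanity-check. The figures in this quick sketch come from Intel's ratings as quoted in this review, including the 160GB X25-M's 100MB/s write rating mentioned in our conclusions:

```python
# Checking the rating comparisons above against Intel's quoted figures.
x25v_read, x25m_read = 170, 250
print(round(1 - x25v_read / x25m_read, 2))   # 0.32 -> 32% slower reads

x25v_write, x25m80_write, x25m160_write = 35, 70, 100
print(x25v_write / x25m80_write)             # 0.5 -> half the 80GB drive's rating
print(round(x25v_write / x25m160_write, 2))  # 0.35 -> just over a third of the 160GB drive's
```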

In addition to slower sustained transfer rates, the X25-V carries lower performance ratings for random 4KB reads and writes. Again, the drive is much further behind the X25-M with writes than it is with reads. The X25-V also has a longer write latency than the X25-M, although read latencies are equal between the two.

I’m going to go out on a limb here and bet that the power consumption figures Intel has published for the X25-V aren’t entirely accurate. With fewer flash chips than the X25-M, I’d expect the V to pull less power both at idle and under load. Intel’s datasheets suggest differently, but we’ll test power consumption for ourselves in a moment.

The X25-V may not measure up to the X25-M’s performance levels, but it still inherits quite a few perks from the elder members of the G2 family. Like Intel’s other second-gen drives, the X25-V has built-in garbage-collection and wear-leveling algorithms, native support for the TRIM command built into Windows 7, and an SSD Optimizer application capable of clearing erased flash pages in Windows XP and Vista. Some combination of these measures is necessary to keep an SSD operating in top shape. The Optimizer can be set to run on a schedule, which is much more convenient than having to invoke the process manually. However, Intel notes that system use should be minimized while the Optimizer is running. The application’s support docs suggest an Optimizer pass should only take a few minutes, which should be easy enough for most users to sit through or schedule around.

Intel packs the X25-V into the same 2.5″, 9.5-mm mobile hard drive form factor being used by just about every other SSD maker. The drive’s metal casing actually measures closer to 6.5 mm thick, but a black spacer on the top of the drive brings it up to 9.5 mm. For those looking to install the X25-V in a desktop enclosure, a 3.5″ bay adapter is included in the retail box. 2.5″ drive bays have slowly started creeping into enthusiast-oriented cases, but they’re few and far enough between to make the adapter a nice addition to the overall package.

Three years of warranty coverage is the de facto standard for consumer-grade hard drives, and the X25-V doesn’t budge from that mark. The X25-M is also covered for three years, as are most of the SSDs we’ve reviewed recently.

Our testing methods

If you’re unfamiliar with The Twins, our new duo of storage test platforms, I recommend checking out this page from our recent VelociRaptor VR200M review. These systems pack potent hardware and have been furiously testing hard drives and SSDs for weeks now. Unfortunately, Intel still hasn’t resolved the performance scaling issue we found in its latest storage controller drivers for the P55 chipset. As a result, The Twins are still running the Microsoft AHCI driver built into Windows 7.

Before dipping into pages of benchmark graphs, let’s set the stage with a quick look at the players we’ve assembled to take on the X25-V. Below is a chart highlighting some of the key attributes that can affect drive performance.

                      Interface speed   Spindle speed   Cache size   Platter capacity   Total capacity
Caviar Black 2TB      3Gbps             7,200 RPM       64MB         500GB              2TB
Nova V128             3Gbps             NA              64MB         NA                 128GB
PX-128M1S             3Gbps             NA              128MB        NA                 128GB
SiliconEdge Blue      3Gbps             NA              64MB         NA                 256GB
SSDNow V+             3Gbps             NA              128MB        NA                 128GB
VelociRaptor VR150M   3Gbps             10,000 RPM      16MB         150GB              300GB
VelociRaptor VR200M   6Gbps             10,000 RPM      32MB         200GB              600GB
X25-M G2              3Gbps             NA              32MB         NA                 160GB




The X25-V will do battle with a slew of SSDs based on a range of different controllers from Indilinx, Intel, JMicron, Marvell, and Toshiba. All of those drives cost about three times what you’ll pay for the X25-V—an important consideration to keep in mind when looking at the test results on the following pages.

What about newer SSDs like Crucial’s RealSSD C300 and all that SandForce-based hotness that was on display at CES? Well, I’ve actually been busy testing a trio of SandForce-based drives. We should have a review of those published very soon. As for the C300, it’s been sitting on the shelf awaiting a firmware update to address performance issues. That firmware update has finally been released, and by the time you read this, the RealSSD should be mid-way through our test suite.

For some additional perspective, we’ve included performance data from a trio of mechanical hard drives. Western Digital’s 10k-RPM VelociRaptor VR200M is the fastest mechanical hard drive that plugs into a Serial ATA interface, so it has perhaps the best chance of challenging SSDs on the performance front. We’ve also added the original VelociRaptor VR150M to the mix, since it still has quicker access times than most desktop drives. Finally, we’ve thrown in a two-terabyte Caviar Black to represent the best performance 7,200-RPM mechanical drives have to offer.

Since the X25-V is relatively inexpensive, we couldn’t resist combining two of them in a RAID 0 array for some additional testing. Besides, it would have been cruel to pit the budget X25-V against a stack of more expensive competitors without throwing the value drive a bone. We crafted our array using the Serial ATA RAID controller built into Intel’s P55 chipset. The array was configured with a 128KB stripe size—the default in Intel’s RAID BIOS—and we used the company’s Rapid Storage Technology drivers. Unfortunately, although the Intel drivers support TRIM on SSDs acting as single drives, they can’t pass the command to solid-state drives that are members of a RAID array. We have yet to encounter a RAID controller that claims to support TRIM for SSDs running in RAID.
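The striping mechanics at work here are simple enough to sketch. The following is an illustrative model of the textbook RAID 0 layout, assuming the array's 128KB stripe size and two drives; the function name is ours, and Intel's actual implementation may differ in its details:

```python
# Rough model of RAID 0 striping: logical data is split into fixed-size
# chunks that alternate between member drives, so large sequential
# transfers hit both drives at once.
STRIPE_SIZE = 128 * 1024  # bytes; the default in Intel's RAID BIOS
NUM_DRIVES = 2

def locate(offset):
    """Map a logical byte offset to (drive index, byte offset on that drive)."""
    chunk = offset // STRIPE_SIZE      # which 128KB chunk holds this byte
    drive = chunk % NUM_DRIVES         # chunks alternate across the drives
    row = chunk // NUM_DRIVES          # full chunks already placed on each drive
    return drive, row * STRIPE_SIZE + offset % STRIPE_SIZE

# A 256KB sequential read touches both drives, which is where the speedup comes from:
print(locate(0))            # (0, 0)
print(locate(128 * 1024))   # (1, 0)
print(locate(256 * 1024))   # (0, 131072)
```

Because every other chunk lives on the other drive, losing either member destroys the whole array, which is why striping is a dicier proposition with mechanical drives than with SSDs.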

The lack of TRIM support could prove problematic for our RAID config given how much the block-rewrite penalty inherent to flash memory can curtail long-term SSD performance. This penalty and the TRIM command designed to offset it both complicate our testing somewhat, so I should explain our SSD testing methods in greater detail. Before testing the drives, each was returned to a factory-fresh state with a secure erase, which empties all the flash pages on a drive. Next, we fired up HD Tune and ran full-disk read and write speed tests. The TRIM command requires that drives have a file system in place, but since HD Tune requires an unpartitioned drive, TRIM won’t be a factor in those tests.

After HD Tune, we partitioned the drives and kicked off our usual IOMeter scripts, which are now aligned to 4KB sectors. When running on a partitioned drive, IOMeter first fills it with a single file, firmly putting SSDs into a used state in which all of their flash pages have been occupied. We deleted that file before moving onto our file copy tests, after which we restored an image to each drive for some application testing. Incidentally, creating and deleting IOMeter’s full-disk file and the associated partition didn’t affect HD Tune transfer rates or access times.

Our methods should ensure that each SSD is tested on an even, used-state playing field. However, differences in how eagerly an SSD elects to erase trimmed flash pages could affect performance in our tests and in the real world.

With few exceptions, all tests were run at least three times, and we reported the median of the scores produced. We used the following system configuration for testing:


Processor                Intel Core i5-750 2.66GHz
Motherboard              Gigabyte GA-P55A-UD7
BIOS revision            F4
Chipset                  Intel P55 Express
Chipset drivers
Memory size              4GB (2 DIMMs)
Memory type              OCZ Platinum DDR3-1333 at 1333MHz
Memory timings           7-7-7-20-1T
Audio                    Realtek ALC889A with 2.42 drivers
Graphics                 Gigabyte Radeon HD 4850 1GB with Catalyst 10.2 drivers
Hard drives              Western Digital VelociRaptor VR200M 600GB
                         Western Digital Caviar Black 2TB
                         Western Digital VelociRaptor VR150M 300GB
                         Corsair Nova V128 128GB with 1.0 firmware
                         Intel X25-M G2 160GB with 02HD firmware
                         Intel X25-V 40GB with 02HD firmware
                         Kingston SSDNow V+ 128GB with AGYA0201 firmware
                         Plextor PX-128M1S 128GB with 1.0 firmware
                         Western Digital SiliconEdge Blue 256GB with 5.12 firmware
Power supply             OCZ Z-Series 550W
Operating system         Windows 7 Ultimate x64

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune

We’ll kick things off with HD Tune, which replaces HD Tach as our synthetic benchmark of choice. Although not necessarily representative of real-world workloads, HD Tune’s targeted tests give us a glimpse of each drive’s raw capabilities. From there, we can explore which drives live up to their potential.

Intel may say that the X25-V can only read at 170MB/s, but the drive does a little better than that here. The X25-V averages out to an impressive 185MB/s, which puts it right behind the SiliconEdge Blue and the SSDNow V+.

Of course, that’s just running as a single drive. Combine two X25-Vs in RAID, and you’re looking at sustained read speeds of well over 300MB/s. The RAID config’s read rate does oscillate quite a bit, between 325 and 375MB/s, but that’s a very good place to be considering the X25-M only manages 216MB/s in this test.

The X25-V exceeds Intel’s conservative performance ratings in HD Tune’s write speed test, as well. Still, with an average write speed of only 39MB/s, Intel’s value SSD is easily the slowest of the lot.

Adding a second drive in RAID does improve write speeds by nearly a factor of two, but that’s still not enough to catch the X25-M, let alone the rest of the field. As you can see by the X25-M’s place in the standings, Intel SSDs have comparatively weak write performance, at least in synthetic tests such as this one.

Next up: some burst-rate tests that should test the cache speed of each drive.

This test should measure cache speed, at least—except for when Intel’s RAID driver secures a slice of system memory for use as cache. Obviously, the RAID results aren’t directly comparable to the others here. On its own, the X25-V’s burst speed isn’t terribly impressive; the drive only manages to hit 115MB/s, which is 11MB/s shy of the X25-M and slower than everything else.

Our HD Tune tests conclude with a look at random access times, which the app separates into 512-byte, 4KB, 64KB, and 1MB transfer sizes.

All of the SSDs have freakishly low read access times with smaller transfer sizes, and the X25-V is among the best of the bunch. However, Intel’s value SSD starts to falter as we hit the 64KB and 1MB transfer sizes. RAID helps much more with the latter than it does with the former.

The X25-V finds itself at the front of the pack through our first two random-write transfer sizes. Again, though, the X25-V starts to have problems when we step up to larger transfer sizes. The drive stumbles first with the 64KB transfer size before being completely tripped up by the 1MB test, in which its write access times are even longer than those of the 7,200-RPM mechanical drive.

Our RAID config cuts write access times by roughly half, at least for transfer sizes that take more than a tenth of a millisecond. Striping the X25-V also leads to quicker access times with smaller transfer sizes, but the differences are much smaller.

File Copy Test
Since we’ve tested theoretical transfer rates, it’s only fitting that we follow up with a look at how each drive handles a more realistic set of sequential transfers. File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. We’ve converted those completion times to MB/s to make the results easier to interpret.
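The conversion mentioned above is straightforward division; this sketch uses invented numbers rather than any result from our tests:

```python
# Converting an FC-Test-style completion time to throughput in (decimal) MB/s.
def throughput_mbps(bytes_moved, seconds):
    return bytes_moved / seconds / 1_000_000

# e.g., a hypothetical 10GB test pattern handled in 250 seconds:
print(round(throughput_mbps(10_000_000_000, 250), 1))  # 40.0 MB/s
```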

Windows 7’s intelligent caching schemes make obtaining consistent and repeatable performance results rather difficult with FC-Test. To get reliable results, we had to drop back to an older 0.3 revision of the application and create our own custom test patterns. During our initial testing, we noticed that larger test patterns tended to generate more consistent file creation, read, and copy times. That makes sense, because with 4GB of system memory, our test rig has plenty of free RAM available to be filled by Windows 7’s caching and pre-fetching mojo.

For our tests, we created custom MP3, video, and program files test patterns weighing in at roughly 10GB each. The MP3 test pattern was created from a chunk of my own archive of ultra-high-quality MP3s, while the video test pattern was built from a mix of video files ranging from 360MB to 1.4GB in size. The program files test pattern was derived from, you guessed it, the contents of our test system’s Program Files directory.

Even with these changes, we noticed obviously erroneous results pop up every so often. Additional test runs were performed to replace those scores.

We’ve already seen the X25-V struggle in HD Tune’s write-speed drag race, so it’s no surprise to see the drive at the back of the field in our file creation tests. The X25-V still manages to eclipse Intel’s 35MB/s write-speed rating, but that’s only good enough to come close to the Plextor PX-128M1S SSD, which lacks TRIM support.

Even without TRIM working its magic for our RAID config, the striped X25-Vs manage to push file creation speeds into the 70MB/s range. Performance doesn’t scale up as nicely from one drive to two as it did in HD Tune, suggesting that the lack of TRIM support is costing the X25-V array here. It would take a lot more than a few MB/s to bring the X25-V’s file creation speed into contention with even a 7,200-RPM mechanical hard drive, though.

Like the X25-M, the V stumbles a bit when asked to read our MP3 file set. However, Intel’s value SSD is much more competitive with our collections of program and video files.

The big winner in this test is the X25-V RAID config, which proves to be much faster than everything else we’ve tested. Not even the pesky MP3 file set can hold our pair of X25-Vs out of first place.

The X25-V’s relatively weak write performance hurts the drive in our copy tests, which combine read and write operations. Even the TRIM-less Plextor SSD manages higher copy speeds with all three file sets, although the X25-Vs at least get some revenge in RAID.

Application performance

We’ve long used WorldBench to test performance across a range of common desktop applications. The problem is that few of those tests are bound by storage subsystem performance—a faster hard drive isn’t going to improve your web browsing or 3ds max rendering speeds. A few of WorldBench’s component tests have shown favor to faster hard drives in the past, though, so we’ve included them here.

The good news is that whatever seems to hold back the X25-M’s performance in WorldBench’s Photoshop test isn’t a problem for the X25-V. The bad news is that the drive is still a little slow in both Photoshop and Nero. Teaming a couple of X25-Vs in RAID does improve performance some, but not by the same sorts of margins we saw with sequential transfers.

Although source-code compiling isn’t a part of the WorldBench suite, we’ve often been asked to add a compile test to our storage reviews. And so we have. For this test, we built a dump of the Firefox source code from March 23, 2010 using Visual Studio 2008. This process writes over 22,000 files totaling about 840MB, so there’s plenty of disk activity. However, we had to restrict compiling to a single thread because using multiple threads in Windows 7 proved to be unstable.

Our compiling test doesn’t tease out any differences between the various storage configs, but a compiling job that makes effective use of multiple processor cores might. If you have any suggestions for a Windows 7 compiling test that won’t be bound by our CPU and preferably uses open-source code available to the general public, please shoot me an email.

Boot and load times

Our trusty stopwatch makes a return for some hand-timed boot and load tests. When looking at the boot time results, keep in mind that our system must initialize multiple storage controllers, each of which looks for connected devices, before Windows starts to load. You’ll want to focus on the differences between boot times rather than the absolute values.

This boot test starts the moment the power button is hit and stops when the mouse cursor turns into a pointer on the Windows 7 desktop. For what it’s worth, I experimented with some boot tests that included launching multiple applications from the startup folder, but those apps wouldn’t load reliably in the same order, making precise timing difficult. We’ll take a look at this scenario from a slightly different angle in a moment.

The X25-V proves to be among the fastest boot drives we’ve ever tested. However, the additional time needed to initialize our RAID array slows the boot process by four seconds.

A faster hard drive is not going to improve frame rates in your favorite game (not if you’re running a reasonable amount of memory, anyway), but can it get you into the game quicker?

The X25-V’s Modern Warfare 2 level load times are within a second of the leaders. Interestingly, the X25-V is much more consistent in this test than the X25-M, whose load times were all over the map between 14 and 40 seconds.

RAID doesn’t help much in Modern Warfare 2, but in Crysis, striping shaves more than seven seconds off the X25-V’s load time. Even more interesting is the fact that the X25-V loads our Crysis level more than a second faster than the X25-M. However, a single X25-V only saves a couple of seconds over our high-performance 7,200-RPM drive.

Disk-intensive multitasking

TR DriveBench is a new addition to our test suite that allows us to record the individual IO requests associated with a Windows session and then play those results back on different drives. We’ve used this app to create a new set of multitasking workloads that should be representative of the sort of disk-intensive scenarios folks face on a regular basis.

Each workload is made up of two components: a disk-intensive background task and a series of foreground tasks. The background task is different for each workload, but we performed the same foreground tasks each time.

In the foreground, we started by loading up multiple pages in Firefox. Next, we opened, saved, and closed small and large documents in Word, spreadsheets in Excel, PDFs in Acrobat, and images in Photoshop. We then fired up Modern Warfare 2 and loaded two special-ops missions, playing each one for three minutes. TweetDeck, the Pidgin instant-messaging app, and AVG Anti-Virus were running throughout.

For background tasks, we used our Firefox compiling test; a file copy made up of a mix of movies, MP3s, and program files; a BitTorrent download pulling seven Linux ISOs from 800 connections at a combined 1.2MB/s; a video transcode converting a high-def 720p over-the-air recording from my home-theater PC to WMV format; and a full-disk AVG virus scan.

DriveBench produces a trace file for each workload that includes all IOs that made up the session. We can then measure performance by using DriveBench to play back each trace file. During playback, any idle time recorded in the original session is ignored—IOs are fed to the disk as fast as it can process them. This approach doesn’t give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system. We know the number of IOs in each workload, and armed with a completion time for each trace playback, we can score drives in IOs per second.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score in each multitasking workload.
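The scoring described above boils down to a couple of divisions; the values in this sketch are invented for illustration, not DriveBench output:

```python
# DriveBench-style scoring: IOs per second for each trace playback,
# then an overall mark that averages the per-workload means.
def iops(total_ios, playback_seconds):
    return total_ios / playback_seconds

# Hypothetical mean scores for three multitasking workloads:
workload_means = [4200.0, 3900.0, 4500.0]
overall = sum(workload_means) / len(workload_means)
print(overall)  # 4200.0
```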

DriveBench doesn’t produce reliable results with Microsoft’s AHCI driver, forcing us to obtain the following performance results with Intel’s RST drivers. We couldn’t get DriveBench to play nicely with our X25-V RAID config, either, which is why it’s not listed in the graphs below. DriveBench will only run on unpartitioned drives, so we tested drives after they’d completed the rest of the suite.

The X25-V sits almost exactly between the SiliconEdge Blue and the Plextor PX-128M1S overall. That’s not a bad place to be; the mechanical drives have much lower throughput here. Let’s break down DriveBench’s individual tests to see how the X25-V fared with each workload.

No surprise here: the file copy workload proves to be the most difficult for the X25-V. However, Intel’s value SSD turns in stronger performances with the other workloads, which write much less data.

Curious to see whether removing the multitasking element of these tests would have any bearing on the standings, I recorded a control trace without a background task.

Although the pack shuffles a bit in this control test, the X25-V’s relative position is unchanged.

DriveBench also lets us start recording Windows sessions from the moment the storage driver loads during the boot process. We can use this capability to take another look at boot times, again assuming our infinitely fast system. For this boot test, I configured Windows to launch TweetDeck, Pidgin, AVG, Word, Excel, Acrobat, and Photoshop on startup.

Again, the X25-V finds itself in fifth place between the Plextor SSD and the rest of our silicon-based drives. The value SSD still churns out about four times the IOps of the fastest mechanical drive.

IOMeter

Our IOMeter workloads are made up of randomized access patterns, presenting a good test case for both seek times and command queuing. The app’s ability to bombard drives with an escalating number of concurrent IO requests also does a nice job of simulating the sort of demanding multi-user environments that are common in enterprise applications.

When you consider how much more the other drives cost, the lone X25-V’s IOMeter performance really is quite impressive. The X25-V’s transaction rates are substantially higher than those of the Kingston, Plextor, and Western Digital SSDs, not to mention all the mechanical hard drives.

IOMeter transaction rates scale even higher with our RAID config, which makes quick work of the X25-M on its way to a dominating overall performance. If you judge SSDs in terms of IOps per dollar, the X25-V definitely lives up to its value branding.

Those worried that our software RAID config might chew through an inordinate number of CPU cycles need not fret. The X25-V RAID array’s IOMeter CPU utilization is quite reasonable, giving us a perfect opportunity to bust out some efficiency graphs that depict IOps per percent CPU utilization.
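The efficiency metric in those graphs is just a ratio; the figures below are illustrative rather than measured:

```python
# IOps per percent CPU utilization: higher means more transactions
# completed for each slice of CPU time spent servicing the disk.
def efficiency(iops, cpu_utilization_percent):
    return iops / cpu_utilization_percent

print(efficiency(12000, 4.0))  # 3000.0 IOps per percent CPU
```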

A single X25-V looks to be on par with the X25-M, but our RAID array clearly crunches more transactions for every CPU cycle it uses. Impressive.

Noise levels

Our acoustic and power consumption tests were conducted with the drives connected to the motherboard’s P55 storage controller. Noise levels were measured with a TES-52 Digital Sound Level meter 1″ from the side of the drives at idle and under an HD Tune seek load. Drives were run with the PCB facing up.

I’ve consolidated the solid-state drives here because they’re all completely silent. The SSD noise level depicted above is a reflection of the noise generated by the rest of the test system, which has a passively-cooled graphics card, a very quiet PSU, and a nearly silent CPU cooler.

As you can see, SSDs can lower noise levels by quite a bit compared to performance-oriented mechanical hard drives. That’s particularly true under seek loads, which is where mechanical drives are at their loudest.

Power consumption
For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. We were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive. Drives were tested while idling and under an IOMeter load consisting of 256 outstanding I/O requests using the workstation access pattern.
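The math behind the shunt-resistor method is plain Ohm’s law; the voltage drops in this worked example are hypothetical, not our measurements:

```python
# Current through the drive equals the drop across the 0.1-ohm shunt
# divided by its resistance; power is that current times the rail voltage.
SHUNT_OHMS = 0.1

def rail_power(v_drop, rail_volts):
    current = v_drop / SHUNT_OHMS  # I = V / R
    return current * rail_volts    # P = I * V

# e.g., a 4mV drop on the 5V line plus a 2mV drop on the 12V line:
total = rail_power(0.004, 5.0) + rail_power(0.002, 12.0)
print(round(total, 3))  # 0.44 W
```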

Remember that limb I went out on? Yeah, it just snapped. Despite having half the memory chips of the X25-M, the V pulls about the same amount of power at idle and only a fraction of a watt less under our IOMeter load. The X25-V’s power draw is still impressively low, but it’s not as power-efficient as the SSDNow or Nova drives, both of which pack considerably more capacity.

Capacity per dollar

After spending pages rifling through a stack of performance graphs, it might seem odd to have just a single one set aside for capacity. After all, the amount of data that can be stored on a hard drive is no less important than how fast that data can be accessed. Yet one graph is really all we need to express how these drives stack up in terms of their capacity, and more specifically, how many bytes each of your hard-earned dollars might actually buy.

We took drive prices from Newegg to establish an even playing field for all the contenders. Mail-in rebates weren’t included in our calculations. Rather than relying on manufacturer-claimed capacities, we gauged each drive’s capacity by creating an actual Windows 7 partition and recording the total number of bytes reported by the OS. Having little interest in the GB/GiB debate, I simply took that byte total, divided by a Giga (109), and then by the price. The result is capacity per dollar that, at least literally, is reflected in gigabytes.
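Here’s that calculation in code form, with a made-up byte count and price for illustration (actual formatted capacities and street prices vary):

```python
# Capacity per dollar: formatted bytes, divided by 10^9, divided by price.
def gb_per_dollar(partition_bytes, price_dollars):
    return partition_bytes / 1e9 / price_dollars

# e.g., a 40GB drive that formats to ~37.2 billion bytes, at $115:
print(round(gb_per_dollar(37_200_000_000, 115), 3))  # 0.323 GB per dollar
```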

On a pure cost-per-gigabyte basis, SSDs remain almost offensively expensive when compared with their mechanical counterparts. The SSDs cluster between around 320 and 400 megabytes per dollar, with the X25-Vs sitting close to the middle of that range.

Since we’ve limited our RAID testing to a chipset implementation that comes free with every P55-based motherboard, the only additional cost is a second SSD. The RAID config offers almost exactly double the capacity of a single drive.


Conclusions

With a street price hovering around $115, the X25-V is easily one of the most affordable solid-state drives on the market. Whether the drive lives up to its value billing depends entirely on what you, well, value.

If it’s storage capacity, of course, the answer is no. The X25-V’s cost per gigabyte might be competitive with other SSDs, but mechanical hard drives offer substantially more capacity for the same money. Plus, 40GB simply isn’t a lot of storage for a modern PC, especially if you want to run recent games. Our test system’s Windows 7 x64 installation, plus a handful of applications and only two games, consumes just about 39GB. 11GB of that is for Modern Warfare 2 alone. Fortunately, desktop users can easily string together multiple drives in RAID or rely on mechanical hard drives for secondary storage.

The X25-V’s true appeal is its performance per dollar. This drive’s performance with highly randomized access patterns is nothing short of stellar, especially when one considers how many more expensive SSDs and hard drives it leaves in the dust. The X25-V’s sequential read speeds are plenty quick, too. However, sequential writes are quite slow, as are random-write times with 1MB transfer sizes. In both of those cases, you’ll see substantially better performance from other storage solutions, including a 7,200-RPM mechanical hard drive.

When it’s running as an OS and applications drive, slower write speeds probably won’t hinder the X25-V too much. The drive’s relatively low capacity means it’s likely to be nearly full in such a configuration, which doesn’t leave much room for large, sequential writes. One generally spends more time reading from an OS and applications drive than writing to it, anyway.

So the X25-V is tempting as an OS and applications drive for folks who want to roll a hybrid storage subsystem on the cheap. An X25-V will get you blissful silence, responsive multitasking under most scenarios, and quick load times with Windows 7 and Modern Warfare 2. You’ll just have to be careful when picking and choosing what to install to the drive, since space is likely to be rather tight. Also, the X25-V’s slow Crysis level load time and its relatively poor showing in WorldBench’s Photoshop test are a little worrying.

One could compensate for the X25-V’s pokey write speeds by combining a couple of drives in a RAID 0 array. Such a configuration would cost a little more than an X25-M 80GB but offer much faster sequential reads and better performance with the sort of highly random access patterns simulated by our IOMeter workloads. I expect the X25-M 80GB would deliver similar sequential write speeds based on its 70MB/s write speed rating and Intel’s tendency to be conservative on that front (the 160GB X25-M we tested today is rated for 100MB/s writes).

I’m even more intrigued by the potential of three- and four-drive X25-V arrays. Assuming performance continues to scale up as more drives are added to an array, one could end up with jaw-dropping performance in areas where the X25-V already excels and competitive sequential write speeds, all for about the same price as single-drive configs that offer equivalent capacities.

On the notebook front, the X25-V’s pokey write speeds are harder to avoid. That limitation might not be an issue if you won’t be moving files around much locally, but it’s one to keep in mind given that Indilinx-based budget SSDs from Corsair and OCZ claim to offer double the write performance of the X25-V. I’m a firm believer in SSD upgrades for thin-and-lights and ultraportables, including budget models. However, for a notebook’s only source of mass storage, I’d prefer a drive with more balanced read and write performance.

Comments closed
    • indeego
    • 10 years ago

    One very small niggling bug I’ve noticed. Your graph lines don’t exactly match your legend, i.e. it shows a “square” for the Intel V-RAID but it’s diamond-shaped on the actual chart. Yes, a diamond is a square rotated, but then you also have diamonds in the legend AND the graph. Again, very small, but I’ve noticed it on several recent reviews.

    • Chloiber
    • 10 years ago

    Obviously, something is wrong with your X25-M G2. REALLY strange IOMeter results.

      • LeGoulu
      • 10 years ago

      Some X25-M G2 results puzzle me too (not the IOMeter ones, though), especially when compared to those new X25-V results, as well as older X25-M G2 results obtained with TR’s previous rig (which, even if they’re not directly comparable, should be in the same ballpark anyway). Namely:

      1) WorldBench – Photoshop CS2: seriously, 870s?! That’s twice as slow as any other drive! How come the X25-V would be twice as fast? The article does mention that “The good news is that whatever seems to hold back the X25-M’s performance in WorldBench’s Photoshop test isn’t a problem for the X25-V”, but does not really dig into why such a discrepancy would occur (come on, TR, you’ve accustomed us to deeper analyses!). A quick look at a previous review shows the X25-M G2 completing the same test in only 380s; Nero 7 and Winzip 10 tests provide comparable results between the old & new rigs, so that all in all the latest figure provided for the X25-M G2 seems dubious.

      2) Level load time – Modern Warfare 2: 26.8s ?! That’s out of all other SSD’s range, and would make the X25-V twice as fast. Once again, the article mentions that “Interestingly, the X25-V is much more consistent in this test than the X25-M, whose load times were all over the map between 14 and 40 seconds”, but does not provide any insight about why two drives that are so similar would behave so differently, and so counter-intuitively (the best spec’ed one being the slower one).

      Since those surprising results have been reported in all subsequent SSD reviews (last one I saw was the Momentus XT’s), but now without any kind of warnings, it would be great if the TR team could check them again, and ideally identify and report the reasons behind them. At the moment, I don’t see any reason why an X25-M G2 would perform worse than an X25-V in any test.

      If anything relevant is identified (and fixed) in the test rig that would explain the previous results, I would go so far as to recheck also the “File copy Test – Read – MP3”: even though it shows similar results between the X25-V and X25-M G2, those results are far worse than the corresponding ones using the old rig (66 MB/s vs 195 MB/s), whereas the “Read Program Files” and “Read Video” show comparable results between old & new rigs (220+ MB/s).

    • GreatGooglyMoogly
    • 10 years ago

    It would be nice with a 3-drive RAID0 followup. I no longer care about data integrity on the OS drive since I only keep the OS and programs on it, and imaging backup is really good these days. I use scheduled Windows Backup (via wbadmin.exe) every night to keep 4 days’ worth of images on my server (those images are then copied over to two more drives for redundancy).

    • Chrispy_
    • 10 years ago

    Would other SSD controllers scale in RAID0 with the same sort of behaviour?

    I contemplated buying a sandforce drive to replace my Indilinx drive, but the cost is enough to put me off for now. Using onboard soft-RAID and another indilinx drive, would two 120GB Indilinx drives be better for general workstation usage than a single 200GB Sandforce drive?

    For the price of a 200GB Sandforce drive I could almost afford to buy three old OCZ Agilities and run a 4-drive RAID0. I’d be paranoid about regular backups but I’d imagine AMAZING performance for anything needing sequential throughput.

    Also, I have a P45 chipset, not P55. Not sure intel’s storage controllers have changed much from the ICH8 days, but that’s another unknown to worry about….

    • LoneWolf15
    • 10 years ago

    This looks like it could be the perfect SSD for improving the performance and battery life of a netbook at a reasonable cost.

    • srg86
    • 10 years ago

    These results and the issue of CPU capping makes the compile test much better for the CPU benchmarks imho.

    • Bensam123
    • 10 years ago

    I’m curious as to why 15k SAS drives are nowhere in these reviews. Raptors are in here, which were originally 10k SCSI drives. You can find 15k SAS drives used, refurbed, or sometimes even new on eBay for ridiculously cheap, and they blow Raptors out of the water, especially the Cheetahs.

    I know that even brand new, SAS drives are coming down in price due to SSDs, but there is a golden market on eBay for these. It’s the only reason I mention them. Raptors are an item everyone knows about and looks for by name; 15k models aren’t something people normally shop around for, but are in surplus.

    It’s not hard to build an array of them with a PERC 5/i or a PERC 6/i for quite cheap and with quite comparable performance. The newer generation 15k SAS drives run quite quiet at idle as well.

      • d0g_p00p
      • 10 years ago

      Buying used HDDs? LOL. On a serious note, I am sure they are not included because SAS drives are nowhere near as “mainstream” as Raptor drives, which are much more common for enthusiasts. Plus there’s the price associated with SAS drives: the drives are expensive and you need a 3rd-party controller. Not something most people want to deal with when you can get standard SATA drives and use the onboard RAID options.

        • Chrispy_
        • 10 years ago

        I used to use a couple of 146GB 15K Cheetahs in RAID0 as leftovers from a workstation upgrade at the office.

        Yes, the controller card requirement is a downside and they are definitely noisy when seeking – but a single, budget Indilinx drive blew them out of the water. So much of “why SSDs are good” is the near-zero access time and amazing 4K read/write performance.

        The Cheetahs failed pretty hard at 4K read, though write was reasonable.
        Access times for a single drive were supposedly 5ms but in Crystal Diskmark and HDTune I was never seeing less than 9ms. Adaptec controller overhead maybe?

        The only things I think they are valuable for are situations where huge amounts of data need high sequential throughput. At any rate, I foresee SSDs killing off enterprise 15K disks in the near future. SANs are already speccing SSDs for throughput and IOPS reasons. If you don’t need IOPS you need capacity, which 7.2K SATA drives are perfect for.

        • Bensam123
        • 10 years ago

        I got two recertified 146Gb 15k.6 HDs for $65 a piece and a new 15k.6 146Gb for $90. I purchased a perc 6/i for $125. All three were under warranty for 4 more years.

        Raptors and SSDs are hardly mainstream either. Even Raptors are extremely pricey, thriving solely off the name rather than the performance. You can get 10k SAS drives on eBay a dime a dozen that outperform Raptors.

        It’s your choice to buy HDs that aren’t under warranty or thinking recertified drives really aren’t from Seagate. What do you think they send you when one of your new HDs breaks?

        I don’t believe people ‘wouldn’t’ deal with it, it’s just that they don’t know ‘how’ to deal with it. Looking past the corporations that would be looking at these benchmarks and that WOULD buy SAS drives, these drives are more aimed towards enthusiasts, and enthusiasts are willing to go an extra five feet for better or comparable performance at a cheaper price.

        I believe SAS drives off eBay are aimed at the heart of this article. These SSD articles aren’t aimed at Joe Schmoe, who is going to buy the cheapest thing off Newegg that is relatively good for the price.

    • oz_smurf
    • 10 years ago

    #20 It’s certainly possible to make reasonable assumptions based on the data provided, but my point was simply that including a laptop hard drive would have been more useful for anyone considering upgrading a laptop hard drive, and certainly more valuable than showing the VR150M in addition to the VR200M. I suggested a 7200rpm 2.5″ HDD simply to provide a rough upper bound for laptop HDD performance but agree that a 5400rpm 2.5″ HDD would be useful as an indication of typical laptop hard drive performance.

    • phez
    • 10 years ago

    Your review is pretty confusing. Some tests have both single-drive and RAID numbers, and others just have one X25-V and I don’t know if it’s RAID or single drive? I’m assuming it’s RAID….

      • Dissonance
      • 10 years ago

      Only the DriveBench results omit the X25-V RAID config, as noted in the text.

      “We couldn’t get DriveBench to play nicely with our the X25-V RAID config, either, which is why it’s not listed in the graphs below.”


    • wibeasley
    • 10 years ago

    “had to restrict compiling to a single thread because using multiple threads in Windows 7 proved to be unstable.” (p. 5) Do you mean the OS/IDE was crashing, or the results were inconsistent?

      • Dissonance
      • 10 years ago

      Crashing rather than result inconsistency.

    • oldDummy
    • 10 years ago

    Trim is a must, IMO.

    After two weeks of use, without trim, it degraded my MS user experience by .4 with an Intel 160Gb G2.
    Don’t know what that means other than the loss of performance was measurable at some level.

      • Farting Bob
      • 10 years ago

      You lost 0.4 epeens, that’s what it means!

      • mesyn191
      • 10 years ago

      Anand found the built in GC that these drives have is good enough even in RAID configs, but you have to leave 10-20% free space for them to work properly.

        • oldDummy
        • 10 years ago

        hmm… That hurts on a 160GB drive and takes the usable GB/$ to even worse levels.
        While it’s good to know [thanks] for RAID setups, setting a schedule to run Intel’s Toolbox optimizer would seem the way to go for single drives. (Although it might be worthwhile testing it on my 80GB G1… hm)

      • pedro
      • 10 years ago

      Slightly off-topic but what’s the status of TRIM-like functionality in OS X? I’m curious because so far as I know it hasn’t been incorporated, yet Apple still ships notebooks with SSDs on board.

      Do these units just slow down over time with no way of fixing the problem? This would seem odd, particularly in light of the ridiculous prices they charge for BTO SSD-equipped MBPs.

        • oldDummy
        • 10 years ago

        Good question, I don’t know. Different format might not be relevant.

        • OneArmedScissor
        • 10 years ago

        Most people will never notice because they don’t sit there running programs to write 200GB of nothingness followed by an entire benchmark suite every day…or ever.

        Whether the drive has been completely written to or not, it’s still going to have virtually instantaneous access time and blow any HDD out of the water.

          • pedro
          • 10 years ago

          Good to know, because I’m thinking about popping one in my MacBook later in the year.

    • LeGoulu
    • 10 years ago

    Slight error in the table on page 1: the numbers for “Max 4KB write IOPS” have been swapped between the 80GB and 160GB X25-M.

    • TechNut
    • 10 years ago

    I actually have a pair of these in RAID 0 on my machine. I picked up my drives for $98 each at the end of March. I used to have a 74GB Raptor as my main drive. The performance difference between them is VERY noticeable!

    You need to do a couple things to make the write performance even better in RAID. I don’t think Geoff did this in his review, or at least it is not mentioned.

    – When creating the volume, set the stripe size to 64KB. The smaller the stripe size, the more the data is spread across the SSDs, meaning you get better performance. Setting it too low is a problem, but 64K is OK (at least for desktop workloads)
    – Install the RST Toolbox, and not just the drivers.
    – In the RST toolbox, in “Advanced” set the “Write-back Cache” to enable, it is disabled by default. This will let your Windows system use write caching on the SSD RAID array.
    – If you can, create your Windows volume a little bit smaller than the capacity available. I gave the array an extra 5GB for scratch space, as the Intel controller prefers more room for better performance. So instead of a 74GB volume, I created a 69GB one, the same as my Raptor.

    After those tweaks, my performance is excellent. My desktop loads in a couple seconds. No need to worry about TRIM. I’ve been using this set up for 6 weeks, and the performance is snappy.

    Even used Acronis True Image 2010 Home to migrate from the Raptor to the SSD RAID without issue. It took only 8 minutes to transfer the 69GB image.

    Very worth it, especially at the $98 CDN / drive pricepoint. When set up properly, the only real limitation these drives have is space. Otherwise, the performance is just fine.

      • SomeOtherGeek
      • 10 years ago

      That is good to know as I’m seriously contemplating getting 2 X25-M 80 GB for my desktop. Thanks for sharing.

        • TechNut
        • 10 years ago

        Glad to help out.. and I’d say the 80GB model would give even better performance in RAID 🙂

        • d0g_p00p
        • 10 years ago

        Do it. I did and have never regretted that purchase. I was stunned on how fast file management and day to day operations were when I switched my boot disk over to a pair of SSD’s in RAID 0

          • SomeOtherGeek
          • 10 years ago

          Cool, just did! Ordered 2 X25-M 80GBs today and looking forward to their arrivals. This will be a first for me in several ways: RAID 0 use and SS Drives. Will be a good night playing with them!

      • ew
      • 10 years ago

      “If you can, create your Windows volume a little bit smaller than the capacity available. I gave the array an extra 5GB for scratch space.” I’m calling placebo effect here. How exactly does the drive know the last 5GB contain no data? Disk drives and RAID controllers don’t know anything about partition tables or file systems.

        • TechNut
        • 10 years ago

        The drive tracks which blocks have data in them, how often they are accessed, how many times a block has been written. The algorithms use this data to determine which blocks can be “recycled”, aka. spare area.

        Anand proved this in his article on the X25-V drives in RAID 0.

        It’s important if you want optimum performance to recover from the lack of TRIM pass-through on RAID arrays with SSDs.

        • UberGerbil
        • 10 years ago

        They don’t have to. They map logical sectors to physical blocks, and they know which ones have never been written to. Unpartitioned space won’t be written to, hence the blocks that represent it will be free.

        What they don’t know is which blocks that have been written to are now logically free: while they still contain data at the physical level, that data has been deleted in the filesystem. This is where TRIM comes in.

        Having unpartitioned space doesn’t replace TRIM, however, because empty space that the controller thinks is still in use will continue to pile up. A controller that does background garbage collection can make use of the blocks in the unpartitioned space while it consolidates partially-filled blocks, but that doesn’t help it find more space in blocks that still contain “deleted” data. Still, a nearly-full drive tends to be limited in write-performance by the need to erase blocks (which takes much longer than the writes themselves). By ensuring that there are always already-erased blocks by partitioning the drive so that it can never be completely full (thanks to the unpartitioned space), you reduce that impact. Add a utility that you run periodically which can scrub the blocks that correspond to logically deleted data, and you can mostly go without noticing the lack of TRIM (at least for typical desktop loads, ie something that doesn’t represent near-continuous random writes and deletes)

        The Anandtech article TechNut linked goes into more detail.

      • Arbie
      • 10 years ago

      TechNut – what did you mean “no need to worry about TRIM”? I thought
      that was a must-have. What operating system are you using?

      Thanks for the other tips


        • TechNut
        • 10 years ago

        I’m running Windows 7 Ultimate x64 here.

        If you take a look at the link, it shows that, for the Intel G2-based drives, the lack of TRIM support in a RAID configuration does not seriously degrade performance over time, as long as you follow some of the rules mentioned in the article, i.e. allocating more unpartitioned space, etc. As UberGerbil rightly points out, on a full drive you will see some performance loss without TRIM; however, the extra spare area helps alleviate it, plus, as Anand shows in his article, sequential writes will bring drive performance back up. In real-life usage, the lack of TRIM in a RAID 0 array has, as far as I can tell, no real effect, at least with my desktop workload. I would not go back to my Raptor drive after experiencing this performance :)

    • oz_smurf
    • 10 years ago

    Good review but I agree with Palek that a 2.5″ HDD would have been a useful comparison, particularly when it was noted in the introduction how the speed improvement for a SSD is even more significant for a laptop than a desktop. May I suggest replacing the VelociRaptor VR150M (which adds little additional information relative to the VR200M) with a 7200rpm 2.5″ HDD?

      • SomeOtherGeek
      • 10 years ago

      Well, I don’t know… You can make some pretty good assumptions with the info provided. But I thought most laptops come with 5,400 RPM drives. So, taking the data of the 7,200 drive and comparing to SSD, the 5,400s are gonna suck big time, no?

        • OneArmedScissor
        • 10 years ago

        “You can make some pretty good assumptions with the info provided.”

        No, you can’t. I’d go as far as to say that it’s horribly misleading because people will do what you just did.

        Without some realistic laptop 2.5″ HDDs included, the power comparison is just about meaningless.

        5,400 RPM and even 7,200 RPM 2.5″ drives use very little power, even less than some of the very newest SSDs in this article.

    • dragmor
    • 10 years ago

    I wish Intel would price properly in Oz. The G2’s (80gb for $400) are more expensive than the sandforce drives (100gb for $420).

      • SomeOtherGeek
      • 10 years ago

      Huh? I don’t think so, looking at Newegg – they can be had for 240 bucks. Maybe it is the 160GB you are thinking of?

        • Veerappan
        • 10 years ago


          • mattthemuppet
          • 10 years ago

          not much, they’re mostly walk in stores (in Melbourne at least)

          • SomeOtherGeek
          • 10 years ago

          Oops, comment reading fail! My bad.

    • Palek
    • 10 years ago

    There is one thing that’s sorely missing from this review: performance figures for 2.5″ HDDs. It would have been useful to see what kind of a boost us laptop users can expect when upgrading from our current slow-poke HDDs to a cheap SSD.

      • R2P2
      • 10 years ago

      I don’t know if that’s really applicable here – would you really want a 40GB drive as your laptop’s only storage?

        • Arag0n
        • 10 years ago

        No, but an 80GB drive may be good for Windows + Office + Browser. Most people don’t have games or software that need more. They use almost the full capacity of their HDD for music, video, and photos. With an external 1TB drive, you don’t need the extra room when you’re out of the house, and you only have to plug in the USB drive at home.

          • R2P2
          • 10 years ago

          That’s why I said not applicable.

        • SomeOtherGeek
        • 10 years ago

        Sure, even TR said there was enough room for “Our test system’s Windows 7 x64 installation, plus a handful of applications and only two games, consumes just about 39GB. 11GB of that is for Modern Warfare 2 alone.” Sounds like plenty of room for a laptop in most practical uses. If you got them high end lappers, then you can afford something bigger.

        FYI, my wife had a laptop for 11 years with 30 GB on it and it never went over half usage.

        So, my point is, for most people yes.

          • OneArmedScissor
          • 10 years ago

          Every laptop I have ever seen that our company has was under 20GB. The shared network drive at our office has 8,000 files in it and it’s 2GB.

          I’d love it if they all had 40GB SSDs. They’re never possibly going to fill it with Office documents and PDFs.

            • Palek
            • 10 years ago

            Yep, I had office laptops in mind when I wrote my comment. The numerous laptops used around our company have 80GB or smaller HDDs and I am fairly certain that most users don’t come close to using even half of that capacity.

            I tend to hold on to documents for a long, long time so I’m a bit over 40GB but I could easily shuffle off some of my older files onto a DVD or external drive for the promise of a fast but small SSD.

            • dmjifn
            • 10 years ago

            I agree. I just ran du on my vista laptop (my main machine) and it looks like, after removing virtual machine images, CDROM images, mp3s, digital camera downloads, and a backup of my full 16GB thumb drive, I’m only using 48GB. All the above would be just fine on an external drive.

            • Arag0n
            • 10 years ago

            That’s why I said no to 40GB (though I’m sure it’s enough for some people), but 80GB is enough for almost every normal user. I’m sure people would prefer better battery life and better performance over needing a USB drive for most of their media. In fact, most people already have a USB drive for media, because most laptops nowadays were sold with 120-160GB.

            • OneArmedScissor
            • 10 years ago

            Probably even 60GB would be enough, and some of those are roughly the same price as the 40GB Intel drives, which are becoming very close to parity with midrange laptop drives.

            It annoys me that SSDs aren’t more prevalent just because laptop manufacturers want to use high capacity as an advertising point. Nobody I’ve ever known is using it, but they sure don’t have a problem bogging the drive down with all sorts of background junk.

            • UberGerbil
            • 10 years ago

            Consumers are driving that as much as manufacturers. People are always tempted to buy more “just in case” and when they see that they can get “more GB” for less money (with a HD vs SSD) they almost always go that way. Couple that with people who are trying to use their laptop as their only machine — including storing all the media they buy on iTunes or whatever — and the truly mobile folks who don’t want to pack an external HD, and there’s a strong incentive for the OEMs to keep stuffing larger and larger HDs into laptops and advertising the heck out of it.

            I think it’s unfortunate that real 2.5″ hybrid drives never panned out at reasonable price points (and that no one seems to have restarted that quest as flash dropped considerably in price). But even the window for that is closing, as “big enough / cheap enough” SSDs loom on the horizon.

    • DrDillyBar
    • 10 years ago

    Thanks, good to see the X25-V finally reviewed.

      • grantmeaname
      • 10 years ago

      Agreed. And it’s nice that it’s a well-written review on a nice site.
