While the X25-E’s dominating single-drive performance would surely satiate most folks, its target market is likely to seek out even greater throughput and higher transaction rates by combining multiple drives in RAID. The performance potential of a RAID 0 array made up of multiple Extremes is bountiful to say the least, and with the drive’s frugal power consumption and subsequently low heat output, such a configuration should cope well in densely populated rack-mount enclosures. Naturally, we had to test this potential ourselves.
Armed with a high-end RAID card and four X25-Es, we’ve set out to see just how fast a RAID 0 array can be. This is easily the most exotic storage configuration we’ve ever tested, but can it live up to our unavoidably lofty expectations? Let’s find out.
Ramping up the RAID
The software RAID solutions built into modern south bridge chips are more than adequate for most applications—my personal desktop and closet file server included—but they’re probably not the best foundations for a four-way X25-E array. Such an impressive stack of drives calls for a RAID controller with a little more swagger, so we put in a call to Adaptec, which hooked us up with one of its RAID 5405 cards.
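Whether the striping logic runs on a south bridge driver or on the card's own silicon, the address translation at the heart of RAID 0 is simple. Here's a minimal sketch with a hypothetical stripe size; real controllers like the 5405 implement this in firmware and let you configure the stripe width.

```python
# Toy RAID 0 address mapping: logical block -> (drive, physical block).
# The four-drive count matches our array; the 64-block stripe size is
# purely illustrative.

def raid0_map(lba: int, num_drives: int, stripe_blocks: int):
    """Map a logical block address to a (drive, physical block) pair."""
    stripe = lba // stripe_blocks   # which stripe unit this block falls in
    drive = stripe % num_drives     # stripe units rotate across the drives
    row = stripe // num_drives      # full rows of stripes before this one
    offset = row * stripe_blocks + lba % stripe_blocks
    return drive, offset

# Consecutive 64-block chunks land on drives 0, 1, 2, 3, 0, 1, 2, 3...
print([raid0_map(lba, 4, 64)[0] for lba in range(0, 512, 64)])
```

Large sequential transfers touch all four drives at once, which is exactly why striping pays off most with big files.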
The 5405 features a dual-core hardware RAID chip running at 1.2GHz with 256MB of DDR2 cache memory. We’ll be focusing our attention on RAID 0 today, but the card supports a whole host of other array configurations, including RAID 1, 1E, 5, 5EE, 6, 10, 50, 60, and 36DD. Ok, so maybe not the last one.
Dubbed a “Unified Serial RAID controller,” the 5405 works with not only Serial ATA drives, but also Serial Attached SCSI hardware. The card itself doesn’t have any, er, Serial ports onboard. Instead, it has a single x4 mini-SAS connector (at the top in the picture above) and comes with an expander cable that splits into four standard Serial ATA data cables. If you want to use the 5405 with Serial Attached SCSI drives, you’ll need to add a SAS expander cable, a compatible backplane, or direct-connect SAS storage.
To ensure compatibility with cramped rack-mount enclosures, the 5405 is a low-profile card with standard and short mounting brackets included in the box. It also has a PCI Express x8 interface, making it compatible with a wide range of workstation and server motherboards, in addition to standard desktop fare. PCIe x8 slots tend to be rare on desktop boards, but fear not. We were able to get the 5405 running in our test system’s primary PCIe x16 graphics card slot without a fuss. Since it only has eight lanes of electrical connectivity, the 5405 can’t make the most of an x16 slot’s available bandwidth. However, for four ports, an aggregate 2GB/s of bi-directional bandwidth should be more than adequate—even for X25-Es.
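The bandwidth math is easy to sanity-check. Assuming first-generation PCIe signaling, which was current for cards of this era, each lane moves 2.5 gigatransfers per second with 8b/10b encoding, so eight lanes work out to about 2GB/s in each direction:

```python
# Back-of-the-envelope PCIe x8 bandwidth, assuming first-generation
# PCIe signaling (2.5 GT/s per lane, 8b/10b line encoding).
GT_PER_LANE = 2.5e9   # transfers per second per lane
ENCODING = 8 / 10     # 8b/10b: 8 data bits per 10 line bits
lanes = 8

bytes_per_sec = GT_PER_LANE * ENCODING * lanes / 8  # bits -> bytes
print(f"{bytes_per_sec / 1e9:.1f} GB/s per direction")
```

Even if each X25-E saturated its 3Gbps Serial ATA link, four of them would still fit comfortably inside that envelope.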
As one might expect, the 5405 isn’t cheap; it costs $335 and up online. Adaptec does provide three years of warranty coverage, though. Drivers are also available not only for Windows, but also for OpenServer, UnixWare, Solaris, FreeBSD, VMware, and both Red Hat and SUSE Linux.
Our testing methods
In truth, we don’t have anything even remotely comparable to line up against four X25-Es strapped to a fancy hardware RAID card. So we’ve thrown a little of everything at this beastly storage configuration instead, including hardware RAM disks from Gigabyte and ACard, a collection of SSDs including the X25-E Extreme on its own, and a handful of the fastest 3.5″ desktop drives on the market.
To keep the graphs on the following pages easier to read, we’ve color-coded the bars by manufacturer. Our X25-E RAID 0 array appears in bright blue, with Intel’s X25-series SSDs appearing in a lighter hue. Note that we also have a set of RAID 0 results for the ANS-9010 RAM disk. Those results were from a virtual two-drive config running off our test system’s ICH7R south bridge RAID controller.
All tests were run three times, and their results were averaged, using the following test system.
|Processor||Pentium 4 Extreme Edition 3.4GHz|
|System bus||800MHz (200MHz quad-pumped)|
|Motherboard||Asus P5WD2 Premium|
|North bridge||Intel 955X MCH|
|South bridge||Intel ICH7R|
|Chipset drivers||Chipset 22.214.171.1243|
|Memory size||1GB (2 DIMMs)|
|Memory type||Micron DDR2 SDRAM at 533MHz|
|CAS latency (CL)||3|
|RAS to CAS delay (tRCD)||3|
|RAS precharge (tRP)||3|
|Cycle time (tRAS)||8|
|Graphics||Radeon X700 Pro 256MB with CATALYST 5.7 drivers|
|Drives||Seagate Barracuda 7200.11 1TB
Seagate Barracuda ES.2 1TB
Samsung SpinPoint F1 1TB
Hitachi Deskstar E7K1000 1TB
Western Digital Caviar Black 1TB
Western Digital RE3 1TB
Western Digital Caviar SE16 640GB
Seagate Barracuda 7200.11 1.5TB
Samsung FlashSSD 64GB
Intel X25-M 80GB
Intel X25-E Extreme 32GB
Gigabyte i-RAM with 4GB DDR400 SDRAM
ACard ANS-9010 with 16GB DDR2-800 SDRAM|
|OS||Windows XP Professional|
|OS updates||Service Pack 2|
Thanks to NCIX for getting us the SpinPoint F1.
Our test system was powered by an OCZ PowerStream power supply unit.
We used the following versions of our test applications:
- WorldBench 5.0
- Intel IOMeter v2004.07.30
- Xbit Labs File Copy Test v1.0 beta 13
- HD Tach v3.01
- Far Cry v1.3
- DOOM 3
- Intel iPEAK Storage Performance Toolkit 3.0
The test system’s Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results. You won’t find Gigabyte’s i-RAM in the graphs below because its 4GB maximum storage capacity is too limited for WorldBench to run.
WorldBench is made up of common desktop applications that aren’t typically bound by storage subsystem performance. However, it’s still a little disheartening to see our X25-E RAID config fail to make the podium. Even a single X25-E is faster than our stack of four here.
Multimedia editing and encoding
Windows Media Encoder
VideoWave Movie Creator
Our X25-E RAID 0 array does reasonably well in WorldBench’s Premiere test, but scores are close through the rest of WorldBench’s multimedia editing and encoding tests. Note that the RAID setup is 13 seconds slower than a single X25-E in the Media Encoder test, though.
The four-drive X25-E setup takes top honors in WorldBench’s ACDSee test, but it’s only 11 seconds quicker than one of the Extremes on its own.
Multitasking and office applications
Mozilla and Windows Media Encoder
WorldBench’s office and multitasking tests appear unable to exploit faster storage configurations.
The WinZip and Nero tests are more storage-bound than any others in the WorldBench suite, and again, there’s little difference in performance between a single X25-E Extreme and four of them in a RAID 0 array.
To test system boot and game level load times, we busted out our trusty stopwatch.
Ignore this one, folks. Our RAID setup may take more than a minute longer to boot than the rest, but it’s also the only configuration that has to initialize the Adaptec RAID card, which takes its sweet time booting up.
Of course, we can’t blame the Adaptec card’s initialization time for the X25-E RAID config’s uninspired level load times. The RAID 0 array is at least within striking distance of a single X25-E in Doom 3, but it’s a few seconds back in Far Cry.
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition. Scores are presented in MB/s.
To make things easier to read, we’ve separated our FC-Test results into individual graphs for each test pattern. We’ll tackle file creation performance first.
Now that’s more like it. Our X25-E RAID array roars to victory in all five file creation test patterns. The striped array’s performance is most dominant with the Install, ISO, and MP3 test patterns, which have smaller numbers of larger files than the Programs and Windows test patterns. We see the most impressive performance scaling with the MP3 test pattern, which runs more than 3.5 times faster with four X25-Es than it does with a single drive.
Although it continues to lead the field by a wide margin with most test patterns, our X25-E RAID 0 array’s read performance isn’t nearly as impressive as its write speeds. In fact, with the Windows test pattern, the X25-E array is actually slower than a single X25-E. Even when it’s out ahead of the rest of the pack, the Extreme SSD array is never more than 1.6 times faster than a single-drive config.
Next, File Copy Test combines read and write tasks in some, er, copy tests.
The Windows test pattern again proves challenging for our X25-E array, which would have otherwise swept FC-Test’s copy tests. Still, four X25-Es are consistently faster than just one, and occasionally by significant margins. We find the best performance scaling with the ISO test pattern, which is made up of only a few very large files and runs a little better than 2.7 times faster on our RAID config.
The results of FC-Test’s partition copy tests mirror those of the straight copy tests. Our X25-E RAID config is certainly dominant, but it can’t shut out the ANS-9010 RAM disk.
We’ve developed a series of disk-intensive multitasking tests to highlight the impact of seek times and command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.
Our iPEAK workloads were recorded using a 40GB partition, so they’re a little big for the 4GB i-RAM, 16GB ANS-9010, and even the 32GB X25-E. The app had no problems running, but it warned us that I/O requests that referenced areas beyond the drives’ respective capacities would be wrapped around to the beginning of each drive. Since there should be no performance difference between the beginning and end of an SSD, the results should be valid.
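The wraparound behavior iPEAK warned us about amounts to simple modulo arithmetic; a quick sketch, with the capacity figure as an illustrative parameter:

```python
# How requests past the end of a smaller drive wrap back to the start,
# per iPEAK's warning. Capacity is expressed in blocks here.
def wrap_lba(lba: int, capacity_blocks: int) -> int:
    """Fold an out-of-range logical block address back into the drive."""
    return lba % capacity_blocks

# A request aimed 10 blocks past the end of a 1000-block drive
# lands 10 blocks from the beginning instead.
print(wrap_lba(1010, 1000))
```

On a mechanical drive, remapping requests like this would distort seek distances; on an SSD, where every address costs the same to reach, it shouldn't skew the results.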
With just one exception, our four-drive X25-E Extreme array is the class of our iPEAK multitasking tests. It’s not miles ahead of the competition, though. If you average the mean service time across all nine test patterns, the X25-E RAID config works out to 0.14 milliseconds. The ANS-9010 RAID setup averages out to 0.18 ms, while a single X25-E sits at 0.35 milliseconds.
IOMeter presents a good test case for both seek times and command queuing.
The results of our IOMeter tests are as interesting as they are varied. Let’s start with the obvious, which is the fact that with the exception of the web server test pattern, the X25-E array isn’t the fastest config on the block. That said, our RAID 0 array does offer a significant performance boost over a single X25-E, particularly as the load ramps up. Under the heaviest loads, the RAID config offers transaction rates close to four times higher than a single X25-E with the file server, workstation, and database test patterns. Our striped array only offers about double the performance of a single drive with the web server test pattern, which is made up exclusively of read requests.
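The load-dependent scaling makes intuitive sense: with only one outstanding request, a striped array is no faster than a single drive, but deep queues let independent requests spread across all four drives and complete in parallel. A toy model illustrates the effect, assuming uniformly distributed random requests and identical per-drive service rates (a simplification of real IOMeter behavior):

```python
# Toy model of RAID 0 random-I/O scaling vs. queue depth: the array is
# done when its busiest drive is done, so speedup over one drive is
# queue_depth / max(per-drive request count), averaged over trials.
import random

def expected_speedup(num_drives: int, queue_depth: int, trials: int = 2000) -> float:
    random.seed(0)  # deterministic for repeatability
    total = 0.0
    for _ in range(trials):
        counts = [0] * num_drives
        for _ in range(queue_depth):
            counts[random.randrange(num_drives)] += 1
        total += queue_depth / max(counts)
    return total / trials

for qd in (1, 4, 32, 256):
    print(qd, round(expected_speedup(4, qd), 2))
```

At a queue depth of one, the model predicts no benefit at all; as the load ramps toward hundreds of outstanding requests, it approaches the 4X scaling we measured under our heaviest loads.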
Our IOMeter CPU utilization results suggest that the X25-E RAID array’s processor utilization is lower than one might expect in light of its transaction rates. Given the huge gaps in transaction rates, these results are a little difficult to interpret on their own, so we’ve whipped up another set of graphs that illustrates the transaction rate per CPU utilization percentage. Since our mechanical hard drives don’t deliver anywhere near SSD levels of performance here, we’ve left them out of the equation, with the exception of the VelociRaptor.
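The derived metric behind those extra graphs is just the ratio of the two measurements. A sketch, with illustrative numbers rather than our measured results:

```python
# Transaction rate per CPU utilization percentage point: how much I/O
# throughput each percent of CPU buys you. The figures below are made
# up for illustration, not taken from our benchmarks.
def iops_per_cpu_pct(iops: float, cpu_pct: float) -> float:
    """I/O transactions per second per percentage point of CPU load."""
    return iops / cpu_pct

# A config doing 20,000 IOps at 5% CPU is twice as efficient as one
# doing 8,000 IOps at 4% CPU, despite the smaller utilization gap.
print(iops_per_cpu_pct(20_000, 5.0))
print(iops_per_cpu_pct(8_000, 4.0))
```

Normalizing this way lets configs with wildly different absolute transaction rates be compared on efficiency alone.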
No doubt thanks to its use of a hardware RAID controller, our X25-E Extreme array offers much better performance per CPU cycle than the competition. Note that the ANS-9010 RAM disk RAID array, which uses the ICH7R south bridge chip’s software RAID solution, offers the lowest transaction rate per CPU cycle.
We tested HD Tach with the benchmark’s full variable zone size setting.
Four X25-E Extreme SSDs in RAID 0 deliver by far the highest sustained throughput we’ve ever measured in HD Tach, and it doesn’t matter whether you’re reading or writing. We don’t see anything close to a 4X increase in performance over a single-drive config, though.
The Adaptec 5405’s PCI Express interface has plenty of bandwidth at its disposal, as evidenced by the X25-E array’s monstrous 755MB/s burst speed. That’s all coming from the RAID controller’s onboard 256MB cache, so we’re really not hitting the SSDs here.
While still in the realm of near-instantaneous, the X25-E array’s random access time is just a sliver higher than that of a single-drive config. This result was consistent across all three of our test runs, as well.
HD Tach’s margin of error in this CPU utilization test is +/- 2%, which puts the X25-E array in a dead heat with the rest of the field.
Although the recent wave of solid-state drives that’s flooded the market primarily targets mobile applications, SSDs aren’t quite ready to replace their mechanical counterparts for most users. The price is simply too high at the moment, not only in terms of the total cost of a drive, but the cost per gigabyte, as well. However, we don’t have to wait for prices to fall for SSDs to make sense in the enterprise world. For those less interested in storage capacity and more concerned with throughput and the ability to handle a barrage of concurrent I/O requests comfortably, solid-state drives like Intel’s X25-E Extreme offer a compelling performance-per-dollar proposition.
Interestingly, the perks that make SSDs attractive for notebooks also pay dividends for enterprise RAID configurations. The X25-E’s 2.5″ form factor is easy to pack into low-profile rack-mount enclosures, and thanks to the drive’s very low power consumption, there’s little need to worry about excessive heat. Because solid-state drives lack moving parts, the environmental vibration that can become problematic in a tightly-packed array isn’t an issue, either.
As we’ve seen today, a collection of X25-Es in RAID 0 can be very fast indeed—under the right circumstances. You need the right sort of workload to exploit the enormous performance potential of four of the fastest flash drives on the market. With our Adaptec 5405, our array offered the best performance scaling with sustained transfers, in particular with real-world writes. As one might expect from solid-state storage, the array also made short work of our multitasking and multi-user loads, delivering the best performance under our most demanding loads.
Naturally, a four-drive X25-E Extreme array is going to be overkill for most—it is a $2000 storage solution, after all. But if you have the right sort of workload, there’s staggering performance to be had.