Despite its popularity as a checkbox feature, RAID has remained a tricky proposition for those with only two drives. A two-drive RAID 0 array may offer better performance, but that comes at the cost of reliability. A two-drive RAID 1 array gives the peace of mind of a mirrored backup drive, but its performance benefits aren’t quite as compelling as RAID 0’s. RAID levels 10 and 0+1 combine the best of both worlds, but require at least four drives. If only there were a way to balance the benefits of RAID 0 and RAID 1 with only two drives.
Enter the Matrix. Err, Matrix RAID.
Intel’s Matrix RAID technology allows users to combine RAID 0 and 1 arrays with only two drives, promising mirrored redundancy for important data and striped performance for speedy access. That sounds almost too good to be true, doesn’t it? Read on to see if Matrix RAID really delivers the best of both worlds.
What is Matrix RAID?
Before delving into Matrix RAID, we should quickly go over its component parts: RAID 0 and RAID 1. In a two-drive RAID 1 array, data on one drive is replicated to the other in real time. Drives are mirror images of each other, so if one drive fails, no data is lost. RAID 1 arrays can also offer performance benefits since data can be read from both drives at the same time. However, because of the data mirroring, RAID 1 arrays offer only half of the total capacity of the two drives involved.
With a two-drive RAID 0 array, data is broken down into blocks that are striped across the drives. This striping allows RAID 0 to offer superior I/O performance because both read and write tasks are split between the disks. There’s no mirroring of data, so the total storage capacity of the array is equal to the capacity of both drives combined. RAID 0’s superior performance and capacity come at a price, though. If one drive in a RAID 0 array fails, all data stored on that array is lost. Since the failure of either drive will cause the array to fail, a RAID 0 array’s Mean Time Between Failure (MTBF) is half that of a single drive.
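The two halves of that explanation can be sketched in a few lines of code. This is purely an illustration, not Intel's driver logic: the 128KB stripe size is Intel's recommended desktop value from later in this review, while the MTBF figure is an assumed number just to show the arithmetic.

```python
# Illustrative sketch: how two-drive RAID 0 striping maps data to disks,
# and why the array's MTBF works out to half a single drive's.

STRIPE_SIZE = 128 * 1024  # Intel's recommended 128KB stripe for desktops

def stripe_target(byte_offset: int) -> int:
    """Return which of the two drives (0 or 1) holds this byte in RAID 0."""
    stripe_index = byte_offset // STRIPE_SIZE
    return stripe_index % 2  # stripes alternate between the two drives

# Consecutive 128KB stripes alternate drives, so large sequential
# transfers are split roughly evenly across both disks.
assert stripe_target(0) == 0
assert stripe_target(STRIPE_SIZE) == 1
assert stripe_target(2 * STRIPE_SIZE) == 0

# Reliability: if each drive fails independently at rate 1/MTBF, the
# array fails when *either* drive fails, so the failure rates add.
single_drive_mtbf_hours = 600_000  # assumed figure, for illustration only
array_mtbf = 1 / (1 / single_drive_mtbf_hours + 1 / single_drive_mtbf_hours)
assert abs(array_mtbf - single_drive_mtbf_hours / 2) < 1e-6
```

The same either-drive-kills-the-array logic is why a RAID 1 mirror gains reliability instead: a mirror only fails when *both* drives fail.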
To balance performance with redundancy, Matrix RAID allows users to split a pair of disks into two volumes, one for RAID 0 and one for RAID 1. The Matrix RAID scheme mirrors data on the RAID 1 volume of the disk while striping data on the RAID 0 volume. Since Matrix RAID volumes still span two drives, they can offer performance and redundancy benefits similar to traditional RAID arrays.
Matrix RAID’s marriage of RAID 0 and 1 may sound a little like RAID 0+1 (or RAID 10), but there are a couple of key differences to note. First, a RAID 0+1 array can sustain a single drive failure without any data loss because its striped data is also mirrored. If a single drive fails in a Matrix RAID array, only data on the RAID 1 volume is preserved; any data on the RAID 0 volume is lost. RAID 0+1’s added redundancy does require extra drives, though. You’ll need at least four disks to create a RAID 0+1 array.
The vulnerability of Matrix RAID’s RAID 0 volume requires care in distributing data to each volume. In a system with Matrix RAID, important data should be stored on the RAID 1 volume, leaving the RAID 0 volume free for data that needs to be faster rather than redundant. For instance, Intel suggests putting the operating system, business applications, and critical data on the RAID 1 portion of the array, while storing games, swap files, and digital media scratch space on the RAID 0 portion.
To Intel’s credit, using Matrix RAID is a breeze. RAID volumes are easy to create and configure, and they appear as logical drives in Windows that can be partitioned and formatted as the user sees fit. There is a catch, though. Matrix RAID is currently only available in Intel’s ICH6R south bridge. Non-Intel chipsets, including any chipset for the Athlon 64, can’t do it.
Today we’ll be looking at Matrix RAID’s performance versus traditional RAID, all while using only two drives. Since Matrix RAID has no direct peers, we’ll only be looking at the performance of RAID 0, RAID 1, Matrix RAID 0, and Matrix RAID 1 with Intel’s ICH6R south bridge. Here’s how each test configuration was set up:
- RAID 1: Two Seagate Barracuda 7200.7 NCQ 160GB drives were configured in a RAID 1 array. With data mirrored between the two drives, the array offered 160GB of total storage.
- RAID 0: The same two drives were configured in a RAID 0 array with Intel’s recommended 128KB stripe size for optimal desktop and workstation performance. Due to the nature of RAID 0, the array offered 320GB of total available storage.
- Matrix RAID 1: The drives were configured in a Matrix RAID array with a RAID 1 volume on the first half of each disk and a RAID 0 volume on the second half. This arrangement gave us an 80GB RAID 1 volume and a 160GB RAID 0 volume. Again, we used the recommended 128KB stripe size for RAID 0.
- Matrix RAID 0: To ensure that our Matrix RAID results weren’t hampered by the RAID 0 volume sitting at the physical end, rather than the beginning, of the disk, we created a second Matrix RAID configuration with a RAID 0 volume on the first half of each disk and a RAID 1 volume on the second half. This gave us another 160GB RAID 0 volume and 80GB RAID 1 volume, but with their positions on the disk reversed. As before, a 128KB stripe size was used for the RAID 0 volume.
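The capacities quoted above follow directly from how each scheme uses the two 160GB disks. The sketch below is our own arithmetic (the function names and the 50/50 volume split are ours, chosen to match the configurations in this review):

```python
# Capacity arithmetic for two-drive RAID configurations (illustrative).

DRIVE_GB = 160  # per-drive capacity of the Barracuda 7200.7 NCQ drives

def raid1_capacity(drives_gb):
    """Mirrored: only one copy of the data is usable."""
    return min(drives_gb)

def raid0_capacity(drives_gb):
    """Striped: all space on both drives is usable."""
    return sum(drives_gb)

def matrix_capacities(drive_gb, raid1_fraction=0.5):
    """Split each disk into a RAID 1 slice and a RAID 0 slice."""
    r1_slice = drive_gb * raid1_fraction
    r0_slice = drive_gb - r1_slice
    return (raid1_capacity([r1_slice, r1_slice]),
            raid0_capacity([r0_slice, r0_slice]))

assert raid1_capacity([DRIVE_GB, DRIVE_GB]) == 160  # RAID 1 array
assert raid0_capacity([DRIVE_GB, DRIVE_GB]) == 320  # RAID 0 array
assert matrix_capacities(DRIVE_GB) == (80, 160)     # Matrix RAID volumes
```

Note that the Matrix volumes together expose 240GB of the 320GB of raw space, sitting between pure RAID 1 (160GB) and pure RAID 0 (320GB).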
To clarify, in the results you’ll see on the following pages, anything labeled “Matrix RAID 1” is the result of testing the RAID 1 volume at the start of the first Matrix RAID array. Likewise, anything labeled “Matrix RAID 0” is the result of testing the RAID 0 volume at the start of the second Matrix RAID array. In no case will you see results for a RAID 0 or RAID 1 volume located in the second half of the array, even though this will be an unavoidable placement for one of the two volumes in Matrix RAID.
When comparing the performance of the traditional and Matrix RAID configurations, keep in mind that our traditional RAID 0 array offers 320GB of storage that extends from beginning to end of both physical drives. Similarly, our RAID 1 array packs 160GB into the same space. In Matrix RAID, the physical space on each disk is split between the RAID 0 volume and the RAID 1 volume.
Since it wouldn’t be a party without a little single-drive action, we also tested our system with a RAID-less single hard drive.
In all cases, the system’s operating system was located on a single hard drive, separate from the drive or array being tested.
Our testing methods
All tests were run three times, and their results were averaged, using the following test systems.
| Component | Details |
| --- | --- |
| Processor | Pentium 4 3.4GHz Extreme Edition |
| System bus | 800MHz (200MHz quad-pumped) |
| Motherboard | DFI LANParty 925X-T2 |
| North bridge | Intel 925X MCH |
| South bridge | Intel ICH6R |
| Chipset drivers | Intel 184.108.40.2062 |
| Memory size | 1GB (2 DIMMs) |
| Memory type | Micron DDR2 SDRAM at 533MHz |
| CAS latency (CL) | 3 |
| RAS to CAS delay (tRCD) | 3 |
| RAS precharge (tRP) | 3 |
| Cycle time (tRAS) | 8 |
| Graphics | Radeon X700 Pro 256MB with CATALYST 5.2 drivers |
| Hard drives | Seagate Barracuda 7200.7 NCQ 160GB SATA, Maxtor DiamondMax Plus D740X 40GB ATA/133 |
| OS | Windows XP Professional |
| OS updates | Service Pack 2, DirectX 9.0C |
All of our test systems were powered by OCZ PowerStream power supply units. The PowerStream was one of our Editor’s Choice winners in our latest PSU round-up.
We used the following versions of our test applications:
- WorldBench 5.0
- Intel IOMeter v2004.07.30
- Xbit Labs File Copy Test v1.0 beta 13
- TCD Labs HD Tach v3.01
- Far Cry v1.3
- DOOM 3
The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests. All of the 3D gaming tests used the high detail image quality settings.
All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
WorldBench overall performance
WorldBench uses scripting to step through a series of tasks in common Windows applications and produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results alongside the results from some of our own application tests.
As far as WorldBench is concerned, Matrix RAID volumes are every bit as fast as their more traditional counterparts. The overall scores are pretty close, but let’s drill down into WorldBench’s individual results to see if we can find a little more action.
Multimedia editing and encoding
Windows Media Encoder
VideoWave Movie Creator
Performance is reasonably consistent through WorldBench’s audio and video encoding and editing tests. Matrix RAID volumes are often faster, but always within a couple of seconds of equivalent RAID arrays.
ACDSee clearly shows a performance advantage from striping. Again, Matrix RAID volumes are within a couple of seconds of their Matrix-less counterparts.
Multitasking and office applications
Mozilla and Windows Media Encoder
RAID doesn’t do much for WorldBench’s office and multi-tasking tests, where all configurations are pretty close.
WinZip and Nero both show improved performance with RAID 0, although differences are much more pronounced in Nero. There, Matrix RAID offers a notable performance gain over traditional RAID configurations. It’s also interesting to note that both Matrix RAID 1 and RAID 1 are slower than our single-drive configuration in Nero.
Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch.
Matrix RAID doesn’t appear to add much to standard RAID boot times. Our single-drive configuration boots the quickest, most likely because it doesn’t need to initialize an array before loading Windows.
Level load times don’t show much improvement with RAID, but at least the Matrix RAID volumes have no trouble keeping up with traditional arrays.
File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition. Scores are presented in MB/sec.
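The MB/sec scores File Copy Test reports are simple throughput figures: bytes moved divided by elapsed time. The sketch below shows the general idea; the file name, 16MB size, and function are our own illustration, not FC-Test's actual test patterns or code.

```python
import os
import shutil
import tempfile
import time

def copy_throughput_mb_s(src: str, dst: str) -> float:
    """Time a file copy and reduce it to MB/sec, FC-Test style."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Example: create a 16MB scratch file and measure how fast it copies
# within the same directory (FC-Test also tests cross-partition copies).
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "pattern.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(16 * 1024 * 1024))
    rate = copy_throughput_mb_s(src, os.path.join(tmp, "copy.bin"))
    assert rate > 0
```

In practice a benchmark like FC-Test also has to defeat the OS disk cache (note the fresh, larger-than-cache test patterns it uses); a naive timing like this one will overstate small-file throughput.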
Although striping clearly speeds up file creation, neither Matrix RAID 0 nor 1 loses ground to equivalent traditional RAID arrays.
Results are a little closer when we look at performance in FC-Test’s read test, but Matrix RAID continues to show no signs of weakness. Depending on the test pattern, Matrix RAID volumes are either a little faster or a little slower than traditional RAID arrays, but never by more than a couple of percent.
In the copy and partition copy tests, performance between Matrix RAID and traditional arrays continues to be consistent. As in the file creation tests, our single-drive configuration is clearly faster than both Matrix RAID 1 and RAID 1.
IOMeter – Transaction rate
IOMeter presents a best-case scenario for command queuing, and based on our results, may also be sensitive to a RAID volume’s size and position on the disk. None of our RAID arrays, Matrix or otherwise, were able to complete an IOMeter run with 128 or 256 outstanding I/Os.
Matrix RAID volumes clearly offer higher IOMeter transaction rates than their traditional counterparts, but I suspect that has more to do with the fact that our Matrix RAID volumes are confined to the first half of our physical disks. Since hard drives fill from the faster outer edge of the platter inward, it’s quicker for the drive to access data at the beginning of the disk than at the end.
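The geometry behind that first-half advantage is easy to sketch: the platters spin at a constant RPM, so linear velocity under the head, and with it sequential throughput, scales with track radius, and logical block 0 sits on the outermost track. The radii below are illustrative numbers for a 3.5" drive, not Seagate specifications.

```python
import math

# Back-of-the-envelope: why the first half of a hard drive is faster.
RPM = 7200  # spindle speed of the Barracuda 7200.7 drives in this review
OUTER_RADIUS_MM = 46.0  # illustrative track radii for a 3.5" platter
INNER_RADIUS_MM = 20.0

def track_speed_mm_s(radius_mm: float) -> float:
    """Linear velocity of the media under the head at a given radius."""
    return 2 * math.pi * radius_mm * (RPM / 60)

outer = track_speed_mm_s(OUTER_RADIUS_MM)
inner = track_speed_mm_s(INNER_RADIUS_MM)

# At constant RPM, throughput scales linearly with radius, so the
# outermost tracks move data more than twice as fast as the innermost.
assert outer > inner
assert abs(outer / inner - OUTER_RADIUS_MM / INNER_RADIUS_MM) < 1e-9
```

Zoned bit recording complicates the picture (drives pack more sectors per track on outer zones rather than varying bit density smoothly), but the first-half-is-faster conclusion holds either way.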
In addition to favoring Matrix RAID volumes at the start of the disk, IOMeter delivers on both mirroring and striping’s performance potential as the number of concurrent I/O requests increases. Note that with the read-dominated Web Server test pattern, mirroring actually achieves higher transaction rates than striping, since reads can be serviced by either drive in the mirror.
IOMeter – Response time
Matrix RAID volumes continue to offer better IOMeter performance when we look at response times, but again, this may be due to the fact that they’re working with the fastest half of the disk.
IOMeter – CPU utilization
IOMeter CPU utilization results don’t show Matrix RAID stumbling. CPU utilization with both Matrix and non-Matrix RAID is a little higher than with a single drive, but generally not by more than half a percent.
HD Tach
We tested HD Tach with the benchmark’s full variable zone size setting.
Our Matrix RAID 1 volume scores much better in HD Tach’s average read and write speed tests than its traditional RAID 1 counterpart, but results are mixed with RAID 0.
Looking at burst times, RAID 0 offers a clear performance advantage. Matrix RAID volumes are nipping at the heels of their standard RAID counterparts, too.
Matrix RAID rules HD Tach’s random access time test, most likely because those RAID volumes are working with the fastest first half of the disk. RAID 1’s access time is superior to RAID 0’s, so it’s no surprise that the Matrix RAID 1 volume comes out ahead overall.
Matrix RAID appears to have slightly higher CPU utilization than traditional RAID, but considering HD Tach’s +/- 2% margin for error in this test, the results are a little too close to call.
Based on the results of our testing, Matrix RAID volumes appear to be every bit as fast as their traditional RAID counterparts. In some cases, they’re even faster, although that may be an artifact of the fact that our Matrix RAID volumes were confined to the beginning of the disk. Either way, Matrix RAID holds up its end of the bargain and delivers the best of both RAID 0 and RAID 1 with only two drives. That’s pretty sweet.
The only knock I have against Matrix RAID is the fact that, at least for now, you can only get it with Intel’s ICH6R south bridge. Such a compelling technology deserves more widespread availability, and I can only hope that ATI, NVIDIA, SiS, and VIA are developing something similar for their core logic chipsets.
As it stands now, Matrix RAID may be the most compelling storage technology for PC enthusiasts. Intel has proven that two-drive RAID doesn’t have to be a compromise between performance and redundancy anymore. With Matrix RAID, you get both.