Of course, SCSI drives have always had superior performance and reliability when compared to IDE drives, but even today SCSI remains an expensive and noisy alternative that just can't touch the price per megabyte you'll get with today's IDE drives. Serial ATA is coming, too, but it's been coming for a while. While new motherboards tout integrated Serial ATA, the drives just aren't on the market yet, and there's no telling when they'll arrive in full force. This and that manufacturer had drives on display at Comdex, but the shelves are still bare.
If you want a lot of fast, reliable storage without paying a huge price premium, IDE RAID fits the bill. As an added bonus, it's readily available now in a number of different flavors from quite a few different manufacturers. We've rounded up four IDE RAID flavors from 3ware, Adaptec, HighPoint, and Promise and run them through the wringer. Actually, we ran them through a bunch of wringers. The cards have different capabilities, features, and performance, but which one is right for you? Read on and find out.
Depending on who you ask, RAID stands for either Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks. I suppose which expanded acronym you choose will depend on how much you paid for your hard drives, since you can build RAID arrays with cheap 5,400 RPM IDE drives or uber-expensive 15,000 RPM SCSI beasts. I'll sit on the fence for this one and simply refer to RAID as a Redundant Array of Independent/Inexpensive Disks. Note that the only reason I'm putting independent before inexpensive is because 'd' comes before 'e' in the alphabet. I'm just being fair.
RAID arrays use multiple physical disk drives to achieve one of three goals: improved performance, improved redundancy, or both. Multiple physical drives can yield better performance since data can be read from more than one drive at once. RAID arrays achieve redundancy by setting aside some of the storage capacity of auxiliary drives for mirror or parity data that can be used to rebuild the array in the event of a drive failure.
In this comparison, we'll be looking at several different RAID configurations, which I've outlined below.
RAID 1 - Worried that your hard drive may crash and take with it everything you've diligently forgotten to back up? RAID 1 is for you. RAID 1, otherwise known as mirroring, duplicates the contents of your primary drive onto an auxiliary drive to guard against physical drive failure. If one or the other crashes, you have a perfect backup. However, RAID 1 won't protect you against viruses or other threats to the integrity of data on your hard drive; RAID 1 is useful only for extending the Mean Time Between Failure (MTBF) of your storage setup. In the event of a drive crash, just replace the damaged disk with a new drive, and the RAID card will rebuild the mirrored array.
Despite the fact that RAID 1 uses two physical hard drives, the operating system only sees one logical drive. Since data on one drive is duplicated on the other, a two-disk RAID 1 array's storage capacity will only be equal to that of a single disk.
RAID 0 - RAID 0 eschews the redundancy of mirroring in favor of striping, which spreads one logical drive over multiple physical hard drives. Blocks written to and read from the array are spread over multiple disks, which improves performance but greatly reduces reliability. A striped logical drive is vulnerable to the failure of any one of its physical hard drives. If, for example, you have a two-drive RAID 0 array, your MTBF is cut in half, since you have twice the exposure to drive failure. A failure of any drive in a RAID 0 array will kill the array completely and take with it all the data on the logical drive.
Though striping kills redundancy, it at least maximizes the storage capacity of the physical drives in the array. Since a RAID 0 array's logical drive is spread over each physical drive, the total storage capacity of the array is the sum of the physical drives' storage capacities.
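For the curious, striping's block layout can be sketched in a few lines of Python. This is a simplified round-robin model — real controllers stripe in configurable chunk sizes, and the function name here is my own:

```python
# A toy model of RAID 0 striping: logical blocks are dealt out to the
# physical drives round-robin, so consecutive blocks land on different
# disks and sequential transfers can hit every drive at once.

def stripe_location(logical_block, num_disks):
    """Map a logical block number to a (disk, block-on-disk) pair."""
    return (logical_block % num_disks, logical_block // num_disks)

# With four drives, the first eight logical blocks are laid out as:
layout = [stripe_location(b, 4) for b in range(8)]
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```

Note that every drive holds a unique slice of the data — which is exactly why losing any single drive destroys the whole logical volume.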
Just for kicks, we'll be looking at two-drive and four-drive RAID 0 arrays in this comparison to give you an idea how RAID 0 performance scales with additional drives.
RAID 10/0+1 - RAID 10 and 0+1 are similar in that they attempt to achieve better performance and redundancy by combining RAID 0 and RAID 1 arrays. However, RAID 10 and 0+1 go about combining mirroring and striping a little differently. RAID 10 is a striped set of mirrored arrays, while RAID 0+1 is a mirrored pair of striped arrays. Striped mirrors versus mirrored stripes. Got it?
Because RAID 10 and 0+1 both combine mirroring and striping, they share the same storage capacity characteristics. With either a RAID 10 or 0+1 array, the total storage capacity of the array will be equal to half of the sum of the storage capacity of all the drives. Both keep a mirrored duplicate of their respective arrays' logical drives, so only half of the drives' total storage is available.
RAID 10 and 0+1 arrays share the same storage capacity, and both will survive a single drive failure, but after that first failure, RAID 0+1 is on shakier ground. If a single drive in a RAID 0+1 array fails, you lose your mirror and are left with a striped RAID 0 array. That striped array has a lower MTBF than a single drive, so you'd better get a replacement in a hurry. A RAID 10 array, on the other hand, can withstand multiple drive failures under the right circumstances: it survives a second failure as long as that failure occurs in a different mirror group than the first. If two drives fail within the same mirror group, however, all you're left with is half of a striped array, which is useless.
Technically, two drives could fail in a RAID 0+1 array without killing the array if both failures were limited to the same stripe group. However, since a RAID 0+1 array reverts to a single RAID 0 stripe after the first failure, the remaining drive in the broken stripe group is sitting idle anyway; losing it costs you nothing you hadn't already lost.
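If the mirror-group logic is hard to keep straight, here's a toy four-drive model in Python that checks which failure combinations each layout survives. The disk numbering, the grouping into pairs, and the function names are my own assumptions, not anything the cards expose:

```python
# Four disks, numbered 0-3, grouped into pairs (0, 1) and (2, 3).
# In RAID 10 each pair is a mirror; in RAID 0+1 each pair is a stripe.

def raid10_survives(failed):
    """RAID 10 (stripe of mirrors) lives as long as no mirror pair
    has lost both of its disks."""
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    """RAID 0+1 (mirror of stripes) lives as long as at least one
    whole stripe set is fully intact."""
    return not (failed & {0, 1}) or not (failed & {2, 3})

# Lose one disk from each pair: RAID 10 survives, RAID 0+1 doesn't.
raid10_survives({0, 2})  # True
raid01_survives({0, 2})  # False

# Lose both disks of one pair: only RAID 0+1's stripe-group quirk helps.
raid01_survives({0, 1})  # True
raid10_survives({0, 1})  # False
```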
RAID 5 - RAID 5 throws out mirroring in favor of striping with distributed parity. As with RAID 0, data is striped across the array's multiple drives, but this time, parity blocks are also calculated. Those parity blocks are distributed across every drive in the array. Maintaining the parity data slows down a RAID 5 array's write performance, but gives it a level of redundancy that striped RAID 0 arrays otherwise lack. If a drive is lost, a RAID 5 array can rebuild itself using the data on the other physical disks, the parity blocks, and some simple binary math.
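That "simple binary math" is just XOR. A quick Python sketch with made-up block values shows how a lost block falls right out of the surviving data and the parity:

```python
# RAID 5's parity is the XOR of the data blocks in a stripe. XOR-ing
# the survivors with the parity block regenerates whatever was lost.
# The block values below are arbitrary examples.

data = [0b1011, 0b0110, 0b1100]        # one block from each of three data drives
parity = data[0] ^ data[1] ^ data[2]   # stored on the fourth drive

# Pretend the drive holding data[1] dies; rebuild its block from
# the remaining data plus the parity block.
rebuilt = data[0] ^ data[2] ^ parity
assert rebuilt == data[1]
```

The same trick works regardless of which drive fails, including the one holding the parity block itself — which is why any single failure is survivable.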
The storage capacity of a RAID 5 array depends on the number of drives in the array. Parity data consumes capacity equal to the size of one of the array's physical disks, so the more physical drives in a RAID 5 array, the smaller the percentage of overall storage eaten by parity. We're using four physical disk drives for this comparison, so the storage capacity of our RAID 5 array will be equal to the combined capacity of three of our physical drives.
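To put all the capacity rules in one place, here's a quick Python sketch. The function names and the 40 GB example drive size are mine, chosen purely for illustration:

```python
# Usable capacity, in the same units as `size`, for n identical drives.
# These functions just restate the capacity rules described above.

def raid0_capacity(n, size):
    return n * size              # striping uses every byte

def raid1_capacity(size):
    return size                  # a two-drive mirror stores one copy

def raid10_capacity(n, size):
    return n * size // 2         # half the drives hold mirror copies

def raid5_capacity(n, size):
    return (n - 1) * size        # one drive's worth goes to parity

# Four hypothetical 40 GB drives in RAID 5 leave three drives' worth:
print(raid5_capacity(4, 40))     # 120
```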
RAID 10 and 0+1 combine the benefits of mirroring and striping, with RAID 10 having the best redundancy of the array types we're considering here, but RAID 10 and 0+1 still steal half your storage capacity. If maximizing storage potential is your primary concern, RAID 5 will give you the most efficient use of available space, and its storage efficiency only increases with each hard drive you add to the array. Of course, RAID 5's distributed parity is quite a bit more complex than striping, mirroring, or even combinations of the two, so don't expect a RAID 5 array to be the fastest option.
I'd get into some specific MTBF calculations here if I could find an MTBF formula for RAID 10 and 0+1 that I was happy with. Since we're already going to be looking at a lot of performance graphs, I didn't want to throw additional math at you.