Of course, SCSI drives have always had superior performance and reliability when compared to IDE drives, but SCSI remains an expensive and noisy alternative that just can't touch the price per megabyte you'll get with today's IDE drives. Serial ATA is coming, too, but it's been coming for a while. New motherboards tout integrated Serial ATA, but the drives just aren't on the market yet, and there's no telling when they'll arrive in full force. More than one manufacturer had drives on display at Comdex, but the shelves are still bare.
If you want a lot of fast, reliable storage without paying a huge price premium, IDE RAID fits the bill. As an added bonus, it's readily available now in a number of different flavors from quite a few different manufacturers. We've rounded up four IDE RAID flavors from 3ware, Adaptec, HighPoint, and Promise and run them through the wringer. Actually, we ran them through a bunch of wringers. The cards have different capabilities, features, and performance, but which one is right for you? Read on and find out.
RAID school
Depending on who you ask, RAID stands for either Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks. I suppose which expansion you choose will depend on how much you paid for your hard drives, since you can build RAID arrays with cheap 5,400 RPM IDE drives or uber-expensive 15,000 RPM SCSI beasts. I'll sit on the fence for this one and simply refer to RAID as a Redundant Array of Independent/Inexpensive Disks. Note that the only reason I'm putting independent before inexpensive is because 'd' comes before 'e' in the alphabet. I'm just being fair.
RAID arrays use multiple physical disk drives to achieve one of three goals: improved performance, improved redundancy, or both. Multiple physical drives can yield better performance since data can be read from more than one drive at once. RAID arrays achieve redundancy by setting aside some of the array's storage capacity for mirror or parity data that can be used to rebuild the array in the event of a drive failure.
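To make the parity trick concrete, here's a minimal sketch in Python of the XOR math that parity schemes like RAID 5 rely on; the block values are made up for illustration. The parity block is the XOR of the data blocks in a stripe, so any single lost block can be recomputed from the survivors.

```python
# Parity-based redundancy in miniature: parity is the XOR of the
# data blocks in a stripe (values here are made up for illustration).
block_a = 0b10110100
block_b = 0b01101001
parity  = block_a ^ block_b   # stored on a third drive

# The drive holding block_b dies; XOR the survivors to rebuild it.
rebuilt_b = block_a ^ parity
assert rebuilt_b == block_b
```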
In this comparison, we'll be looking at several different RAID configurations, which I've outlined below.
Despite the fact that RAID 1 uses two physical hard drives, the operating system only sees one logical drive. Since data on one drive is duplicated on the other, a two-disk RAID 1 array's storage capacity will only be equal to the storage capacity of a single disk.
Though striping kills redundancy, it at least maximizes the storage capacity of the physical drives in the array. Since a RAID 0 array's logical drive is spread over each physical drive, the total storage capacity of the array is the sum of the physical drives' storage capacities.
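If you'd rather see the arithmetic spelled out, here's a quick Python sketch of those two capacity rules; the 120GB drive size is made up for illustration.

```python
# Usable capacity of the two simplest array types, assuming
# identical drives. The 120GB figure is made up for illustration.
drive_size_gb = 120
n_drives = 2

raid1_capacity = drive_size_gb              # mirror: one drive's worth
raid0_capacity = n_drives * drive_size_gb   # stripe: sum of all drives

print(raid1_capacity, raid0_capacity)       # 120 240
```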
Just for kicks, we'll be looking at two-drive and four-drive RAID 0 arrays in this comparison to give you an idea of how RAID 0 performance scales with additional drives.
Because RAID 10 and 0+1 both combine mirroring and striping, they share the same storage capacity characteristics. With either array type, total storage capacity will be equal to half of the sum of the storage capacities of all the drives; since each keeps a mirrored duplicate of its logical drive, only half of the drives' total storage is available.
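Continuing the sketch from above (made-up drive sizes again), the mirrored-stripe capacity rule looks like this:

```python
# RAID 10 and 0+1: half the drives hold mirror copies, so only
# half the raw capacity is usable. Identical drives assumed.
n_drives = 4
drive_size_gb = 120

raid10_capacity = (n_drives * drive_size_gb) // 2
print(raid10_capacity)   # 240 -- the same as a two-drive stripe
```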
RAID 10 and 0+1 arrays share the same storage capacity, and both are sufficiently redundant to guard against a single drive failure, but beyond that first failure, RAID 0+1 is vulnerable. If a single drive in a RAID 0+1 array fails, you lose your mirror and are left with a striped RAID 0 array. That striped array is going to have a lower MTBF than a single drive, so you'd better get a replacement in a hurry. A RAID 10 array, on the other hand, can withstand multiple drive failures under the right circumstances: it survives a second failure as long as that failure occurs in a different mirror group than the first. If two drives fail within the same mirror group, though, all you're left with is half of a striped array, which is useless.
Technically, two drives could fail in a RAID 0+1 array without killing the array if both failures were limited to the same stripe group. However, since a RAID 0+1 array reverts to a single RAID 0 stripe after the first failure, the drives in the broken stripe group aren't being used anyway, so a second failure there costs you nothing you hadn't already lost. The sketch below encodes both failure rules.
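Here's a toy Python model of those failure scenarios for a four-drive array. The drive numbering and the pairing of mirror and stripe groups are my own illustrative assumptions, not any particular controller's layout:

```python
# Drives 0-3. RAID 10 mirrors the pairs (0,1) and (2,3), then stripes
# across the pairs; RAID 0+1 stripes (0,1) and (2,3), then mirrors the
# two stripe sets. "failed" is the set of dead drive numbers.

def raid10_survives(failed):
    # Dead only if some mirror pair loses both members.
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    # Alive only if at least one whole stripe set is untouched.
    return not ({0, 1} & failed) or not ({2, 3} & failed)

# One drive from each group fails:
print(raid10_survives({0, 2}))   # True  -- each mirror pair still has a copy
print(raid01_survives({0, 2}))   # False -- both stripe sets are broken

# Two failures confined to one group:
print(raid10_survives({0, 1}))   # False -- a whole mirror pair is gone
print(raid01_survives({0, 1}))   # True  -- stripe set (2,3) is intact
```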
The storage capacity of a RAID 5 array depends on the number of drives in the array. Parity data consumes storage capacity equal to the size of one of the physical disks, so the more physical drives in a RAID 5 array, the smaller the percentage of overall storage capacity eaten by parity. We're using four physical disk drives for this comparison, so the storage capacity of our RAID 5 array will be equal to the combined storage capacity of three of our physical drives.
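In the same vein as the earlier sketches (made-up drive sizes again), RAID 5's capacity rule is simply:

```python
# RAID 5: one drive's worth of space goes to parity, no matter how
# many drives are in the array. Identical drives assumed.
n_drives = 4
drive_size_gb = 120

raid5_capacity = (n_drives - 1) * drive_size_gb
print(raid5_capacity)   # 360, out of 480GB of raw space
```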
RAID 10 and 0+1 combine the benefits of mirroring and striping, with RAID 10 having the best redundancy of the array types we're considering here, but RAID 10 and 0+1 still steal half your storage capacity. If maximizing storage potential is your primary concern, RAID 5 will give you the most efficient use of available space, and its storage efficiency only increases with each hard drive you add to the array. Of course, RAID 5's distributed parity is quite a bit more complex than striping, mirroring, or even combinations of the two, so don't expect a RAID 5 array to be the fastest option.
I'd get into some specific MTBF calculations here if I could only find an MTBF formula that I was happy with to describe RAID 10 and 0+1. Since we're already going to be looking at a lot of performance graphs, I didn't want to throw additional math at you.
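For the curious, the plain striped case does have a widely used back-of-the-envelope approximation, assuming independent drive failures and no repair; it's the mirrored-stripe cases, with their rebuild windows, that get messy.

```python
# Series-system approximation for a plain stripe: the array dies when
# any member dies, so MTBF divides by the drive count. The per-drive
# figure here is made up for illustration.
drive_mtbf_hours = 500000
n_drives = 4

stripe_mtbf = drive_mtbf_hours / n_drives
print(stripe_mtbf)   # 125000.0 hours
```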