
Chipset Serial ATA and RAID performance compared


Whose arrays are faster?
— 12:00 AM on December 8, 2005

STORAGE SUBSYSTEMS DON'T GET nearly enough attention, though they're arguably among the most important parts of a modern PC. They store all of a system's data, an increasingly precious resource that most of us don't back up nearly often enough, and they're often the slowest components in the system as well. Hard drives are essentially mechanical devices, and even with ever-growing platter densities, their seek times can amount to a practical eternity in computer time. Modern storage subsystems have a trick up their sleeves, though. RAID arrays have the potential to improve storage performance dramatically by spreading data over multiple drives. They can also add redundancy, allowing a system to survive one or more drive failures with no data loss.

If RAID's speed and redundancy aren't enough to pique your interest, maybe its price will. Serial ATA RAID support is included in most of today's core logic chipsets, so it's essentially free. Chipset RAID has been around for a while, of course, but only the most recent core-logic chipsets from Intel and NVIDIA support arrays of up to four drives and the highly coveted RAID 5.

We've spent a couple of months running Intel's ICH7R and NVIDIA's nForce4 Serial ATA RAID controllers through our exhaustive suite of storage tests, and the results are something to behold. We started with a single drive and worked our way up through RAID levels 0, 1, 10, 0+1, and 5 with two, three, and even four hard drives. Read on to see which RAID controller reigns supreme and how the different RAID levels compare in performance.

RAID refresher
Before we dive into an unprecedented, er, array of benchmark results, we should take a moment to unravel the oft-misunderstood world of RAID. Depending on who you believe, RAID either stands for Redundant Array of Independent Disks or Redundant Array of Inexpensive Disks. We're inclined to side with "independent" rather than "inexpensive," since RAID arrays can just as easily be built with uber-expensive 15K-RPM SCSI drives as they can with relatively inexpensive Serial ATA disks. RAID arrays aren't necessarily redundant, as you'll soon see, so there's little point in squabbling over the acronym.

Myriad RAID levels are available, but we'll be focusing our attention on RAID 0, 1, 10, 0+1, and 5. Those are the array types supported by today's core-logic chipsets, and they're also the most common RAID options for add-in cards. Each array level offers a unique blend of performance, redundancy, and capacity, which we've outlined below. For the sake of simplicity, we'll assume that arrays are built from identical drives.

  • RAID 0 — Otherwise known as striping, RAID 0 improves performance by spreading data across multiple disks in an array. Data is striped in blocks whose size can range from a few kilobytes all the way up to 256KB or more. That block size is consistent throughout the array and is known as the stripe size; users can usually define it when configuring the array. (A short sketch of how striping maps logical blocks to physical drives appears after this list.)

    RAID 0 arrays can be created with as few as two drives or with as many drives as there are open ports on the RAID controller. Adding more drives should improve performance, but those performance gains come with an increased risk of data loss. Because RAID 0 spreads data over multiple disks, the failure of a single drive will destroy all the data in an array. The more drives in a RAID 0 array, the greater the chance of a drive failure.

    While it may be more prone to data loss than other levels, RAID 0 does offer the best capacity of any conventional array type. The capacity of a RAID 0 array is equal to the capacity of one of the drives in the array times the total number of drives in the array, with no space wasted on mirrors, parity, or other such luxuries.


    RAID 0 (left) and RAID 1 (right)

  • RAID 1 — Also known as mirroring, RAID 1 duplicates the contents of a primary drive on an auxiliary drive. This arrangement allows a RAID 1 array to survive a drive failure without data loss. The enhanced fault tolerance of a RAID 1 array (or of most other RAID levels) is no substitute for a real backup solution, of course. If the primary drive in a RAID 1 array falls victim to viruses, other malicious software, or user error, the mirrored auxiliary drive will suffer the same fate simultaneously.

    Although RAID 1's focus is on redundancy, mirroring can improve performance by allowing read requests to be distributed between the two drives (although not all RAID 1 implementations take advantage of this opportunity). RAID 1 is one of the least efficient arrays when it comes to storage capacity, though. Because data is duplicated on an auxiliary drive, the total capacity of the array is only equal to the capacity of a single drive.

  • RAID 10 — This array type combines RAID 0 and RAID 1 by striping data across a pair of mirrored arrays, enjoying the performance perks of striping without sacrificing the redundancy of mirroring. That blend makes RAID 10 particularly potent. RAID 10 arrays can also survive multiple drive failures without data loss, provided that at least one drive in each mirrored array remains unscathed.


    RAID 10: An array of striped (red) mirrors (blue)

    RAID 10's attractive blend of performance and redundancy does come at a price, though. Arrays require an even number of drives, four at minimum, and capacity is limited to just half the total capacity of the drives used.

  • RAID 0+1 — RAID 0+1 is similar to RAID 10 in that it combines striped and mirrored components, but it does so in reverse. Rather than striping an array of mirrors, RAID 0+1 mirrors an array of stripes. Like RAID 10, RAID 0+1 can benefit from the performance characteristics of both mirroring and striping. However, a RAID 0+1 array is only guaranteed to tolerate one drive failure without data loss. The failure of a single drive takes down an entire striped component, effectively reducing the array to RAID 0.


    RAID 0+1: An array of mirrored (blue) stripes (red)

    Like RAID 10, RAID 0+1 requires an even number of drives, four at minimum. Array capacity is equal to half the total capacity of the drives in the array.

  • RAID 5 — Referred to as striping with parity, RAID 5 attempts to combine the performance benefits of RAID 0 with fault tolerance that doesn't consume half the array's capacity. Like RAID 0, RAID 5 stripes data across multiple disks in blocks of a given size. This setup allows the array to use multiple disks for both read and write requests, allowing for performance that should rival that of a comparable striped array.

    Since striped arrays are a disaster as far as fault tolerance is concerned, RAID 5 adds a measure of fault tolerance using a little binary math: a parity block is calculated, via XOR, for each rank of blocks written to the array, and that parity data is spread across the drives in the array. The parity data can be used to reconstruct the contents of a failed drive, allowing RAID 5 arrays to survive a single drive failure with no data loss. (A small worked example of the parity math appears after this list.)


    RAID 5: Striping with parity

    Parity's real benefit isn't so much that it allows for a measure of fault tolerance, but that this fault tolerance can be achieved with little capacity sacrifice. Because only one parity block is needed for each rank of blocks written to the array, parity data only consumes the capacity of a single drive. The resulting array capacity is equal to the total capacity of all drives in the array minus the capacity of one drive, a huge improvement over the 50% mirror overhead of RAID 1, 10, and 0+1 arrays. Parity also allows fault-tolerant RAID 5 arrays to be created with as few as three drives. (The capacity rules for all these RAID levels are collected in a short sketch after this list.)

    Parity calculations, however, can introduce significant computational overhead. Parity data must be calculated each time data is written to the disk, creating a potential bottleneck for write performance. High-end RAID 5 add-in cards typically avoid this bottleneck by performing parity calculations with dedicated hardware, but current chipset-based RAID 5 implementations rely on the CPU to perform parity calculations.
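
To make the striping arithmetic concrete, here's a minimal sketch in Python of how a straightforward RAID 0 layout could map a logical block address to a drive and an offset on that drive. The function and its parameters are our own illustration of the general technique, not the actual logic of Intel's or NVIDIA's controllers.

    def raid0_locate(lba, stripe_blocks, num_drives):
        """Map a logical block address to (drive index, block offset on that
        drive) for a simple RAID 0 layout. stripe_blocks is the stripe size
        in blocks (e.g., a 64KB stripe of 512-byte blocks is 128 blocks)."""
        stripe_index = lba // stripe_blocks           # which stripe holds the block
        block_in_stripe = lba % stripe_blocks         # offset within that stripe
        drive = stripe_index % num_drives             # stripes rotate across drives
        stripe_on_drive = stripe_index // num_drives  # stripes already on that drive
        return drive, stripe_on_drive * stripe_blocks + block_in_stripe

    # Example: 64KB stripes (128 blocks of 512 bytes) across four drives
    print(raid0_locate(1000, 128, 4))  # -> (3, 232)

Because consecutive stripes land on different drives, a large sequential transfer keeps every spindle busy at once, which is where RAID 0's performance advantage comes from.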
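RAID 5's parity math boils down to a bitwise XOR across the blocks in a rank, which is what allows any single missing block to be rebuilt from the survivors. This toy example, using hypothetical four-byte "blocks," shows both the parity calculation and the reconstruction of a failed drive's data:

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]  # one rank of data blocks, one per drive
    parity = xor_blocks(data)           # the parity block for this rank

    # Simulate losing the second drive: XOR the survivors with the parity block
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]

Those per-byte XORs are exactly the work that high-end RAID 5 cards offload to dedicated hardware and that chipset implementations leave to the CPU.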
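Finally, the capacity rules scattered through the descriptions above reduce to a few one-liners. This helper is purely illustrative and assumes identical drives, as the rest of this refresher does:

    def array_capacity(level, drive_gb, drives):
        """Usable capacity of an array of identical drives, per the rules above."""
        if level == "0":
            return drive_gb * drives        # every byte is usable
        if level == "1":
            return drive_gb                 # the mirror consumes the other drive
        if level in ("10", "0+1"):
            return drive_gb * drives / 2    # half the capacity goes to mirrors
        if level == "5":
            return drive_gb * (drives - 1)  # parity consumes one drive's worth
        raise ValueError("unsupported RAID level: " + level)

    # Example: four 250GB drives in RAID 5 leave 750GB usable
    print(array_capacity("5", 250, 4))  # -> 750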