
 
Scrotos
Graphmaster Gerbil
Topic Author
Posts: 1109
Joined: Tue Oct 02, 2007 12:57 pm
Location: Denver, CO.

Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 3:09 pm

Ok, so I'm wanting to build a home storage server. I realize that RAID is not a backup strategy, but a server with a RAID array will be my backup device for desktops and laptops and will also serve media files.

Now, in trying to figure out the best route for storage capacity while retaining the integrity of the storage, I figured I'd go with RAID 6. I don't care about blinding speed and I'll use 1 TB or 2 TB drives. BUT WAIT! I start reading all these articles from 2007 onward about how rebuilding/resilvering a RAID 5/6 array will assuredly fail once you have 12 TB or more of data. The reasoning is that a drive has an "unrecoverable read error" (URE) rate of roughly one bad read per 10^14 bits, which works out to around 12 TB worth of data. Not a big deal if you're reading 12 TB off a single drive (at worst the drive flags the bad sector and you lose a little data), but if a URE turns up while you're rebuilding a degraded array, the rebuild fails and you can lose everything in the array.
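For the curious, here is the back-of-the-envelope version of that argument in Python (assuming the usual 1-per-10^14-bits spec from the datasheets; real drives may do better or worse):

# Odds of reading N terabytes without hitting a single URE, assuming an
# unrecoverable read error rate of 1 per 1e14 bits read.
def p_clean_read(terabytes, ure_per_bit=1e-14):
    bits = terabytes * 1e12 * 8           # decimal TB -> bits
    return (1.0 - ure_per_bit) ** bits    # probability of zero UREs

for tb in (2, 6, 12, 24):
    print(f"{tb:>3} TB read: {p_clean_read(tb):.0%} chance of no URE")

By that spec a full 12 TB read has well under even odds of completing without a URE, which is where the scary articles come from.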

RAID 5/6 is still ok with "small" drives, like 146 GB or 300 GB drives. But the larger the drive, the more DANGER there is.

That's the theory.

What I want to know is what people have experienced themselves. This is not a SATA versus SAS thing. Is anyone running RAID 5/6 with large drives and large numbers of drives? Do you know how EMC or other vendors preconfigure their arrays? Has there been any actual study on these kinds of failures? Does everyone just run RAID 10 now? I've seen math proving points going either direction but I don't have enough firsthand experience to know if it's just fearmongering or people in the field are seeing the problems described by the naysayers.
 
kc77
Gerbil Team Leader
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 3:24 pm

The deal with RAID 5 in particular is that when it is paired with large-capacity drives it is more susceptible to array failure because of the longer rebuild times: during a rebuild you can't lose another drive without losing the entire array, and the longer the rebuild, the longer that window of exposure. RAID 6 provides protection against this issue.

That being said, if you are going for large-capacity storage you should be creating multiple arrays to underpin your logical volumes. Let's say I was going for 12TB (I usually go higher, but this is just an example): what I would probably do is create two arrays and put a logical volume on top that spans both. So now, instead of anything more than two drive failures knocking out the array, it would take more than four, and so on. For my storage arrays, even if the stars align it would take quite a bit to knock everything out.

Raid 10 should be saved for DB's or VMs which require performance over storage space.
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
 
relmerator
Gerbil In Training
Posts: 9
Joined: Tue Oct 14, 2008 10:17 am

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 3:48 pm

I've got a RAID6 array with six 2TB drives -- just cheapo WD green drives from back when they were $79 a pop. The perf is surprisingly good on the Areca card they're connected to. I just use it for backup storage primarily, so it's not heavily loaded.
 
Flatland_Spider
Graphmaster Gerbil
Posts: 1324
Joined: Mon Sep 13, 2004 8:33 pm

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 3:53 pm

The answer to your question is much simpler.

RAID 5/6 aren't good at sustained writes, so neither is a good choice for a storage server that computers are going to be backing up to. RAID 10 or 01 with a spare drive or two is the way to go.

Yes, I pretty much run RAID 10 exclusively. Disks are big enough that I can take the space hit, and the performance gains are worth it.

kc77 wrote:
That being said if you are going for large capacity storage you should be creating multiple arrays to underpin your logical volumes. Let's say I was going for 12TB (I usually go for higher but this is for example), what I would probably do is create two arrays to underpin a logical volume that spans both arrays. So now instead of more than two drives knocking out the array it would take now more than 4 and so on and so on. For my storage arrays if the stars align it would take quite a bit to knock out the array.


Are you using two different controllers, as well, or just partitioning up the drives on one? I've heard about duplexing, but I've never seen anyone actually do it.
 
kc77
Gerbil Team Leader
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 4:08 pm

Flatland_Spider wrote:

Are you using two different controllers, as well, or just partitioning up the drives on one? I've heard about duplexing, but I've never seen anyone actually do it.


One controller, two arrays. Most controllers will do at least two; the more expensive ones can do considerably more. The most I've seen is 128 per controller. For example...

_________________________________ LV __________________________________
| ARRAY #1 - RAID 6 (8 disks) | ARRAY #2 - RAID 6 (8 disks) | ARRAY #3 - RAID 6 (8 disks) |
|________________________________ HBA __________________________________|

In this example, if the stars align, it would take 7 disk failures to knock out the volume.
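To put rough numbers on 'if the stars align' for that exact layout, a quick Python sketch (it just counts failure patterns and assumes every combination of failed disks is equally likely, which real life obviously isn't):

from itertools import product
from math import comb

ARRAYS, DISKS, PARITY = 3, 8, 2   # three 8-disk RAID 6 sets, each tolerates 2 failures

def p_volume_survives(k):
    """Chance that k failed disks leave every RAID 6 set with at most 2 losses."""
    survivable = 0
    for split in product(range(PARITY + 1), repeat=ARRAYS):  # failures per array
        if sum(split) == k:
            ways = 1
            for s in split:
                ways *= comb(DISKS, s)
            survivable += ways
    return survivable / comb(ARRAYS * DISKS, k)

for k in range(1, 8):
    print(f"{k} failed disks: {p_volume_survives(k):.0%} chance the volume survives")

Seven failures is guaranteed death (some array has to eat a third one), but even three random failures only kill the volume about 8% of the time.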
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 4:11 pm

Even with my "small" setup at home I run RAID 10 - mostly because I could grab either one of my external boxes in an emergency and have all of my data.
Victory requires no explanation. Defeat allows none.
 
Scrotos
Graphmaster Gerbil
Topic Author
Posts: 1109
Joined: Tue Oct 02, 2007 12:57 pm
Location: Denver, CO.

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 4:32 pm

kc77 wrote:
Flatland_Spider wrote:

Are you using two different controllers, as well, or just partitioning up the drives on one? I've heard about duplexing, but I've never seen anyone actually do it.


One controller, two arrays. Most controllers will do at least two; the more expensive ones can do considerably more. The most I've seen is 128 per controller. For example...

_________________________________ LV __________________________________
| ARRAY #1 - RAID 6 (8 disks) | ARRAY #2 - RAID 6 (8 disks) | ARRAY #3 - RAID 6 (8 disks) |
|________________________________ HBA __________________________________|

In this example, if the stars align, it would take 7 disk failures to knock out the volume.


I understand the concept, but can you give an example of implementation? Do you assign it in a RAID controller as the 3 arrays and then in the OS, say Windows, do a "dynamic disk" made of those three arrays to appear as one single disk? Or are there RAID controllers that do this?

I'm mostly familiar with HP ACU and a little bit with the Dell OpenManage PERC stuff, but haven't seen how to create this type of thing before on a low level.

What you made in your example was a RAID 60?

Edit: Yup, ok, I think I see what's up. The controllers I have to play with that aren't in production are P400 and PERC 5/i. The only combined RAID they support is 10. To get 50 and 60 I'd have to mess with some of our P410-based servers which are in production so I ain't messing with them.

http://h18004.www1.hp.com/products/serv ... index.html

So to answer my own question, it looks like a recent enough RAID controller will allow you to create these combined arrays and budget builder home users will need to fork out a bit of money for something similar.
 
GrimDanfango
Gerbil First Class
Posts: 112
Joined: Sun May 10, 2009 9:53 am

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 4:40 pm

I'm going for an 8x3TB Raid 10 for my new setup.
I tried to get away with a 5-disk Raid 5 in a Thecus NAS, but it's just horrible for the seriously heavy sequential reads and writes I subject it to from 3 machines at once.

The nice thing about Raid 10 is I don't need a top-end Areca to get passable performance. I'm hooking it up to a basic LSI server HBA (that's actually packaged with the Intel dual-Xeon board I'm getting).

3TB disks aren't quite cheap enough that the cost didn't make my eyes water a little bit though :-P

Just waiting on my orders to come through and I can start building this beast. I'm hoping it's as fast as I'm expecting!
 
Beelzebubba9
Gerbil
Posts: 85
Joined: Tue Oct 19, 2004 10:23 am
Location: New York, NY

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 4:59 pm

The general wisdom seems to be to use Raid 10 for most things these days. You gain a lot more fault tolerance (if you're lucky you can lose half the disks in the array without losing data) and speed, at the cost of capacity. But with 4TB drives being so reasonably priced, it's hard to take real issue with ~8TB of usable space from a 4-drive Raid 10 array.
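The capacity arithmetic, for anyone who wants to plug in their own drive counts (this ignores spares, filesystem overhead, and the usual TB-vs-TiB shrinkage):

def usable_tb(level, drives, size_tb):
    """Rough usable capacity for the common RAID levels."""
    return {
        "RAID 0":  drives * size_tb,
        "RAID 1":  size_tb,                 # the whole set is one mirror
        "RAID 10": drives // 2 * size_tb,   # half the drives hold copies
        "RAID 5":  (drives - 1) * size_tb,  # one drive's worth of parity
        "RAID 6":  (drives - 2) * size_tb,  # two drives' worth of parity
    }[level]

for level in ("RAID 10", "RAID 5", "RAID 6"):
    print(f"4 x 4TB in {level}: ~{usable_tb(level, 4, 4)} TB usable")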

Regarding what SAN vendors do, it varies from SAN to SAN. I know the Nimble SANs we just bought use Raid 6 with a hot spare, and when a drive fails the array coalesces all of the rebuild data into 4MB sequential writes before committing it to disk (as it does with all write IO), meaning the rebuild process happens much, much faster. Usually high-end SANs have a much looser affiliation between your data and the disks it sits on, so it's not quite so literal as it is in a DAS situation.
 
sjl
Gerbil
Posts: 71
Joined: Tue Dec 07, 2004 5:14 pm

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 5:13 pm

Speaking as somebody who manages backup for a living, and who has aspirations towards the SAN space: the reason why enterprises use RAID 5/6 (or 10) with relatively small drives isn't primarily because of the time it takes to rebuild a failed drive (although that is part of it). It's because they're after performance. A single drive can deliver so many MB/s when it's sequential (reading or writing); if it's random, it can deliver only so many IOPS (Input/output Operations Per Second.) When you stack multiple drives together, both figures go up roughly linearly with the number of drives (handwave, handwave.) This is why a lot of storage arrays have so much "unallocated" space - if that space was allocated to active jobs, performance on the critical systems would go through the floor.
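The hand-waving in numbers, if it helps (the per-drive figures below are ballpark assumptions for a single 7200 rpm SATA disk, not measurements):

SEQ_MB_S    = 130   # rough sequential throughput per drive
RANDOM_IOPS = 90    # rough small-random-I/O rate per drive

for n in (1, 4, 8, 16):
    print(f"{n:>2} drives striped: ~{n * SEQ_MB_S} MB/s sequential, ~{n * RANDOM_IOPS} random IOPS")

Which is exactly why the enterprise answer to 'not enough IOPS' has traditionally been 'buy more spindles'.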

SSDs are moving into this area, although a great many enterprises are still wary of the relative immaturity of the technology. Factor in the slack space, though, and they're very seriously price competitive.

For home use, it comes down to acknowledging that RAID reconstruction takes time (not quite the same thing, but when I added two 3 TB drives to my now-ten-drive array, it took two days for the restripe to finish; I'll be doing a deliberate rebuild of a couple of disks - replacing WD Greens with Reds - in a couple of months), and that there is a risk of another drive failing while it runs. RAID 6 helps somewhat with that, but there is still a risk of a third drive failing during reconstruction. RAID is great in its space, but don't forget to take backups. That may be a second, offline array; it may be a "cloud" (eurgh, I hate that term) backup service; or it could be a ridiculous quantity of burnt DVDs/BDs. Frequently, the data that really matters is only a few hundred GB, and the rest is relatively easy to reconstruct from scratch.
 
kc77
Gerbil Team Leader
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Is RAID 5/6 dead due to large drive capacities?

Fri Mar 15, 2013 6:42 pm

Scrotos wrote:
I understand the concept, but can you give an example of implementation? Do you assign it in a RAID controller as the 3 arrays and then in the OS, say Windows, do a "dynamic disk" made of those three arrays to appear as one single disk? Or are there RAID controllers that do this?

I'm mostly familiar with HP ACU and a little bit with the Dell OpenManage PERC stuff, but haven't seen how to create this type of thing before on a low level.

What you made in your example was a RAID 60?

Edit: Yup, ok, I think I see what's up. The controllers I have to play with that aren't in production are P400 and PERC 5/i. The only combined RAID they support is 10. To get 50 and 60 I'd have to mess with some of our P410-based servers which are in production so I ain't messing with them.

http://h18004.www1.hp.com/products/serv ... index.html

So to answer my own question, it looks like a recent enough RAID controller will allow you to create these combined arrays and budget builder home users will need to fork out a bit of money for something similar.


That example wasn't Raid 60. Raid 60 is a stripe across two (or more) Raid 6 sets, which is a nested RAID level. You also don't need an expensive RAID controller to do something like my example; MDADM + LVM2 will do it just fine. Windows is generally bad when it comes to offering enterprise-grade storage. But in my example what you are basically doing is pooling three storage arrays and creating your logical volume on top of that.
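For Scrotos, a minimal sketch of what that looks like with md + LVM2 on Linux. The device names and the three-groups-of-eight layout are made up for the example, and it is dry-run by default (it only prints the commands), so treat it as an outline rather than something to paste onto a box you care about:

import subprocess

DRY_RUN = True  # flip to False only on disks you are happy to wipe

# Hypothetical device names: three groups of eight disks each.
groups = [[f"/dev/sd{c}" for c in letters]
          for letters in ("bcdefghi", "jklmnopq", "rstuvwxy")]

commands = []
for i, disks in enumerate(groups):
    # one 8-disk RAID 6 md array per group
    commands.append(["mdadm", "--create", f"/dev/md{i}", "--level=6",
                     f"--raid-devices={len(disks)}", *disks])

mds = [f"/dev/md{i}" for i in range(len(groups))]
commands += [
    ["pvcreate", *mds],                        # each array becomes an LVM physical volume
    ["vgcreate", "storage", *mds],             # pool them into one volume group
    ["lvcreate", "-l", "100%FREE", "-n", "data", "storage"],  # one LV spanning all three
    ["mkfs.ext4", "/dev/storage/data"],
]

for cmd in commands:
    print(" ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)

That is the same picture as the ASCII diagram earlier: three RAID 6 sets, one volume group, one logical volume on top, and nothing below LVM knows the pooling is happening.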

RAID 10, as great as it is, really just wastes space if you don't have an application that requires it. You are halving your storage to do RAID 10. The main reason for RAID 10 is to get the performance of a stripe (both read and write) while adding maximum fault tolerance. There's no need to pool arrays together, since every 4 disks adds two disks of fault tolerance (if the stars align). It also rebuilds a crap ton faster.

In addition, the write penalty for RAID 6 isn't something anyone at home will notice. While present, the write penalty is really something only sysadmins need to be aware of when coming up with the appropriate storage set for an application. Even an old controller like the one in my sig will still push 70+ MB/s writes and burst higher than that. Better LSI cards do much better. If you aren't running GbE on your switch it won't even matter, and even then it hardly cuts into what a single GbE connection can do sustained (118 MB/s).
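For reference, the 'write penalty' people throw around is just the number of physical I/Os each small random write costs. A quick sketch with the usual rule-of-thumb multipliers (the per-drive IOPS figure is an assumption):

# RAID 5: read data + read parity + write data + write parity = 4 I/Os per write.
# RAID 6 adds a second parity block; RAID 1/10 just write the mirror copy.
WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def random_write_iops(level, drives, per_drive_iops=90):
    return drives * per_drive_iops / WRITE_PENALTY[level]

for level in ("RAID 10", "RAID 5", "RAID 6"):
    print(f"8 drives, {level}: ~{random_write_iops(level, 8):.0f} random write IOPS")

Big sequential writes mostly dodge the penalty (full-stripe writes), which is why even an old parity-RAID card can keep a GbE link reasonably busy.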
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
 
Scrotos
Graphmaster Gerbil
Topic Author
Posts: 1109
Joined: Tue Oct 02, 2007 12:57 pm
Location: Denver, CO.

Re: Is RAID 5/6 dead due to large drive capacities?

Sat Mar 16, 2013 11:55 am

Ok, your example isn't RAID 60, but isn't it conceptually the same? For an 8 drive RAID 10, that's a stripe of 4 2-drive mirrors, right? Conceptually I saw a stripe of 3 8-drive RAID 6's.

I haven't been able to actually try creating something like that on a controller that purports to support it so I don't know if 50 and 60 mean ONLY two RAID 5/6 striped or if there's the flexibility to allow for N stripes of RAID 5/6. It seems illogical to me that 10 could be N/2 stripes of mirrors but with 50/60 you only get 2 stripes, but then again logic doesn't always fit into industry standards.

sjl wrote:
For home use, it comes down to acknowledging that RAID reconstruction takes time (not quite the same thing, but when I added two 3 TB drives to my now-ten-drive array, it took two days for the restripe to finish; I'll be doing a deliberate rebuild of a couple of disks - replacing WD Greens with Reds - in a couple of months), and that there is a risk of another drive failing while it runs. RAID 6 helps somewhat with that, but there is still a risk of a third drive failing during reconstruction. RAID is great in its space, but don't forget to take backups. That may be a second, offline array; it may be a "cloud" (eurgh, I hate that term) backup service; or it could be a ridiculous quantity of burnt DVDs/BDs. Frequently, the data that really matters is only a few hundred GB, and the rest is relatively easy to reconstruct from scratch.


30 TB? In theory you have like a 250% chance of the entire thing being destroyed during a resilver operation! :o

I take it you're not too concerned with that, based on real-life experience?
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Is RAID 5/6 dead due to large drive capacities?

Sat Mar 16, 2013 1:29 pm

Yes, RAID 5 has issues, for the reasons already described, and RAID 6 mitigates this by adding a second parity block per stripe (so the array survives *two* failures, and it takes a third to bring it down). The Linux mdadm RAID system also scans the entire array periodically, to help prevent multiple latent errors from accumulating.
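(On Debian-ish distros that periodic scan is a monthly cron job; you can also kick one off by hand. A tiny sketch, assuming the array is md0 and you're root:)

MD = "md0"  # hypothetical array name

# Ask the kernel to scrub the whole array, same as the scheduled check.
with open(f"/sys/block/{MD}/md/sync_action", "w") as f:
    f.write("check")

# Mismatch counter from the last/ongoing check.
with open(f"/sys/block/{MD}/md/mismatch_cnt") as f:
    print("mismatch_cnt:", f.read().strip())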

My take on RAID is that unless you plan to have a large number of drives (5 or more) just set up a couple of RAID 1 arrays and call it a day. A 4 drive RAID 6 array will have the same capacity as a pair of RAID 1 arrays made from the same drives, with slower write performance.
Nostalgia isn't what it used to be.
 
GrimDanfango
Gerbil First Class
Posts: 112
Joined: Sun May 10, 2009 9:53 am

Re: Is RAID 5/6 dead due to large drive capacities?

Sat Mar 16, 2013 2:34 pm

just brew it! wrote:
My take on RAID is that unless you plan to have a large number of drives (5 or more) just set up a couple of RAID 1 arrays and call it a day. A 4 drive RAID 6 array will have the same capacity as a pair of RAID 1 arrays made from the same drives, with slower write performance.

If you had 4 drives, as two RAID 1 mirrors, would there be any reason not to stripe them into a RAID10 array, for the added performance?
 
kc77
Gerbil Team Leader
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Is RAID 5/6 dead due to large drive capacities?

Sat Mar 16, 2013 2:37 pm

Scrotos wrote:
Ok, your example isn't RAID 60, but isn't it conceptually the same? For an 8 drive RAID 10, that's a stripe of 4 2-drive mirrors, right? Conceptually I saw a stripe of 3 8-drive RAID 6's.

Not really, but I can see some of the similarities. In my example the HBA has no clue that we are pooling the three arrays and treating them as one logical device. Depending on how the pooling is being performed (and on the filesystem), the data could live mostly on any one of the three R6 sets. In Raid 60 the data is always striped across the R6 sets (two, three, or however many there are), and the HBA is completely aware of the R6 arrays that make up the nested R60 array, since it is the one doing the striping.

There's also a recovery difference between the two methods. Let's say I lost one of the R6 arrays underpinning the pooled storage. If the data is laid out contiguously, there's a good chance I could recover whatever lives on the surviving arrays. That's just not going to happen if your Raid 60 array dies. Could you pay someone? Probably, but I can only imagine the cost of trying to recover a Raid 60 array.
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
 
sjl
Gerbil
Posts: 71
Joined: Tue Dec 07, 2004 5:14 pm

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 2:17 am

Scrotos wrote:
30 TB? In theory you have like a 250% chance of the entire thing being destroyed during a resilver operation! :o

24 TB - I'm running RAID 6. And your maths is out - multiply the probabilities, don't add them. :p

Scrotos wrote:
I take it you're not too concerned with that, based on real-life experience?

Nah. The bulk of the data on the array is stuff I can recreate, given sufficient time and motivation. The remainder is backups of my main system, and - eventually - a bunch of playpen VMs for fiddling around with stuff for personal education. It wouldn't be fun to lose the array, but it would be survivable.
GrimDanfango wrote:
If you had 4 drives, as two RAID 1 mirrors, would there be any reason not to stripe them into a RAID10 array, for the added performance?

Not really, no. There's a small chance of total data loss (losing both disks in a mirrored pair with the other mirrored pair surviving), whereas the two RAID 1 mirrors would see half the data survive, but realistically, if you suffered that, you're probably suffering bad karma anyway.
 
Bensam123
Gerbil Elite
Posts: 990
Joined: Wed May 29, 2002 12:19 pm
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 4:30 am

No one has mentioned it yet, but how likely the array is to fail completely while rebuilding depends on how much stress you're putting on the drives in the array, which in turn depends on how aggressively the array is rebuilding. A lot of controllers come with an option to change how fast the array rebuilds. The faster it rebuilds, the more stress it puts on the drives in the array. A slower rebuild stresses the drives less, but takes longer. Of course, that also stretches out the window of time in which the array can fail completely.
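(If you're on Linux software RAID rather than a hardware controller, the equivalent knobs are the md resync speed limits. A small sketch that just reads them, assuming the standard /proc/sys/dev/raid paths:)

# md throttles rebuild/resync speed between these two limits (KB/s per device).
# Lower speed_limit_max to go easy on a live array; raise speed_limit_min to finish sooner.
for name in ("speed_limit_min", "speed_limit_max"):
    with open(f"/proc/sys/dev/raid/{name}") as f:
        print(f"{name}: {f.read().strip()} KB/s")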

Raid 5 also has issues with the 'write hole' although modern controllers have a way around this and raid 6 completely negates this effect.

I haven't heard about the erroneous data problem being a huge issue.
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 11:00 am

Bensam123 wrote:
No one has mentioned it yet, but how likely the array is to fail completely while rebuilding depends on how much stress you're putting on the drives in the array, which in turn depends on how aggressively the array is rebuilding. A lot of controllers come with an option to change how fast the array rebuilds. The faster it rebuilds, the more stress it puts on the drives in the array. A slower rebuild stresses the drives less, but takes longer. Of course, that also stretches out the window of time in which the array can fail completely.

I was under the impression that this setting is more to minimize the performance impact the rebuild has on normal accesses while the rebuild is in progress.

An array rebuild by itself does not stress the drives much, because it is almost all sequential access. The spindle motors would be running anyway, and the rebuild stresses the head actuator minimally since it is just stepping across the entire drive one track at a time (accelerations of the head assembly are much smaller than in typical use).
Nostalgia isn't what it used to be.
 
Bensam123
Gerbil Elite
Posts: 990
Joined: Wed May 29, 2002 12:19 pm
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 3:28 pm

Indeed, the rebuild itself is sequential access, but what happens if you throw some requests in there, or anything outside of the rebuild process accesses the array? The more aggressive the rebuild, the more disk thrashing you end up with. The effect is magnified if a disk queue builds up and the array ends up constantly thrashing while it rebuilds. The option itself may be more for reserving performance for other people accessing the array, but that doesn't mean that's its only use.

Ironically, I haven't heard anyone discuss this point of view. I've heard about the URE, but haven't seen it become a problem. My friend ran into the write hole once, but that's rare. I've also heard people say that buying all your disks at once is a bad idea: you should build an array with disks from separate lots, or even disks of different ages, so they don't all have a chance to fail at roughly the same time.
 
Ryu Connor
Global Moderator
Posts: 4369
Joined: Thu Dec 27, 2001 7:00 pm
Location: Marietta, GA
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 5:55 pm

Modern drives don't experience extensive wear.

The head floats on a cushion of air. The arm is moved by a Lorentz force (a voice-coil actuator), which involves minimal wear and isn't sensitive to temperature.

Rebuilds don't cause drives to fail. A drive failing during a rebuild is simply a manifestation of a manufacturing defect; the drive would have failed regardless.

Storage, performance, price, and most importantly fault tolerance are all factors that each RAID level shift into greater strengths or weaknesses. RAID 1, 5, 6, and nested designs like 10, 50, and 60 each bring a different pro and con. Larger capacity drives don't change that evaluation. Remember that data sets grow over time just like storage.
All of my written content here on TR does not represent or reflect the views of my employer or any reasonable human being. All content and actions are my own.
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 7:08 pm

Ryu Connor wrote:
Modern drives don't experience extensive wear.

Well yeah, there's that too.

Given how infrequently an array rebuild should be occurring, it's basically a complete non-issue.
Nostalgia isn't what it used to be.
 
Bensam123
Gerbil Elite
Posts: 990
Joined: Wed May 29, 2002 12:19 pm
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 7:22 pm

I can't tell if you're disagreeing with what I said or just being vague on purpose.

"The failure of rebuilds is simply a manifestation of manufacturing defects. The drive would have failed regardless due to the defect."

Mechanical hard drives aren't SSDs. They don't wear at the same rate regardless of use, which is what you're implying.

Adding more strain to a hard drive that may already be likely to fail (it has been in operation for the same amount of time, under the same workload, and may even come from the same lot, so it could share whatever defect the first drive had) is a bad idea.

(I'm guessing your whole post was based off the Google HD research done awhile back)
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 7:24 pm

Bensam123 wrote:
(I'm guessing your whole post was based off the Google HD research done awhile back)

I know I set all of my drives to never spin down after reading it. The power consumption difference doesn't bother me.
Victory requires no explanation. Defeat allows none.
 
kc77
Gerbil Team Leader
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 8:11 pm

Ryu Connor wrote:
Storage, performance, price, and most importantly fault tolerance are all factors that each RAID level shift into greater strengths or weaknesses. RAID 1, 5, 6, and nested designs like 10, 50, and 60 each bring a different pro and con. Larger capacity drives don't change that evaluation. Remember that data sets grow over time just like storage.

This seems self-contradictory. Larger-capacity drives, and how fast they are, do change the evaluation. Maybe I'm misunderstanding you. Can you elaborate?
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
 
Ryu Connor
Global Moderator
Posts: 4369
Joined: Thu Dec 27, 2001 7:00 pm
Location: Marietta, GA
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 8:32 pm

Not sure why.

RAID 1, 5, 6, and 10 (amongst others) are very different. A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price. That RAID5 has better storage, price, and performance, but weaker fault tolerance. Etc. etc.

Just because my data set will suddenly fit on a 4TB drive doesn't suddenly mean RAID1 is the right fit.
All of my written content here on TR does not represent or reflect the views of my employer or any reasonable human being. All content and actions are my own.
 
kc77
Gerbil Team Leader
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 9:07 pm

Ryu Connor wrote:
Not sure why.

RAID 1, 5, 6, and 10 (amongst others) are very different. A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price. That RAID5 has better storage, price, and performance, but weaker fault tolerance. Etc. etc.

Just because my data set will suddenly fit on a 4TB drive doesn't suddenly mean RAID1 is the right fit.


The RAID value alone isn't going to do much. You kind of need to evaluate the drives you have, the money it will cost, the space you need, the HBA/RAID card you have, and even the performance required before determining RAID level. A quick example of how the hard drive matters would be something like databases. Would you use a 300GB 15000 rpm drive or a 1TB 5400 rpm drive? The data would likely fit on both but if you are going for optimum performance you probably should pick the 300GB drive. That's before we even discuss the appropriate RAID level. You could just pick the RAID level (likely 10) and stick a 5400 rpm drive in there. But chances are the performance would suck.
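Back-of-the-envelope for why the 15k drive wins that one (the seek and rotation figures below are typical published numbers, not the specs of any particular model):

# Random IOPS for a single spindle ~ 1 / (average seek + half a rotation)
def est_iops(rpm, avg_seek_ms):
    half_rotation_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(f"300GB 15k rpm, ~3.5 ms seek: ~{est_iops(15000, 3.5):.0f} IOPS")
print(f"1TB 5400 rpm, ~12 ms seek:  ~{est_iops(5400, 12.0):.0f} IOPS")

Roughly a 3x gap per spindle before you even pick a RAID level.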

That's what I mean by possibly misunderstanding you. While the RAID level matters, its selection doesn't happen in a bubble; other factors matter, and the type of hard drive you have matters just as much as the RAID level you are selecting.
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
 
Ryu Connor
Global Moderator
Posts: 4369
Joined: Thu Dec 27, 2001 7:00 pm
Location: Marietta, GA
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 9:15 pm

Yeah, and I noted all those factors.

I did say storage, performance, price, and fault tolerance are all choices to be made between the RAID levels. You are picking that number based on your requirements.

When addressing the OP question, these factors mean that growing storage has not killed off RAID 5 or 6.
All of my written content here on TR does not represent or reflect the views of my employer or any reasonable human being. All content and actions are my own.
 
Waco
Maximum Gerbil
Posts: 4850
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 10:35 pm

Ryu Connor wrote:
A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price.

What does RAID 1 not have in performance? Any proper software or hardware controller will read from all the drives optimistically and write speeds are the same as a single drive.
Victory requires no explanation. Defeat allows none.
 
Buub
Maximum Gerbil
Posts: 4969
Joined: Sat Nov 09, 2002 11:59 pm
Location: Seattle, WA
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Sun Mar 17, 2013 11:28 pm

Waco wrote:
Ryu Connor wrote:
A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price.

What does RAID 1 not have in performance? Any proper software or hardware controller will read from all the drives optimistically and write speeds are the same as a single drive.

I was thinking the exact same thing. In essence, it's RAID 0 for reads and single-drive performance for writes. That's one of the things that really makes RAID 10 fly.
 
Ryu Connor
Global Moderator
Posts: 4369
Joined: Thu Dec 27, 2001 7:00 pm
Location: Marietta, GA
Contact:

Re: Is RAID 5/6 dead due to large drive capacities?

Mon Mar 18, 2013 12:35 am

Waco wrote:
Ryu Connor wrote:
A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price.

What does RAID 1 not have in performance? Any proper software or hardware controller will read from all the drives optimistically and write speeds are the same as a single drive.


RAID1 isn't striping (you don't set a block size), so it can't give you amazing reads. Even when there is a boost, it's a far cry from the double-performance reads of actual striping.

http://techreport.com/review/2525/real- ... explored/3
http://patrick.wagstrom.net/weblog/2011 ... windows-7/
http://www.maximumpc.com/article/raid_done_right

RAID1 performance is either no better or only slightly better than single drive in the examples.

If all I want is fault tolerance this is fine. If I want performance and storage for the price it isn't.
All of my written content here on TR does not represent or reflect the views of my employer or any reasonable human being. All content and actions are my own.
