kc77 wrote:That being said, if you are going for large-capacity storage you should be creating multiple arrays to underpin your logical volumes. Say I was going for 12 TB (I usually go higher, but this is just an example): what I would probably do is create two arrays to underpin a logical volume that spans both. Now, instead of more than two drive failures knocking out the array, it takes more than four, and so on. For my storage arrays, if the stars align, it would take quite a bit to knock them out.
Flatland_Spider wrote:
Are you using two different controllers, as well, or just partitioning up the drives on one? I've heard about duplexing, but I've never seen anyone actually do it.
kc77 wrote:Flatland_Spider wrote:
Are you using two different controllers, as well, or just partitioning up the drives on one? I've heard about duplexing, but I've never seen anyone actually do it.
One controller, two arrays. Most controllers will do at least two; the more expensive ones can do considerably more. The most I've seen is 128 per controller. For example...
 _________________________________ LV _________________________________
| ARRAY #1 - RAID 6 (8 disks) | ARRAY #2 - RAID 6 (8 disks) | ARRAY #3 - RAID 6 (8 disks) |
|________________________________ HBA _________________________________|
In this example, if the stars align, it would take 7 disk failures to knock out the volume.
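The arithmetic behind that layout can be sketched out. This is a minimal illustration, assuming hypothetical 4 TB disks (the disk size is my assumption, not from the post):

```python
# Sketch: usable capacity and fault tolerance of a logical volume
# spanning several RAID 6 arrays, as in the diagram above.
def raid6_array(disks, disk_tb):
    return {"usable_tb": (disks - 2) * disk_tb,  # RAID 6 spends 2 disks on parity
            "tolerated_failures": 2}             # survives any 2 failures per array

# Three 8-disk RAID 6 arrays of assumed 4 TB drives, spanned into one LV.
arrays = [raid6_array(8, 4) for _ in range(3)]

usable = sum(a["usable_tb"] for a in arrays)

# Best case ("stars align"): each array absorbs its 2 tolerated failures,
# so the 7th failure is the first that can take out the spanned volume.
best_case_kill = sum(a["tolerated_failures"] for a in arrays) + 1

# Worst case: 3 failures all land in the same array.
worst_case_kill = min(a["tolerated_failures"] for a in arrays) + 1

print(usable, best_case_kill, worst_case_kill)  # 72 7 3
```

Note the catch: the 7-disk figure is the best case. Three failures concentrated in a single underlying array still destroy the whole spanned volume.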
Scrotos wrote:I understand the concept, but can you give an example of implementation? Do you assign it in a RAID controller as the 3 arrays and then in the OS, say Windows, do a "dynamic disk" made of those three arrays to appear as one single disk? Or are there RAID controllers that do this?
I'm mostly familiar with HP ACU and a little bit with the Dell OpenManage PERC stuff, but I haven't seen how to create this type of thing before on a low level.
What you made in your example was a RAID 60?
Edit: Yup, ok, I think I see what's up. The controllers I have to play with that aren't in production are P400 and PERC 5/i. The only combined RAID they support is 10. To get 50 and 60 I'd have to mess with some of our P410-based servers which are in production so I ain't messing with them.
http://h18004.www1.hp.com/products/serv ... index.html
So to answer my own question, it looks like a recent enough RAID controller will let you create these combined arrays, and budget-builder home users will need to fork out a bit of money for something similar.
sjl wrote:For home use, it comes down to acknowledging that RAID reconstruction takes time (not quite the same thing, but when I added two 3 TB drives to my now-ten-drive array, it took two days for the restripe to finish; I'll be doing a deliberate rebuild of a couple of disks in a couple of months, replacing WD Greens with Reds), and that there is a risk of another drive failing while it runs. RAID 6 helps somewhat with that, but there is still a risk of a third drive failing during reconstruction. RAID is great in its place, but don't forget to take backups. That may be a second, offline array; it may be a "cloud" (eurgh, I hate that term) backup service; or it could be a ridiculous quantity of burnt DVDs/BDs. Frequently, the data that really matters is only a few hundred GB, and the rest is relatively easy to reconstruct from scratch.
just brew it! wrote:My take on RAID is that unless you plan to have a large number of drives (5 or more), just set up a couple of RAID 1 arrays and call it a day. A 4-drive RAID 6 array will have the same capacity as a pair of RAID 1 arrays made from the same drives, but with slower write performance.
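The capacity claim is easy to check. A quick sketch, again assuming hypothetical 4 TB drives (the drive size is my assumption):

```python
# Sketch: capacity of a 4-drive RAID 6 vs. two 2-drive RAID 1 mirrors,
# built from the same assumed 4 TB drives.
drive_tb = 4

# 4-drive RAID 6: two drives' worth of parity, two of data.
raid6_usable = (4 - 2) * drive_tb   # survives ANY two drive failures

# Two 2-drive RAID 1 mirrors: each pair stores one drive's worth.
raid1_usable = 2 * drive_tb         # survives two failures only if they
                                    # hit DIFFERENT mirrors

print(raid6_usable, raid1_usable)   # 8 8 -- identical usable capacity
```

The usable capacity comes out the same either way; the trade is RAID 6's stronger fault tolerance against the write overhead of double parity.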
Scrotos wrote:Ok, your example isn't RAID 60, but isn't it conceptually the same? For an 8-drive RAID 10, that's a stripe of four 2-drive mirrors, right? Conceptually, what I saw was a stripe of three 8-drive RAID 6s.
Scrotos wrote:30 TB? In theory you have like a 250% chance of the entire thing being destroyed during a resilver operation!
I take it you're not too concerned with that, based on real-life experience?
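The tongue-in-cheek ">100% chance" figure comes from expected-error math, not a real probability. A sketch of where it comes from, assuming the commonly quoted consumer-drive unrecoverable read error (URE) rate of 1 per 10^14 bits (the rate is an assumption; enterprise drives are typically rated 10x better):

```python
# Sketch: expected unrecoverable read errors when reading an entire
# 30 TB array during a rebuild/resilver.
import math

array_bytes = 30e12      # 30 TB array
ure_per_bit = 1e-14      # assumed consumer-class URE rate

# Expected number of UREs over the full read.
expected_errors = array_bytes * 8 * ure_per_bit   # = 2.4, i.e. "240%"

# An expected count above 1 is NOT a probability above 100%. Assuming
# independent bit errors (Poisson model), the chance of at least one is:
p_at_least_one = 1 - math.exp(-expected_errors)   # ~0.91

print(round(expected_errors, 2), round(p_at_least_one, 2))
```

So the spec-sheet math predicts roughly 2.4 expected errors per full read, which is where figures like "250%" come from; in practice, real-world URE rates tend to be far better than the rated worst case, which is why large arrays rebuild successfully at all.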
GrimDanfango wrote:If you had 4 drives, as two RAID 1 mirrors, would there be any reason not to stripe them into a RAID10 array, for the added performance?
Bensam123 wrote:No one has mentioned it, but how likely the array is to fail completely while rebuilding depends on how much stress you're putting on the drives in the array, which in turn depends on how aggressively the array rebuilds. A lot of controllers come with an option to change how fast the array rebuilds. The faster it rebuilds, the more stress it puts on the drives; a slower rebuild stresses the drives less, but takes longer, which also stretches out the window of time in which the array can fail completely.
Ryu Connor wrote:Modern drives don't experience extensive wear.
Bensam123 wrote:(I'm guessing your whole post was based off the Google HD research done awhile back)
Ryu Connor wrote:Storage, performance, price, and most importantly fault tolerance are all factors that each RAID level trades off differently. RAID 1, 5, 6, and nested designs like 10, 50, and 60 each bring different pros and cons. Larger-capacity drives don't change that evaluation. Remember that data sets grow over time, just like storage.
Ryu Connor wrote:Not sure why.
RAID 1, 5, 6, and 10 (amongst others) are very different. A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price. That RAID5 has better storage, price, and performance, but weaker fault tolerance. Etc. etc.
Just because my data set will suddenly fit on a 4TB drive doesn't suddenly mean RAID1 is the right fit.
Ryu Connor wrote:A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price.
Waco wrote:Ryu Connor wrote:A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price.
What does RAID 1 not have in performance? Any proper software or hardware controller will read from all the drives optimistically and write speeds are the same as a single drive.