Ryu Connor wrote:Yeah, and I noted all those factors.
I did say storage, performance, price, and fault tolerance are all choices to be made between the RAID levels. You are picking that number based on your requirements.
When addressing the OP question, these factors mean that growing storage has not killed off RAID 5 or 6.
Ryu Connor wrote:Waco wrote:Ryu Connor wrote:A bigger drive doesn't change that RAID1 doesn't have performance and it has terrible storage for the price.
What does RAID 1 not have in performance? Any proper software or hardware controller will read from all the drives optimistically and write speeds are the same as a single drive.
RAID1 isn't striping (you don't set a block size), so it can't give you amazing reads. Even when there is a boost, it is a far cry from the doubled read performance of actual striping.
http://techreport.com/review/2525/real- ... explored/3
http://patrick.wagstrom.net/weblog/2011 ... windows-7/
http://www.maximumpc.com/article/raid_done_right
RAID1 performance is either no better or only slightly better than single drive in the examples.
If all I want is fault tolerance this is fine. If I want performance and storage for the price it isn't.
Scrotos wrote:That's why I'm looking for feedback from people in the field, so to speak, to see if they are running TB level RAID 5/6 and seeing these disasters as were predicted. All my experience with RAID 5/6 is with 146 GB or 300 GB SAS drives which are below the level of OMG FAILURE that these doomsayers indicate will bring pain.
Waco wrote:Ryu Connor wrote:Waco wrote:What does RAID 1 not have in performance? Any proper software or hardware controller will read from all the drives optimistically and write speeds are the same as a single drive.
RAID1 isn't striping (you don't set a block size), so it can't give you amazing reads. Even when there is a boost, it is a far cry from the doubled read performance of actual striping.
http://techreport.com/review/2525/real- ... explored/3
http://patrick.wagstrom.net/weblog/2011 ... windows-7/
http://www.maximumpc.com/article/raid_done_right
RAID1 performance is either no better or only slightly better than single drive in the examples.
If all I want is fault tolerance this is fine. If I want performance and storage for the price it isn't.
RAID 1 is effectively RAID 0 with a stripe size anywhere from infinitely small up to the full size of the drive. Sure, Windows 7 software RAID only reads from one drive, but last I checked, almost everything else will read from all drives in a RAID 1 array simultaneously.
Since all drives are identical you can read any stripe you wish from each disk.
Ryu Connor wrote:People keep saying a hardware RAID controller makes the difference for RAID1, but I don't see it.
Areca ARC-1220 8-Port PCIe RAID6-Controller
The performance for RAID1 on that card is still basically a single drive. That is a real RAID controller.
Bauxite wrote:In ZFS and other schemes that are designed correctly, a mirror of N drives can read like a RAID0 of N drives, and write slightly slower than a RAID0 of N/2 drives.
Waco wrote:Linux/UNIX do read balancing even with the most basic software RAID.
Waco wrote:You're quoting 7 year old reviews?
just brew it! wrote:Waco wrote:Linux/UNIX do read balancing even with the most basic software RAID.
Well I just did a quick disk throughput check of the Linux RAID-1 array on my home desktop, and the read and write speeds were nearly identical (100 MB/sec for writes, 101 MB/sec for reads). So at least in my case, I'm not seeing the effects of this read balancing.
Scrotos wrote:The contention is that the larger drive capacities have explicitly made RAID 5 worthless. Fault tolerance is worthless if a rebuild will automatically cause the entire array to fail.
Ryu Connor wrote:Scrotos wrote:The contention is that the larger drive capacities have explicitly made RAID 5 worthless. Fault tolerance is worthless if a rebuild will automatically cause the entire array to fail.
This still comes from an argument that no one apparently needs more space, which isn't true.
It still seems to rest on the superstition that rebuilds kill drives.
It also ignores that during the recreation of a mirror after a failure, the "good" drive might die before completion, causing the array to fail.
Backups, they matter, yo.
Waco wrote:EDIT: It performs read-balancing on multiple accesses. A single application probably won't see any performance benefit, but doing two dd's at once should be substantially faster. At least, that's how I understand it.
Convert wrote:URE is an estimate. I think what we are really trying to determine here is the difference between theory and practice. I think this would be like figuring out MTBF though, it's also just an estimate.
SecretSquirrel wrote:3) Even with gigabit to my workstation, the network is the limiting factor for writes to the array.
Scrotos wrote:I don't understand what you're exactly saying. I would like more space. I think most of everyone in the universe wants more space. The arguments that I linked to say that because of the URE on drives, you're certain to get a failure during a rebuild and this is due primarily to each drive having larger capacity. By that token, it also seems that reading a 2 TB drive 6 times would also give you an URE, but it wouldn't matter in that instance because you're not trying to rebuild an array from data and parity, you're just losing a sector. In a RAID rebuild, that would cause the entire thing to die.
It's not that you choose RAID 5/6 to maximize space; that's not the issue. A 4 TB drive doesn't mean you'll skip building a five-drive 1 TB RAID 5 to get the space. It means that if you build a five-drive 4 TB RAID 5, then during a rebuild from a failed drive you'll hit an error that causes the entire RAID to fail. I think maybe that's where you're getting confused. It isn't a "large drives mean you can use a single large drive instead of a RAID of smaller ones" issue; it's making the RAID out of big drives that's the problem.
I get the feeling that the math doesn't match up to reality, but I don't have access to something like that Google hard drive survey to give real-world experience. So all I got is the math to feed my fear.
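For what it's worth, the math those doomsday articles lean on is easy to reproduce. The sketch below uses a deliberately naive model (independent, uniformly distributed bit errors, and a rebuild that must read every bit of every surviving drive); the function name and drive sizes are my own choices for illustration, and real drives don't fail this neatly, which is exactly why the math may not match reality:

```python
import math

def rebuild_ure_probability(surviving_drives, drive_tb, ure_per_bit):
    """Naive model: chance of hitting at least one URE while reading
    every bit of the surviving drives during a RAID 5 rebuild.
    Assumes independent, uniformly distributed bit errors."""
    bits = surviving_drives * drive_tb * 1e12 * 8
    # P(no error) = (1 - p)^bits; log1p/exp keeps this numerically stable
    p_clean = math.exp(bits * math.log1p(-ure_per_bit))
    return 1.0 - p_clean

# Five-drive RAID 5 of 4 TB disks: a rebuild reads the 4 survivors.
for label, ure in [("consumer 1 in 10^14", 1e-14),
                   ("enterprise 1 in 10^15", 1e-15)]:
    p = rebuild_ure_probability(4, 4, ure)
    print(f"{label}: {p:.1%} chance of at least one URE during rebuild")
```

Under this model the consumer-class URE spec predicts a rebuild failure most of the time (roughly 72%), while the enterprise spec predicts around 12%, which shows why the spec sheet number dominates the argument even before you question the model.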
Waco wrote:Also - backwards RAID 10 (01?) isn't any more susceptible to failure. You just change which drives can fail before you lose the whole array, right? A mirror of stripes can lose half of the drives and a stripe of mirrors can lose half of them as well.
Ryu Connor wrote:Waco wrote:Also - backwards RAID 10 (01?) isn't any more susceptible to failure. You just change which drives can fail before you lose the whole array, right? A mirror of stripes can lose half of the drives and a stripe of mirrors can lose half of them as well.
An issue of perspective I suppose.
With 0+1, losing one drive means an entire stripe set has failed. If you replace that drive, the rebuild copies from the alternate set, and if any drive in that set suffers a failure, the rebuild cannot complete. The array only tolerates a second failure if it happens to land in the set that is already dead.
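The perspective difference is easy to see by brute force. This sketch enumerates every two-drive failure combination in a four-drive array; the pairings (drives 0/1 and 2/3 as the mirror pairs or stripe sets) are my own assumed layout:

```python
from itertools import combinations

def raid10_dead(failed):
    # Stripe of mirrors: mirror pairs are (0,1) and (2,3).
    # The array dies only when both drives of some mirror pair fail.
    return {0, 1} <= failed or {2, 3} <= failed

def raid01_dead(failed):
    # Mirror of stripes: stripe sets are (0,1) and (2,3).
    # Any failed member kills its stripe set; the array dies
    # when both sets are dead.
    return bool({0, 1} & failed) and bool({2, 3} & failed)

pairs = [set(c) for c in combinations(range(4), 2)]
fatal10 = sum(raid10_dead(p) for p in pairs)
fatal01 = sum(raid01_dead(p) for p in pairs)
print(f"RAID 10:  {fatal10}/{len(pairs)} two-drive failures are fatal")
print(f"RAID 0+1: {fatal01}/{len(pairs)} two-drive failures are fatal")
```

Both layouts can survive losing half the drives in the best case, but counting all the two-drive combinations shows RAID 10 survives more of them (4 of 6) than 0+1 (2 of 6), which is the asymmetry being argued about.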
Ryu Connor wrote:Drives fail, doesn't matter if it is one drive, multiple drives, or one of the RAIDs.
Small drive vs big drive doesn't change that. Rebuilds don't change that. We also can't calculate your luck. Taking the safe route might not end like you suspect. Streaks are a fickle thing, just visit Vegas.
WD RE wrote:<10 in 10^16 (non-recoverable read errors per bits read)
Convert wrote:From what I know about failed RAID 5 rebuilds is that it will simply fail to rebuild, it won't break the array. What I'm kind of curious is if a URE guarantees a failed rebuild. I'm not an expert about RAID parity but is it not possible that the block that needs to be recovered could be found in parity on another drive or does it require the missing member to help with that?
Convert wrote:This reminds me of the problem with SSDs and how many times the flash can be written to. There's a lot of math you can run on that problem too. There are people out there, though, who have been running programs to constantly write data to their SSDs to test their longevity. Obviously these are two completely different things, but it's interesting to see how real life correlates to estimated values: http://www.xtremesystems.org/forums/sho ... nm-Vs-34nm (scroll down for more graphs)