Linux software RAID gotcha


Linux software RAID gotcha

Posted on Mon Sep 26, 2011 9:08 pm

Who was the doofus who decided that the default layout for a Linux software RAID array puts the RAID superblock at the very end of the device or partition? The upshot is that the system can have a hard time telling the difference between an array built on raw devices and one built on partitions that extend all the way to the end of those devices. If you rely on the system's automatic array detection logic, it may get it wrong; this can have some rather unpleasant (and very puzzling) results.

I managed to get the MD subsystem on my work desktop into a very confused state because of this, and ended up accidentally corrupting (and needing to rebuild from scratch) the RAID-1 array I was setting up. No real harm done (no data lost), but man was that a head-scratcher!

When I finally figured out WTF was going on, I forced it to use "version 1.2" superblocks when I rebuilt the array. This puts the superblocks at a fixed offset from the *start* of the device or partition. Seems a lot more sensible to me!

Moral of the story: Always use the "--metadata=1.2" option if you're building a RAID array on partitioned devices!
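For anyone who wants the actual incantation, it's just a matter of passing the metadata flag at creation time; something along these lines (md0 and the sdX names are placeholders, substitute your own):

  # Create a two-disk RAID-1 with the v1.2 superblock near the start of each partition
  mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # Sanity check afterwards: the "Version" field in the output should read 1.2
  mdadm --examine /dev/sda1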
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37673
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Linux software RAID gotcha

Posted on Mon Sep 26, 2011 9:50 pm

axeman wrote: Good to know, dude. I'll still take it over fakeraid any day.

Oh yeah, me too. Not least because fakeraid typically doesn't work in Linux anyway, since the drivers are (usually) Windows-only. Now that GRUB 2 supports booting from software RAID arrays, you can even use software RAID for the boot volume...
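If you go that route, the usual trick is to install the boot loader on every member drive so the box still boots if one of them dies; roughly something like this (drive names are placeholders):

  # Install GRUB 2 to the MBR of each mirror member
  grub-install /dev/sda
  grub-install /dev/sdb
  update-grub        # or: grub-mkconfig -o /boot/grub/grub.cfg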
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37673
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 7:35 am

JBI, why would you RAID partitions instead of devices? Surely you have two drives in this machine that are in RAID-1? Or are you RAIDing two drives that have one partition on each?
mentaldrano
Gerbil
 
Posts: 75
Joined: Thu Mar 20, 2008 4:17 pm

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 7:47 am

Agree with mentaldrano. When I've done RAID on Linux I do it on the bare devices and then use LVM to produce the equivalent of partitions within the RAID device.
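Roughly this, if anyone wants a sketch (device, VG, and LV names are just examples):

  # Mirror two whole disks, then carve "partitions" out of the mirror with LVM
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 50G -n root vg0
  lvcreate -L 200G -n home vg0
  mkfs.ext4 /dev/vg0/root
  mkfs.ext4 /dev/vg0/home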
notfred
Grand Gerbil Poohbah
 
Posts: 3731
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 7:55 am

notfred wrote: Agree with mentaldrano. When I've done RAID on Linux I do it on the bare devices and then use LVM to produce the equivalent of partitions within the RAID device.

Same here, if only to avoid the headaches.
Z68XP-UD4 | 2700K @ 4.4 GHz | 16 GB | 770 | PCP&C Silencer 950 | XSPC RX360 | Heatkiller R3 | D5 + RP-452X2 | HAF 932 | 1 TB WD Black w/ SRT
Waco
Gerbil Elite
 
Posts: 746
Joined: Tue Jan 20, 2009 4:14 pm

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 8:32 am

Huh. I wonder what the reasoning behind that design decision is.

Thanks for the heads up.
Desktop: FX-8350 | 32 GB | XFX Radeon 6950 | Windows 7 x64
Laptop: i7 740QM | 12 GB | Mobility Radeon 5850 | Windows 8.1.1.1.1 x64
SuperSpy
Gerbil Jedi
Gold subscriber
 
 
Posts: 1570
Joined: Thu Sep 12, 2002 9:34 pm
Location: TR Forums

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 9:21 am

mentaldrano wrote: JBI, why would you RAID partitions instead of devices? Surely you have two drives in this machine that are in RAID-1? Or are you RAIDing two drives that have one partition on each?

Yes, RAIDing two partitioned drives.

The reasoning behind doing it this way was to insulate myself somewhat from differences in drive sizes. If the array is built on raw devices, and a drive needs to be replaced, you can get tripped up by the fact that different brands/models of drive may have slightly different capacities. If a replacement drive is smaller than the other drives in the array, it won't work. By using partitions under the RAID, I can make the RAID array a percent or so smaller than the physical drives, thereby sidestepping that potential landmine.

The "gotcha" mentioned in the OP hit when I got the bright idea of "hey, I could partition the pad space too, and make a small scratch array out of that". Shortly thereafter, all hell broke loose. :roll:

In retrospect, maybe I should've just used the --size option when creating the array, instead of trying to trick it using partitions. :wink:
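Something like this would have done the job on the raw devices (--size is per-device and in KiB; the number below is just an illustration, a bit under the drives' full capacity):

  # Leave roughly 1% of each 2 TB drive unused so a slightly smaller replacement still fits
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --size=1930000000 /dev/sdb /dev/sdc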

notfred wrote: Agree with mentaldrano. When I've done RAID on Linux I do it on the bare devices and then use LVM to produce the equivalent of partitions within the RAID device.

My home system is actually booting from a RAID-1 which is built on partitioned drives, with LVM on top of *that*. :lol: I don't recall if I was even given a choice of whether to use partitions or raw devices when I set that one up; in that case the disk setup was done via Ubuntu's "Alternate Install" CD.

I'm not sure why the home system *doesn't* get confused, since the partitions extend to the end of the disks, and the array *isn't* using the 1.2 metadata format. As a guess, maybe it is because on that array the type of the underlying partitions is set to "Linux RAID autodetect". I intentionally *didn't* set the partition type on the drives in the new array because the mdadm documentation claims that this is deprecated.
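(For anyone curious, mdadm --examine shows which superblock format a member is using; the old in-kernel autodetect path only ever worked with 0.90 superblocks on partitions of type 0xfd, which is presumably why the home box stays out of trouble.)

  # See which superblock format an array member carries (sda1 is a placeholder)
  mdadm --examine /dev/sda1
  #   Version : 0.90  -> superblock at the end; eligible for the (deprecated) kernel autodetect
  #   Version : 1.2   -> superblock a fixed 4 KiB from the start of the member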

And yes, I agree that if I'd used raw devices I would've never seen this.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37673
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 9:42 am

just brew it! wrote: The reasoning behind doing it this way was to insulate myself somewhat from differences in drive sizes. If the array is built on raw devices, and a drive needs to be replaced, you can get tripped up by the fact that different brands/models of drive may have slightly different capacities. If a replacement drive is smaller than the other drives in the array, it won't work. By using partitions under the RAID, I can make the RAID array a percent or so smaller than the physical drives, thereby sidestepping that potential landmine.

Ugh, I JUST had to deal with this. I had a Seagate Barracuda LP 2 TB drive die on me and the replacement (WITH THE SAME MODEL NUMBER) is 100 MB smaller than the old one. I had to shrink my array to get it to sync up with the new drive. :roll:


I guess I should have thought of that when replying earlier.
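For anyone hitting the same wall: the shrink boils down to making the filesystem smaller than the new array size first, then shrinking the array itself. A rough sketch, assuming a plain ext4 filesystem sitting directly on the array (no LVM in between), with purely illustrative sizes and device names:

  # DANGER: back everything up before shrinking anything
  umount /dev/md0
  e2fsck -f /dev/md0
  resize2fs /dev/md0 1890000000K            # shrink the fs to below the new array size
  mdadm --grow /dev/md0 --size=1900000000   # new per-device size, in KiB
  resize2fs /dev/md0                        # let the fs expand to fill the smaller array
  mdadm --manage /dev/md0 --add /dev/sdc1   # then add the replacement drive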
Z68XP-UD4 | 2700K @ 4.4 GHz | 16 GB | 770 | PCP&C Silencer 950 | XSPC RX360 | Heatkiller R3 | D5 + RP-452X2 | HAF 932 | 1 TB WD Black w/ SRT
Waco
Gerbil Elite
 
Posts: 746
Joined: Tue Jan 20, 2009 4:14 pm

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 9:48 am

For the record, software RAID (Windows or Linux) and host RAID (fake RAID) are basically the same. I just avoid them both because they both suck. The exceptions I've found are these: Linux software RAID with ZFS is OK, or if you're using an Intel server board and can use ESRT2 host RAID. ESRT2 is based on LSI RAID controllers and works very well with most Linux distros. But if RAID is a major concern, spend the money on a server motherboard and a decent RAID card. (HighPoint RocketRAID is not a decent RAID card; onboard RAID is better and more reliable than RocketRAID.)
MCP MCDST MCSA MCTS MCITP
A+ Net+
Intel Core i7-950 Intel DX58SO Mobo 6GB Corsair XMS3 Tri-Channel BFG Geforce 260 GTX
2x 160GB Seagate HDs RAID 0 2x 500GB WD RE3 HDs RAID 0
Built 40K+ systems and still counting
EV42TMAN
Gerbil
 
Posts: 39
Joined: Fri Jun 10, 2011 11:50 am

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 9:51 am

EV42TMAN wrote: For the record, software RAID (Windows or Linux) and host RAID (fake RAID) are basically the same. I just avoid them both because they both suck.

Hey now - that's not true at all any more.

Software RAID is by far the best choice for non-parity arrays, and it's fast becoming a better choice than hardware RAID for parity arrays as well. It's far more portable and in most cases performs better.
Z68XP-UD4 | 2700K @ 4.4 GHz | 16 GB | 770 | PCP&C Silencer 950 | XSPC RX360 | Heatkiller R3 | D5 + RP-452X2 | HAF 932 | 1 TB WD Black w/ SRT
Waco
Gerbil Elite
 
Posts: 746
Joined: Tue Jan 20, 2009 4:14 pm

Re: Linux software RAID gotcha

Posted on Tue Sep 27, 2011 10:02 am

EV42TMAN wrote: For the record, software RAID (Windows or Linux) and host RAID (fake RAID) are basically the same. I just avoid them both because they both suck.

The generally accepted usage is that "fake RAID" means you need to load a proprietary driver. And yes, that does indeed suck. It is non-portable, and quality/reliability of the driver and RAID BIOS varies widely.

OTOH pure software RAID on Linux has gotten quite good in the past couple of years (modulo little hiccups like the one which spawned this thread). Bootable software RAID arrays don't require serious voodoo any more (they only require entry-level voodoo... :lol:), and performance of RAID-5 is even pretty reasonable on a modern multi-core CPU.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37673
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

