JBI, why would you RAID partitions instead of devices? Surely you have two drives in this machine that are in RAID-1? Or are you RAIDing two drives that have one partition on each?
Yes, RAIDing two partitioned drives.
The reasoning behind doing it this way was to insulate myself somewhat from differences in drive sizes. If the array is built on raw devices, and a drive needs to be replaced, you can get tripped up by the fact that different brands/models of drive may have slightly different capacities. If a replacement drive is smaller than the other drives in the array, it won't work. By using partitions under the RAID, I can make the RAID array a percent or so smaller than the physical drives, thereby sidestepping that potential landmine.
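A minimal sketch of that approach, assuming two hypothetical disks `/dev/sda` and `/dev/sdb` (device names and the 99% figure are illustrative, not from the original post):

```shell
# Hypothetical devices; run as root. Stop each partition ~1% short of the
# end of the disk so a slightly smaller replacement drive can still hold it.
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 0% 99%
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 99%

# Build the mirror on the partitions, not on the raw disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```

The pad space at the end of each disk is the "insurance" against capacity differences between drive models.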
The "gotcha" mentioned in the OP hit when I got the bright idea of "hey, I could partition the pad space too, and make a small scratch array out of that". Shortly thereafter, all hell broke loose.
In retrospect, maybe I should've just used the --size option when creating the array, instead of trying to trick it using partitions.
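For reference, that would look something like this (hypothetical devices and size; `--size` is per-device, and recent mdadm accepts a `G` suffix):

```shell
# Use the raw disks, but cap the per-device space mdadm uses at 950G,
# leaving headroom for replacement drives that come up slightly short.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --size=950G /dev/sda /dev/sdb
```

Same end result as undersized partitions, without the extra partition table to confuse things.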
Agree with mentaldrano. When I've done RAID on Linux, I've done it on the bare devices and then used LVM to produce the equivalent of partitions within the RAID device.
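A rough sketch of that layout, assuming hypothetical disks `/dev/sdb` and `/dev/sdc` and made-up VG/LV names:

```shell
# Mirror the whole disks, then carve "partitions" out of the array with LVM.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

pvcreate /dev/md0                     # make the array an LVM physical volume
vgcreate vg_raid /dev/md0             # volume group on top of the mirror
lvcreate -L 20G -n lv_home vg_raid    # a 20G logical volume, like a partition
mkfs.ext4 /dev/vg_raid/lv_home
```

The nice side effect is that resizing or adding "partitions" later is an LVM operation, with no partition-table surgery on the array.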
My home system is actually booting from a RAID-1 which is built on partitioned drives, with LVM on top of *that*.
I don't recall if I was even given a choice of whether to use partitions or raw devices when I set that one up; in that case the disk setup was done via Ubuntu's "Alternate Install" CD.
I'm not sure why the home system *doesn't* get confused, since the partitions extend to the end of the disks, and the array *isn't* using the 1.2 metadata format. As a guess, maybe it is because on that array the type of the underlying partitions is set to "Linux RAID autodetect". I intentionally *didn't* set the partition type on the drives in the new array because the mdadm documentation claims that this is deprecated.
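For the record, setting that partition type on an MBR disk is a one-liner with sfdisk (hypothetical device; 0xfd is the "Linux RAID autodetect" type). The mdadm docs deprecate it because with 1.x metadata, assembly is done from userspace/initramfs rather than by in-kernel autodetection:

```shell
# Mark partition 1 on /dev/sda as type 0xfd (Linux RAID autodetect).
sfdisk --part-type /dev/sda 1 fd
```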
And yes, I agree that if I'd used raw devices I would've never seen this.