Linux RAID success


Posted on Sun Apr 03, 2011 9:00 am

Took longer than I thought. After a total of about 10 to 12 hours of study and attempts, I've got it pretty much nailed. For posterity's sake, here's how:

Got myself 3 drives - one for the OS, 2 for a RAID1 array. So that's:
/dev/sda
/dev/sdb
/dev/sdc

Did:
cfdisk /dev/sdb # created new partition with all available space, partition type FD
cfdisk /dev/sdc # did the same
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.original
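One thing worth knowing at this point (not in my original notes): you can watch the initial mirror sync and double-check the array state before going any further. These are standard, read-only commands:

```shell
# Overall state of all md arrays, including resync progress
cat /proc/mdstat

# Detailed view of one array: state, member disks, sync status
mdadm --detail /dev/md0
```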

Then I edited /etc/mdadm/mdadm.conf and replaced:
DEVICE partitions
with:
DEVICE /dev/sdb1 /dev/sdc1
Then I went to the bottom of the file and, under the line "# definitions of existing MD arrays", added:
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1 level=1
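(Side note I learned later: instead of typing the ARRAY line by hand, mdadm will generate one for every running array, which avoids typos:)

```shell
# Append auto-generated ARRAY definitions for all running arrays
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```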

Then partition the array and put a filesystem on it:
cfdisk /dev/md0 # I chose 83 for a Linux type partition
mkfs -t ext4 /dev/md0p1

(I originally wrote this from memory as "mkfs -t ext4 /dev/md0 100%" - that's wrong on two counts: mkfs doesn't take a "100%" argument, and since I created a partition on the array, the filesystem goes on /dev/md0p1 rather than on /dev/md0 itself. The same mixup bit me again with fstab below.)

All the above will create the array and initialize the array after any reboot. So now I guess all I have to do is put an entry in /etc/fstab? I'll give that a shot real quick and see. Be right back after a reboot.
Last edited by flip-mode on Sun Apr 03, 2011 9:24 am, edited 3 times in total.
flip-mode
Gerbil Khan
Silver subscriber
 
 
Posts: 9084
Joined: Thu May 08, 2003 12:42 pm
Location: Cincinnati, OH

Re: Linux RAID success, finally.

Posted on Sun Apr 03, 2011 9:03 am

OK, that didn't work. Evidently, fstab is referenced before the array is up and running.
flip-mode

Re: Linux RAID success, finally.

Posted on Sun Apr 03, 2011 9:08 am

Oh hellz yeah. I was wrong. /etc/fstab works just fine, but I made a mistake. The fstab entry should not be /dev/md0 since that is the virtual disk drive - not the partition. Once I changed that to /dev/md0p1 I rebooted and everything is mounted.
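For reference, the working entry ended up looking something like this (the mount point is whatever you choose; a UUID from blkid works too and is a bit more robust if device names ever shift):

```
/dev/md0p1  /mnt/raid  ext4  defaults  0  2
```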

EXCITING!
flip-mode

Re: Linux RAID success

Posted on Sun Apr 03, 2011 12:41 pm

Decided not to get really adventurous and use software RAID-1 for the boot volume, eh? :lol:
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37684
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Linux RAID success

Posted on Sun Apr 03, 2011 2:51 pm

just brew it! wrote: Decided not to get really adventurous and use software RAID-1 for the boot volume, eh? :lol:

For now, at least. The impetus for this is my job - I want to replace our NAS at the office, so I needed to get familiar with how things work first.

I think the plan for work may be to use an SSD as the boot drive (for the sake of reliability) and skip RAID for it. Or I could do a RAID1 boot array and a separate RAID1 data array. Finally, I was considering RAID5 or RAID1+0 options. Storage space won't be much of a concern - I could get by with as little as 1TB, but since I have 2TB drives at hand, I could easily do double that. The failure rate of 2TB drives isn't exactly encouraging. :-?

I haven't really thought it all the way through.

Then the last substantial piece of the puzzle will be Samba. I've used Samba before, but never in very sophisticated ways.

In the last two months I've set up firewall, FTP, and HTTP services with Linux for the office, with RAID and file sharing soon to follow. It's been a satisfying journey.
flip-mode

Re: Linux RAID success

Posted on Mon Apr 04, 2011 11:03 am

Which distro did you end up using?
Desktop: FX-8350 | 32 GB | XFX Radeon 6950 | Windows 7 x64
Laptop: i7 740QM | 12 GB | Mobility Radeon 5850 | Windows 8.1.1.1.1 x64
SuperSpy
Gerbil Jedi
Gold subscriber
 
 
Posts: 1573
Joined: Thu Sep 12, 2002 9:34 pm
Location: TR Forums

Re: Linux RAID success

Posted on Mon Apr 04, 2011 11:32 am

SuperSpy wrote: Which distro did you end up using?

It works just the same in both Debian and Ubuntu - which makes plenty of sense: since the md driver is part of the Linux kernel, it should work almost identically regardless of distribution, with the possible exception of the location of mdadm.conf.

I set the array up on Ubuntu. But then for fun I copied my mdadm.conf to a USB drive, yanked Ubuntu, and did a clean install of Debian 6.0.1a. After the install I just copied my mdadm.conf back from the USB drive, rebooted, and the array started up without a single issue. That's pretty frickin awesome. That's the beauty of Linux and conf files, and for whatever reason it continues to impress me. The learning curve can be steep at times, but after that things can be done and then replicated with amazing speed.
flip-mode

Re: Linux RAID success

Posted on Mon Apr 04, 2011 11:40 am

Oh, by the way, I wanted to ask opinions on whether to go RAID1 or RAID5 for the office file server.

If I do RAID5, it would be with (3) 1TB drives.

If I do RAID1, it would be with (2) 2TB drives.

Fault tolerance is the same, although the likelihood of a drive failure is 50% higher with 3 drives instead of 2. Performance of RAID5 is better than RAID1 according to the md manual, but I'm not so sure the performance differences will be dramatic enough to matter. Power consumption isn't really worth factoring in.

So, I'm leaning RAID1, but want to hear you all's thoughts.
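The "50% higher" figure checks out as a first-order approximation. Assuming independent failures and an illustrative (made-up) 3% annual failure rate per drive:

```python
def any_failure(n: int, p: float) -> float:
    """P(at least one of n independent drives fails in a year)."""
    return 1 - (1 - p) ** n

p = 0.03  # hypothetical per-drive annual failure rate, for illustration only
two = any_failure(2, p)    # ~0.0591
three = any_failure(3, p)  # ~0.0873
print(f"3-drive risk is {three / two:.2f}x the 2-drive risk")
# prints: 3-drive risk is 1.48x the 2-drive risk
```

For small failure rates the ratio approaches exactly 3/2, and it shrinks slightly as the per-drive rate grows.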
flip-mode

Re: Linux RAID success

Posted on Mon Apr 04, 2011 12:02 pm

Depending on RAID5's stripe size and the kind of writes an office does (which I assume are mostly small), you may not see a huge speed advantage.

My vote is for RAID1. You can easily back up R1 while keeping one of the drives active too. Near 24/7 operation.
i7 860 - GA-P55-USB3 - 8GiB - HD7850 - SSD - 3.64TB HDD - Xonar D1 - U2410 - Win7 Pro x64.
DancinJack
Minister of Gerbil Affairs
 
Posts: 2042
Joined: Sat Nov 25, 2006 3:21 pm
Location: Austin, TX

Re: Linux RAID success

Posted on Mon Apr 04, 2011 12:07 pm

flip-mode wrote: I set the array up on Ubuntu. But then for fun I copied my mdadm.conf to a USB drive, yanked Ubuntu, and did a clean install of Debian 6.0.1a. After the install I just copied my mdadm.conf back from the USB drive, rebooted, and the array started up without a single issue. That's pretty frickin awesome. That's the beauty of Linux and conf files, and for whatever reason it continues to impress me. The learning curve can be steep at times, but after that things can be done and then replicated with amazing speed.

Yeah, that's one of *NIX's traditional strengths as a server platform. Most services are configured via a single text file, or at most a folder containing a small number of text files. Copy the config file(s) to the appropriate location on a new system, and you've just migrated the settings for that service. A corollary of this is that in a pinch, pretty much any system configuration change can be accomplished with nothing more than a text editor and access to the partition containing the /etc folder.

The way pretty much everything is easily scriptable, and the way the base set of CLI tools all work together, is a boon to sysadmins as well.

GUI-based configuration and system management tools only get you so far, since you can only do things that the designer of the tool thought of ahead of time. It is kind of like the old joke about WYSIWYG document editors: it really should be WYSIAYG (What You See Is *All* You Get). :lol:

flip-mode wrote: Oh, by the way, I wanted to ask opinions on whether to go RAID1 or RAID5 for the office file server.

If I do RAID5, it would be with (3) 1TB drives.

If I do RAID1, it would be with (2) 2TB drives.

Fault tolerance is the same, although the likelihood of a drive failure is 50% higher with 3 drives instead of 2. Performance of RAID5 is better than RAID1 according to the md manual, but I'm not so sure the performance differences will be dramatic enough to matter. Power consumption isn't really worth factoring in.

So, I'm leaning RAID1, but want to hear you all's thoughts.

Yeah, that's a tough call. A few more things to consider:

Depending on the speed of the CPU, write performance might actually be better with the RAID-1.

1TB drives are probably more reliable than 2TB ones (fewer platters), so that may negate the additional failure risk from the 3rd drive.

I believe with the RAID-5 you have the option of adding a drive and "reshaping" the array in place to add capacity in the future.
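(For the record, that reshape looks something like this with mdadm - /dev/sde1 is a hypothetical new member, and I'm assuming the filesystem sits directly on /dev/md0. It can take hours, so have a backup first:)

```shell
# Add the new disk as a spare, then grow the array onto it
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4

# Once the reshape completes, expand the ext4 filesystem to match
resize2fs /dev/md0
```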

Sorry, I think I've just added to the confusion! :lol:
just brew it!

Re: Linux RAID success

Posted on Mon Apr 04, 2011 12:08 pm

DancinJack wrote: My vote is for RAID1. You can easily back up R1 while keeping one of the drives active too. Near 24/7 operation.

Another interesting option for doing backups is to use LVM to set up a "snapshot" file system. This ensures a consistent point-in-time backup image, with essentially zero downtime.
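(Rough shape of the snapshot workflow, with made-up volume names - assume the data lives in logical volume "data" in volume group "vg0":)

```shell
# Create a 5 GB copy-on-write snapshot of the live volume
lvcreate --snapshot --size 5G --name data_snap /dev/vg0/data

# Mount it read-only and back it up at leisure; the live FS keeps running
mount -o ro /dev/vg0/data_snap /mnt/snap
rsync -a /mnt/snap/ /backup/data/

# Tear the snapshot down when finished
umount /mnt/snap
lvremove -f /dev/vg0/data_snap
```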
just brew it!

Re: Linux RAID success

Posted on Mon Apr 04, 2011 12:17 pm

just brew it! wrote: I believe with the RAID-5 you have the option of adding a drive and "reshaping" the array in place to add capacity in the future.


This is true. I don't think there is any real limit to the number of drives either. Fantastic flexibility.
DancinJack

Re: Linux RAID success

Posted on Mon Apr 04, 2011 12:40 pm

just brew it! wrote:
DancinJack wrote: My vote is for RAID1. You can easily back up R1 while keeping one of the drives active too. Near 24/7 operation.

Another interesting option for doing backups is to use LVM to set up a "snapshot" file system. This ensures a consistent point-in-time backup image, with essentially zero downtime.


I would second this. :)
Core i7 920 @stock - 6GB OCZ Mem - Adaptec 5805 - 2 x Intel X25-M in RAID1 - 5 x Western Digital RE4 WD1003FBYX 1TB in RAID 6 - Nvidia GTX 460
kc77
Gerbil Team Leader
 
Posts: 242
Joined: Sat Jul 02, 2005 2:25 am

Re: Linux RAID success

Posted on Mon Apr 04, 2011 2:02 pm

axeman wrote: Don't run RAID5 in software. Even with a dedicated RAID controller that can do all the parity calculations, the consensus seems to be that RAID 10 is a better solution overall except when it comes to how much disk space is used by the redundancy, which is a non-issue given the price/GB that disks are at these days.
And this is due to performance or what?
flip-mode

Re: Linux RAID success

Posted on Mon Apr 04, 2011 3:37 pm

flip-mode wrote:
axeman wrote: Don't run RAID5 in software. Even with a dedicated RAID controller that can do all the parity calculations, the consensus seems to be that RAID 10 is a better solution overall except when it comes to how much disk space is used by the redundancy, which is a non-issue given the price/GB that disks are at these days.
And this is due to performance or what?


Performance and the RAID-5 write hole. On the performance side there is a reason good RAID5 controllers cost $500+. Calculating all that parity is hard work you don't want your CPU wasting time doing.

The write hole has to do with committing data to the array and losing power before the parity can be updated. If this happens you can lose data (sometimes silently), which is why RAID-5 controllers often have NVRAM so their state survives a power loss. Obviously this is unavailable in software RAID.

Without dedicated hardware to handle RAID I would stick to RAID 1.
ekul
Gerbil
 
Posts: 81
Joined: Thu Jan 17, 2008 1:25 pm

Re: Linux RAID success

Posted on Mon Apr 04, 2011 4:45 pm

Welcome to the club, flip-mode. I've been running an mdadm RAID-5 on Lenny since... well, whenever Lenny was still the testing branch. I don't know how I ever lived without a NAS. Right now I'm backing up the array to an external HDD via rsnapshot. I'd like to tinker a bit with BackupPC as well.

If I had to do it all over again today, I would probably go RAID-6 for some extra peace of mind.


just brew it! wrote: Another interesting option for doing backups is to use LVM to set up a "snapshot" file system. This ensures a consistent point-in-time backup image, with essentially zero downtime.

My ears perked up at that one. I'm going to have to read up on this.
DrCR
Gerbil
 
Posts: 70
Joined: Tue May 10, 2005 7:18 am

Re: Linux RAID success

Posted on Mon Apr 04, 2011 8:24 pm

Well, if I'm feeling ambitious, I might give both configs a test run and run some benchmarks.

I have (3) 2TB drives. That makes for a number of convenient RAID1 scenarios. I could run the 3rd drive as the dedicated backup drive. Or it could run as a hot spare. Or, I could keep it on the shelf as a cold spare.

Seems the only reason to go RAID5 is if it meaningfully outperforms the RAID1 setup, and even then there's that scary write hole issue.

Just for the record, and so you all know how bad things currently are and that there's nowhere to go but up, my office is currently relying on a 5 year old Buffalo Terastation. I think it's a version 1.04. It's a 1TB NAS - (4) 250GB RAID5. I think it averages around 15Mbps transfers. I honestly don't know how fast it is. It's slow enough that I haven't found anyone that's bothered to put it on a chart e.g. at Small Net Builder or some such.
flip-mode

Re: Linux RAID success

Posted on Tue Apr 05, 2011 8:09 am

Unless you really need the capacity, it's probably not worth going RAID5 over 1/10 until you have 4 or more drives.

I set up a Debian machine very similarly a week or two ago, and while write speeds suffered significantly, read speeds were still plenty fast enough to saturate a GbE link. It actually turned out to be faster in general than the RAID10 config (I tested that too), as the read improvement apparently overshadowed the write penalty (the machine's primary job is to collect and zip the nightly backups).

Code:
md2 : active raid5 sda4[0] sdd4[4] sdb4[2] sdc4[1]
      2843822592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid10 sda2[0] sdd2[4] sdb2[2] sdc2[1]
      48825344 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md0 : active raid1 sda1[0] sdd1[4] sdb1[2] sdc1[1]
      498676 blocks super 1.2 [4/4] [UUUU]

md0 = RAID1 /boot (512MB)
md1 = RAID10 / (50GB)
md2 = RAID5 /home (~3TB)
SuperSpy

Re: Linux RAID success

Posted on Sun Apr 10, 2011 2:37 am

JollyPepper
Gerbil In Training
 
Posts: 2
Joined: Tue Mar 29, 2011 1:50 pm

Re: Linux RAID success

Posted on Sun Apr 10, 2011 8:59 am

That first link is rather old - it predates SMART and block remapping in IDE drives. I agree with the performance penalty parts, but the bit about drives quietly dying is now invalid.
notfred
Grand Gerbil Poohbah
 
Posts: 3732
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: Linux RAID success

Posted on Sun Apr 10, 2011 9:26 am

That site is against parity in all forms. RAID 5 is just its most popular form.
Ivy Bridge i5-3570K@4.0Ghz, Gigabyte Z77X-UD3H, 2x4GiB of PC-12800, EVGA 660Ti, Corsair CX-600 and Fractal Refined R4 (W). Kentsfield Q6600@3Ghz, HD 4850 2x2GiB PC2-6400, Gigabyte EP45-DS4P, OCZ Modstream 700W, and PC-7B.
Krogoth
Maximum Gerbil
Silver subscriber
 
 
Posts: 4404
Joined: Tue Apr 15, 2003 3:20 pm
Location: somewhere on Core Prime

Re: Linux RAID success

Posted on Sun Apr 10, 2011 9:56 am

notfred wrote: That first link is rather old - it predates SMART and block remapping in IDE drives. I agree with the performance penalty parts, but the bit about drives quietly dying is now invalid.

Yup... and the bit about failed sectors "returning garbage" isn't valid either; the drive returns a read failure error. With any sane RAID setup, this causes the drive to be dropped from the array and an alert sent to the server administrator indicating that the drive needs to be replaced and the array rebuilt.

Newer versions of Linux mdadm RAID also do a periodic scan of all of the drives for bad sectors, greatly reducing the odds that you'll have a double read failure (i.e. bad sector causes one drive to be kicked from the array; second bad sector on one of the *other* drives during the subsequent recovery results in loss of the entire array). Before this scan mechanism was implemented, the double read failure scenario was probably the biggest problem for RAID-5; now at least you've got a good chance of finding bad sectors soon after they develop (when the array can still be recovered), rather than having them sitting there like a ticking time bomb.
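(On Debian and friends this periodic scan is the checkarray cron job; you can also trigger a pass by hand through sysfs - md0 here stands in for whichever array you want checked:)

```shell
# Kick off a background consistency check of the whole array
echo check > /sys/block/md0/md/sync_action

# Watch progress, and see how many mismatched blocks were found
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```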
just brew it!

Re: Linux RAID success

Posted on Sun Apr 10, 2011 12:31 pm

Any reason you can't drop FreeNAS on your machine? You could at least test different RAID levels quickly and easily...
If we cannot be free, at least we can be cheap! FZ.
satchmobob
Gerbil Team Leader
 
Posts: 212
Joined: Tue Oct 15, 2002 2:42 pm
Location: Worcester, UK

Re: Linux RAID success

Posted on Sun Apr 10, 2011 1:18 pm

flip-mode wrote: Fault tolerance is the same, although the likelihood of a drive failure is 50% higher with 3 drives instead of 2. Performance of RAID5 is better than RAID1 according to the md manual, but I'm not so sure the performance differences will be dramatic enough to matter. Power consumption isn't really worth factoring in.

So, I'm leaning RAID1, but want to hear you all's thoughts.

READ performance for RAID5 is potentially better (though I question that if we're talking RAID10, not RAID1). WRITE performance for RAID5 is going to be somewhat to substantially worse.

BTW, if you're talking more than two drives, I assume you mean RAID10 (RAID0 striping on top of RAID1 mirrors).

OK, I reread the first post. Yes, RAID10 is really the better of all the options, but you need at least four drives for the array. With only two drives, you're talking a simple RAID1 mirror, as you stated.
Buub
Maximum Gerbil
Silver subscriber
 
 
Posts: 4200
Joined: Sat Nov 09, 2002 11:59 pm
Location: Seattle, WA

Re: Linux RAID success

Posted on Mon Apr 18, 2011 12:40 pm

Update:

Just now got the RAID 1 array shared out with Samba and did a transfer test and clocked 105-107 MB/s sustained transfer speed. :P 8) :P

And how does that compare to the office's current storage device - a Buffalo Terastation RAID 5? That machine transfers the same file at 8.5-10 MB/s sustained.

That's a freaking awesome improvement right there. Yes.

Next step is to get more familiar with Samba.
flip-mode

Re: Linux RAID success

Posted on Fri Apr 22, 2011 12:05 am

Are these configured with regular partitions, or have you tried out LVM?
Nitrodist
Grand Gerbil Poohbah
 
Posts: 3280
Joined: Wed Jul 19, 2006 1:51 am
Location: Minnesota

