Hi, everybody! Hi, Dr….. Evil!

I’m just going to jump right in with some details on a project I recently finished up. It’s a ridiculously large (in all senses of the word) storage server. Long story short, a while back I wound up with a number of Thunder K7X boards full of Athlon XPs, and a bunch of 36GB 10K SCSI drives. I decided to use one of the boards and six of the drives as the basis for a storage server. I picked up a PCI-X hardware RAID card off eBay for $60 or so, and stuck six of the 36GB drives on it in a RAID 5. I love hardware RAID cards, because they give you cool stuff like online capacity expansion and audible alarms in the event of a drive failure. I used this configuration to play around with iSCSI a bit using Linux, but unfortunately it seems the iscsitarget project has some performance issues with the Microsoft initiator, so I was never able to come close to the “local” speed of the array (150MB/s sustained transfer rate, if memory serves) even with gigabit Ethernet.
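For the curious, exporting an array through iscsitarget (IET) only takes a couple of lines in /etc/ietd.conf. A minimal sketch looks something like this — the target IQN and device path here are made up for illustration, not from my actual setup:

```
# /etc/ietd.conf — expose one block device as an iSCSI target
Target iqn.2006-01.example.com:storage.raid5
        # Path is the array's block device; Type can be fileio or blockio
        Lun 0 Path=/dev/sda,Type=fileio
```

After restarting ietd, the Microsoft initiator can discover and connect to the target — it just won’t go as fast as you’d hope, as noted above.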

Of course, if you’re going to make a storage server, it needs to have not storage, but STORAGE, and unless you’ve won the Powerball that means SATA drives. I picked up another hardware RAID controller on eBay, this one a six-port SATA PCI-X job for $150 or so. At the time I started this project, the price/capacity sweet spot on SATA drives was 300GB, so I picked up a total of four Maxtor 300GB drives and started chucking them into an array.

Two of them were already in a RAID 1 courtesy of a software/hardware RAID controller on my motherboard, and they happened to have a bunch of data I didn’t want to lose. Did I mention I love hardware RAID controllers? It went like this: Add one of the two RAID 1 drives to the controller by itself and it comes up as JBOD with all the data on it. Chuck on another (blank) drive and drag it into the “array.” The controller then turns the JBOD into a mirror. Connect another drive and drag it in and the controller turns the mirror into a RAID 5. If you want, keep adding drives until you run out of ports, or devote one to a hot spare, whatever you wanna do.
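The capacity math behind that migration is easy to sketch. Here’s a toy model of what you end up with at each step — the drive size comes from the build above, but the tiering logic is just an illustration of how JBOD/RAID 1/RAID 5 capacities work, not the controller’s actual firmware:

```python
# Toy model of usable capacity during the JBOD -> mirror -> RAID 5
# migration described above. Illustrative only; real controllers differ.
DRIVE_GB = 300  # the Maxtor 300GB drives from this build

def usable_gb(num_drives: int) -> int:
    """Usable capacity for the array type at each migration step."""
    if num_drives == 1:
        # single drive exposed as JBOD: full capacity, no redundancy
        return DRIVE_GB
    if num_drives == 2:
        # controller turns the JBOD into a mirror (RAID 1):
        # capacity of one drive
        return DRIVE_GB
    # three or more drives: the mirror is migrated to RAID 5,
    # which spends one drive's worth of space on parity
    return (num_drives - 1) * DRIVE_GB

for n in range(1, 5):
    print(f"{n} drive(s): {usable_gb(n)}GB usable")
```

Note that adding the third drive is the interesting step: capacity jumps from 300GB to 600GB while you keep single-drive fault tolerance the whole way.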

Of course, you can’t just stick a system with a total of ten hard drives into any old case. Damage helped out here by giving me an old Supermicro SC760 enclosure, a full-tower monster with infinite fan mounting points that sounds like a B-52 when you turn it on. I hacked together a mount for the SCSI drives that fits into a bunch of unused 5.25″ bays, complete with a couple of fans to keep them cool. It looks awful but it works well. I keep the thing in my basement next to the gig-E switch and patch panel, so neither its… homeliness nor its propeller noise bothers me.

Anyway, I currently have four drives in the RAID 5, with around 900GB of usable capacity (1.2TB raw, less one drive’s worth for parity), and it’s really stinking fast. Yes, I know, there are 750GB SATA drives out there now, but I like the fact that I can lose a drive without losing any data, and I like the fact that (unlike my former sw/hw mirror) an OS crash doesn’t force a rebuild. Besides, if I want to “rebuild” with bigger drives, I can just create a second array, copy the data over to it, then remove the old array and expand the new one. Server hardware is fun, even if it’s last-gen. 🙂

Comments closed
    Alanzilla • 13 years ago

    I have four 250GB drives in a RAID 5 through W2KAS. I don’t like hardware RAID setups. Most of them are too slow, and even more of them corrupt data.

    IntelMole • 13 years ago

    With ten drives, might RAID-6 not be better if the hardware supports it? Or are we talking separate arrays containing ten drives total?

    Steel • 13 years ago

    Mine’s bigger.

