
Server Racks & installing a rack mount chassis

Posted: Thu Nov 22, 2012 2:10 am
by Jingles
I'm looking at building my own NAS. QNAP and Synology are pretty cool, but they aren't exactly budget friendly, especially if you want lots of storage, i.e. 8 or more bays. I reckon I can build one cheaper, so that's what I'm doing.

It's still in the planning phase, but this is what I have planned so far:

Intel Core i3 3220T
ASUS P8Z77-M PRO
Corsair CMX8GX3M2A1600C9 8GB (2x4GB) DDR3
Samsung 830 Series 128GB SSD
Norco RPC-4224 - a 4U, 24-bay rackmount chassis
Corsair AX1200 Gold
Areca ARC-1280ML
Noctua NH-D14 CPU Cooler
OS: TBC

Total = about $2,500 (sans HDDs) for a 24-bay NAS.

I'm going to house it in a proper server rack, something like a 24RU server rack.

The Norco RPC-4224 chassis is 19" wide so I'll need a 19" rack; that's pretty self-explanatory. What I'm wondering is: does it matter if the rack is deeper than the chassis, or deeper than anything else that's going to be mounted in it, like a rackmount switch?

Re: Server Racks & installing a rack mount chassis

Posted: Thu Nov 22, 2012 8:28 am
by Chrispy_
There are two basic types of 19" rack, comms racks and server racks:

Comms racks are typically not deep enough for even the shorter servers; they're designed to hold patch panels and quarter-depth switches/routers/monitoring equipment, and they don't take rack rails. Comms racks only have two mounting posts with RU (rack unit) holes, so everything just mounts to the front and 'hangs'. You're limited to fairly small, light kit in a comms rack.

Server racks are typically deeper and come with four RU posts, which allows you to install rails that take heavier, longer items like servers, UPS units and disk cabinets. Depth varies but it's usually around 1m. You should find that server rack rail kits are adjustable by around 9-12" and therefore fit any rack.

Assuming you get a four-post rack, it's a server rack and you'll need a rail kit to mount a server into it.
It honestly doesn't matter how deep the server is as long as it fits the rail kit, and the rail kit fits the rack. The more spare space you have at the back, the easier cable management and power distribution is.
A typical rack for me has various units in it with varying depths: a couple of patch panels and a few short 1U devices that just screw into the front posts, and then a bunch of full-length kit like SANs and servers. My aim when racking is usually to minimise the length of cabling, because fighting cable spaghetti does not help with 99.9% uptime SLAs :)
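
To put the depth advice above into rough numbers, here's a minimal Python sketch of the fit check. The ~1m rack depth, the 25.5" Norco chassis and the idea of rails with roughly 9-12" of adjustment come from this thread; the specific 28-37" rail range, the 2" post setback and the 6" rear clearance are made-up values for illustration only.

```python
# Rough depth check for a rackmount chassis, in inches. The rack depth,
# chassis depth and "rails adjust by ~9-12 inches" come from this thread;
# the 28-37" rail range, 2" post setback and 6" rear clearance are
# illustrative assumptions.

def fits(rack_depth, chassis_depth, rail_min, rail_max, rear_clearance=6.0):
    """True if the chassis fits depth-wise with room left for cabling."""
    depth_ok = chassis_depth + rear_clearance <= rack_depth
    # Rails have to span the front-to-rear post spacing, which sits a
    # little inside the overall rack depth (assumed ~2" setback per end).
    post_spacing = rack_depth - 2 * 2.0
    rails_ok = rail_min <= post_spacing <= rail_max
    return depth_ok and rails_ok

# ~1m (39.4") deep server rack, 25.5" Norco RPC-4224, rails spanning 28-37"
print(fits(39.4, 25.5, 28.0, 37.0))   # True: plenty of room for cables
```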

Re: Server Racks & installing a rack mount chassis

Posted: Thu Nov 22, 2012 11:20 am
by Jingles
Awesome, thanks for the info and tips, Chrispy_ :)

You're right, I'm going to go with a four-post rack; the NAS is going to be pretty heavy, and the chassis I'm looking at isn't exactly short at 25.5".

Fighting cable spaghetti is never any fun, ever. I still don't know how cables get tangled when they sit there and don't move. I reckon there is a cable tangling monster going around.

Re: Server Racks & installing a rack mount chassis

Posted: Thu Nov 22, 2012 12:13 pm
by just brew it!
For a file server I would strongly recommend going with a platform that supports ECC RAM. This will mean either upgrading to a Xeon (and taking a cost hit) or going with an AMD Phenom II / FX series CPU on an Asus motherboard (and taking a power consumption hit). Intel's consumer platforms don't support ECC at all, and most AMD consumer motherboards don't either, except for the ones from Asus.

My inclination would be to go for an AMD and make sure all the power management settings are at their most aggressive (maybe even underclock/undervolt), to keep idle power draw down.

Re: Server Racks & installing a rack mount chassis

Posted: Thu Nov 22, 2012 12:47 pm
by yogibbear
I swear there was a thread a month ago asking the same question, except the poster had opted to leave the word "server" out of the thread title... and they got a whole lot less helpful responses :lol:

Re: Server Racks & installing a rack mount chassis

Posted: Thu Nov 22, 2012 2:21 pm
by absurdity
I'd suggest making your boot volume (I assume that's the SSD) redundant as well. Is this thing going to be mission critical? You might want to buy some spare hardware too. One thing you get from the big guys is warranty coverage, but you're kind of on your own if you build it yourself (manufacturer warranties aren't very good for stuff that requires high uptime, and generally don't last very long).

Re: Server Racks & installing a rack mount chassis

Posted: Fri Nov 23, 2012 1:05 am
by SecretSquirrel
Skip the SSD for the boot drive. There's zero need for it, and it really isn't going to be that much more reliable than a regular spinning disk at this point. As for the ECC memory someone else mentioned: unless you are doing this for a high-reliability environment and have the budget, don't bother. Yes, you run the risk of the occasional random reboot, but you are talking about something that might happen once or twice a year. Since it sounds like a home or small business setup, buy quality gear, but there's no need to go overboard. The Areca cards are good, I use one, but their support is out of Taiwan, so don't have unrealistic expectations there.

I'll toss out the reminder... RAID is not backup. Since you are looking at something that could hold ~72TB fully kitted out, give some thought to how you back it up. I'm running Linux on my NAS, and my solution was a second system that I duplicate the important data to (about 5% of it), using ZFS with file system snapshots on that second system. My primary NAS is 16TB, the backup is 2TB. Most of the space is DVD/CD images, and if they are lost it only costs me the time to re-rip them. Important stuff is duplicated to the second array, and I keep 14 days of nightly snapshots, then 3 months of weekly snaps. Really important stuff is packaged up, encrypted, and sent offsite on a nightly basis.
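
Out of interest, here's a minimal Python sketch of that kind of retention policy (14 nightly snapshots plus roughly 3 months of weeklies). The "keep the Sunday snapshot as the weekly" rule and the example dates are my own assumptions for the sketch, not necessarily how SecretSquirrel's setup works.

```python
# Which snapshots to keep under a "14 days of nightlies, ~3 months of
# weeklies" policy. The Sunday-as-weekly rule is an assumption made for
# this sketch; ZFS itself doesn't care how you choose them.
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, today, nightly_days=14, weekly_days=90):
    keep = set()
    for d in snapshot_dates:
        age = (today - d).days
        if 0 <= age < nightly_days:
            keep.add(d)                    # recent nightly snapshot
        elif age < weekly_days and d.weekday() == 6:
            keep.add(d)                    # Sunday snapshot kept as the weekly
    return keep

today = date(2012, 11, 23)
snaps = [today - timedelta(days=n) for n in range(120)]   # one per night
kept = snapshots_to_keep(snaps, today)
print(f"{len(snaps)} snapshots, {len(kept)} kept, {len(snaps) - len(kept)} pruned")
```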

Re: Server Racks & installing a rack mount chassis

Posted: Fri Nov 23, 2012 1:15 am
by Jingles
just brew it! wrote:
For a file server I would strongly recommend going with a platform that supports ECC RAM ... My inclination would be to go for an AMD


Thanks for the tip. I hadn't looked at a platform that supports ECC RAM; I wrote it off as a bit too expensive for just a home NAS since I was looking at going with an Intel CPU. Now that I have reconsidered, it looks like I can save a little money by going with an ASUS M5A97-EVO R2.0, an AMD FX 4100 and some Kingston KVR13E9/4I 4GB (1x4GB) 1333MHz DDR3 RAM. Power consumption isn't too much of a concern.



absurdity wrote:
Is this thing going to be mission critical?


No, it's just for home use, which is why I don't need anything terribly hardcore; I'll be providing my own support. I can live without a NAS for a day or two if a part like the boot drive dies; I'll just keep an image of the boot drive and replace it with the same SSD.

Re: Server Racks & installing a rack mount chassis

Posted: Fri Nov 23, 2012 9:37 am
by just brew it!
SecretSquirrel wrote:
As for the ECC memory someone else mentioned: unless you are doing this for a high-reliability environment and have the budget, don't bother. Yes, you run the risk of the occasional random reboot, but you are talking about something that might happen once or twice a year.

Even if high availability isn't a critical issue, I'd still be concerned about occasional random data corruption. Depending on what's stored on this server, that's arguably more important than uptime.

By going the AMD route you can get an ECC-capable platform without paying a price premium; and as the OP has already discovered, Kingston unbuffered ECC DIMMs are quite affordable as well.

Re: Server Racks & installing a rack mount chassis

Posted: Fri Nov 23, 2012 10:12 am
by TechNut
For this kind of application, ECC is really overkill. ECC in modern servers is more about working through a chip failure than random bit flips. Sure, it could happen, but the most likely outcome is a server reboot rather than data corruption.

I did the whole 42U setup here at home. I have an HP 10000 G2 rack, and it is a beauty. I picked it up almost new off of eBay. Moving it home was an interesting story, and my father-in-law and I had some good bonding time. Nothing like moving a 300 lb, awkwardly shaped thing into your house!

When it comes to racks, just keep in mind there is more to them than meets the eye.

If, for example, you buy a rack with square holes, you will need cage nuts. This means you need to understand loads and which screws work best. M6? 10-32? 12-24? It all depends on the load. Round-hole racks are good for home applications, but you sometimes have a little less flexibility.

Rail kits can be VERY expensive for what you get, and not all of the rail kits you buy for your case work well. If you buy a Norco case, you should use Norco rails, or do what I did and pick up a set of universal rails that accept any load up to around 200 lbs.

For one server, heat is not likely to be an issue, but if you start filling the rack up, you need to worry about airflow and not cooking your components. This means blanking panels, and for at-home applications I do recommend an in-rack fan. That's only needed if you load the rack up; otherwise just watch the temps. I have seen so many people on YouTube put a rack in and then run things, with crazy AC setups, etc. Just applying some facilities planning concepts before you get started will be a big help. Remember, a 42U rack needs at least 25 sq. ft. of space for cooling, maintenance, etc. If you cannot access each side easily, then you have not set it up right. If you feel like it could kill you while you're sitting underneath it, or with your head inside and equipment above, you have not done it right.

Remember, you have to worry about loading issues with racks as well. You want AT LEAST 200 lbs of weight in the bottom third of any rack. This makes sure that if you slide something heavy out at the top, the rack will not tip over. This is not so much of an issue for just one server, but longer term, if you start using your investment, you do not need it being unstable. I'm not sure about you, but I do not want to be crushed!

Plus, you want good PDUs and power distribution to make it all safe as the density goes up. Grounding is VERY important. Shocks = bad! You need to make a choice between 240V and 120V too (that is, if you get a rack with 240V PDUs). 120V works fine, just keep in mind it is not good for scaling up density, and it is not as power efficient. For one server it is OK; for 10 servers, not so much. I strongly recommend AGAINST a $5 power bar inside your rack. Buy a proper 0U PDU and plug in there. Shock hazards and cables popping out when you least expect it are not good things.
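
To illustrate the 120V vs 240V point with some rough arithmetic: the same load pulls half the current at 240V, which is why 120V circuits run out of headroom quickly as density goes up. The per-server wattage and the 80% continuous-load margin below are assumptions for the example, not figures from this thread.

```python
# Same rack load on a 20A circuit at 120V vs 240V. The 500W-per-server
# figure and the 80% continuous-load margin are illustrative assumptions.

rack_load_w = 4 * 500                 # say four servers at ~500W each

for volts in (120, 240):
    draw_a = rack_load_w / volts
    usable_a = 0.8 * 20               # stay under ~80% of a 20A breaker
    verdict = "OK" if draw_a <= usable_a else "over budget"
    print(f"{volts}V: {draw_a:.1f}A of {usable_a:.0f}A usable -> {verdict}")
```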

Oh, and if you plan on sitting in front of your rack, a 20" one is OK with a regular keyboard and monitor (since they can sit on top). BUT, if you go with anything bigger, you will likely want a 1U monitor/keyboard combo.

All I can say is doing this has been fun, but it has not been cheap!!

Re: Server Racks & installing a rack mount chassis

Posted: Fri Nov 23, 2012 11:32 am
by just brew it!
TechNut wrote:
ECC in modern servers is more about working through a chip failure than random bit flips. Sure, it could happen, but the most likely outcome is a server reboot rather than data corruption.

While this may have been true in the past, I question whether it is still true today on modern OSes with (by contemporary measure) a reasonable amount of RAM. These days the OS kernel only occupies a small fraction of the physical RAM; most of it is being used for file system cache... i.e. data.

I'm not sure where the resistance to ECC is coming from. Unless you insist on using an Intel platform the cost delta is minimal.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 12:00 am
by TechNut
I'm not against ECC RAM. I think it has its place. For home and semi-pro uses though, I think the price gap is crazy. 16GB, last time I checked, was over $200 for decent DDR3 ECC, whereas you can get the same speed, non-ECC, for $45 (Newegg Black Friday). For small amounts it might be ok, but it makes a purchase that much more expensive for not much gain at home. The performance is 2% less with ECC, and the risk of data corruption is so low, it's not a worry for most home applications. Remember, normal memory has CRC checking built in, and can correct single bit flips. ECC is designed to fix 2 bit flips.

RE: the OS and more of a chance of data corruption, well, that's not 100% true. Every page of memory has bits set which tell the OS what the protection is, etc. If those get flipped, well, you get your crash. Also, memory is a mix of instructions and data, so if instructions get messed up, the application will crash. However, the CRC in the memory will catch that, so you really need something bad to happen to generate multiple bit flips, i.e. a bad chip.

Now, I did read about one of the guys from Sun a few years ago getting silent data corruption on his HDD from the power supply, and only ZFS caught that. It does happen, but if I were to advise someone to spend 4x more to eliminate less than a 0.0001% chance of data corruption, I probably would not. It is not like a picture or video would be ruined by one flipped bit. Now, if you are doing banking or other high-end tasks where precision is needed, then by all means, ECC makes a lot of sense for that extra layer of protection.

Do you know the easy way to tell ECC from non-ECC? Chip count. If the number of chips on the PCB (per side) is even, there's no ECC. If there is an odd number, then it should be ECC. The extra chip is needed to store the parity information. It is partially why an ECC memory module costs more than a regular one, since a minimum of two extra chips are needed, plus likely better cooling, voltage regulation, etc.
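
A quick sketch of that chip-count rule of thumb, assuming DIMMs built from x8 DRAM chips (a 64-bit rank needs 8 of them, and ECC adds a 9th for the check bits). This only encodes the heuristic described above; x4-based registered DIMMs use 18 chips per rank, so it isn't universal.

```python
# Chip-count heuristic for spotting ECC DIMMs, assuming x8 DRAM chips:
# 8 chips per rank carry the 64 data bits, and ECC adds a 9th chip per
# rank for the extra check bits (72 bits stored in total).

def looks_like_ecc(chips_per_rank, chip_width_bits=8):
    data_chips = 64 // chip_width_bits    # chips needed for the data bits
    ecc_chips = 72 // chip_width_bits     # chips needed with check bits
    if chips_per_rank % ecc_chips == 0:
        return True                       # e.g. 9 or 18 chips -> ECC
    if chips_per_rank % data_chips == 0:
        return False                      # e.g. 8 or 16 chips -> non-ECC
    return None                           # unusual layout, can't tell

for n in (8, 9, 16, 18):
    print(f"{n} chips per rank: ECC = {looks_like_ecc(n)}")
```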

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 1:00 am
by kc77
Putting ECC in a file server is really just the proper thing to do. You aren't just trying to prevent crashes; you are also trying to prevent data corruption. Non-ECC memory does not correct memory errors. In order for a correction to be made, there must be parity somewhere to record or check the intended result, and ECC memory actually corrects bit flips. The reason ECC is important is that ALL operating systems to date (even ZFS and NetApp appliances) assume one thing when it comes to data transfers: that the data coming from system memory is correct. There are some exceptions, but those are not the rule. Without ECC memory, the OS doesn't have a clue whether what lands on disk is exactly what came from memory. That's why ECC is important.
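
To make the "parity somewhere" idea concrete, here's a toy Hamming(7,4) code in Python: 4 data bits protected by 3 check bits, which is enough to locate and fix any single flipped bit. Real ECC DIMMs use a wider (72,64) SECDED code, but the principle is the same; this is an illustration, not how any particular memory controller implements it.

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 check bits. The check
# bits let the decoder compute a "syndrome" that names the flipped bit
# position, which is how ECC corrects single-bit errors.

def encode(d):                      # d = [d1, d2, d3, d4], each 0 or 1
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # bit positions 1..7

def decode(c):                      # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recompute each parity over its group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3 # 0 = no error, else 1-based bit position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]], syndrome

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                        # simulate a single bit flip in "DRAM"
fixed, syndrome = decode(word)
print(fixed == data, "corrected bit position:", syndrome)   # True 5
```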

All of that being said, you don't need 16GB of RAM for a file server. 4GB will do unless you are running ZFS + dedup or compression. At 4GB, the premium over non-ECC is small and easily worth it.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 3:14 am
by Flatland_Spider
I would boot your NAS off an SSD or USB drive. The OS isn't going to generate a whole lot of writes to the SSD or USB drive, and some NAS distros are optimized for booting off a USB drive. This also keeps you from having some odd partition scheme where the array is split into storage and OS parts. 32-64GB SSDs are pretty cheap now, and 8-16GB flash drives are cheap too. You're not going to need a lot of space for Linux, so you don't have to spend a whole lot.

I happen to like Areca cards simply because some of them have network ports built in, and it's much nicer logging into the card for admin functions than keeping a software package running.

The next thing to consider is what you are going to use for file sharing, and what the workload looks like. Samba isn't multi-threaded, so it will respond better to a faster CPU. NFS is multi-threaded, so more cores are better with NFS. Windows Ultimate does have the ability to mount NFS shares like SMB/CIFS shares, but you have to install the feature. Five people hitting this probably aren't going to stress it, but five VMs might.

You should add a battery backup unit for the RAID card. Some cards won't hit their max performance without it. I know 3ware won't, and I haven't tried it with an Areca card.

I would jump up to 16GB of RAM and an i5. Linux will use the extra RAM to cache files, which will speed up the performance of the NAS, and real cores are better than fake cores. Of course, you have to go back to your workload; 4GB and a Pentium may be fine if you're the only one hitting it over Samba or NFS.

AMD procs are also worth a look. NAS servers don't need to be super powerful, but they can benefit from more cores.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 9:13 am
by just brew it!
TechNut wrote:
I'm not against ECC RAM. I think it has its place. For home and semi-pro uses though, I think the price gap is crazy. 16GB, last time I checked, was over $200 for decent DDR3 ECC, whereas you can get the same speed, non-ECC, for $45 (Newegg Black Friday).

16 GB of ECC RAM for $100: http://www.newegg.com/Product/Product.a ... 6820239140

Yes, that's more than double your non-ECC price, but in absolute terms it isn't bad since RAM is so cheap these days.

TechNut wrote:
For small amounts it might be ok, but it makes a purchase that much more expensive for not much gain at home. The performance is 2% less with ECC, and the risk of data corruption is so low, it's not a worry for most home applications. Remember, normal memory has CRC checking built in, and can correct single bit flips. ECC is designed to fix 2 bit flips.

No, that's not correct. Normal memory has no CRC checking; a flipped bit is a flipped bit.

TechNut wrote:
RE: the OS and more of a chance of data corruption, well, that's not 100% true. Every page of memory has bits set which tell the OS what the protection is, etc. If those get flipped, well, you get your crash. Also, memory is a mix of instructions and data, so if instructions get messed up, the application will crash. However, the CRC in the memory will catch that, so you really need something bad to happen to generate multiple bit flips, i.e. a bad chip.

I repeat: Normal DRAM does not have CRC.

Perhaps you're thinking back to the 386 days when most RAM had a parity bit? PC vendors stopped doing this back in the early 1990s to cut costs.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 12:23 pm
by TechNut
Take a look at this link...

http://www.jedec.org/category/technolog ... ddr4-sdram

Looks like CRC is in there, or at least will be for DDR4.

It could be that I'm remembering those errors from back in the old-school days. I seem to recall CRC and bit checking being in there, but it seems DDR2/3 does not have CRC checking unless it is ECC. One of the added benefits of DDR4, it seems, will be simple CRC even on low-end systems, which makes sense since memory density is increasing.

So I did learn something new :)

I have a paper somewhere (fairly recent, 2009) on ECC from AMD; I should see if I can find the link. It explains what can happen and the error rates. It really depends on your risk: 1 bit per 4GB per month, and that's with 24x7 continuous operation. That works out to roughly 1-in-4-billion odds. For a home user this is more than acceptable, since a bit flip is not going to ruin any personal data (a pixel on a screen is slightly off colour, etc.).
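
Taking that quoted rate at face value, the back-of-the-envelope arithmetic looks like this; the 1-in-4-billion figure is the per-byte odds in a month, and the RAM size and time period below are just example values.

```python
# Arithmetic on the quoted rate of ~1 bit flip per 4GB per month of 24x7
# operation. Purely illustrative; measured rates vary a lot by study.

GIB = 2 ** 30
rate_per_gib_month = 1 / 4                  # flips per GiB per month

ram_gib, months = 16, 12                    # example: a 16GB box over a year
expected_flips = rate_per_gib_month * ram_gib * months
per_byte_odds = 4 * GIB                     # one flip among ~4 billion bytes

print(f"Expected flips in {ram_gib}GiB over {months} months: {expected_flips:.0f}")
print(f"Chance a given byte is hit in a month: about 1 in {per_byte_odds:,}")
```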

The lowest price (newegg.ca) I could find today for 16GB (2x8GB, which is what I typically buy) is $130, for the cheapest no-name ECC Registered DDR3 1333. Would I trust it? Probably not. Would it work? Sure! I guess it depends on what you are using it for. It is at least double the price of regular memory, and for some it can be worth it. It comes down to budget and risk tolerance, and whether that one bit flip matters to you or not. :)

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 1:42 pm
by just brew it!
TechNut wrote:
Take a look at this link...

http://www.jedec.org/category/technolog ... ddr4-sdram

Looks like CRC is in there, or at least will be for DDR4.

That's for the data transfers across the bus. It protects against data getting corrupted while it is being transferred between the memory controller and the DIMMs, not against flipped bits in the DRAM chips themselves. (ECC DIMMs protect against data bus errors too; the bus CRC mentioned in your link is essentially a way to protect the data transfers without requiring ECC DIMMs.)

If you want error detection or correction for flipped bits in the DRAM chips you still need ECC DIMMs on top of the bus CRC.

Edit: To put it another way: ECC DIMMs ensure that the data you read back matches the data that was originally written. Bus CRC only guarantees that the DRAM chips receive exactly what the memory controller wrote, and that when you subsequently read the data back the memory controller receives exactly what is currently in the DRAM chips. Unlike ECC, the bus CRC does not protect the data from being corrupted while it is sitting in the DRAM chips.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 4:04 pm
by Beelzebubba9
What exactly are your requirements for the NAS? Will it be doing block-level storage, or just file shares? Do you have an IOPS or throughput requirement? How many concurrent users will there be? What kind of connectivity are you looking for? What will your total size requirement be? And what OS were you going to run? Windows, an off-the-shelf Linux distro, or a custom NAS/SAN OS like FreeNAS, OpenFiler, Open Indiana, etc.?

If this is just a home file server, it'll be hard to justify (from a cost perspective) what you're building out unless you need some insane (20+TB) amount of usable space or high IOPS for block-level storage. A single 2U QNAP TS-859 Pro can hold 24TiB of raw disks, will cost less than what you've spec'd, and will probably be more reliable. Or at least you can call support if you break it. They're quiet compared to most rackmount chassis, come with redundant PSUs, consume very little power, and have a ton of features (like Time Machine, iSCSI, BitTorrent server support, etc). We use them at my workplace for cheap, big, dumb storage for data we don't want to fill up our six-figure production SANs with.

Building your own storage can be fun, but you'd be surprised at how many caveats there can be if you want it to be fast and reliable at the scale you seem to be looking at.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 6:59 pm
by Aphasia
I can just say this: I've built a server from my old parts, and it serves as main file storage with 4x2TB RAID 5, as a DLNA server, and as a Hyper-V host for my webserver. If I had the choice when buying new, I would get ECC RAM, just to have one less thing to worry about on a system that is on 24/7 and pushes a lot of data through memory.

Or, if I didn't need Hyper-V, I would just make sure I got a normal appliance NAS that can fill the gigabit network, because that was also a factor for me. The QNAP NAS that I use as a backup for the file server only does around 30-40MB/s, while the file server managed 98%+ utilization of the gigabit NIC.

For some reason I get corruption somewhere, and I have no idea where in the chain it happens. It might be program-bound, since I've only ever seen it on raw images, perhaps one in every 200-300 or so, but that doesn't mean it's not happening elsewhere. It could be anywhere in the chain below:
Flash card -> card reader -> USB -> memory -> NIC -> switch -> NIC -> memory -> disk
When processing, I actually run Lightroom straight against the files on the file share rather than on local storage, which might affect things.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 8:25 pm
by Jingles
Beelzebubba9 wrote:
If this is just a home file server, it'll be hard to justify (from a cost perspective) what you're building out unless you need some insane (20+TB) amount of usable space or high IOPS for block-level storage. A single 2U QNAP TS-859 Pro can hold 24TiB of raw disks, will cost less than what you've spec'd, and will probably be more reliable. Or at least you can call support if you break it. They're quiet compared to most rackmount chassis, come with redundant PSUs, consume very little power, and have a ton of features (like Time Machine, iSCSI, BitTorrent server support, etc). We use them at my workplace for cheap, big, dumb storage for data we don't want to fill up our six-figure production SANs with.

Building your own storage can be fun, but you'd be surprised at how many caveats there can be if you want it to be fast and reliable at the scale you seem to be looking at.


FYI: It's for a file/media server. And no, I can easily justify it; maybe it might be hard for you because you're poor? I can actually build a NAS box cheaper than I can buy one, and it will be just as reliable. What makes you think my build will probably be unreliable? Maybe your builds suck, but I have built many computers that have lasted me years without any upgrades or anything breaking. I can build a NAS with a redundant PSU and it will still be cheaper than what I can buy one for. I'm smart enough to provide my own support, thanks; maybe you might need support, but I don't, because I'm the one building the system from the ground up, so I will know it inside out since I know every part that goes into the build, and I'll be able to support myself if anything goes pear-shaped. Go ahead and buy a pre-built NAS if you want to waste a bunch of money.

Care to name the "many caveats"? I don't think I'd be surprised. It's not rocket science; maybe for you it might be. I'll bet you any money that I can build a NAS that is more reliable, faster, and cheaper than you could build or buy.

FYI, I will be using lots of small drives, 500GB or 1TB, so that when a drive fails it won't take as long to rebuild the array. The problem with larger drives is that the longer they take to rebuild, the higher the chance of another drive failing, which could potentially wipe out all of the data on the NAS, depending on what sort of RAID configuration is used.
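
To put some rough numbers on that trade-off, here's a quick Python sketch. The 100MB/s rebuild rate and 3% annualised failure rate are illustrative assumptions, not measured figures, and it treats drive failures as independent, which real batches of drives often aren't.

```python
# Rough sketch of the small-vs-large drive trade-off described above:
# bigger drives take longer to rebuild, which leaves a longer window for
# a second drive to fail. Rebuild rate and AFR are assumed values.

HOURS_PER_YEAR = 24 * 365

def rebuild_hours(drive_tb, rebuild_mb_s=100):
    return drive_tb * 1e6 / rebuild_mb_s / 3600   # 1TB = 1e6 MB

def p_second_failure(drive_tb, remaining_drives, afr=0.03):
    # Chance that at least one of the remaining drives fails during the
    # rebuild window, assuming independent failures at a constant rate.
    window_years = rebuild_hours(drive_tb) / HOURS_PER_YEAR
    p_one_survives = (1 - afr) ** window_years
    return 1 - p_one_survives ** remaining_drives

for tb in (0.5, 1, 2, 4):
    print(f"{tb}TB: rebuild ~{rebuild_hours(tb):.1f}h, "
          f"P(another failure during rebuild) ~{p_second_failure(tb, 23):.3%}")
```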

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 8:33 pm
by just brew it!
Jingles wrote:
FYI, I will be using lots of small drives, 500GB or 1TB, so that when a drive fails it won't take as long to rebuild the array. The problem with larger drives is that the longer they take to rebuild, the higher the chance of another drive failing, which could potentially wipe out all of the data on the NAS, depending on what sort of RAID configuration is used.

I understand the motivation, but there are also some downsides to this approach. You'll have more noise, higher power consumption, more heat, and increased risk of drive failure (since you've got more of them). Your cost per GB is also going to be higher since 500 GB drives are well below the $/GB sweet spot these days, and you may also lose some performance due to lower density platters.

I could maybe see going with the 1 TB drives... but definitely not the 500 GB ones.

Re: Server Racks & installing a rack mount chassis

Posted: Sat Nov 24, 2012 11:46 pm
by Flatland_Spider
Jingles wrote:
FYI: It's for a file/media server. And no, I can easily justify it; maybe it might be hard for you because you're poor? I can actually build a NAS box cheaper than I can buy one, and it will be just as reliable. What makes you think my build will probably be unreliable? Maybe your builds suck, but I have built many computers that have lasted me years without any upgrades or anything breaking. I can build a NAS with a redundant PSU and it will still be cheaper than what I can buy one for. I'm smart enough to provide my own support, thanks; maybe you might need support, but I don't, because I'm the one building the system from the ground up, so I will know it inside out since I know every part that goes into the build, and I'll be able to support myself if anything goes pear-shaped. Go ahead and buy a pre-built NAS if you want to waste a bunch of money.

Care to name the "many caveats"? I don't think I'd be surprised. It's not rocket science; maybe for you it might be. I'll bet you any money that I can build a NAS that is more reliable, faster, and cheaper than you could build or buy.

FYI, I will be using lots of small drives, 500GB or 1TB, so that when a drive fails it won't take as long to rebuild the array. The problem with larger drives is that the longer they take to rebuild, the higher the chance of another drive failing, which could potentially wipe out all of the data on the NAS, depending on what sort of RAID configuration is used.


No need to get chippy. There are some good points in there.

It's all about priorities. You're saying your time is cheap, and he is saying his time is expensive.

The caveats depend on what we're talking about. Management and interoperability with other systems are more difficult with a homebrew system like this. You don't get a nice GUI if you build it yourself, and you have to know what you're doing to get stuff to work together. High availability with five-nines uptime requires a lot more engineering and servers.

However, none of that matters since you just seem to be building something crazy for kicks. You're on the right track, so keep pressing forward.

1TB or 2TB drives are the best deals at the moment, and there is a very good reason for going with larger, but fewer, drives. Online expansion means the array will expand when more drives are added. It does not mean the array can be expanded if all of the drives are replaced with larger capacity drives. To replace twenty-four 500GB drives with twenty-four 1TB drives, you would have to back everything up, destroy the old array, create a bigger new array, and then restore everything. This entire process is much simpler if you hang your boot drive off the SATA ports, or USB port, on the motherboard, and you don't have to restore your OS as well as your data.

The larger drives also make it easier to justify running the array in RAID 10 or RAID 01. Either of those RAID levels handles large amounts of disk IO better than RAID 5 or RAID 6, which are peaky.
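
For reference, a quick usable-capacity sketch for the RAID levels mentioned in this thread, assuming a fully populated 24-bay chassis. It ignores hot spares, filesystem overhead and the TB-vs-TiB difference; it's only meant to show the trade-off.

```python
# Usable capacity for a 24-bay array at different RAID levels. RAID 01
# gives the same capacity as RAID 10 (mirroring either way halves it);
# the difference between them is how the mirrors and stripes are nested.

def usable_tb(drives, drive_tb, level):
    if level in ("raid10", "raid01"):
        return drives // 2 * drive_tb     # half the drives hold mirrors
    if level == "raid5":
        return (drives - 1) * drive_tb    # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * drive_tb    # two drives' worth of parity
    raise ValueError(level)

for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: {usable_tb(24, 1, level)} TB usable from 24 x 1TB")
```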

Re: Server Racks & installing a rack mount chassis

Posted: Sun Nov 25, 2012 8:52 am
by Jingles
just brew it! wrote:
I understand the motivation, but there are also some downsides to this approach. You'll have more noise, higher power consumption, more heat, and increased risk of drive failure (since you've got more of them). Your cost per GB is also going to be higher since 500 GB drives are well below the $/GB sweet spot these days, and you may also lose some performance due to lower density platters.

I could maybe see going with the 1 TB drives... but definitely not the 500 GB ones.


I don't care about noise; it's not a problem for me because the NAS will live where it can make as much noise as it wants, i.e. in the spare room, away from the lounge room and bedroom.

I don't really care about higher power consumption either; that's just par for the course, more drives = more power. I'll make sure I get an efficient PSU, though, so that power isn't wasted.

Heat isn't a problem if it's managed properly, and I'll manage it properly. Installing enough fans and placing the NAS somewhere cool with ample airflow will go a long way toward keeping it cool enough. I'll also be using WD Red drives, which run cooler than other drives.

Maybe 1-2TB drives are the go then; I wouldn't want to trust lots of data to 3 or 4TB drives, that's just too much risk.

Flatland_Spider wrote:
Management and interoperability with other systems are more difficult with a homebrew system like this.


How so? I really don't see how it's that hard.

Flatland_Spider wrote:
You don't get a nice GUI if you build it yourself, and you have to know what you're doing to get stuff to work together.


So a nice GUI is more important to you than anything else? I don't care about pretty, n00b-friendly GUIs; it's not like I'm going to spend hours a day in the GUI after it's all set up and running smoothly, so I don't see how a GUI has anything to do with it. I don't think it's really that hard, just plan properly, i.e. RTFM, make sure parts are compatible, know how it's all going to go together and work, make sure you understand everything, etc. It's no harder than building a gaming rig, an HTPC or any other type of PC; it's just different parts and concepts, not brain surgery or rocket science. I could see how it might seem hard to someone who struggles to build a normal PC, but I have been building my own PCs for a long time and enjoy building them.

Flatland_Spider wrote:
This entire process is much simpler if you hang your boot drive off the SATA ports, or USB port, on the motherboard, and you don't have to restore your OS as well as your data.

The larger drives also make it easier to justify running the array in RAID 10 or RAID 01. Either of those RAID levels handles large amounts of disk IO better than RAID 5 or RAID 6, which are peaky.


Yeah, that's what I'm going to do; it's what I usually do: have a boot drive for the OS and applications and keep my data separate, so if anything happens to the OS or the boot drive I don't have to spend extra time backing up everything that hasn't already been backed up before I reinstall the OS, restore it from an image, or replace the boot drive. I'm still deciding between RAID 10 and RAID 01; lots of reading to do and things to learn, fun stuff.

Re: Server Racks & installing a rack mount chassis

Posted: Sun Nov 25, 2012 8:59 am
by just brew it!
Jingles wrote:
Heat isn't a problem if it's managed properly, and I'll manage it properly. Installing enough fans and placing the NAS somewhere cool with ample airflow will go a long way toward keeping it cool enough. I'll also be using WD Red drives, which run cooler than other drives.

Sounds like a plan. Do keep in mind, though, that even if they are off in a separate room they will still add load to your central A/C (if you have it) in the summertime. Of course this is partially offset in the winter, when they will help heat your home... :wink:

Jingles wrote:
Maybe 1-2TB drives are the go then; I wouldn't want to trust lots of data to 3 or 4TB drives, that's just too much risk.

Yeah, going bleeding edge is almost never a recipe for reliability!

Re: Server Racks & installing a rack mount chassis

Posted: Sun Nov 25, 2012 9:22 am
by Beelzebubba9
Jingles wrote:
[three useful pieces of information mixed in with personal insults]


Wow, I did not expect my post to come across as combative at all, nor do I see any reason why you'd insult me for asking the right questions. I'm an Enterprise Infrastructure Engineer whose core focus is storage and virtualization. I work for a company that provides global emergency notification and support, so any downtime - even enough for a SAN reboot - is not an option. Our core storage systems list for the price of a new Ferrari and have effective uptimes that stretch from when they're turned on to when they're decommissioned 5 years later, so I suspect we have differing views of what's 'reliable'. Obviously, downtime isn't much of a concern for a home NAS, but data loss is, since I assume anyone who's shelling out a few Gs on a 10+ TB NAS is doing so because they have a lot of data that's important to them, and backups are much harder to do when you have that much data. To get back to a useful discussion:

1. The QNAP TS-879 runs about $2300 without drives, so it's less than the server you spec'd out. QNAP currently supports 3TB SATA drives (we use Hitachi Constellation ES drives since we had waaaay too many RE4s fail; we were averaging about one a month for a while), so with RAID 10 you'd be looking at ~11TB effective capacity, or ~17TB if you go with RAID 5 and no hot spare (which would be dangerous). I am not saying rolling your own NAS is a bad idea, but unless you're planning on using all of the capacity the chassis you bought can handle, or are going to use features that a QNAP doesn't offer, it's not more cost-effective. It is a lot more fun, however.

2. You never answered any of the critical questions I asked, either. As others have mentioned, there are drawbacks to using lots of smaller disks, and there are also ways to shorten the rebuild times of larger disks depending on your implementation. What OS and RAID type are you looking at, and what do you want the total effective capacity to be? And do you have *any* IOPS requirements, or will this only be used for sequential file transfers over a 1Gb network?

Again, I'm not insulting your manhood, calling you poor, or suggesting that building a massive home NAS is a bad idea. But if you have to Google 'TLER' to find out how not accounting for that alone could wipe out your entire array, then you might not fully understand what you're getting into.

Re: Server Racks & installing a rack mount chassis

Posted: Sun Nov 25, 2012 6:38 pm
by TechNut
You could save A LOT of money by using Linux and Fibre Channel for your NAS host. You can pick up 7TB of storage for under $400 + shipping on eBay. If you know how to use Fibre Channel enclosures and Linux, you can save a bundle. Heck, you can get an old HP DL360 G5 + the enclosure (7TB) and the FC HBA for under $800. After that, it's a matter of configuring Linux, multipath, software RAID, LVM and iSCSI/Samba and you are done. I have done that here. I looked at the other route of doing what you are looking at, but the case, RAID cards, etc. make it pricey. 14 drives in a RAID 6 set gives me about 400MB/s, which is the max the 2Gb Fibre Channel (2Gb x 2 ports) can do. Adding the same capacity in consumer or other drives is 4 times more expensive. These drives are the 10K RPM ones too. Food for thought... Enterprise-class drives are also rated for better temperature ranges and workloads than consumer drives (even WD Blacks or Reds). I used to use WD Blacks, but their performance stinks compared to the FC-based drives in the enclosure. I have multiple enclosures and an older Brocade FC switch to give me expandability.

On my file server, it is running iSCSI. I have a 10GbE network. I have 10GbE cards installed in some older (Intel Q6600 with 8GB) VMware ESXi hosts (running 5.1), and I can get over 440MB/s to a VM. So this setup can and does work pretty well, especially on the cheap. I have customized the Linux distro to tune Linux to run well in this config.
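
Those throughput figures line up with the raw link-rate arithmetic, roughly sketched below. The 8b/10b factor for 2Gb Fibre Channel is the usual encoding overhead; the Ethernet number is just the raw bit rate divided by 8, so treat both as loose upper bounds.

```python
# Rough line-rate arithmetic for the links mentioned above. Real-world
# numbers will be lower once protocol overhead and the disks themselves
# get involved.

def fc_mb_s(gbit, links=1):
    return gbit * 1e9 * (8 / 10) / 8 / 1e6 * links   # 8b/10b encoding

def eth_mb_s(gbit):
    return gbit * 1e9 / 8 / 1e6                      # raw bit rate only

print(f"2Gb FC x 2 links: ~{fc_mb_s(2, links=2):.0f} MB/s")  # ~400 MB/s, as observed
print(f"10GbE line rate:  ~{eth_mb_s(10):.0f} MB/s")         # 440 MB/s to a VM is well under this
```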

Since you are doing your own support, this type of project is certainly doable, and you can spend money on extra features versus just disks. For example, you can do 10GbE CX4 at around $1000 for the entry-point hardware (a 10GbE switch + two 10GbE cards + two cables). You just have to be diligent in your research and compatibility testing.

HTH...