Personal computing discussed

Moderators: renee, morphine, Steel

 
Symfornix
Gerbil In Training
Topic Author
Posts: 1
Joined: Thu Jan 03, 2002 7:00 pm
Location: Chicago

Fri Jan 04, 2002 11:27 am

I have a Dell PowerEdge 4400 Server. This server has a 2-channel internal RAID controller (Dell branded, but it's Adaptec) which supports RAID 0, 1, 5 & 10 and has 128MB of cache.

Channel 1 currently has 2x 15K RPM Cheetahs in a RAID 1 mirror; it's partitioned into C: (FAT, NT Server 4) and D: (NTFS, Programs).

Channel 2 currently has 6x 15K RPM Cheetahs in a RAID 10 array (3x RAID 1); partitioned into E: (NTFS). It's got no files on it currently, but I bought this server to run MS SQL Server 2000.

My dilemma: The RAID 10 was set up at the factory, but I think it was done incorrectly, since my server hard-locks (keyboard/mouse freeze, nothing in the event log, no blue screen, etc.) under heavy load on the drives. I think the stripe sizes between the arrays may be mismatched, and that I have to re-create the RAID 10 array (E:).

BTW: Dell tech support is clueless.

I ran Dell diags, mem-test, etc. for 72 hours straight without a single error; I reseated all components and connectors, etc. I've done everything I (and Dell) can think of, and this is what I conclude.

My Question: What is the correct stripe size for each RAID 1 mirror, and once that's done, what chunk size should I use for the RAID 0 across the RAID 1's? (remember, this is only going to run SQL server with a few large DB's)

Any help or advice is sincerely appreciated.

Thanks!

<font size=-1>[ This Message was edited by: Symfornix on 2002-01-04 10:28 ]</font>
 
TwoFer
Gerbil First Class
Posts: 120
Joined: Thu Dec 27, 2001 7:00 pm

Sat Jan 12, 2002 5:20 am

I don't know, but if I were asking the question I'd wander over to storagereview.com's forum and ask it -- you still have a few days before they close, and the guys there are pros.

(Not that the folks here aren't... but I noticed you didn't have an answer after a week, y'know?)
 
highlandr
Gerbil Elite
Posts: 575
Joined: Thu Dec 27, 2001 7:00 pm
Location: Somewhere in downstate IL
Contact:

Thu Feb 07, 2002 6:53 pm

A non-answer, but I have to ask: why RAID 10?

With 6 drives you could go with RAID 5, even though it would increase CPU load, if I'm not mistaken.


Then again, I don't post on storage review, and wouldn't fancy myself an expert.
 
Oldfart
Gerbil In Training
Posts: 6
Joined: Sun Mar 24, 2002 7:00 pm

Mon Mar 25, 2002 4:19 pm

Stripe and cluster size are usually determined by the work being done. For large files, 64k or 128k stripes would be good. Use the default cluster size for NTFS.
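[Editor's note: the stripe-size advice above can be sketched in code. This is a hypothetical illustration, not the Adaptec controller's actual algorithm; the `locate` function, `stripe_kb`, and `n_disks` are made-up names for the example. It shows why larger stripes suit large files: a 64 KB stripe spreads a 256 KB sequential read across all three stripe members.]

```python
# Hypothetical sketch: map a logical byte offset onto a RAID 0 stripe set.
# stripe_kb and n_disks are illustrative parameters, not controller terms.
def locate(offset, stripe_kb=64, n_disks=3):
    stripe = stripe_kb * 1024
    chunk = offset // stripe            # which stripe-sized chunk overall
    disk = chunk % n_disks              # chunks rotate round-robin across disks
    disk_offset = (chunk // n_disks) * stripe + (offset % stripe)
    return disk, disk_offset

# With a 64 KB stripe, a 256 KB sequential read touches every disk:
for off in range(0, 256 * 1024, 64 * 1024):
    print(off // 1024, "KB ->", locate(off))
```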

Have you tried formatting the E: partition from inside NT? Do you have the latest NT service pack installed? Have you installed the latest Adaptec drivers?

Oldfart :grin:
 
Speed
Gerbil Elite
Posts: 702
Joined: Thu Dec 27, 2001 7:00 pm
Location: Chicago, IL USA
Contact:

Mon Mar 25, 2002 6:04 pm

That RAID 10 thing makes me uneasy when it's mentioned in the same breath as "production server". I wouldn't want to try to rebuild it if something happened! Since something apparently <i>has</i> happened, I'd take the opportunity to build a nice, simple RAID 5 array. Something like 4 stripes, 1 parity and 1 hot standby would do the trick. And if you set it up right, it will be self-healing should a drive fail.
You are false data.
 
Steel
Global Moderator
Posts: 2330
Joined: Wed Dec 26, 2001 7:00 pm

Mon Mar 25, 2002 10:50 pm

I think you're confusing RAID 10 with RAID 0. RAID 10 is just as secure as RAID 5, if not more so (it can withstand multiple drive failures), and it's faster too.

If stripe size is causing the computer to hard-lock, then it's just a testament to how crappy Adaptec RAID cards are. I'd have Dell replace it; it sounds like the card may be bad (that is, if you ever come back to read this :wink:)
 
Speed
Gerbil Elite
Posts: 702
Joined: Thu Dec 27, 2001 7:00 pm
Location: Chicago, IL USA
Contact:

Mon Mar 25, 2002 11:34 pm

Steel, RAID 10 does in fact use RAID 0. It's the bastard child of RAID 0 and RAID 1. And there are several failure modes that can corrupt the entire RAID 10 array. A RAID 5 array with 6 drives can handle multiple drive failures, and it can do it transparently. That's the kind of stability and reliability that I would want in a database server.
You are false data.
 
inhalent
Gerbil In Training
Posts: 5
Joined: Mon Mar 25, 2002 7:00 pm
Location: Kelowna, BC
Contact:

Tue Mar 26, 2002 2:53 am

Our database server uses RAID 0+1, which I'm assuming is another name for RAID 10... RAID 5 is the best bang for your buck, but you pay in speed. I'd stick with RAID 0+1....

You may also want to consider creating 3 RAID 1 sets (forget striping those)... then giving those 3 drives to SQL Server (for example) and letting it allocate the databases across the disks.... I think your question is as much one for the storage guys as it is for a DBA.

With 0+1 you end up with one logical drive... you may actually see some gains by having 3 drives and doing some fancy shuffling...
 
Steel
Global Moderator
Posts: 2330
Joined: Wed Dec 26, 2001 7:00 pm

Tue Mar 26, 2002 2:27 pm

On 2002-03-25 22:34, Speed wrote:
Steel, RAID 10 does in fact use RAID 0. It's the bastard child of RAID 0 and RAID 1. And there are several failure modes that can corrupt the entire RAID 10 array. A RAID 5 array with 6 drives can handle multiple drive failures, and it can do it transparently. That's the kind of stability and reliability that I would want in a database server.


Speed: I know what I'm talking about; I've dealt with RAID before (Compaq Smart Array cards of various types). RAID 10 is a stripe of two or more mirrored sets. A 6-drive array can lose up to 3 drives as long as they're part of different mirrored sets. If a traditional RAID 5 loses more than one drive, you're screwed.
Check out the reference section of Storage Review, they explain it pretty well:
http://www.storagereview.com/guide2000/ ... vel01.html
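[Editor's note: Steel's claim can be checked by brute force. A minimal sketch, assuming the 6-drive layout he describes (three mirrored pairs striped together); the function names are invented for the example:]

```python
from itertools import combinations

# Six drives arranged as three mirrored pairs, striped together (RAID 10).
MIRRORS = [(1, 2), (3, 4), (5, 6)]

def raid10_survives(failed):
    # The array survives as long as no mirror loses both of its members.
    return all(not (a in failed and b in failed) for a, b in MIRRORS)

def raid5_survives(failed):
    # Single-parity RAID 5 tolerates at most one failed drive.
    return len(failed) <= 1

for k in range(1, 4):
    combos = list(combinations(range(1, 7), k))
    ok10 = sum(raid10_survives(set(c)) for c in combos)
    ok5 = sum(raid5_survives(set(c)) for c in combos)
    print(f"{k} failures: RAID 10 survives {ok10}/{len(combos)}, "
          f"RAID 5 survives {ok5}/{len(combos)}")
```

Enumerating every combination shows RAID 10 surviving 12 of 15 two-drive failures and 8 of 20 three-drive failures (one drive lost per mirror, e.g. 2, 3 and 6), where single-parity RAID 5 survives none of them.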
 
Speed
Gerbil Elite
Posts: 702
Joined: Thu Dec 27, 2001 7:00 pm
Location: Chicago, IL USA
Contact:

Wed Mar 27, 2002 5:32 am

Steel, I too have worked in Compaq shops for a while now, and use SMART-2 cards in my boxes. Correct me if I'm wrong, but I don't see RAID 10 being able to recover like RAID 5 can.

<UL><LI>With RAID 5 I can designate hot spares, so if a drive fails, the array automatically rebuilds itself using the spare. Before long it's back to 100%, and I still get a good night's sleep.</LI>
<LI>With RAID 10, your only protection is mirroring. Mirroring is OK as long as you're dealing with total drive failure. But if one drive stays online and develops errors, you have a deadlock condition where you don't know which drive is the bad one. The only way to find out for sure is to take the system down, break the array and surface test both drives. And without parity, you have no idea if the stripe set has been corrupted. So you're talking downtime and restores anyway.</LI></UL>
You are false data.
 
Steel
Global Moderator
Posts: 2330
Joined: Wed Dec 26, 2001 7:00 pm

Wed Mar 27, 2002 10:12 am

Hot spares can be used on any RAID level except 0, so that still makes RAID 10 more secure than 5. The background drive checking the Compaq cards do happens on RAID 1 and 10 arrays as well; the parity checking is only done on RAID 5 because it's the only level that uses parity.
 
Speed
Gerbil Elite
Posts: 702
Joined: Thu Dec 27, 2001 7:00 pm
Location: Chicago, IL USA
Contact:

Wed Mar 27, 2002 6:20 pm

How does that make RAID 10 <i>more</i> secure? I'll accept that it could make it <i>as</i> secure, although the added complexity still bothers me. And isn't this parity checking that only RAID 5 offers what makes it a better choice? That and the fact that RAID 5 makes more efficient use of the disk space, meaning you can hold more stuff per drive.
You are false data.
 
Speed
Gerbil Elite
Posts: 702
Joined: Thu Dec 27, 2001 7:00 pm
Location: Chicago, IL USA
Contact:

Wed Mar 27, 2002 7:15 pm

BTW Steel, I'm asking not just to make conversation. I have a new I2O controller and 4 new drives. :smile:
You are false data.
 
Steel
Global Moderator
Posts: 2330
Joined: Wed Dec 26, 2001 7:00 pm

Wed Mar 27, 2002 11:45 pm

OK, here's why RAID 10 is more secure than RAID 5 (with or without the hot spare). Let's say we have 6 drives we want to RAID. In a RAID 10 they would be set up like this:
-1-__m___
-2-      |
         |
-3-__m___|stripe
-4-      |
         |
-5-__m___|
-6-

where 1&2, 3&4 and 5&6 are individual mirrors striped together in a RAID 0. If each mirror lost a drive (say 2, 3 and 6) the array would still function, but if a RAID 5 loses more than one disk the whole array is toast. Since you can add a hot spare to either array type, the security of either would be boosted the same. There is still the chance that both drives in a mirror would fail, but figured in with the rest it still gives RAID 10 a better chance of surviving than RAID 5. Another advantage is higher overall performance, since parity data isn't being calculated during each write. The biggest drawback to RAID 10 is you lose half your disk space. (But I'm sure you knew that already :wink:)

The parity data of a RAID 5 is really only used in the event of a drive failure; using just parity data to determine if a bit is bad would be difficult (how would it know which bit is bad?). The array card uses the error detection built into the drive itself and rebuilds the data based on what it knows is bad, and that would work in either array configuration. Another thing about RAID 5: the more drives you add, the greater the chance that more than one will die; eventually it becomes less reliable than a single drive. Personally I become wary of arrays bigger than 8 drives; after that it's usually better to create two separate arrays.
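[Editor's note: the scaling argument above can be put in rough numbers. This is a back-of-envelope sketch only; the per-drive failure probability and the independence assumption are illustrative, not from the thread:]

```python
from math import comb

# Back-of-envelope only: assumes each drive fails independently with
# probability p over some fixed window, which real drives don't quite do.
# An array survives if it loses no more drives than it can tolerate.
def raid_survival(n, p, tolerated):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(tolerated + 1))

p = 0.03  # illustrative per-drive failure probability, not a measured figure
for n in (4, 6, 8, 12, 16):
    # Single-parity RAID 5 tolerates exactly one failure.
    print(f"{n}-drive RAID 5 survival: {raid_survival(n, p, 1):.4f}")
```

Under these assumptions the survival probability drops steadily as drives are added, which is the intuition behind splitting large arrays in two.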

<font size=-1>[ This Message was edited by: Steel on 2002-03-27 23:09 ]</font>

 
Steel
Global Moderator
Posts: 2330
Joined: Wed Dec 26, 2001 7:00 pm

Wed Mar 27, 2002 11:46 pm

Since you are only using 4 drives, you'd probably be better off with a RAID 5 unless you need high write speeds.
