Building TR’s new web server

NOT LONG AGO, it became clear to us that our web server wouldn’t suffice. This box, a dual Pentium III 866MHz system with 768MB RAM and an IDE RAID 1 mirror, was taking a beating. In fact, it amazed me that the thing could survive some of the abuse heaped on it. I would tell other webmasters our server’s specs, and they would look at me like I’d told them I could levitate. Through the use of Apache, PHP, MySQL, and cunning tricks like prerendering pages to minimize database queries, we got our humble PIII system to serve gobs of dynamic and semi-dynamic web pages. When a big article like our Radeon 9500 Pro review or IDE RAID round-up hit, this little box would have to serve hundreds of thousands of page impressions in a 12-hour period.
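
(For the curious, the prerendering trick is nothing exotic: render the expensive, database-driven page once, park the resulting HTML in a file, and let Apache serve that file until it goes stale. Our implementation lives in PHP, but here’s a rough sketch of the idea in Python. The cache path and the render_page() helper are stand-ins, not our actual code.)

    # Rough sketch of the prerendering/page-cache idea: re-render a page from
    # the database only when the cached copy is older than MAX_AGE, and serve
    # the cached file the rest of the time. Paths and render_page() are
    # hypothetical stand-ins, not TR's production code.
    import os
    import time

    CACHE_DIR = "/var/cache/tr-pages"   # hypothetical cache location
    MAX_AGE = 300                       # re-render at most every five minutes

    def cached_page(name, render_page):
        """Return HTML for 'name', hitting the database only when stale."""
        path = os.path.join(CACHE_DIR, name + ".html")
        try:
            fresh = (time.time() - os.path.getmtime(path)) < MAX_AGE
        except OSError:
            fresh = False               # no cached copy yet
        if not fresh:
            html = render_page(name)    # the expensive MySQL-backed render
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                f.write(html)
            os.rename(tmp, path)        # atomic swap; readers never see a partial file
        with open(path) as f:
            return f.read()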

Not only that, but when no one was looking, I would go stand by its rack and say denigrating things about its scalability, just to see if it would crack.

Generally, the thing managed to hold up pretty well, but then we encountered The Problem. The Problem is a slight little issue with some 3Ware IDE RAID controllers, which sometimes decide to lose all memory of their RAID arrays when confronted by a cold boot. Combine this quirk with a late-night server lock-up induced by too-heavy traffic, and you’ve got a recipe for a very tasty disaster. I don’t know exactly how or why the 3Ware could lose the contents of a RAID 1 array, but it managed to do so for us and, judging by what we read on Usenet, a number of other lucky folks.

After we first encountered The Problem, we decided to try to protect ourselves from future disk errors by rebuilding the box with a journaling file system, Red Hat Linux’s ext3. Unfortunately, adding journaling to our already overloaded drives’ duties was just too much. The system especially had trouble dealing with largish files. Ultimately, I resorted to scripting web server log rolls every hour just to keep the thing going. If we didn’t manage it right, a (warm, please) reboot would soon be in order.
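
(In case you’re wondering what “scripting web server log rolls” amounts to: an hourly cron job that shoves the busy access log out of the way and tells Apache to reopen its logs. The sketch below, in Python, captures the gist; the paths are what you’d expect on a stock Red Hat box, and it isn’t the exact script I used.)

    # Hourly log roll, roughly: rename the access log, then ask Apache for a
    # graceful restart so it finishes in-flight requests and reopens its logs.
    # The log path and apachectl location are assumptions for a Red Hat box.
    import os
    import subprocess
    import time

    LOG = "/var/log/httpd/access_log"
    ARCHIVE_DIR = "/var/log/httpd/rolled"

    def roll_log():
        if not os.path.isdir(ARCHIVE_DIR):
            os.makedirs(ARCHIVE_DIR)
        stamp = time.strftime("%Y%m%d-%H%M")
        os.rename(LOG, os.path.join(ARCHIVE_DIR, "access_log." + stamp))
        subprocess.check_call(["/usr/sbin/apachectl", "graceful"])

    if __name__ == "__main__":
        roll_log()    # kicked off once an hour from cron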

Obviously, it was time for a second server, something much faster than the current box. Our experience with the old PIII server taught us a few vital lessons about bottlenecks in MySQL/Apache servers. First and foremost, you need lots and lots of RAM. Second, a fast, reliable disk array is a must. (A related insight: cheap IDE RAID controllers aren’t to be trusted.) And more CPU power doesn’t hurt, either.

My plan was to build a box beefy enough to act as a database/back-end server for a whole array of Apache boxes, so we could ramp up The Tech Report’s semi-covert plan for utter, crushing world domination by adding lightweight front-end web servers as needed. Our old dual PIII box would be the first of these front-end, non-database boxes.
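
(The nice part of this split is that the front-end boxes barely change: they stop pointing their database connections at localhost and start pointing them at the back-end box over the LAN. Our code is PHP, but here’s the shape of it sketched in Python; the host name and credentials are placeholders, obviously.)

    # Front-end/back-end split in miniature: web boxes hold no data of their
    # own and simply aim their database connections at the shared back end.
    # Host, user, password, and table names below are made-up placeholders.
    import MySQLdb   # classic MySQL driver for Python

    def connect():
        return MySQLdb.connect(
            host="db-backend.example.com",  # the new dual Athlon box
            user="tr_web",
            passwd="secret",
            db="techreport",
        )

    conn = connect()
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM comments")   # hypothetical table
    print(cur.fetchone()[0])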

So I set out to build a new system that would handle the strain of a brutal, unmitigated Slashdotting and keep asking for more. The requirements: more RAM than Dodge, more reliability than Honda, and a RAID array potent enough to kill a horse.

I’m not quite sure how a RAID array could kill a horse, but I know I wouldn’t want to see it.

Oh, and it had to fit into our budget, which is: about five dollars.

Securing the parts
I started out my task by poking around a little to see how we might get a deal or two on some components. AMD was kind enough to kick in a pair of Athlon MP 2200+ chips, which were AMD’s fastest multiprocessor chips at the time. We’ve reviewed Athlon MP processors a number of times, and we’ve always been impressed by the performance of the dual Athlon platform. The Athlon itself is a very good processor, of course, and the dual front-side busses and other sophisticated tricks in the 760MPX chipset make for an excellent server platform—definitely an upgrade from our dual 866MHz Pentium IIIs.


A pair of these puppies now powers TR’s main server

Tyan agreed to supply us with one of its killer server mobos, the new Thunder K7X Pro, if we would display the “Powered by Tyan” logo on the site’s front page. Tyan motherboards have actually powered TR for a long time now, so no problem there. The Thunder K7X Pro (ours is a model S2469UGN) is the latest in Tyan’s very successful line of Athlon MP boards. Unlike past boards in the Thunder K7 line, the K7X Pro doesn’t require a proprietary power connector; it will accept the same auxiliary ATX12V connector as any Pentium 4 mobo, or an EPS12V connector like new Xeon mobos. The S2469UGN comes loaded with dual Intel NICs (one of which is a Gigabit Ethernet port), a dual-channel Adaptec Ultra 320 SCSI controller, four angled DIMM slots for use in low-profile cases, a pair of 64-bit/66MHz PCI slots, and a pair of sockets for those Athlon MP processors. This Tyan is a true server motherboard, with special features like console redirection to serial ports for better access to remote servers. All in all, exactly the kind of board we needed.


Tyan’s Thunder K7X Pro: 64-bit/66MHz PCI and Ultra 320 SCSI on-board

Next, we got a killer deal from the folks at Corsair on three 1GB DIMMs of registered DDR memory. Corsair RAM is top-notch stuff. We use it for most of our testing here in Damage Labs, and it’s exactly the kind of memory we’d want to put into a critical server. Having 3GB of it would address one of the key problems with our old server, as well. I nearly got a fourth 1GB DIMM, but I decided against it, since the 760MPX chipset can really only use about 3.5GB of main memory, all told.

One interesting note: in order to cram 1GB of memory on one DIMM, Corsair has double-stacked the memory chips on the module. Check out the picture below to see what I mean.


A Corsair 1GB registered DDR memory module


Half the memory chips ride piggyback to achieve 1GB per DIMM

The rest of the server’s components I bought online at the best prices I could find. Let’s take a look at the server’s final specs, and then I’ll discuss some of the server’s features in a little more detail.

 
The specs
Below are the key specifications of the finished web server. I’ve put it all into a nice table, so it’s easy to digest.

  Specs
Processor 2 x Athlon MP 2200+ 1.8GHz
Front-side bus Dual 266MHz (Dual 133MHz DDR)
Motherboard Tyan Thunder K7X Pro S2469UGN
Chipset AMD 760MPX
North bridge AMD 762
South bridge AMD 768
Memory size 3GB (3 DIMMs)
Memory type Corsair Registered PC2100 DDR SDRAM
Sound n/a
Graphics ATI Rage XL (integrated)
RAID controller Intel Server RAID Controller U3-1 (SRCU31) w/32MB cache
Storage 5 x Maxtor Atlas 10K III Ultra320 10,000 rpm SCSI hard drives
(RAID 10 w/1 hot spare)
OS Red Hat Linux 7.3

They’re not mentioned above, but I also installed a floppy drive and a CD-ROM drive, both with black front faces, to make OS installation/recovery easier. Don’t recall the brands. Doesn’t really matter.

Anyhow, it’s not a bad setup. If you’re like me, you’re thinking “personal workstation—just throw in an AGP card.”

The box
The Chenbro 2U case we chose to house all of this hardware in, however, is much too loud to use in a personal workstation. Great cooling, though. The Chenbro originally arrived with a 300W power supply, and everything would run on that unit, but we replaced it with a 460W model, just to be safe. Here’s how it all looks together:


The box: A 2U enclosure with room for six hot-swap drives and everything else


A little closer look at the guts

You can see that the Chenbro case has six 3.5″ hot-swap drive enclosures. Mounted directly behind them is the enclosure’s SCSI backplane, which supplies power and connectivity to the drives in the hot-swap bays. DIP switches on the backplane control SCSI IDs for the drives. We only had to run a single cable from the SCSI backplane to the RAID controller card.

Server case manufacturers are churning out more exotic cases than the 2U Chenbro unit we chose. It is possible to cram four hot-swappable drive bays and a dual Athlon server into a 1U chassis, like this. Also, some enclosures offer better reliability in the form of redundant, hot-swappable power supplies. However, those things cost money, and we were quickly approaching the limits of our “about five dollars” budget. We settled on the Chenbro as the best combination of features and price.

 

RAIDing my wallet: the disk subsystem
The most expensive piece of the whole puzzle was the disk subsystem, which was the most troublesome part of our old server, and the one I was most determined to upgrade. I snagged an old version of Intel’s SRCU31 SCSI RAID controller on eBay and upgraded it to the latest firmware revision, which essentially transformed the unit into a like-new SRCU31A with an entirely different driver and software architecture. (It took some work to determine that this upgrade was, in fact, possible. It seemed impressive at the time, anyhow.) I also ordered a 128MB DIMM to use with the controller, to give it a little more cache than the 32MB DIMM that came with it. However, the thing didn’t seem to like higher-density memory chips. Rather than push the issue, and after seeing the effects of cache memory in our round-up of IDE RAID controllers, I decided we could live with the stock 32MB DIMM.

For drives, I picked Maxtor’s Atlas 10K III. Five of them. Like so:


Our new RAID array poses for a photo


SCA connectors for easy hot-swappability

I set up these 17GB drives in a RAID 10 array with one drive acting as a dedicated hot spare. With the four active drives arranged as a stripe across two mirrored pairs, the total capacity is 34GB, less than the roughly 51GB a four-drive RAID 5 array would offer. However, RAID 10 keeps the controller’s i960 processor from having to handle the parity calculations RAID 5 requires. Total disk capacity is really not a priority in this application. Performance is, and RAID 10 was the best choice for performance.
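
(If you want to check my math: with four active 17GB drives, RAID 10 mirrors away half the capacity, while RAID 5 only gives up one drive’s worth to parity. A quick back-of-the-envelope, in Python:)

    # Usable capacity, back-of-the-envelope, assuming 17GB drives and four
    # active drives (the fifth sits idle as a hot spare).
    def raid10_capacity(drives, size_gb):
        return (drives // 2) * size_gb    # a stripe across mirrored pairs

    def raid5_capacity(drives, size_gb):
        return (drives - 1) * size_gb     # one drive's worth lost to parity

    print(raid10_capacity(4, 17))   # 34 GB: what this array provides
    print(raid5_capacity(4, 17))    # 51 GB: what RAID 5 would have offered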

Oddly enough, I almost had to scrap the whole thing when the RAID controller card wouldn’t fit into the case with a SCSI cable attached. The controller is a full-height PCI card, and it was plugged into a PCI riser that allows cards to be inserted parallel to the motherboard (this is so things fit into a 2U enclosure). SCSI cables plug into the SRCU31 at the very top of the card, and the cable’s connector, slightly but definitely, wouldn’t clear the side wall of the case. I was able to overcome this problem by switching to a SCSI cable whose connector was just a millimeter shorter, which allowed me to cram the card into the case. Crisis averted.

At the end of the day, this hardware setup promised solid reliability and delivered something just as important: excellent Linux support. Red Hat 7.3 installed without needing additional drivers, and Intel’s software suite offers full control over the RAID array from inside Linux. Have a look at the real-time statistics the software provides for each physical drive in the array:


Oooh.. graphs

Intel’s Storcon utility allows one to configure an array, add or remove drives, check on the status of a degraded array, direct repairs, and the like. On a critical server in a remote location, this kind of capability is priceless.

Or at least fairly expensive.


Clean bill of health
 
Pressing this puppy into service
So now you’ve seen an overview of our new server. I wish I’d had time to do several things before putting this system into service, including taking some better pictures and running some common benchmarks on it, either in Linux or in Windows. However, the old server decided to barf on its RAID array once again just days before I was planning to ship the new box to the co-lo facility. We had to scramble to get the new server shipped out and in place as soon as possible, so I had to cut short the benchmarking and photo sessions.

The folks at our new hosting outfit, Defender Hosting, did a wonderful job helping us get our new system shipped, racked, and turned up. (Thanks again to Hooz at 2CPU for recommending them.) Within 24 hours of leaving Damage Labs, the new system was online and running, serving all of TR’s traffic. And barely breaking a sweat.

Of course, no server setup is perfect, and I’m sure we’ll explore the limits of this one in due time. We have yet to feel the joy of a Slashdotting with this box, so when that happens the first time, all bets are off. This thing could come crashing down like Michael Jackson’s career. However, we have alleviated the most severe bottlenecks we’ve run into in the past, which is a start.

Now, the old server will come out of our old hosting outfit and land in Damage Labs for a brief retrofit, in which the IDE RAID controller will be extracted, and a full exorcism performed. Once the server is free of evil spirits, faster hard drives will be installed, probably along with more memory. With luck, our quest for utter, crushing world domination will be on track, and we’ll bring up the first front-end server to go along with our new back-end box.

A brief epilogue
You have just read an article about how I built our new web server. I wrote it because lots of folks told me they wanted to see such an article. No doubt if you have some semblance of experience with server systems, you have developed some very strong, deeply insightful, and intensely correct views about how servers ought to be built and run. You would like to share them with me now, so I can see how desperately and utterly wrong I am about nearly everything.

That’s nice. Please think before you type, though. I promise, I’m only a partial idiot. 

Comments closed
    • TurtlePerson2
    • 11 years ago

    It’s too bad to see how much posts have degenerated. People used to write out long and insightful comments back in these days, but now comments are so short and meaningless. Too bad…

    • Anonymous
    • 17 years ago

    I run a mid-sized hardware/news web-site, and use a P4-2.4 512K L2 box with 1GB of RAM for all of our sql/php/apache/qmail/awstats needs. The server is taxed, but I can tell you one thing, 1GB of RAM is the bare minimum you should have on any mid-traffic site. Plenty of L2 cache really helps too, but if I had to upgrade the server next time, I would get two lower-end Intel P4s (1.7’s) and cluster them, splitting services amongst each box. Sure, it would be cool to get a Xeon MP box, but I think the $399US/mo would be better spent on two or three $99/mo boxes in a cluster with piles of RAM each.

    • Anonymous
    • 17 years ago

    All database programs “cache” the query results (if they are small enough or pointers to them on the hard disk if they are large) in memory to speed up future queries, a process they call indexing. The result of this is increased RAM and hard disk usage with less processor usage which makes me wonder why you are using dual athlon mps when 1 would be more than enough? If I were you, I’d add more RAM to the old machine and use the new one as a frontend box, as actually working with the queries uses lots of memory (as php stores query results in a buffer) and even more processing power.

    Just my 2 cents

    • Anonymous
    • 17 years ago

    I run a “little” website on my own collocated server too. Here is a bit of information that might prove enlightening:

    So far, for the month of January (25 days), I have served 2,366,813

    • IntelMole
    • 17 years ago

    AG68, I could be talking out of my ass here but here goes my understanding on why they need so much RAM…

    how many people are on this site at the moment? Quite a few I’d wager… (insert TR traffic statistics)

    Let’s assume they read a review or the equivalent size of information, then go to the comments section of whatever they’re reading and make a comment…

    The average review on this site is probably about, what, 10 pages?

    Include pictures and you might be looking at 50-100KB per page, so that’s about 1MB per content review, plus a little more for comments…

    They’ll probably do this in the space of about 10 minutes…

    Now, let’s assume we have 1,000 users doing this at this very moment (that is, changing pages and downloading images)…. that’s about 100MB of RAM needed, just for those 1,000 users changing pages

    And 1,000 users is really not that much btw, if the forums have well over that then there must be many more casual viewers, surfers, and “linked to this site” people, not to mention regular /.’ers.

    For 1,000 users, the TR server has to create pages, stick them in RAM, and send them to all the users at the same time… it’ll probably have the last x weeks material all copied into the RAM as well to save on access latencies…

    How many users are changing pages per time unit Damage? (I really don’t know, I’m guessing at stats here)

    Either way, the RAM gets eaten up real quick…

    Now, like I said, I could be totally wrong on this one, but I hope I’m not,
    IntelMole

    • Anonymous
    • 17 years ago

    Why are you serving static information dynamically? A review isn’t dynamic content. Ads are low-end dynamic. Much more dynamic content is run on much less hardware than this.

    • Anonymous
    • 17 years ago

    #68/73: I think that person goofed and thought the quoted portion was your opinion.

    • Anonymous
    • 17 years ago

    No, aphasia, I was not stating that TR had not done this. I was responding to the fool that thinks it is too much — saying that I prefer the way they have chosen to go. Didn’t realize my comment could be misinterpreted.

    • Aphasia
    • 17 years ago

    [q]I’d rather see TR planning ahead for growth than getting something just barely able to handle their current needs — as it will serve them much better in the longer term.[/q] They did; you just seemed to miss the references. Or didn’t you read the article?

    • Anonymous
    • 17 years ago

    [q]AG #68

    sounds like you have no idea how much ram a web server really requires even when running multiple sites where most of them are using PHP. [/q]

    #68 here.

    Are you saying that web servers will never, ever require as much as 3 GB of RAM?

    Or that TR will never become more popular and thus receive more hits? Or that the site’s editors will never feel the need to include more images in their reviews? Or that the web site will never become simply “bigger”?

    Prove me now that all this RAM is a waste of money.

    • Anonymous
    • 17 years ago

    AG #68

    sounds like you have no idea how much ram a web server really requires even when running multiple sites where most of them are using PHP.

    • Anonymous
    • 17 years ago

    I used to have a poster of Bill Gates in the same room as my computer, and everytime Windows 95 crashed I threw a dart in his face!

    Good times…

    By the way, Bill put up very high resolution, 20 MB images of himself on his website! I didn’t know that he had fans who actually wanted to make their own posters of him!

    http://www.microsoft.com/billgates/bio.asp

    More dart throwing in the near future....

    • Mr Bill
    • 17 years ago

    Bill’s denial does not mean he did not do it or say it or remember it. Just look at the transcripts from the monopoly lawsuit. Poor bugger has lost all his memory… Would 640K even be enough?

    • Anonymous
    • 17 years ago

    [q]3GB of RAM for a webserver that serves just 1 website? Just how many page requests does this site receive?

    Hefty hardware for a website… but ehh… it’s your money. [/q]

    Sounds like you’re the kind of guy who would have agreed with Bill Gates when he said back in 1981: “640K ought to be enough for anybody.”

    • Anonymous
    • 17 years ago

    One does not get a piece of hardware to last as long as possible by getting the cheapest, smallest, weakest stuff available when upgrading.

    I’d rather see TR planning ahead for growth than getting something just barely able to handle their current needs — as it will serve them much better in the longer term.

    • Anonymous
    • 17 years ago

    3GB of RAM for a webserver that serves just 1 website? Just how many page requests does this site receive?

    Hefty hardware for a website… but ehh… it’s your money.

    • Anonymous
    • 17 years ago

    Adaptec’s ZCR performance is, well… shitty. You’re better off using anything BUT the ZCR card.

    I have word that a full-blown review of the Adaptec 2000S (ZCR) card is in the works at another site. I guess we’ll all see soon enough.

    • Anonymous
    • 17 years ago

    Got any information regarding zero channel raid on that motherboard, and how to enable it – what do you put in the green slot?

    Heh, 18 months ago I had assembled a 2U webserver, using a Mylex Accelleraid 170 RAID controller with 4 36GB SCSI hard drives in RAID 5, only a single 1GHz Athlon however at the time. I spent about 2 hours building it, then other things conspired against me and I put that server on the back burner. Then, a year ago, an old, but essential webserver colocated 100 miles away died (damned 75GXP), so in a panic I drag out the 2U server (2 hours work, remember), get it to work, install FreeBSD and other software, and colocate – yeah, about 5 hours from parts to installation with minimal testing!

    I still don’t know if that SCA backplane auto-terminated or not. Still, it has been working fine for a year, so fingers crossed.

    I want to build another one now – with specs similar to the box you have. Zero channel RAID would be nice, as it would save on the cost of a SCSI RAID controller.

    • Anonymous
    • 17 years ago

    What TR doesn’t run on IIS? Maybe it will be much more challenging? TR will upgrade more often? More servers will be needed? And more articles like this (like how to recover from Code Red and such…)…

    Gerbil #XXX

    • Anonymous
    • 17 years ago

    Check out http://www.x386.net to see what a lowly 386 can do with the right love and attention.

    • getbornagain
    • 17 years ago

    sweet…yes i’m an idoit

    • getbornagain
    • 17 years ago

    did it work?

    • sativa
    • 17 years ago

    [quote]I have the same mobo and the sys temp is at 77 degreez C. My dual 2400+ are running at 75 degreez C each. Any advice on how to get them to cool down?[/quote]You need more air circulation in your case. This will bring it down several degrees assuming that there is no other factor (such as improperly seated heatsinks)

    • Anonymous
    • 17 years ago

    [q]I have the same mobo and the sys temp is at 77 degreez C. My dual 2400+ are running at 75 degreez C each. Any advice on how to get them to cool down?[/q]

    Easy, get big-ass copper heatsinks mounted by powerful fans for the cpus. As for the motherboard, good air circulation within the case is usually sufficient. Just make one fan suck the cool air in and another expel the hot air out.

    • Anonymous
    • 17 years ago

    Maybe the Queen’s “English” need to get themselves “mangled” (a couple swift kicks to the butt should do the trick) so that [i]next[/i] time, they don’t make stupid pissy comments…

    • Anonymous
    • 17 years ago

    [quote]we’re not impressed with this mangling of the Queen’s English.[/quote]
    however, i’m impressed by your use of the royal “we”.

    • LiamC
    • 17 years ago

    Mr. Damage, thanks. And that is some performance hit.

    • Anonymous
    • 17 years ago

    Dear friend

    I have the same mobo and the sys temp is at 77 degreez C. My dual 2400+ are running at 75 degreez C each. Any advice on how to get them to cool down?

    • Anonymous
    • 17 years ago

    I want my money back Scott!

    • LiamC
    • 17 years ago

    AG #6, you should do ads for mastercard, that was priceless :))
    LOL.

    Damage: if the SCSI controller has a host processor, why would you consider _not_ loading it up with RAID 5 parity calcs? I thought that was what it was for? Or does it slow the RAID array down that much? Have you any figures, articles, links with more info on that sort of thing?

    • atidriverssuck
    • 17 years ago

    AG37, you really know how to take a small joke to deep space.

    • Anonymous
    • 17 years ago

    Put Your Ass In The Know

    • Anonymous
    • 17 years ago

    Sorry, I know I sound like a complete n00b, but I have to ask:

    What’s PYAITK?

    • Anonymous
    • 17 years ago

    Thanks for the entertaining writeup! One fault though, you never named your new server! Servers need cool names. It helps keep them happy. Maybe that’s why your old server turned…

    • Anonymous
    • 17 years ago

    Congrats to Tech Report for getting all that they felt they could from their PIII-866 server. I know a lot of hard core users would never be able to understand that some people do indeed get a good amount of use out of their hardware for a long time, but you’ve proven that you can. Best of all, its replacement provides for a significant upgrade!

    • eitje
    • 17 years ago

    damn fine article.

    i like the Tiger series over the Thunder series, but that’s only because i prefer to have no-frills board.
    we also use Chenbro cases for all the servers in my office. 🙂

    i can’t wait until i can afford to get a little 15U 4-post wheeled frame, running 5 3U dual-AMD systems at home. mmmmm…..

    you’re not crazy, diss!
    imo, you got it just right.

    • Anonymous
    • 17 years ago

    nopcode: how on earth do hosts sell ads where you live, if not by number of impressions?

    • Anonymous
    • 17 years ago

    Very nice, easy-to-follow article. (Must admit I enjoy your writing style and sense of humor.)

    For the guy complaining about the “Powered by Tyan” on the front page — puh-lease. It doesn’t ask you to buy a Tyan board, it just lets you know that the mobo in the server is Tyan (and, as he mentioned, already was before this). Besides, this isn’t a lousy board getting a good review for advertising money by a site that would never actually use the thing in their own computers. It’s a great board being USED for critical hardware and chosen based on unbiased reviews done in the past.

    • indeego
    • 17 years ago

    Damage… curious. About how many “man hours” (no jokes please) do you think in total did it take to build and test such a beast? Sounds like your testing got cut off prematurely (no jokes please) so I’m glad everything worked out in the END. (no jokes please)

    • liquidsquid
    • 17 years ago

    Looks like plenty of room left to run a few game servers on that server, tech-report UT2K3 server anyone? (hehe, you wanted to stress test it, right?)

    -LS

    • atidriverssuck
    • 17 years ago

    excellent. So now when the site goes tits up we can blame Tyan hardware advertised on the front page 🙂

    • Anonymous
    • 17 years ago

    All things considered I think that your solution is good and should scale well. My personal pref. is to run a NAS, with a series of blade servers as my database/webserver/everything else servers. That way I can easily swap around applications between blades in the event of a hardware failure. Additionally, blades can scale to 2x or 4x processor with multiple GB of RAM. Anything that’s going to need more processor time or memory isn’t going to be running on an Intel machine anyway… (we hope). I think that you’re somewhat on the way to doing that already, but maybe think about blades for additional servers when the time comes?

    • muyuubyou
    • 17 years ago

    BTW Damage, can you levitate? 🙂

    • Steel
    • 17 years ago

    Nice.

    • kyboshed
    • 17 years ago

    [q]Is the old server making any LAN party appearances, or is buried in a closet folding away.[/q]

    Like the article said (twice!) it’s set to become the first new front end server.

    • droopy1592
    • 17 years ago

    [q]Not only that, but when no one was looking, I would go stand by its rack and say denigrating things about its scalability, just to see if it would crack. [/q]

    That’s what’s wrong with kids these days. NO support!

    • Ruiner
    • 17 years ago

    Nice article.

    I have one of those Atlas 10K III’s (18GB) in my home PC…it roXXor’s for a pretty low price. If you don’t need mega storage, the speed and reliability are sweet. The 15K cheetah’s run circles around it, but for twice the price, it wasn’t for me.

    • Anonymous
    • 17 years ago

    damn, thought I’d caught that (note the typo in the first one)

    • Dposcorp
    • 17 years ago

    Hey EasyRhino, I believe he said he had a total of 5 drives, 4 in the array and one for hot swap.
    As for the “booting and storing the OS and crap”, he is running Linux, so how much OS and crap can there be?

    Nice article Scott. Always fun to see what is going on behind the scenes.

    Also, i particularly liked this phrase;

    [q]
    Not only that, but when no one was looking, I would go stand by its rack and say denigrating things about its scalability, just to see if it would crack. [/q]

    Thats my TR; always good for a chuckle.

    P.S. When ever my Radeon starts acting up, I walk by mumbling : “i wonder what kind of 3Dmark scores a GF FX will give me.”
    works wonders.

    • EasyRhino
    • 17 years ago

    Yo Damage, you are using a sixth hard drive just for booting and storing the OS and crap, or is that on the array?

    ER

    • Anonymous
    • 17 years ago

    yey, another great review. Feeeel the world domination.

    • muyuubyou
    • 17 years ago

    See what I told you? SCSI RAID or no RAID at all.

    When it comes to reliability and performance, nothing like SCSI. Yeah, those are pricey, but when reliability is a must…

    When I saw it down for so long, I was sure it was some IDE RAID, otherwise data recovery wouldn’t have taken so long.

    Looks like this server will be more than enough. I’d use the old one to keep the archives separately. You don’t need a lot of Gigs to keep the latest update. We know cost/gig in SCSI is huge.

    • Anonymous
    • 17 years ago

    Kickass box, cool article.

    I’ve been wanting one of those TR Beer Steins for a long time now–I think I’ll go buy a pair of them to celebrate the new box and to contribute what little I can 🙂

    -Rakhmaninov3

    • Anonymous
    • 17 years ago

    http://www.tech-report.com/reviews/2002q4/ideraid/index.x?pg=1

    Next time, I’ll whip up some pretty pictures. I promise.

    • nihilistcanada
    • 17 years ago

    Yes but how many FPS does it get in Quake III?

    • Anonymous
    • 17 years ago

    Very nice update, Damage. Thank you.

    You need money. I’ll kick in a bit soon.

    • Mr Bill
    • 17 years ago

    One unused hot spare. 4x17Gb = 68Gb but half of that is mirrored thus the 34Gb total.

    • Anonymous
    • 17 years ago

    I believe it said in the review that they are 17 gig drives.

    -eckslax

    • sativa
    • 17 years ago

    nice setup. how big are the hard drives?

    • BooTs
    • 17 years ago

    I’m with Bill. Exactly how does RAID 10 work? I only know how RAID is set up till RAID 5. :/

    • Mr Bill
    • 17 years ago

    Well, I’m impressed. Very nice setup and design choices. One question about raid 10. Thats a dual raid setup raid 0 + 1? Given its both mirrored and striped over 4 drives. Does that mean that writes are as fast as a two drive raid0 but reads are as fast as a four drive raid0? I like the TaiSol HSF but its easy to break a blade and near impossible to add a fan guard. That would worry me just a bit. If there is space I would suggest the Sunon magnetic levitation fans (pulling up) and a fan grill.

    • Anonymous
    • 17 years ago

    If you guys were sucking up any more than you already are, I’d use you to clean my carpet!

    • Anonymous
    • 17 years ago

    yeah.. TR rocks!

    • Zenith
    • 17 years ago

    Well i can tell you one thing, TR has always loaded like LIGHTNING for like…years. This site is good for my homepage because it loads so DAMN fast. Plus its my fav 😀

    • Zenith
    • 17 years ago

    Ok, so second post.

    • Zenith
    • 17 years ago

    I don’t like doing this but: first post.

    Hope this article is good 😀 reading it now.

    • Anonymous
    • 17 years ago

    So, what are the chances of getting this submitted on /.

    After all, aces managed it :p
