
 
Omniman
Gerbil Team Leader
Topic Author
Posts: 292
Joined: Sat Dec 13, 2008 1:24 am
Location: White River Junction, Vermont

Decent Test Lab Server

Thu Sep 07, 2017 7:47 pm

I'm trying to come up with a good machine I can use as a test lab server, running either Hyper-V or VMware to spin things up and mess around. I'm leaning toward a used HP Z620 with dual six-core Xeons loaded up with lots of RAM off of eBay, but I was curious if anyone else had a recommendation?
Intel I7-2600k, Asus P8P67, 16GB DDR3 1600mhz, Geforce GTX 1070, ASUS Xonar D2, Samsung Evo 250GB, Western Digital Black 1TB, Corsair HX750w
 
curtisb
Gerbil XP
Posts: 394
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: Decent Test Lab Server

Thu Sep 07, 2017 9:44 pm

I personally would stick with the Dell PowerEdge line. Anything from the Tx20 or Tx30 line should be plenty powerful enough. The "T" being "Tower". Garland Computers usually has some pretty good deals on eBay. Keep in mind with the PowerEdge models that if it doesn't have the R or T in front of the number, it's a much older series box.

They have a T620 listed right now with 16 cores, 192GB of RAM, dual power supplies, and 5 x 146GB 10K drives for $1500. That'll actually give you 32 threads with HTT. That's not bad at all. It has an iDRAC Express, but you can probably find the iDRAC Enterprise module for it pretty cheap on eBay or even Amazon. That will give you a remote console connection to the server to let you see the console, power cycle, monitor health, etc. If you're familiar with HP servers, it's akin to their iLO.

Which hypervisor you use depends on what you intend to do with it. Hyper-V has come a long way. The Server 2016 version is really nice, and they're still adding features if you get on the Insider Preview for Server.
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
ludi
Darth Gerbil
Posts: 7452
Joined: Fri Jun 21, 2002 10:47 pm
Location: Sunny Colorado front range

Re: Decent Test Lab Server

Thu Sep 07, 2017 11:11 pm

Omniman wrote:
I'm trying to come up with a good machine I can use as a test lab server, running either Hyper-V or VMware to spin things up and mess around. I'm leaning toward a used HP Z620 with dual six-core Xeons loaded up with lots of RAM off of eBay, but I was curious if anyone else had a recommendation?

For just "messing around," how much load do you really intend to put on this machine? One user, multiple concurrent users? A few running images, or a few dozen? I routinely start Windows images from VirtualBox under Ubuntu 16.04 LTS on a Dell T5500 tower with a single X5670 (6C/12T) and I know from past experience that the slightly older T5400 can serve OS and appliance images quite passably from ESXi. A high-spec T5500 can be had for less than $400, and fan-wise, it will be a lot quieter than anything intended for a server rack.

If you don't need ECC memory then any non-K, quad-core CPU from Sandy Bridge on up should be plenty quick.
Abacus Model 2.5 | Quad-Row FX with 256 Cherry Red Slider Beads | Applewood Frame | Water Cooling by Brita Filtration
 
curtisb
Gerbil XP
Posts: 394
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: Decent Test Lab Server

Fri Sep 08, 2017 8:12 am

You make some good points, ludi, but I also assumed he might be looking to gain some experience with full-blown server hardware. The PowerEdge line actually runs pretty quiet as far as servers go (the VRTX line is actually designed to run under a desk). If that's not the case, you're right...any of the Precision T models from even the T3500 on up would be fine.

And for what it's worth, the K-series CPUs have supported the full virtualization stack since Devil's Canyon (the Haswell refresh). I didn't realize that when I got my 6700, or I would've gotten a 6700K.



EDIT: Corrected where all VT features were added.
Last edited by curtisb on Fri Sep 08, 2017 10:41 am, edited 2 times in total.
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
Omniman
Gerbil Team Leader
Topic Author
Posts: 292
Joined: Sat Dec 13, 2008 1:24 am
Location: White River Junction, Vermont

Re: Decent Test Lab Server

Fri Sep 08, 2017 9:07 am

Interesting! As for the workload, I plan to load up quite a few active guest OSes across all sorts of environments to build up and test with. I've never messed around with a Hyper-V environment since I've always used VMware. I have a server currently running which I was using for testing, but now it's become a production machine.
Intel I7-2600k, Asus P8P67, 16GB DDR3 1600mhz, Geforce GTX 1070, ASUS Xonar D2, Samsung Evo 250GB, Western Digital Black 1TB, Corsair HX750w
 
techguy
Gerbil Team Leader
Posts: 272
Joined: Tue Aug 10, 2010 9:12 am

Re: Decent Test Lab Server

Fri Sep 08, 2017 9:49 am

curtisb wrote:
You make some good points, ludi, but I also assumed he might be looking to gain some experience with full-blown server hardware. The PowerEdge line actually runs pretty quiet as far as servers go (the VRTX line is actually designed to run under a desk). If that's not the case, you're right...any of the Precision T models from even the T3500 on up would be fine.

And for what it's worth, the K series CPUs have supported the full virtualization stack since Haswell. I didn't realize that when I got my 6700, or I would've got a 6700K.


Correction: since Devil's Canyon (i.e., the Haswell refresh). While architecturally Haswell, since Devil's Canyon came out a year after the initial Haswell release, it can't truly be said that all virtualization features have been supported "since Haswell".
 
Glorious
Gold subscriber
Grand Admiral Gerbil
Posts: 10303
Joined: Tue Aug 27, 2002 6:35 pm

Re: Decent Test Lab Server

Fri Sep 08, 2017 9:55 am

oh, Intel product segmentation, what a rascal!
 
curtisb
Gerbil XP
Posts: 394
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: Decent Test Lab Server

Fri Sep 08, 2017 10:00 am

techguy wrote:
curtisb wrote:
You make some good points, ludi, but I also assumed he might be looking to gain some experience with full-blown server hardware. The PowerEdge line actually runs pretty quiet as far as servers go (the VRTX line is actually designed to run under a desk). If that's not the case, you're right...any of the Precision T models from even the T3500 on up would be fine.

And for what it's worth, the K series CPUs have supported the full virtualization stack since Haswell. I didn't realize that when I got my 6700, or I would've got a 6700K.


Correction: since Devil's Canyon (i.e., the Haswell refresh). While architecturally Haswell, since Devil's Canyon came out a year after the initial Haswell release, it can't truly be said that all virtualization features have been supported "since Haswell".


I stand corrected...thanks for clarifying that. :)

I couldn't actually remember where it was added, and that's what I get for mentioning it when I wasn't at a place where I could look the info up easily.
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
Convert
Grand Gerbil Poohbah
Posts: 3382
Joined: Fri Nov 14, 2003 6:47 am

Re: Decent Test Lab Server

Fri Sep 08, 2017 11:13 am

Personally, I don't bother with real servers for my test environment anymore, even though I have access to new and used server hardware for free.

I just run i7s with lots of RAM. You'll be surprised how little hardware you need to make a VM run fast. I'd suggest saving money where you can and spending it on SSDs, even used ones off eBay. Run them off the onboard 6Gb/s SATA ports and play away.

If you want server hardware experience, then obviously that can't be substituted; just be aware that you can learn everything you really need to know about server hardware (for the platform you purchase) in probably 30 minutes, and then you'll be over it.

Running consumer hardware leads to far less drama when you're just playing around. Dell/HP servers and workstations vary in expandability, but something is eventually going to need replacing on a used server. Want to add more hard drives? Oh, that's right, you need one of the OEM trays. What's that? The RAID controller is freaking out that you don't have a genuine OEM drive? Power supply died? I'll just take one off the sh... Oh, guess I'll track one down on eBay and wait a week. This OEM-rebranded LSI controller needs another RAID battery? Didn't I just replace that last year? I need a 4TB drive in this server, but it only has SFF bays and no internal Molex/SATA power cables; guess I'll break out the multimeter, crimp my own custom power cable, and shove the drive where I have an empty PCIe slot! I think I'll test a virtual environment with a competent GPU. NOPE.

But, as I said, you can't substitute real server hardware experience. My gripes above aren't something I would have really run into in a production environment, because that's always by the book and well documented. Playing with a real server at home makes you really learn about limitations and workarounds. And when you need to add parts to your server, you actually have to learn about what will work with what you can find on eBay instead of just having the Dell rep send it over.

eBay and ServerSupply are good resources should you end up going with a real server platform.
Tachyonic Karma: Future decisions traveling backwards in time to smite you now.
 
curtisb
Gerbil XP
Posts: 394
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: Decent Test Lab Server

Fri Sep 08, 2017 12:09 pm

Keep in mind that if Dell still supports the server, you can request an asset transfer and put it back under a maintenance contract. He mentioned in a reply that his previous test server went into production, and if I thought there was a chance that might happen again I would definitely spring for server-level hardware.

I also wouldn't worry about RAID controller freak outs or power supply failures. Pulled OEM drives can be found pretty cheap, too. So can the drive trays. And I can count the number of server power supply failures I've seen in the past 20+ years on one hand. Plus...see above about putting it back under maintenance. And if it's really that big of an issue for a test bed, then get something current but not necessarily new. Dell has their Online Outlet where you can get current models of refurbs, repo'ed, or cancelled custom orders for cheaper than the full price...by a lot in some cases. They come with a one-year warranty, and you can add onto that.

I'll be honest, though. The vast majority of my experience is with Dell hardware. I've used and supported servers from HP, IBM/Lenovo, and others, and every time it reminds me of why I stick with Dell. I certainly understand that isn't everyone's experience, I'm just speaking of my own. :)
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
Omniman
Gerbil Team Leader
Topic Author
Posts: 292
Joined: Sat Dec 13, 2008 1:24 am
Location: White River Junction, Vermont

Re: Decent Test Lab Server

Fri Sep 08, 2017 12:45 pm

I'll have to check out some Dell hardware. I've only really worked with HPs and IBMs.
Intel I7-2600k, Asus P8P67, 16GB DDR3 1600mhz, Geforce GTX 1070, ASUS Xonar D2, Samsung Evo 250GB, Western Digital Black 1TB, Corsair HX750w
 
Convert
Grand Gerbil Poohbah
Posts: 3382
Joined: Fri Nov 14, 2003 6:47 am

Re: Decent Test Lab Server

Fri Sep 08, 2017 2:14 pm

curtisb wrote:
Keep in mind that if Dell still supports the server, you can request an asset transfer and put it back under a maintenance contract. He mentioned in a reply that his previous test server went into production, and if I thought there was a chance that might happen again I would definitely spring for server-level hardware.

I also wouldn't worry about RAID controller freak outs or power supply failures. Pulled OEM drives can be found pretty cheap, too. So can the drive trays. And I can count the number of server power supply failures I've seen in the past 20+ years on one hand. Plus...see above about putting it back under maintenance. And if it's really that big of an issue for a test bed, then get something current but not necessarily new. Dell has their Online Outlet where you can get current models of refurbs, repo'ed, or cancelled custom orders for cheaper than the full price...by a lot in some cases. They come with a one-year warranty, and you can add onto that.

I'll be honest, though. The vast majority of my experience is with Dell hardware. I've used and supported servers from HP, IBM/Lenovo, and others, and every time it reminds me of why I stick with Dell. I certainly understand that isn't everyone's experience, I'm just speaking of my own. :)


Eh, it's not like I'm talking doomsday here; these are realistic pains of playing with a full-blown server compared to a desktop PC. Been there, done that. Ignoring them ignores reality, and downplaying them is only necessary if my cautions were taken as guaranteed, showstopping outcomes. My apologies if that's how it was worded originally. Problems unique to server platforms will need to be addressed eventually, even if you plan ahead and have parts available (like the trays). Heck, even the slower boot times of servers got annoying to me, and no amount of fixing can solve that on most platforms. I just want to play and test as quickly and seamlessly as possible.

BBUs only last a couple of years, it seems, and buying used can mean you'll be due for one very soon. If anything, try to find systems with an equivalent CV module. The RAID controllers from Dell and HP both give you grief about non-OEM drives, and different generations react differently. I think the latest generations from both HP and Dell disable some superficial features and warn about it (at boot, on display panels/indicators, and in the management software), but I haven't done thorough testing to see if it handicaps you in any other way. I've only replaced a handful of PSUs due to failure, but I've had to upgrade more of them because of additional power requirements from component upgrades, something quite possible if it's a true play environment. In a play environment he'd really be better off with SSDs where possible; time adds up waiting for things to install, reboot, and transfer around. Used OEM mechanical drives (I haven't priced used OEM SSDs) can be found for decent prices, but buying used drives is a terrible idea if you think the server will end up in production.

I mean, that really needs to be the distinction here: is it going to be a play/test environment, or is it going into production? If there's a chance it's going into production, then treat it like a production server. The whole idea of a VM makes it pretty painless to move one over to a production server, so I don't see the reason to waffle between a play environment and a play-then-production one. If it's for work purposes, then have work spring for a real server so that you can simulate realistic loads on hardware that matches the production environment. To me it really sounded like this was just a lab/play setup, in which case a good desktop PC would make life a lot easier.

I forgot about the outlet, very good suggestion if it makes more sense to have a real server platform with warranty.
Tachyonic Karma: Future decisions traveling backwards in time to smite you now.
 
MOSFET
Silver subscriber
Gerbil Team Leader
Posts: 208
Joined: Fri Aug 08, 2014 12:42 am

Re: Decent Test Lab Server

Fri Sep 08, 2017 4:07 pm

Seems the discussion is reasonably settled for now, but I will add that I was (and still am) a big fan of used SuperMicro servers from the 'bay. However, there really, really is no reason ECC RAM should be a requirement for a home lab. It most certainly is not. So late this spring, I migrated all my VMs to one newly built host, a Ryzen 5 1600 with 64GB of RAM running ESXi 6.5 (now U1). This has been an overwhelmingly positive experience: one host with 64GB can run as many VMs as two or three hosts with 32GB each. Needless to say, even if Sandy/Ivy Bridge Xeons are pretty efficient, moving to one host is a huge energy saver. The main advantage of the newer platform, whether Ryzen or Skylake+, is the increase in the RAM ceiling from 32GB to 64GB. If you want to run VCSA, and who wouldn't in a VMware-based home lab, you'll really appreciate the higher ceiling.
 
ludi
Darth Gerbil
Posts: 7452
Joined: Fri Jun 21, 2002 10:47 pm
Location: Sunny Colorado front range

Re: Decent Test Lab Server

Fri Sep 08, 2017 5:05 pm

Omniman wrote:
I'll have to check out some Dell hardware. I've only really worked with HPs and IBMs.

The only pitfall with old Dell hardware is that when they end support at a certain year or OS version, they really do end it, and you are unlikely to see any further updates for system BIOS, auxiliary device firmware, drivers, etc. unless a show-stopping security vulnerability appears (e.g. the Intel AMT firmware issue that came up this year, which affected hardware as far back as 2007-ish). For a virtualization host, this is usually a non-issue. For an older laptop where some of the function keys stop working correctly after a new Windows version or point update is released, it's quite irritating.
Abacus Model 2.5 | Quad-Row FX with 256 Cherry Red Slider Beads | Applewood Frame | Water Cooling by Brita Filtration
 
curtisb
Gerbil XP
Posts: 394
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: Decent Test Lab Server

Fri Sep 08, 2017 5:40 pm

ludi wrote:
Omniman wrote:
I'll have to check out some Dell hardware. I've only really worked with HPs and IBMs.

The only pitfall with old Dell hardware is that when they end support at a certain year or OS version, they really do end it, and you are unlikely to see any further updates for system BIOS, auxiliary device firmware, drivers, etc. unless a show-stopping security vulnerability appears (e.g. the Intel AMT firmware issue that came up this year, which affected hardware as far back as 2007-ish). For a virtualization host, this is usually a non-issue. For an older laptop where some of the function keys stop working correctly after a new Windows version or point update is released, it's quite irritating.


They're all like that, though. It's also why I recommended not going any older than the Tx20 line. To put it in perspective, the PowerEdge T620 is a roughly 5-year-old model. The last BIOS was released in March 2016, and they do list Server 2016 as a supported OS. Most firmware issues are going to be fixed rather quickly. Most, but not all, new BIOS updates are released to add microcode for new CPUs that will work in that model server, and there are no new CPUs being released that will work in those.

I haven't looked in a long time because we run Hyper-V in our environment, but VMware used to actually list which PowerEdge models were compatible with a given version. I expect that support will be pretty good since VMware is now owned by Dell through the EMC acquisition. They still operate independently, though, much like they did under EMC.
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
ludi
Darth Gerbil
Posts: 7452
Joined: Fri Jun 21, 2002 10:47 pm
Location: Sunny Colorado front range

Re: Decent Test Lab Server

Fri Sep 08, 2017 11:26 pm

curtisb wrote:
They're all like that, though.

True, but with Dell, the Precision and Latitude lines, and even some of the Inspiron line, are built to a very good standard and circulate for years after Dell stops supporting them. I'm using an E6420 right now that works with Windows 10 in every way except for the brightness controls coming and going. There are two variations of this issue, and it's one of the top complaints about this model line, both on the Internet generally and on the Dell support forums. We even have a handful of the 64xx series still deployed at the office, albeit those still run Windows 7, where the problem doesn't occur.
Abacus Model 2.5 | Quad-Row FX with 256 Cherry Red Slider Beads | Applewood Frame | Water Cooling by Brita Filtration
 
curtisb
Gerbil XP
Posts: 394
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: Decent Test Lab Server

Sat Sep 09, 2017 4:39 pm

ludi wrote:
I'm using an E6420 right now


That's a what, 6- or 7-year-old model? I have an E6530 through work and it's about 5 years old. We bought over 50 of them. They're still working great for our use. In fact, my battery will still last for hours after all this time, and I keep it plugged in running 24/7. Most of ours still have spinning metal, but I'm about to spring for SSDs for them. It is about time to start looking at replacing them, though...or at least adding some newer ones to the inventory. As far as laptops go, I definitely see more older Dells roaming around in operation than machines from other manufacturers, although I am still rocking a Surface Pro 3, too. :)

Anyway, the point being that anything made within the last 5 years should be plenty fine for his testing environment. These are the key points to keep in mind:

  • Memory. This will be your very first limitation on how many VMs you can run, and how well they run. And remember, you need to leave some available for the host, too. If you throw clustering into the mix, the hosts should have at least twice as much RAM as you're allocating to your VMs. Why? It allows you to move VMs off another host to do maintenance on it without impacting performance (provided you also take care of the next two items). If you have more than two hosts, then you can get away with a little less RAM because you can, and should, spread out the VMs that you move.
  • Disk IO. This will be your next limitation on how well your VMs run. SSDs are good. Multiple SSDs in a RAID 10 are better. On most workstation-level machines, the most you're going to get is 4-6 drives without having to add a RAID controller, and at that point you might as well have gone with a server and a good server chassis.
  • Network IO. You probably won't run into this in a test environment unless you start adding multiple hosts for clustering, etc. You'll run into it fairly quickly in a production environment, though, especially since you'll likely throw in iSCSI traffic on top of serving client requests. My Hyper-V hosts each have 10 x 10GigE links to separate the IO and for both NIC- and switch-level redundancy.
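
The clustering headroom rule in the Memory bullet can be sanity-checked with a quick back-of-the-envelope script. This is just a rough sketch with made-up numbers; the 8GB-per-host hypervisor reserve and the function name are my own assumptions, not anything from a vendor sizing guide:

```python
# Back-of-the-envelope N+1 RAM check for a small cluster.
# All figures in GB; the numbers below are made up for illustration.

HOST_RESERVE = 8  # assumed RAM to leave for the hypervisor on each host

def can_survive_host_loss(host_ram, hosts, vm_allocations):
    """True if the remaining hosts can absorb every VM while one host
    is down for maintenance."""
    usable_per_host = host_ram - HOST_RESERVE
    total_vm_ram = sum(vm_allocations)
    # Capacity with one host offline:
    remaining_capacity = (hosts - 1) * usable_per_host
    return total_vm_ram <= remaining_capacity

# Two 64GB hosts running eight 8GB VMs: 64GB of VM RAM vs. 56GB of
# usable capacity on the lone surviving host -- maintenance would
# overcommit it, which is why the "2x RAM" rule of thumb exists.
print(can_survive_host_loss(64, 2, [8] * 8))   # False
# A third host spreads the displaced VMs, so less headroom is needed.
print(can_survive_host_loss(64, 3, [8] * 8))   # True
```

The same arithmetic explains the "more than two hosts" caveat: each extra host shrinks the share of displaced VMs that any one survivor has to absorb.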

I won't try to sway you one way or the other as to which hypervisor to use. Different places have different requirements; VMware just adds a cost to my environment that I can't justify when Hyper-V works perfectly fine for our needs. Vendors are also starting to provide virtual versions of things that have traditionally been hardware appliances. I have a virtual wireless controller from Extreme Networks, several virtual appliances from Polycom, and I ran a virtual appliance from Barracuda* while we waited for the hardware appliance to be replaced (we purchased the hardware before the virtual appliance was available). All of these run some flavor of Linux, and all you have to do is import them from a vendor-supplied image, configure the network, and you're off. The virtual appliances are often cheaper, too.


* A quick anecdote about the Barracuda. I found out last year that as long as we keep it under a maintenance contract, Barracuda automatically replaces the hardware at no additional cost every four years...at least for the Spam Firewalls (which have been renamed Email Security Gateway). I've never had another manufacturer do that, and I think it's great.
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
Convert
Grand Gerbil Poohbah
Posts: 3382
Joined: Fri Nov 14, 2003 6:47 am

Re: Decent Test Lab Server

Sun Sep 10, 2017 9:12 pm

Would be nice to know exactly what kind of testing the OP will be doing and how involved it will be.

Agreed on the memory, but you'd be surprised how little you need, barring any third-party software requirements. I think I was running a lab environment with 10 servers in 32GB of RAM, with some to spare, about 3 years ago before I upgraded my machines. There are points of diminishing returns in the types of lab environments I'm talking about, so excessive RAM allocated to a system makes no difference in boot-up time or testing.

Agreed with SSDs, for the most part. But I'd argue again that you'd be surprised what you can get away with on a single SSD. Think about it: we were running (and people still are running) servers on mechanical drives, and a single SSD surpasses even a decent-spindle-count mechanical array. As an example, on one Hyper-V test machine I have a single 250GB SSD running 7 machines, a mix of server and desktop OSes along with SBS 2011. I have space to spare, and I can boot all of them simultaneously faster than any physical box could boot a single one (the beauty of VMs!). If you were standing in front of the system watching them boot, you'd never guess it was a single SSD, and an old one at that.

A really good LSI SAS controller goes for dirt cheap on eBay, so I completely disagree with the sentiment that if you need a RAID controller you're better off buying a real server. All of my test machines have dedicated RAID controllers on top of the provided SATA ports, for a very small investment. I also have M.2 SSDs that surpass many server setups for a fraction of the cost.

Network: if he's getting into that kind of testing, with large VMs and network-sensitive applications, then the network recommendation makes sense, and subsequently a real server as well. I would never "play" this hard on consumer hardware; if you need 10 x 10GigE links, I'd be buying a current-model server, as you suggested. If, however, the OP isn't doing enterprise-level testing of identical real-world workloads and scenarios, even a 4-port GigE card will be sufficient should he need the connectivity between physical machines. Even then, that's rarely necessary when the hypervisor can provide fast VM-to-VM networking on the same box. Depending on the setup, you can even throw in a few single desktop-grade GigE controllers, as there will definitely be some free PCIe slots.

To put my POV into perspective, I'm thinking of test/lab scenarios from an academic perspective. The test environments I'm talking about would let someone test out any scenario and feature of Microsoft server operating systems that a real server would. They could actively run more than enough servers on a single box to test the concepts and features of any scenario. It would not be capable of showing how a clustered server handles simulated workloads of a LOB application, as an example, because the hardware wouldn't be comparable to the real-world server you'd eventually spin up. To me, the only reason to go with a real server for testing is when you actually need to simulate the production environment, or you're testing a production scenario so large that consumer hardware simply can't handle it.

It's been a few years, though, since I did an in-depth cost comparison. Used servers can go for a fraction of their original cost, so I'd be curious how much a comparable desktop system costs. Perhaps used desktop parts plus some server components aren't all that much different in price from a full server!
Tachyonic Karma: Future decisions traveling backwards in time to smite you now.
