flip-mode wrote:CPU: 1P will likely be sufficient; I won't complain if 2P is possible. As you said, I'm not much worried about clock speed, but more cores/threads are better. AMD or Intel, I don't much care.
RAM: I'm aiming for 64 GB, but I haven't gotten around to looking at registered vs. unregistered. My guess is that I'll opt for unregistered for the $ savings.
shodanshok wrote:VMware also has a dynamically controlled balloon driver to reclaim unused guest memory.
flip-mode wrote:RAID:
I think it's going to be hardware RAID; I'm just going to assume that for simplicity's sake for the moment, but regardless I'm 98% sure it'll be hardware. I've heard this from you guys:
RAID 6 or RAID 10.
Anyone have anything against RAID 1?
I know it becomes space-inefficient pretty quickly, but in terms of reliability and rebuild time, any red flags?
Flatland_Spider wrote:
Xen 4.0+ has memory ballooning, and memory sharing and memory paging are in the tech-preview stage, with the notes stating, "Preview, due to limited tools support. Hypervisor side in good shape."
Xen Release Features
http://wiki.xenproject.org/wiki/Xen_Release_Features
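For anyone unfamiliar with ballooning, here is a toy sketch of the idea: the hypervisor asks a driver inside the guest to "inflate" and pin memory, which the guest then can't use, so the host can hand those pages to other VMs. This is purely illustrative (real balloon drivers work at the page level inside the guest kernel); the class and numbers are made up.

```python
# Toy model of memory ballooning. Illustrative only -- real drivers
# (virtio-balloon, VMware's vmmemctl, Xen's balloon driver) operate on
# actual guest pages, not simple counters.

class Guest:
    def __init__(self, assigned_mb: int):
        self.assigned_mb = assigned_mb  # RAM the guest thinks it has
        self.balloon_mb = 0             # memory pinned by the balloon driver

    def free_for_guest(self) -> int:
        # What the guest can actually use right now.
        return self.assigned_mb - self.balloon_mb

    def inflate(self, mb: int) -> int:
        # Hypervisor asks the balloon to grow; the reclaimed memory
        # goes back to the host pool for other VMs.
        self.balloon_mb += mb
        return mb

    def deflate(self, mb: int) -> int:
        # Guest is under pressure; balloon shrinks, returning memory.
        mb = min(mb, self.balloon_mb)
        self.balloon_mb -= mb
        return mb

g = Guest(4096)
g.inflate(1024)            # host reclaims 1 GB from an idle guest
print(g.free_for_guest())  # 3072
```

The point of the mechanism is that it's cooperative: the guest OS decides *which* pages to give up, so the host reclaims memory without swapping blindly.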
Scrotos wrote:flip-mode wrote:RAID:
I think it's going to be hardware RAID; I'm just going to assume that for simplicity's sake for the moment, but regardless I'm 98% sure it'll be hardware. I've heard this from you guys:
RAID 6 or RAID 10.
Anyone have anything against RAID 1?
I know it becomes space-inefficient pretty quickly, but in terms of reliability and rebuild time, any red flags?
RAID 10/1+0 is basically RAID 1 with more than 2 disks: on most controllers, a 4-disk "RAID 1" array is actually built as a 4-disk RAID 10. Both lose 50% of raw capacity, but at least you get some striping action for a speed boost. (A strict 4-way RAID 1 mirror would keep only one disk's worth of capacity, but few controllers offer that.)
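The capacity trade-offs being debated here are easy to put in numbers. A quick back-of-envelope sketch (assuming equal-sized disks; real arrays lose a little more to metadata):

```python
# Usable capacity for common RAID levels, assuming equal-sized disks.

def usable_gb(level: str, disks: int, size_gb: int) -> int:
    if level == "raid1":
        # Strict n-way mirror: only one disk's worth survives.
        return size_gb
    if level == "raid10":
        # Striped mirrors: half the raw capacity.
        return (disks // 2) * size_gb
    if level == "raid6":
        # Double parity: two disks' worth lost.
        return (disks - 2) * size_gb
    raise ValueError(f"unknown level: {level}")

print(usable_gb("raid10", 6, 600))  # 1800 -> matches the 1.8 TB Dell figure
print(usable_gb("raid6", 6, 600))   # 2400 -> RAID 6 gets you more space
print(usable_gb("raid1", 4, 600))   # 600  -> a true 4-way mirror is costly
```

Note the same six 600 GB disks yield 1.8 TB in RAID 10 but 2.4 TB in RAID 6; the trade is capacity and double-failure tolerance vs. rebuild time and write performance.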
paco wrote:Check out the Dell R515
PE R515 with up to 12 Hot Swap Hard Drives, LED
64GB Memory,(4x16GB) 1600MT/s, Dual Rank LVRDIMMs at Std Volt, for 2 Processors
2x AMD Opteron™ 4386, 3.1GHz, 8C, Turbo CORE, 8M L2/8M L3, 1600MHz Max Mem
PERC H700 Integrated RAID Controller 512MB Cache,12HD
RAID 10 for PERC H200 and H700 Controllers, x8 and x12 Chassis
750 Watt Redundant Power Supply
(6) 600GB 15K RPM SAS 6Gbps 3.5in Hot-plug Hard Drive (1.8TB HDD Space)
3Yr Basic Hardware Warranty Repair: 5x10 HW-Only, 5x10 NBD Onsite
Sub-total $4,890.54
Make sure to call them; I'm pretty sure you could get that price out the door, and maybe add on some more warranty.
Remember, you don't want to purchase only for what is good enough right now. Any business expects to grow, and this box lets you add more hard drive space and more memory easily, so it might be good enough for quite some time as it is.
I first compared the R520 and R720, but it seems the AMDs get you a lot more for your money.
paco wrote:Check out the Dell R515
Sub-total $4,890.54
shodanshok wrote:I second this. The only change I'd consider is using 8x 2TB WD SE or RE drives, which are slower (7200 RPM) but have a much better space/cost ratio. However, Dell is very reluctant to let you use third-party disks, so:
- you have to run the very latest RAID firmware version (previous versions blocked third-party drives)
- you have to buy the disk tray/caddy from Amazon, eBay, or other third-party resellers (Dell will not sell you the tray without a drive).
In comparison, HP is more open-minded. However, the last time I checked they did not have a server with the same performance/cost ratio as the Dell R515.
mmmmmdonuts21 wrote:I have had great success with both Supermicro and Tyan in the past. Can you use old equipment from eBay? If so, look for the Dell C6100. It is a steal for the price, considering what these bad-boy servers are. http://www.ebay.com/itm/Dell-Poweredge-C6100-2U-8x-XEON-QC-L5520-2-26GHz-4xNODES-NO-HDD-96GB-Ram-Tested-/251283578250?pt=COMP_EN_Servers&hash=item3a81ab1d8a
Scrotos wrote:mmmmmdonuts21 wrote:I have had great success with both Supermicro and Tyan in the past. Can you use old equipment from eBay? If so, look for the Dell C6100. It is a steal for the price, considering what these bad-boy servers are. http://www.ebay.com/itm/Dell-Poweredge-C6100-2U-8x-XEON-QC-L5520-2-26GHz-4xNODES-NO-HDD-96GB-Ram-Tested-/251283578250?pt=COMP_EN_Servers&hash=item3a81ab1d8a
Holy crap I just checked this out. NICE.
kc77 wrote:Having run KVM in production for four years now, it has been rock solid. I'm talking Tonka-truck tough. Had we not moved to a new file system layout, they would still be running. I'm not saying it is THE way to go, but it should be considered, especially if Hyper-V is anywhere in the discussion. If uptime is a factor, then Windows as the hypervisor for your VMs isn't exactly the epitome of uptime on Tuesdays.
Beelzebubba9 wrote:*Unless you're awesome with KVM or Xen, in which case you should be working for an IaaS provider making $250K a year, not on baby VM hosts.
flip-mode wrote:Good Monday morning, gerbils. XenServer won't let me create an "ISO Library". This is slightly annoying. I'll have to burn an OS to a disc and then put the disc in the physical host to create a VM, I guess.
On the topic of vSphere Hypervisor, I just read this:
- Unlimited number of cores per physical CPU
- Unlimited number of physical CPUs per host
- Maximum vCPUs per virtual machine: eight
- The 32GB RAM limit per server/host has been removed from the free Hypervisor.
- Operating system support: Microsoft OS (18 versions), Linux (54 versions), Mac OS X 10, Solaris, FreeBSD, etc. (See a complete list of supported versions.)
So, no CPU limitations, no core count limitations, and no RAM limitations. Heh, cool.
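Per that list, the only hard cap left in the free tier is per-VM: eight vCPUs. A trivial sketch of checking a planned layout against that limit (the VM names and counts below are made-up examples, not anything from this thread):

```python
# Sanity-check a planned VM layout against the free-ESXi per-VM vCPU cap
# quoted above. Host-level CPU/core/RAM limits are gone, so only the
# per-VM limit matters here. VM names/counts are hypothetical.

MAX_VCPUS_PER_VM = 8

planned_vms = {
    "file-server": 2,
    "domain-controller": 2,
    "build-box": 8,
    "big-database": 12,  # over the cap -- would need splitting or a paid tier
}

for name, vcpus in planned_vms.items():
    status = "ok" if vcpus <= MAX_VCPUS_PER_VM else "exceeds free-ESXi limit"
    print(f"{name}: {vcpus} vCPUs -> {status}")
```

Nothing deep, but it's the kind of check worth doing before committing to the free tier: one oversized VM is the only thing that forces an upgrade.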
It's going to be a challenge for me to thoroughly evaluate the differences between XenServer and ESXi. My real job is building design and detailing.