Server benchmarking is an odd domain. There are plenty of well-developed, non-proprietary benchmarks available, but there are also teams of engineers with huge resources at companies like Intel, AMD, HP and Sun working to make their products look as good as possible. Stuck in the middle are system engineers and IT managers who have no good source of objective, practically usable performance metrics on which to base design and purchasing decisions.
We try to cut through that nonsense wherever possible and provide transparent, useful comparisons of new hardware, but that's especially difficult for server products. Real-world server workloads are complex, and synthetic benchmarks often aren't indicative of practical performance. Server CPUs are especially hard: modern processors are so fast that it takes hundreds of concurrent connections to find the CPU's performance ceiling, and you're likely to run into network, memory or other I/O bottlenecks before you do. Since one thing TR doesn't have is a 200-client test lab, we have to make do with more synthetic means of comparison.
One aspect of server workloads, especially prevalent in web services, is dealing with XML documents. XML has become the data lingua franca spoken between programs running across different hardware platforms, operating systems and programming languages. Parsing, generating and transforming XML text are CPU-bound tasks that touch nothing beyond main memory, so they're good candidates for a synthetic benchmark.
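The actual work units are C#, but the flavor of the work is easy to sketch. Here's a rough Python analogue using the standard library's xml.etree; the element names and document shape are invented for illustration, not taken from the benchmark:

```python
# Illustrative sketch only: the real benchmark work units are C#.
# This shows the same kind of CPU-bound XML work (generate, parse,
# transform) using only Python's standard library.
import xml.etree.ElementTree as ET

def generate(n):
    """Generate an XML document with n <item> records."""
    root = ET.Element("orders")
    for i in range(n):
        item = ET.SubElement(root, "item", id=str(i))
        ET.SubElement(item, "price").text = str(i * 1.5)
    return ET.tostring(root, encoding="unicode")

def parse_and_transform(doc):
    """Parse the document and transform it into a summary document."""
    root = ET.fromstring(doc)
    total = sum(float(item.find("price").text) for item in root.findall("item"))
    summary = ET.Element("summary", count=str(len(root)))
    ET.SubElement(summary, "total").text = str(total)
    return ET.tostring(summary, encoding="unicode")

doc = generate(100)
print(parse_and_transform(doc))
```

All of the work here happens between the CPU and main memory, which is exactly what makes this class of task attractive for a synthetic CPU benchmark.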
I had originally hoped to find a public XML benchmark that Scott could add to server reviews as-is, but my search didn't turn up anything usable. The open-source XML Benchmark came closest, but wasn't anything we could readily reduce to a meaningful comparison. Resigned to writing one mostly from scratch, I decided to fill another gap in our lineup: the lack of any benchmark testing the performance of code running inside Microsoft's .NET framework. Microsoft has been making huge strides with ASP.NET and winning many converts from Java/J2EE-based development (if you struggled through that Wikipedia link, maybe you can start to understand why).
I took four of the basic units of work from XML Benchmark and ported them to C#. After some back-and-forth with Scott, we arrived at a framework that runs a variable number of iterations of each work unit (or a mix of them) across a variable number of threads. The program reports how long it took for all threads to finish, along with more detailed statistics: the CPU time consumed across all threads, and the average runtime of the work units, both aggregated and broken out by type. We only reported the total start-to-finish time in the Shanghai review because we couldn't figure out a good way to present the more granular results.
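The real harness is C# and also tracks CPU time, but the core idea--N threads each running M iterations of a work unit, with wall-clock and per-unit timing--can be sketched in a few lines of Python. The trivial work unit here is invented for illustration:

```python
# Sketch of the harness idea, not the actual C# tool: run a work unit
# for a fixed number of iterations on each of several threads, then
# report total wall-clock time and the average per-unit runtime.
import threading
import time

def run_benchmark(work_unit, iterations, thread_count):
    durations = []              # per-iteration wall times from all threads
    lock = threading.Lock()

    def worker():
        local = []
        for _ in range(iterations):
            t0 = time.perf_counter()
            work_unit()
            local.append(time.perf_counter() - t0)
        with lock:
            durations.extend(local)

    threads = [threading.Thread(target=worker) for _ in range(thread_count)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = time.perf_counter() - start
    return total, sum(durations) / len(durations)

total, avg = run_benchmark(lambda: sum(range(10000)), iterations=50, thread_count=4)
print(f"total {total:.3f}s, avg unit {avg * 1e6:.1f}us")
```

Varying `thread_count` while watching the total and per-unit times is what lets a harness like this find the point where the CPU, rather than any one thread, becomes the limit.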
Because we want our results to be independently reproducible and verifiable, I'm publishing the source code for the XML benchmark program. We're planning on refining it over time, as well, so please discuss possible improvements in the comments, or email me directly with feedback or patches. I developed it in Visual Studio 2008, but it should also compile in the excellent, and free, Visual C# 2008 Express Edition. The program relies on some test XML files; to maintain comparable results I've included the ones we used for the article with the source code.
I haven't included a license statement, because I'm not a lawyer and I honestly have no idea what license this should fall under. If you want to redistribute it, let me know and we can almost certainly put it under the GPL in an official way.
TR v2.1, the poll returns!
As I'm sure you all remember, we ditched our old poll software when we cut over to the new site design and CMS. The old poll software had long been abandoned by its author, was full of ancient PHP workarounds, and was completely out of step with the new site's look and development approach. Our redesigned poll widget is full of shiny new buzzwords like asynchronous, JSON and rich client experience; it's tightly integrated with our CMS, and it's stylish and attractive thanks to Cyril.
Please take the poll questions below for a test drive and let me know if you run into any problems. Expect the poll to return to the front page in another week or two.
Behold Thumper: Sun's SunFire X4500 storage server
In July of 2006, Sun added an interesting new machine to its lineup of Opteron-based x86 servers. Codenamed 'Thumper' during development and designated the SunFire X4500 at launch, it is essentially a 4U rack-mount, dual-AMD-Socket-940 server with 48 SATA disks. In my real life as a sysadmin for a large company, I was intrigued by the new direction in storage systems that Sun is experimenting with. As a PC enthusiast, I was impressed by the simplicity and scale of it.
The recipe for Thumper is simple. It's a list of commodity bits we're all familiar with:
Sun has published an architecture whitepaper for the X4500, which includes this block diagram:
Sun, as we can see, has built the Thumper around the copious system bandwidth of the Opteron platform, an area where AMD is still competitive with the fastest Intel Xeon CPUs. The 48 SATA drives connect through a passive backplane to power and to the six Marvell SATA controller chips on the mainboard. From there, the design shows careful attention to balancing I/O throughput along the entire path back to the CPUs. Each drive gets a dedicated SATA port, unlike even most high-end storage systems, which group multiple drives on a common bus. Each of the six eight-port controller chips sits on its own dedicated PCI-X connection, which works out to about 133MB/s per drive--plenty to keep a drive reading and writing data as fast as it can move bits on and off the platters. Those PCI-X connections are, in turn, bridged onto the 8GB/s HyperTransport links by AMD 8132 chips, again leaving enough headroom for all the controller chips to feed data to the CPUs at once. The two PCI-X slots have dedicated connections as well, and system peripherals, including four Gigabit Ethernet ports, hang off downstream HT links on the tunnel chips.
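The per-drive figure falls straight out of the bus math. A quick back-of-the-envelope check (theoretical peak numbers derived from bus widths and clocks, not measured throughput):

```python
# Back-of-the-envelope check of the I/O budget described above.
# These are theoretical bus peaks, not benchmark results.
pci_x_mb_s = 64 // 8 * 133          # one 64-bit, 133MHz PCI-X bus: 1,064 MB/s
per_drive = pci_x_mb_s / 8          # eight drives per Marvell controller
total_gb_s = pci_x_mb_s * 6 / 1000  # six controllers, each on its own bus

print(f"{per_drive:.0f} MB/s per drive, {total_gb_s:.1f} GB/s aggregate")
```

The aggregate figure is where the roughly 6GB/s of theoretical disk bandwidth comes from.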
So what does this add up to? A file server with 48 individual disks and a theoretical 6GB/s of disk bandwidth. Because the disk controllers are simple SATA host adapters with no RAID intelligence, the installed OS sees all 48 drives as individual devices. If you were to install Windows, I suppose you would have drive letters from C: to AX:, but would you really want the poor machine to suffer like that? The solution is to use your operating system of choice's software RAID functionality. Software RAID has fallen out of favor these days, displaced by dedicated hardware that offloads the task. That made a lot of sense when 200MHz Pentium Pro processors cost $1,000, but most servers these days have CPU cycles to spare. In fact, the RAID controller has become the bottleneck between disk and CPU in many current server configurations.
Another downside of software RAID has always been increased complexity in the OS configuration. Sun has given us another neat piece of technology to assist here: ZFS, a new filesystem available in Solaris 10. All of the various layers of storage management have been rolled up into the filesystem with ZFS. Configuring RAID sets, aggregating them into one logical volume, and then formatting and mounting it as a filesystem is accomplished in a single step with ZFS. There are some examples here, and while those are some of the longest commands you might ever have to type, most of their length is taken up listing all the disk device names (nothing like this).
I know this all reads like an advertisement, and maybe I've drunk the purple Kool-Aid, but it's hard for a server geek not to get excited about this. The combination of the X4500 and ZFS delivers a level of performance and capacity that matches some high-end enterprise storage arrays. There are simple benchmarks published that put the real-world read throughput of this configuration over 1GB/s. That's a level of performance it would take an exotic configuration of four 4Gb/s host bus adapters to equal in the Fibre Channel world--and that's if your array controllers were capable of feeding data at that rate. All this comes at a cost that is very low by enterprise storage standards. The model with 48 1TB drives lists for about $60,000, a delivered cost of about $1.25 per gigabyte. This presents new vistas of capability to system engineers, and new challenges as well. We can offer a petabyte of online storage for a little over $1M while taking up only two racks in the computer room. Problems that would have broken the entire IS budget are approachable now, but while we can afford the primary disk, the tape infrastructure to back it all up remains unattainable--not to mention it would take weeks to move 1PB from disk to tape.
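The cost figures work out as simple arithmetic from the list price. The petabyte count below is my own back-of-the-envelope extrapolation, not a quoted configuration:

```python
# Cost arithmetic for the figures above. The list price is as quoted;
# the petabyte build-out is an extrapolation for illustration.
list_price = 60_000               # X4500 with 48 x 1TB drives, USD
capacity_gb = 48 * 1000           # 48TB, in (decimal) gigabytes
per_gb = list_price / capacity_gb
servers_for_pb = -(-1000 // 48)   # ceil(1,000TB / 48TB) = 21 machines

print(f"${per_gb:.2f}/GB, ${servers_for_pb * list_price:,} per petabyte")
```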
Good problems to have, I guess. At least I don't have to worry about where to save all my Linux install ISOs anymore.
I've been putting in some time working with phpBB 3 lately. I expect we'll be taking some downtime in the next month or so to upgrade.
Turn, Turn, Turn
As PC enthusiasts, we understand technology lifecycles. They're our bread and butter. We plan major purchases around new CPUs and microarchitectures from Intel and AMD every year or two, and something new from NVidia and AT.. AMD every 6 to 9 months. Against the backdrop of this relentless change we have the anomalous fact that the majority of us have been using the same operating system for more than 5 years. This is not normal. OS vendors like Apple and Red Hat have made releases every 18-24 months in the interim. For Windows, though, it's been so long since a release that everyone seems to have forgotten how to handle the transition across major OS revisions.
All of this was made painfully clear to me last week when I installed Vista on my 6-month-old Fujitsu Stylistic 5032 tablet PC. I assumed I had every reason to upgrade: new drivers, a new display framework, better power management and improved tablet functionality. I had no idea what I was in for. The installation went smoothly, with the minor exception that I had to plug in a USB mouse because the driver for the tablet's pen digitizer wasn't loaded for the first few screens. I figured that with a PC that's been on the market for about 2 years, Vista would include WHQL drivers for everything in it. I was mostly right: video, storage, input and networking all worked immediately. Needing drivers for just the fingerprint, Bluetooth and audio devices, I went to Fujitsu's driver website, where I found... nothing. No Vista drivers for my tablet, and only a short list of products that do support the new OS. While they do offer Vista support for their new slate-model tablet, that model wasn't even available when my 5032 was purchased.
Marching onward in sound-card-driver-free silence, I turned to configuring wireless. My employer uses Cisco's LEAP authentication protocol, which I quickly learned is no longer of interest to Microsoft, Intel or even Cisco. While LEAP has long been supported natively by Mac OS, and fairly widely on Windows via third-party utilities like ProSet for Intel wireless chipsets, Vista does not support it, and even though the OS has been available in final form for 6 months, not even Cisco has a functional LEAP client for it yet.
Since I'm too cheap to pay for a therapist, I unloaded all this on Scott, and he had his own set of lifecycle-induced woes to commiserate over. People think he's nuts for moving his test rigs to 64-bit Vista. The limitations of 32-bit operating systems are well known and widely experienced, especially now that 4 gigabytes of RAM can be had for less than $300. Despite that, 64-bit Vista still isn't taken seriously as a platform for enthusiasts. More than 4 gigabytes of RAM has benefits in real-world workloads like virtualization, and even in games like Supreme Commander. There is no way to reap those benefits while stuck with a 32-bit operating system because some required utility or driver refuses to support anything else.
All of these issues lead me to this conclusion: our entire industry has a fundamental problem with how it approaches technology lifecycles. It hasn't helped that Microsoft failed so spectacularly with Vista's execution, leading some to believe a 5-year OS lifecycle was natural. Microsoft has stated that its intended lifecycle for Windows releases is 18-24 months. If Microsoft executes on that plan, it will be clearly unacceptable for hardware vendors like Fujitsu to support only a single OS revision; Vista's successor will be released and XP will be out of mainstream support before my tablet is 3 years old. Software vendors have failed to deliver just as badly: by neglecting to test on and support Vista and 64-bit systems, they have tried to pretend these lifecycle issues don't exist. IT staff like me are to blame as well. If I had loaded Vista 6 months ago when it became available, I could have flagged the wireless problems before the users we support discovered them.
I don't have any great suggestions for resolving this, but awareness is a start. We are all so busy working on current problems that we devote little or no time to planning for the future. Technology lifecycles are unavoidable, and hardware vendors, software developers and IT specialists all fail to deliver what customers need when we ignore them.

System update
I haven't posted a blog for a while, but that's mostly because things have been pretty quiet behind the scenes here. Our transition to two new servers at the beginning of November went relatively well. We have survived the hordes from Slashdot and Digg on a couple of occasions while maintaining good site responsiveness. With that taken care of, I can start focusing on some of the new features Scott would like to see added to the site.
PC gaming seems to be in a bit of a slump lately. I've never played the Battlefield series, and BF2142 didn't seem like the time to start. Enemy Territory: Quake Wars seems promising, but isn't here yet. A friend of mine got Neverwinter Nights 2, and while he had nothing but praise for the graphics, he felt the gameplay and interface were a big step back from NWN1. On the other hand, we got a Wii on launch day and it's been incredible fun for everybody in the family.
There are still a few lingering issues from the server move. The most visible is that, currently, jazz comments are not immediately visible after they're posted. I'll be looking into that and hope to have it solved quickly.

Build update
I had all the hardware for my new build in hand by Thursday and got the bits stuck together by Friday night. I used nLite to slipstream SP2 and the Intel SATA drivers into Windows XP and create an ISO that I burned and installed from. I had some trouble getting the audio drivers to install correctly; I was missing some Windows HD Audio component. I flailed around enough that I'm not sure what fixed it, but I think Automatic Updates may have eventually installed the hotfix I needed.
I'm happy with the case; it's easy to work in and very quiet. The board was also nice to work with. The Asus Q-Connector (about halfway down the page here) is a small touch that made the build process much easier.
I did get a substantial (at least to me) overclock out of the rig as well. I'm running 3.0GHz (9*333) with a 4:5 memory ratio for DDR2-833. It survived a dual-instance Prime95 run overnight with no errors, with core temps somewhere between 65 and 70C. I was able to boot and run Prime95 without errors at 3.3GHz (9*366) and the same memory ratio (900MHz), but my core temps pushed to 80C and above, and I think it eventually throttled. I'm pretty sure it could be stable at 3.3GHz with better cooling. I tried 3.6GHz (9*400) with RAM at 1000MHz just for kicks; it booted, but Prime95 failed immediately. The C2D overclocking thread in the forums was helpful.
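For the curious, the clock arithmetic behind those settings is straightforward: the core clock is the multiplier times the FSB, and the DDR2 effective rate is twice the memory clock produced by the 4:5 divider. A quick sketch (function names are mine):

```python
# Clock math for the overclock settings above (all figures in MHz).
def core(mult, fsb_mhz):
    """Core clock: CPU multiplier times front-side bus clock."""
    return mult * fsb_mhz

def ddr2(fsb_mhz, num=5, den=4):
    """DDR2 effective rate: FSB scaled by the memory divider, doubled."""
    return fsb_mhz * num / den * 2

print(core(9, 333), ddr2(333))    # 2997 (~3.0GHz) and 832.5 (~DDR2-833)
print(core(9, 400), ddr2(400))    # 3600 and 1000.0
```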
I made some progress on jazz last weekend. The code was in better shape than I expected, and only required minor tweaks to compile (and link, even). I scrapped a non-functional build system based on autoconf and automake and replaced it with a static Makefile. Once I get it completely functional we should be ready to move to some faster servers in pretty short order.

Benadryl, Newegg and C++
I went ahead and ordered the core of my new C2D rig:
Now that blogs are done, I'm turning my attention to ja.zz. There are some negative consequences to the fact that ja.zz is written in C++: it's relatively difficult to make changes, and it's dependent on the specific libraries in our current environment. The codebase we have now hasn't been maintained in a couple of years and won't compile on current g++ versions. I'm planning on going through it file by file, working up through the dependencies, until the whole thing compiles and links. It's somewhere between 15 and 20 KLOC, so it'll probably take most of my free time (what's left over after CoH) for at least a few weeks. It's using odd libraries for SQL access and threads. Is there a solid C++ database access layer I should consider porting to? Is the thread support in the core Linux libraries adequate, or do I need to keep using a helper library?