We PC enthusiasts have been co-opting enterprise-class hardware for our own personal systems for years now. We got our first taste of the creamy smoothness of SMP on dual-socket workstation boards long before you could get two cores conveniently packaged on the same chip. Tempted by lightning-fast access times and 10K-RPM spindle speeds, we adopted Western Digital's Raptor hard drive. And who can forget AMD's Toledo-based dual-core Opterons, overclocking marvels that dropped into the same 939-pin socket as Athlon 64 X2 desktop chips? I've had a relatively inexpensive Opteron 165, rated to run at 1.8GHz, happily chugging away at 2.4GHz on a standard desktop motherboard for a couple of years now.
Although they share the same architecture and performance characteristics as their desktop counterparts, server and workstation processors like the Opteron typically undergo additional validation testing and run at lower operating voltages. In a sense, chips that make the grade for the enterprise world are the best of the breed. That doesn’t guarantee overclocking success, but it at least hints at untapped potential. When that potential plugs into a standard desktop motherboard loaded with overclocking options, we just can’t resist.
It’s no wonder, then, that Intel’s Xeon X3320 caught my eye recently. This LGA775 chip features a 45nm Yorkfield core running at 2.5GHz with 6MB of L2 cache, making it the Xeon equivalent of the Core 2 Quad Q9300. What’s more, while the Q9300 has been in short supply of late, the X3320 has been more consistently available at roughly the same price.
Few things make us happier than forcing normally conservative enterprise-class hardware to jump through flaming enthusiast hoops, so we scored an off-the-shelf Xeon X3320 retail box from the folks at NCIX to see what kind of overclocking headroom we could find. Read on for the surprising results.
Penryn dons a suit
The key to overclocking server and workstation processors is finding ones compatible with standard desktop motherboards. AMD and Intel both employ different socket designs for some of their enterprise-class chips, and the server and workstation motherboards they slip into tend to be completely devoid of overclocking options. Fortunately, Intel’s Xeon 3000 series uses the same LGA775 package as desktop Core 2 chips, so you should be able to plug them into any motherboard that’s compatible with their desktop equivalents.
The quad-core members of the Xeon 3000 series derived from Intel's new 45nm Penryn core make up the Xeon X3300 line, which includes the 2.5GHz X3320, the 2.66GHz X3350, and the 2.83GHz X3360. All three run on a 1333MHz front-side bus and are rated for a thermal design power of 95W. While the Xeon X3350 and X3360 each have a total of 12MB of L2 cache (6MB per dual-core component), the X3320 must make do with only 6MB of L2 (3MB per dual-core component).
The Xeon X3320’s specs exactly match those of the Core 2 Quad Q9300, and their prices are comparable. You can find the Xeon online for as little as $290, while the Q9300 can be had for around $285. So what’s the difference between the chips then?
Well, there's the seal of approval that additional validation testing ostensibly provides, of course. Quantifying any difference in operating voltage between desktop and enterprise chips is a little more difficult now that Intel sets default voltages on a per-chip basis. However, I can tell you that CPU-Z reports our X3320's default voltage as 1.12V. We also have a Q9300 in house for testing (look for a full review soon), and its default voltage is 1.2V.
Another difference between Intel’s desktop chips and their Xeon counterparts is the cooler bundled with retail-boxed CPUs. The Xeon’s heatsink is a low-profile design that measures only 45mm tall (1.77 inches for the metric-impaired) to allow it to fit into rack-mount enclosures with less headroom.
The cooler features a copper core and the same basic design as the heatsinks bundled with desktop Core 2 processors, just in a much lower profile. The heatsinks bundled with Intel’s desktop chips typically measure 60mm tall (2.36 inches).
To give you an idea of the size difference between Core 2 and Xeon heatsinks, we've lined them up with Scythe's popular Ninja cooler. The fans on the Intel coolers are similar in size, but the Xeon's heatsink is half the height of its desktop brethren, and therefore has roughly half the surface area.
The Ninja is just a monster by comparison, but it’s relatively inexpensive and a worthwhile upgrade over the Xeon’s stock cooler if you’re going to be overclocking. Obviously, the Ninja offers significantly more surface area than either of Intel’s stock coolers, with a much larger fan that can generate more airflow at lower noise levels. An aftermarket heatsink was necessary for our testing because the default Xeon cooler wouldn’t actually install properly in the Gigabyte X48T-DQ6 motherboard we used to overclock the X3320. The Xeon cooler is an incredibly tight fit, and the Gigabyte board’s socket backplate prevented it from flexing enough to set all the retention tabs. Despite its generous proportions, the Ninja installed without a hitch.
Our testing methods
All tests were run three times, and their results were averaged.
|Processor|Xeon X3320 2.5GHz|
|Chipset|Intel X48 Express|
|Memory size|2GB (2 DIMMs)|
|Memory type|Corsair CM3X1024-1600C7DHX DDR3 SDRAM at 1333MHz|
|RAS to CAS delay| |
|Audio|Integrated ALC889A with 1.88 drivers|
|Graphics|GeForce 8800 GT 1GB PCIe with ForceWare 169.25 drivers|
|Hard drive|Western Digital Raptor X 150GB|
|OS|Windows Vista Ultimate x86|
|OS updates|KB936710, KB938194, KB938979, KB940105|
Thanks to Corsair for providing us with memory for our testing.
All of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.
We’d like to thank Western Digital for sending Raptor WD1500ADFD hard drives for our test rigs. The Raptor’s still the fastest all-around drive on the market, and the only 10K-RPM Serial ATA drive you can buy.
Finally, thanks to the good folks at NCIX for hooking us up with an off-the-shelf Xeon X3320 retail box to use for testing.
We used the following versions of our test applications:
The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Opening the taps
We were able to overclock the front-side bus of Gigabyte’s X48T-DQ6 motherboard to a whopping 500MHz with default chipset voltages when we reviewed the board last month, so it seemed like a perfect platform on which to push the X3320. The X3320’s default 7.5X multiplier is only unlocked for lower values (and then only down to 6X), so we pushed the chip by increasing the speed of the front-side bus, testing for stability with Prime95 crunching on all cores along the way. We also adjusted the memory bus divider to ensure that our DIMMs operated at or below their rated speed, removing them as a potential hindrance.
Overclocking the X3320 was a breeze up to a front-side bus speed of 435MHz. The chip didn’t even need any extra voltage, although as soon as we deviated from its stock 2.5GHz clock speed, CPU-Z reported that the core voltage had increased from 1.120V to 1.136V.
Our 435MHz front-side bus yielded a processor clock speed of 3.26GHz, which is a healthy 30% boost over the CPU’s default frequency. Heck, 3.26GHz is even faster than Intel’s flagship Core 2 Extreme QX9770, which runs at 3.2GHz and sells for an absolutely obnoxious $1500.
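For the curious, the clock-speed arithmetic above is easy to check. The sketch below is ours, not anything from Intel's tools; it just shows how Core 2-era chips derive their core clock from the front-side bus:

```python
def core_clock_mhz(fsb_mhz, multiplier):
    """Core 2-era chips run at the FSB base clock times the CPU multiplier."""
    return fsb_mhz * multiplier

MULTIPLIER = 7.5     # the X3320's upper multiplier, locked by Intel
STOCK_MHZ = 2500.0   # rated speed: a 333MHz base clock (1333MT/s quad-pumped) x 7.5

overclocked = core_clock_mhz(435, MULTIPLIER)  # 3262.5 MHz
boost_pct = (overclocked - STOCK_MHZ) / STOCK_MHZ * 100

print(f"{overclocked / 1000:.2f} GHz, a {boost_pct:.0f}% boost")  # → 3.26 GHz, a 30% boost
```

The same function explains why a locked multiplier isn't a dead end: with the multiplier fixed at 7.5X, every extra megahertz of bus speed buys 7.5MHz of core clock.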
Unfortunately, our X3320 just wasn’t stable beyond 3.26GHz. At 3.3GHz, Prime95 started spitting out errors with at least one of the four instances we ran to stress test the CPU. Since we were still using the chip’s default voltage, we cranked up the juice to see if that might help. But it didn’t. Even at as much as 1.4V, our X3320 just wasn’t 100% stable at 3.3GHz. We tried higher speeds, too, and although we could post and even get into Windows at up to 3.5GHz, the system crashed under load. Getting a quad-core CPU to boot into Windows is one thing, but having all four of its cores behave while under load is considerably more difficult, it would seem.
Just to be sure, we swapped the X3320 into an nForce 790i SLI motherboard to see if it would run any faster on an Nvidia platform. No such luck. The chip stalled out at 3.26GHz, just like on the Gigabyte X48 board.
To provide a glimpse of what kind of performance increase you can expect from a Xeon X3320 overclocked from 2.5 to 3.26GHz, I ran a few benchmarks at stock and overclocked speeds. When overclocked, the system ran with a 1305MHz memory bus, which is a little slower than the effective 1333MHz memory clock of our stock config.
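The memory clock is derived from the front-side bus in the same way, through a divider. A quick sketch of that arithmetic, assuming the 3:1 effective FSB-to-DDR3 ratio implied by the figures above:

```python
def effective_memory_mhz(fsb_mhz, ratio):
    """Effective DDR3 data rate: FSB base clock times the chipset's memory ratio."""
    return fsb_mhz * ratio

# With the bus at 435MHz, we stepped down to a 3:1 effective ratio to keep
# the DIMMs at or below their rated DDR3-1333 speed.
print(effective_memory_mhz(435, 3))  # → 1305.0
```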
That’s a healthy performance increase from what’s essentially a “free” overclock at default voltage. Not bad at all.
But what about operating temperatures? Surely there’s a penalty involved with pushing the X3320’s clock speed 30% higher than what Intel prescribes for the chip. We busted out Everest to monitor CPU temperatures at idle and under load and were a little surprised by the results.
At idle, there's no difference in operating temperature between the stock and overclocked configs. However, under load, the overclocked config runs about six degrees warmer, which isn't much, all things considered.
Everest also reports individual core temperatures, and while these values were much higher than the reported “CPU temperature,” there wasn’t much difference between the stock and overclocked setups. Our stock-clocked Xeon reported core temperatures between 47° and 50°C at idle and between 63° and 67°C under load. When pushed to 3.26GHz, core temperatures rose to between 48° and 51°C at idle and between 67° and 71°C when subjected to our four-way Prime95 load.
Next, we used a Watts Up? Pro power meter to test total system power consumption, sans monitor and speakers, at the wall outlet.
There’s only a marginal difference in idle power consumption between our Xeon X3320 at 2.5 and 3.26GHz. Under a Prime95 load, however, the overclocked system pulls nearly 17W more power than our stock config. We’re more than willing to swallow a 9% increase in peak power consumption for a 30% boost in clock speed, though.
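You can back the stock system's load draw out of those two figures: if an extra 17W amounts to roughly a 9% increase, the stock config was pulling somewhere around 190W under load. The arithmetic, using only the numbers quoted above:

```python
extra_w = 17       # additional full-load draw at 3.26GHz
increase = 0.09    # the roughly 9% increase quoted above

implied_stock_w = extra_w / increase
print(f"implied stock load draw: {implied_stock_w:.0f} W")  # → roughly 189 W
```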
I wasn’t quite sure what to expect from overclocking the Xeon X3320. Server and workstation processors have proven to be potent overclockers in the past, but quad-core chips present a greater challenge. System stability can suffer even if only one of those cores isn’t comfortable at a given speed. AMD’s Phenom processors can set independent clocks for each core, but that’s not possible with Intel’s Core 2 processors or the Xeon X3320.
Our off-the-shelf X3320 was perfectly stable up to 3.26GHz with its stock voltage and a large—but relatively inexpensive—Scythe Ninja cooler. Some of the chip’s cores were clearly capable of running at higher speeds, but as a whole, 3.26GHz was all she wrote. This 30% jump in clock speed isn’t the most impressive overclock we’ve seen, but it’s a decent margin for a quad-core chip. The fact that our X3320 didn’t require any extra voltage to hit 3.26GHz suggests there’s plenty of headroom to be exploited in the Penryn core, as well.
Even though we didn’t hit jaw-dropping clock speeds, this is exactly the kind of overclock I love to see as an enthusiast. Exotic water and phase-change coolers are marvels to behold, but I’m more interested in the “free” overclocking headroom that can be exploited with modest air cooling. And since plenty of affordable P35-based motherboards have no problem running at front-side bus speeds beyond 400MHz, you don’t need a fancy motherboard to push the Xeon X3320, either.
Whether the Xeon X3320 has more overclocking potential than its Core 2 Quad Q9300 desktop counterpart remains to be seen. If our experience is any indication, though, you should have no problem cranking the X3320 up to 3GHz and beyond.