Overclocking Intel’s Xeon X3320 processor

Manufacturer Intel
Model Xeon X3320 2.5GHz
Price (Street)
Availability Now

We PC enthusiasts have been co-opting enterprise-class hardware for our own personal systems for years now. We got our first taste of the creamy smoothness of SMP on dual-socket workstation boards long before you could get two cores conveniently packaged on the same chip. Tempted by lightning-fast access times and 10K-RPM spindle speeds, we adopted Western Digital’s Raptor hard drive. And who can forget AMD’s Toledo-based dual-core Opterons, overclocking marvels compatible with the same 939-pin socket as Athlon 64 X2 desktop chips? I’ve had a relatively inexpensive Opteron 165, designed to run at 1.8GHz, happily chugging away at 2.4GHz on a standard desktop motherboard for a couple of years now.

Although they share the same architecture and performance characteristics as their desktop counterparts, server and workstation processors like the Opteron typically undergo additional validation testing and run at lower operating voltages. In a sense, chips that make the grade for the enterprise world are the best of the breed. That doesn’t guarantee overclocking success, but it at least hints at untapped potential. When that potential plugs into a standard desktop motherboard loaded with overclocking options, we just can’t resist.

It’s no wonder, then, that Intel’s Xeon X3320 caught my eye recently. This LGA775 chip features a 45nm Yorkfield core running at 2.5GHz with 6MB of L2 cache, making it the Xeon equivalent of the Core 2 Quad Q9300. What’s more, while the Q9300 has been in short supply of late, the X3320 has been more consistently available at roughly the same price.

Few things make us happier than forcing normally conservative enterprise-class hardware to jump through flaming enthusiast hoops, so we scored an off-the-shelf Xeon X3320 retail box from the folks at NCIX to see what kind of overclocking headroom we could find. Read on for the surprising results.

Penryn dons a suit

The key to overclocking server and workstation processors is finding ones compatible with standard desktop motherboards. AMD and Intel both employ different socket designs for some of their enterprise-class chips, and the server and workstation motherboards they slip into tend to be completely devoid of overclocking options. Fortunately, Intel’s Xeon 3000 series uses the same LGA775 package as desktop Core 2 chips, so you should be able to plug them into any motherboard that’s compatible with their desktop equivalents.

The quad-core members of the Xeon 3000 series derived from Intel’s new 45nm Penryn core make up the Xeon X3300 line, which includes the 2.5GHz X3320, the 2.66GHz X3350, and the 2.83GHz X3360. All three run on a 1333MHz front-side bus and are rated for a thermal design power of 95W. While the Xeon X3350 and X3360 each have a total of 12MB of L2 cache (6MB per dual-core component), the X3320 must make do with only 6MB of L2 (3MB per dual-core component).
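Since all three chips run on the same quad-pumped bus, their rated speeds fall out of simple multiplier math. Here's a minimal sketch; the multipliers are inferred from the rated clock speeds rather than taken from Intel's datasheets:

```python
# Effective clock = base clock x multiplier. The 1333MHz front-side bus
# is quad-pumped from a ~333MHz base clock; the multipliers below are
# inferred from the rated speeds, not from Intel documentation.
base_mhz = 1333 / 4  # ~333MHz

for rated, mult in [("2.5GHz", 7.5), ("2.66GHz", 8.0), ("2.83GHz", 8.5)]:
    clock_ghz = base_mhz * mult / 1000
    print(f"rated {rated}: {base_mhz:.0f}MHz x {mult} = {clock_ghz:.2f}GHz")
```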

The Xeon X3320’s specs exactly match those of the Core 2 Quad Q9300, and their prices are comparable. You can find the Xeon online for as little as $290, while the Q9300 can be had for around $285. So what’s the difference between the chips then?

Well, there’s the seal of approval that additional validation testing ostensibly provides, of course. Quantifying any difference in operating voltage between desktop and enterprise chips is a little more difficult now that Intel sets default voltages on a per-chip basis. However, I can tell you that CPU-Z reports our X3320’s default voltage as 1.12V. We also have a Q9300 in house for testing (look for a full review soon), and its default voltage is 1.2V.

Another difference between Intel’s desktop chips and their Xeon counterparts is the cooler bundled with retail-boxed CPUs. The Xeon’s heatsink is a low-profile design that measures only 45mm tall (1.77 inches for the metric-impaired) to allow it to fit into rack-mount enclosures with less headroom.

The cooler features a copper core and the same basic design as the heatsinks bundled with desktop Core 2 processors, just in a much lower profile. The heatsinks bundled with Intel’s desktop chips typically measure 60mm tall (2.36 inches).

To give you an idea of the size difference between Core 2 and Xeon heatsinks, we’ve lined them up with Scythe’s popular Ninja cooler. The fans on the Intel coolers are similar in size, but the Xeon’s heatsink is half the height—and therefore has half the surface area—of its desktop brethren.

The Ninja is just a monster by comparison, but it’s relatively inexpensive and a worthwhile upgrade over the Xeon’s stock cooler if you’re going to be overclocking. Obviously, the Ninja offers significantly more surface area than either of Intel’s stock coolers, with a much larger fan that can generate more airflow at lower noise levels. An aftermarket heatsink was necessary for our testing because the default Xeon cooler wouldn’t actually install properly in the Gigabyte X48T-DQ6 motherboard we used to overclock the X3320. The Xeon cooler is an incredibly tight fit, and the Gigabyte board’s socket backplate prevented it from flexing enough to set all the retention tabs. Despite its generous proportions, the Ninja installed without a hitch.

Our testing methods

All tests were run three times, and their results were averaged.

Processor: Xeon X3320 2.5GHz
System bus: 1333MHz (333MHz quad-pumped)
Motherboard: Gigabyte X48T-DQ6
BIOS revision: F4D
North bridge: Intel X48 Express
South bridge: Intel ICH9R
Chipset drivers: Chipset 8.3.1.1009, AHCI 7.8.0.1012
Memory size: 2GB (2 DIMMs)
Memory type: Corsair CM3X1024-1600C7DHX DDR3 SDRAM at 1333MHz
CAS latency (CL): 7
RAS to CAS delay (tRCD): 7
RAS precharge (tRP): 7
Cycle time (tRAS): 21
Command rate: 1T
Audio codec: Integrated ALC889A with 1.88 drivers
Graphics: GeForce 8800 GT 1GB PCIe with ForceWare 169.25 drivers
Hard drive: Western Digital Raptor X 150GB
OS: Windows Vista Ultimate x86
OS updates: KB936710, KB938194, KB938979, KB940105

Thanks to Corsair for providing us with memory for our testing.

All of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

We’d like to thank Western Digital for sending Raptor WD1500ADFD hard drives for our test rigs. The Raptor’s still the fastest all-around drive on the market, and the only 10K-RPM Serial ATA drive you can buy.

Finally, thanks to the good folks at NCIX for hooking us up with an off-the-shelf Xeon X3320 retail box to use for testing.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Opening the taps

We were able to overclock the front-side bus of Gigabyte’s X48T-DQ6 motherboard to a whopping 500MHz with default chipset voltages when we reviewed the board last month, so it seemed like a perfect platform on which to push the X3320. The X3320’s default 7.5X multiplier is only unlocked for lower values (and then only down to 6X), so we pushed the chip by increasing the speed of the front-side bus, testing for stability with Prime95 crunching on all cores along the way. We also adjusted the memory bus divider to ensure that our DIMMs operated at or below their rated speed, removing them as a potential hindrance.

Overclocking the X3320 was a breeze up to a front-side bus speed of 435MHz. The chip didn’t even need any extra voltage, although as soon as we deviated from its stock 2.5GHz clock speed, CPU-Z reported that the core voltage had increased from 1.120V to 1.136V.

Our 435MHz front-side bus yielded a processor clock speed of 3.26GHz, which is a healthy 30% boost over the CPU’s default frequency. Heck, 3.26GHz is even faster than Intel’s flagship Core 2 Extreme QX9770, which runs at 3.2GHz and sells for an absolutely obnoxious $1500.
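The clock math here is easy to verify. A quick sketch, using the numbers from the text; the 3x memory divider is an inference consistent with the memory bus speed we reported:

```python
# Sanity-checking the overclock arithmetic. Numbers come from the article;
# the 3x memory divider is inferred from the reported 1305MHz memory bus.
multiplier = 7.5   # X3320's locked upper multiplier
stock_fsb = 333    # MHz, base clock of the 1333MHz quad-pumped bus
oc_fsb = 435       # MHz, highest stable front-side bus speed

stock_clock = stock_fsb * multiplier                  # ~2500MHz
oc_clock = oc_fsb * multiplier                        # 3262.5MHz, i.e. ~3.26GHz
boost = (oc_clock - stock_clock) / stock_clock * 100  # ~30%

print(f"Overclocked: {oc_clock / 1000:.2f}GHz ({boost:.1f}% over stock)")
print(f"Memory bus at a 3x divider: {oc_fsb * 3}MHz")
```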

Unfortunately, our X3320 just wasn’t stable beyond 3.26GHz. At 3.3GHz, Prime95 started spitting out errors in at least one of the four instances we ran to stress test the CPU. Since we were still using the chip’s default voltage, we cranked up the juice to see if that might help. But it didn’t. Even at as much as 1.4V, our X3320 just wasn’t 100% stable at 3.3GHz. We tried higher speeds, too, and although we could post and even get into Windows at up to 3.5GHz, the system crashed under load. Getting a quad-core CPU to boot into Windows is one thing, but having all four of its cores behave while under load is considerably more difficult, it would seem.

Just to be sure, we swapped the X3320 into an nForce 790i SLI motherboard to see if it would run any faster on an Nvidia platform. No such luck. The chip stalled out at 3.26GHz, just like on the Gigabyte X48 board.

To provide a glimpse of what kind of performance increase you can expect from a Xeon X3320 overclocked from 2.5 to 3.26GHz, I ran a few benchmarks at stock and overclocked speeds. When overclocked, the system ran with a 1305MHz memory bus, which is a little slower than the effective 1333MHz memory clock of our stock config.

That’s a healthy performance increase from what’s essentially a “free” overclock at default voltage. Not bad at all.

But what about operating temperatures? Surely there’s a penalty involved with pushing the X3320’s clock speed 30% higher than what Intel prescribes for the chip. We busted out Everest to monitor CPU temperatures at idle and under load and were a little surprised by the results.

At idle, there’s no difference in operating temperature between the two configurations. However, under load, the overclocked config runs about six degrees warmer—not much, all things considered.

Everest also reports individual core temperatures, and while these values were much higher than the reported “CPU temperature,” there wasn’t much difference between the stock and overclocked setups. Our stock-clocked Xeon reported core temperatures between 47° and 50°C at idle and between 63° and 67°C under load. When pushed to 3.26GHz, core temperatures rose to between 48° and 51°C at idle and between 67° and 71°C when subjected to our four-way Prime95 load.

Next, we used a Watts Up? Pro power meter to test total system power consumption, sans monitor and speakers, at the wall outlet.

There’s only a marginal difference in idle power consumption between our Xeon X3320 at 2.5 and 3.26GHz. Under a Prime95 load, however, the overclocked system pulls nearly 17W more power than our stock config. We’re more than willing to swallow a 9% increase in peak power consumption for a 30% boost in clock speed, though.
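Those two figures also pin down a rough absolute number. A back-of-the-envelope check, assuming the 17W delta and the 9% increase are both accurate:

```python
# If ~17W of extra draw amounts to a ~9% increase, the stock system's
# load draw works out to roughly 17 / 0.09, or about 190W at the wall.
delta_w = 17
increase = 0.09
print(f"Implied stock load draw: ~{delta_w / increase:.0f}W")
```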

Conclusions

I wasn’t quite sure what to expect from overclocking the Xeon X3320. Server and workstation processors have proven to be potent overclockers in the past, but quad-core chips present a greater challenge. System stability can suffer even if only one of those cores isn’t comfortable at a given speed. AMD’s Phenom processors can set independent clocks for each core, but that’s not possible with Intel’s Core 2 processors or the Xeon X3320.

Our off-the-shelf X3320 was perfectly stable up to 3.26GHz with its stock voltage and a large—but relatively inexpensive—Scythe Ninja cooler. Some of the chip’s cores were clearly capable of running at higher speeds, but as a whole, 3.26GHz was all she wrote. This 30% jump in clock speed isn’t the most impressive overclock we’ve seen, but it’s a decent margin for a quad-core chip. The fact that our X3320 didn’t require any extra voltage to hit 3.26GHz suggests there’s plenty of headroom to be exploited in the Penryn core, as well.

Even though we didn’t hit jaw-dropping clock speeds, this is exactly the kind of overclock I love to see as an enthusiast. Exotic water and phase-change coolers are marvels to behold, but I’m more interested in the “free” overclocking headroom that can be exploited with modest air cooling. And since plenty of affordable P35-based motherboards have no problem running at front-side bus speeds beyond 400MHz, you don’t need a fancy motherboard to push the Xeon X3320, either.

Whether the Xeon X3320 has more overclocking potential than its Core 2 Quad Q9300 desktop counterpart remains to be seen. If our experience is any indication, though, you should have no problem cranking the X3320 up to 3GHz and beyond.

Comments closed
    • provoko
    • 11 years ago

    Opteron 165 represent! Yes, I beat Dissonance by 100mhz (2.5ghz)! Haha.

    It’s too bad the Xeon didn’t overclock more, but great article.

    • ludi
    • 12 years ago

    Seeing as how I entered the PC enthusiast scene in 1997/1998, it is still a very odd feeling to find multicore Xeon processors selling at $300 and being reviewed for overclocking potential in a casual gaming station.

      • Krogoth
      • 11 years ago

      These “Xeons” are really just Core 2 chips that passed higher QA tests.

        • ludi
        • 11 years ago

        That’s been true for many Xeons over the years but those additional QA tests traditionally commanded a hefty premium.

    • MadManOriginal
    • 12 years ago

    Toldya in the recent ‘deals’ post that the multi would hold this thing back. I’d be as willing to bet it was a mobo limitation or setting as much as the CPU itself holding back the oc.

    • Sargent Duck
    • 12 years ago

    Thanks for showing the power consumption and temperature. While I have always overclocked my processors a little bit, this past year (while I was in school) I’ve forced my Opteron 170 (2GHz) to run at 1GHz. I just don’t need that much power for writing essays and surfing the ‘net.

    My next processor purchase will most likely be bought on the power consumption/performance concept instead of the usual price/performance.

    Seeing the difference in power consumption is very nice to see.

      • mattthemuppet
      • 12 years ago

      I don’t think you can go wrong with an E8200 or an E7*** chip (45nm E4*** equivalent) if you can wait – performance per watt is pretty impressive for those, particularly if you can’t make use of >2 cores.

      • Mikael33
      • 12 years ago

      If you run xp(vista has them by default) install the amd drivers which will enable CoolnQuiet- it will clock your chip down when you’re not using it heavily. My 1.8ghz opteron@ 2.7ghz gets clocked down to 1500mhz at idle.

        • Sargent Duck
        • 12 years ago

        Oh yeah, I’ve got the Cool’n’Quiet drivers installed. I use Rightmark to force it at 1GHz, because in the past 8 months at school, I haven’t used an application requiring anything higher.

        And Mattthemuppet, yeah, I’m thinking a new build in mid-summer, once I get some $$ saved.

    • ColeLT1
    • 12 years ago

    I have my x3350 running at 3.6ghz right now, and I hit the same wall at 425ish bus speed. This is because the VTT and GTL settings need to be adjusted. Bumping the VTT to 1.3v @ 95-95-108 GTL’s, I was able to do 450*8 with low voltage on the chip (1.3vCore), but my NB is in the 1.55 range. These chips love high speeds, and do it at low voltage, but you have to use a nice board with VTT and GTL adjustments to be able to push them. I would try to get to 475*8, but the VTT absolute max from the intel datasheet is 1.45v for 45nm’s, so I may just stay where I am.

    The duals do not have this problem, most of the time you can leave vtt at 1.1, but since the quads use the fsb to talk between the 2 sets of dual cores, the GTL’s are very critical if you want a high fsb. Also, I am running a DFI x48, and as it turns out, if you want to overclock one of these chips the P35’s handle the quads slightly better.

      • crazybus
      • 12 years ago

      QFT.

      Note that AFAIK the particular Gigabyte X48 mobo Geoff tested with lacks AGTL+ reference voltage adjustments. I can’t figure that one out since I’m pretty sure Gigabyte puts that in the BIOS of most of their P35 boards (not including my DS3L).

      With tweaking most of the 790i boards I’ve seen can get up to 490-500mhz fsb with a 45nm quad. I have a hard time believing this chip maxed out at 3.2ghz (especially since extra voltage did nothing).

    • impar
    • 12 years ago

    Greetings!

    Regarding temperatures, the value that matters is the Core temperature, not the CPU temperature.

    Think about it, ambient temperature would have to be 5-10ºC for the idle to be reported as 20ºC.

    • wallyoz
    • 12 years ago

    I am afraid your Xeon CPU cooler theory leaks water.

    Purchased today new shipment in Australia Boxed quad-core Desktop Q9450 SLAWR.

    Inside exact same low profile cooler.

    Maybe it has more to do with 45nm power consumption generally which doesn’t require the large cooler.

      • eitje
      • 12 years ago

      the X3220 supposedly has 95W TDP.

        • ludi
        • 12 years ago

        TDP is somewhat like having four standard shoebox sizes in which to sell 12 common sizes of shoes.

          • eitje
          • 12 years ago

          We should just ask Geoff to test the two heatsinks on another motherboard, and let us know if there’s a difference in CPU temps. 🙂

          Hey, Diss, can you do the aforementioned, please? 🙂

          My guess – smaller HSF means cheaper HSF. It’s a cost-saving measure, and the two HSFs perform similarly.

      • wallyoz
      • 12 years ago

      My Q9450 SLAWR default voltage is 1.172V

    • ssidbroadcast
    • 12 years ago

    This was a cool little article. Next time can we see a similar article overclocking the nearest equivalent Opteron?

    • eitje
    • 12 years ago

    i love these kinds of quick-hit pieces.

    can i request another one-pager, where you attempt undervolting the same proc and show the results from that? 🙂

    • Chryx
    • 12 years ago

    retail E8x00 chips are coming with a similarly slimline HSF guys. s[

      • Voldenuit
      • 12 years ago

      I have a E8400 (dual core), and the retail heatsink is the same height as the Xeon X3320’s, except it does not have a copper core.

      Not sure if the Q9xxx series come with copper cores or not.

      But yes, it was promptly thrown into the parts bin and replaced with a Thermalright Ultima 90 :p

    • jodiuh
    • 12 years ago

    So what are the chances of someday hitting 4.21jiggawatts on a Q9450? New Bios? Chipset? This one is sad because the X3110 sounds SOO much more fun to play with.

    • Forge
    • 12 years ago

    Your overclocking results now have me all paranoid as well. I took the Q9450 to 3.2GHz/1600FSB with no effort, but haven’t been able to go higher due to memory limits (P35 = memory runs at least 1/2 of FSB speed, so 1600=DDR2-800).

    Now I’m starting to wonder if there’s anything more to be had. Buying 1066 to replace my 800 and finding it was a waste would be painful.

      • zqw
      • 12 years ago

      On 45nm quads, chip FSB wall is usually your limiting factor. You should find yours before dropping the extra $$$ on premium RAM. Just run your existing RAM with relaxed timings and/or extra volts.

        • Forge
        • 12 years ago

        Yeah, unfortunately my DDR2-800 doesn’t want to clock much of any. Tried going from 800MHz to 850MHz last night. I took out half my ram so that it was two 1GB sticks. Loosened timings way out, from 5-5-5-18-1T to 7-7-7-24-2T. Voltage from 1.85V to 2.1V.

        Errors in memtest86+ immediately. Failed to boot XP64.

        I just can’t win.

    • Forge
    • 12 years ago

    FWIW, I got that HSF but minus the copper core with my E8400. I doubt it could have done much.

    Also, I haven’t had a 6MB quad to check against, but judging by how much the 12MB on my Q9450 helps versus the 8MB on my Q6600, I’d strongly encourage everyone to drop a few dollars more and get a less-FSB-starved quad.

    • Nitrodist
    • 12 years ago

    Is it just me, or are the images stretched?

      • 5150
      • 12 years ago

      Your head is just squished.

        • BoBzeBuilder
        • 12 years ago

        I think my head may be squished too.

        • cynan
        • 12 years ago

        No, no, you’ve got it all wrong:

        You crush heads and squish faces. (Anyone?)

        You can’t just mix and match willy-nilly.

    • UberGerbil
    • 12 years ago

    My, the .-[

      • mattthemuppet
      • 12 years ago

      I don’t think you can argue with getting a top of the range chip for $300 – even if you never use the headroom, it’s nice to know it’s there.

      • oldDummy
      • 12 years ago

      Without utilizing uber ram, and everything else at default, the Q9300 is very happy with 3GHz.

      Seems kind of silly to run it at anything less.
