Intel’s G965 Express chipset

AS AMD AND NVIDIA trade blows in a seemingly perpetual but always animated battle for graphics dominance, it’s easy to forget that the 800-pound gorilla sitting in the corner still commands the lion’s share of the market. This unlikely king of the jungle has risen to power not on the strength of ultra-high-end GPUs strapped to elaborate cooling systems, nor on the back of popular mid-range products that offer unparalleled value for money. No, it’s the ubiquity of Intel’s integrated graphics chipsets that has allowed it to carve out the largest share of the desktop graphics market.

The latest addition to Intel’s integrated graphics arsenal is the Graphics Media Accelerator X3000, which can be found in the company’s G965 Express chipset. This isn’t your average integrated graphics core, though. Intel went all out with the X3000, crafting a graphics core with a unified shader architecture that sports eight Shader Model 3.0-compliant scalar execution units and a blistering 667MHz clock speed. Combine that with a Clear Video processing engine and support for HDMI output with HDCP, and you have quite an attractive graphics proposition for budget systems.

Can the X3000-equipped G965 Express hold its own against competing chipsets from AMD and Nvidia? Has Intel produced its first truly competitive integrated graphics core? Read on to find out.

A unified approach to integrated graphics
Intel’s GMA X3000 graphics core sits at the heart of the G965 Express north bridge, and it’s quite a departure from IGPs of old. Like the G80 graphics processor that powers Nvidia’s high-end GeForce 8800 series, the X3000 has a unified shader architecture populated with eight scalar execution units that can perform both pixel and vertex operations. In such an architecture, dynamic load balancing can ensure the most efficient use of the chip’s execution units based on the demands of a given scene, be it biased toward pixel shading calculations, vertex calculations, or a balance of the two.

Intel says it designed the GMA X3000 to be compliant with DirectX 10’s Shader Model 4.0. That said, its status as a DX10-compliant part is questionable. For now, the GMA X3000’s internal architecture manifests itself as a DirectX 9-class part that’s fully compliant with the Shader Model 3.0 spec. Vertex texture fetch, instancing, and flow control are all implemented in hardware, 32 bits of floating-point precision are available throughout, and shader programs are supported up to 512 instructions in length.

Integrated graphics processors typically lack dedicated vertex processing hardware, instead preferring to offload those calculations onto the CPU. As a unified architecture, the GMA X3000 is capable of performing vertex processing operations in its shader units, but it doesn’t do so with Intel’s current video drivers. Intel has a driver in the works that implements hardware vertex processing (which we saw in action at GDC), but it’s not yet ready for public consumption.

Intel says the question of DirectX 10 support for the GMA X3000 is a driver issue, as well. Intel could release a driver to enable DX10 support, but may never do so. Although this may sound like a brewing scandal at first blush, it’s almost assuredly not. Intel hasn’t sold the G965 as a DX10-ready solution, and even if the IGP could replicate the behavior and produce the output required to meet the DX10 specification, it’s probably not powerful enough to do so in real time. Given that, we would be surprised to see Intel release DirectX 10 drivers for the GMA X3000 to the public. When addressing the DX10 question, Intel simply points out that this shader architecture is a good basis for future products with proper DX10 support.

Here’s a quick look at how the GMA X3000 compares with the current DX9-class competition, with some caveats to follow:

                      Radeon X1250  GeForce 6150 SE  GeForce 6150  GMA X3000
Pixels per clock      4             2                2             1.6
Textures per clock    4             2                2             3.2
Shader model support  2.0b          3.0              3.0           3.0
Core clock speed      400MHz        425MHz           475MHz        667MHz
TV encoder            Y             Y*               Y             Y
HDCP support          Y             N                N             Y
Video processing      Avivo         N                PureVideo     Clear Video

The first caveat we should mention involves shader execution units, which we’ve not even included in the table above because simple comparisons between the GMA X3000 and the others are tricky. The eight shader execution units in the GMA X3000 may sound like a lot, but those execution units are scalar—they can only operate on one pixel component at a time. A typical pixel has four components (red, green, blue, and alpha), so the GMA X3000 can really only process two complete pixels per clock cycle. The GeForce 6150 has two traditional pixel shader processors, so it can handle just as many pixels per clock, and the Radeon X1250 IGP in the AMD 690G has four pixel shader processors, for twice the per-clock capacity.
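The pixel-throughput arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is a simplification that treats each scalar unit as retiring one pixel component per clock and each traditional pixel shader processor as retiring one full pixel per clock, per the discussion above:

```python
# Effective pixel throughput of scalar vs. traditional vec4 shader units.
# A scalar unit handles one pixel component (R, G, B, or A) per clock;
# a traditional pixel shader processor handles a full four-component pixel.

COMPONENTS_PER_PIXEL = 4  # red, green, blue, alpha

def scalar_pixels_per_clock(units):
    return units / COMPONENTS_PER_PIXEL

def vec4_pixels_per_clock(units):
    return units

print(scalar_pixels_per_clock(8))  # GMA X3000: 2.0 complete pixels per clock
print(vec4_pixels_per_clock(2))    # GeForce 6150: 2 pixels per clock
print(vec4_pixels_per_clock(4))    # Radeon X1250: 4 pixels per clock
```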

These things get even more complex when you look under the covers, and a whole host of qualifications and mitigating circumstances become apparent. For instance, the GMA X3000’s scalar architecture could allow it to allocate execution resources more efficiently than the two more traditional architectures, giving it a performance edge. On the flip side, the individual pixel shader processors in the Nvidia and AMD IGPs are relatively rich in both programmable and special-purpose execution resources, and they may deliver more FLOPS per clock than the GMA X3000, depending on the instruction mix. Also, according to an intriguing discussion here, Intel looks to be using the GMA X3000’s execution units to handle triangle setup, a chore assigned to dedicated hardware in the other IGPs. Sharing can be good, but too much sharing can drift into pinko-commie excess. Sharing execution resources with both vertex shading and triangle setup could overtax the X3000’s pixel shading capacity.

Then again, the chip does have more clock cycles to work with. Running at 667MHz, the GMA X3000’s graphics core is clocked a full 67% higher than the Radeon X1250 and roughly 40% higher than the GeForce 6150, the fastest member of the GeForce 6100 family.

We expect, though, that not all of the GMA X3000 runs at 667MHz, as the strange numbers in the “pixels per clock” and “textures per clock” entries in the table above suggest. Intel says the G965 can compute two raster operations per clock maximum, but only for clears. For any other 3D raster op, it’s limited to 1.6 pixels per clock. Similarly, it can process depth operations at 4 pixels per clock, but is limited to 3.2 pixels per clock for single, bilinear-filtered textures. What we may be seeing here is the result of different clock domains for the shader processors and the IGP’s back end; the GeForce 8800 has a similar arrangement. Whatever the case, these numbers work out to theoretical fill rates of 1067 Mpixels/s and 2133 Mtexels/s. That puts the G965 ahead of the AMD 690G (1600 Mtexels/s) and the GeForce 6150 (950 Mtexels/s) in peak texturing capacity.
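The fill rate figures above follow directly from multiplying core clock by per-clock throughput, using the numbers from the table and text:

```python
# Theoretical fill rate = core clock (MHz) x operations per clock,
# yielding megapixels or megatexels per second.

def fill_rate(clock_mhz, ops_per_clock):
    return clock_mhz * ops_per_clock

# GMA X3000 at 667MHz: 1.6 pixels/clock, 3.2 bilinear texels/clock
print(round(fill_rate(667, 1.6)))  # 1067 Mpixels/s
print(round(fill_rate(667, 3.2)))  # 2134 Mtexels/s (2133 as quoted, from the nominal clock)

# Competing IGPs, one bilinear texel per pixel unit per clock
print(fill_rate(400, 4))  # AMD 690G / Radeon X1250: 1600 Mtexels/s
print(fill_rate(475, 2))  # GeForce 6150: 950 Mtexels/s
```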

The X3000 looks impressive in the output department, as well, packing support for DVI, HDMI, and VGA outputs alongside a TV encoder. Additional outputs are also supported via the chip’s sDVO (Serial Digital Video Output) interface, although motherboard makers will ultimately decide which of the X3000’s various output options will be made available to end users.

Deep inside the G965 Express north bridge lurks a unified shader architecture

Complementing the X3000’s generous assortment of video outputs is a Clear Video processing engine that offers advanced de-interlacing algorithms and a measure of color correction. Clear Video can also accelerate VC-1 high-definition video decoding, allowing it to shoulder some of the burden associated with WMV HD video playback. Hardware assist is supported for high-definition MPEG2 video playback, as well.

Dynamic Video Memory Technology (DVMT) rounds out the X3000’s feature set, enabling the chip to dynamically allocate system memory as needed. DVMT works by dedicating a small portion (in this case 1MB or 8MB, configured through the BIOS) of system memory to the graphics core at all times. Users can then elect to cordon off an additional chunk of system memory for the graphics core or allow DVMT to allocate additional video memory on its own as needed.

Riding the G965 Express
Although the GMA X3000 is undoubtedly the star of Intel’s integrated graphics chipset, the G965 Express packs plenty of other goodies under the hood. At the north bridge, you’ll find support for front-side bus speeds up to 1066MHz. The G965 also features a dual-channel DDR2 memory controller that supports speeds up to DDR2-800. The G965 Express will face off largely against chipsets built for AMD processors. Those processors feature on-die memory controllers, which means their accompanying chipsets don’t need memory controllers of their own.

                             690G      GeForce 6150 SE  GeForce 6150    G965 Express
CPU interconnect             1GHz HT   1GHz HT          1GHz HT         1066MHz FSB
Memory controller            NA        NA               NA              dual-channel DDR2-800
PCI Express lanes            24*       18               17              16
Chipset interconnect         PCIe x4   NA               HyperTransport  DMI
Peak interconnect bandwidth  2GB/s     NA               8GB/s           2GB/s

If you’d prefer to avoid the G965’s integrated graphics, the north bridge is equipped with enough PCI Express lanes for a standard x16 slot. The G965 doesn’t actually have as much north bridge PCIe connectivity as competing chipsets from AMD and Nvidia, but it more than makes up the difference at the south bridge.

Getting to the south bridge involves negotiating the chipset’s Desktop Management Interface (DMI) interconnect, which offers 2GB/s of bandwidth. That doesn’t quite match the 8GB/s of bandwidth available with Nvidia’s GeForce 6150 chipset, but Intel uses the very same interconnect in its mid-range and high-end P965 and 975X chipsets, so it should be able to handle whatever the G965 throws at it.

                        SB600      nForce 430   ICH8
PCI Express lanes       4*         0            6
Serial ATA ports        4          4            6
Peak SATA data rate     300MB/s    300MB/s      300MB/s
Native Command Queuing  Y          Y            N
RAID 0/1                Y          Y            N
RAID 0+1/10             10         0+1          N
ATA channels            1          2            0
Max audio channels      8          8            8
Audio standard          AC’97/HDA  HDA          AC’97/HDA
Ethernet                N          1000/100/10  N
USB ports               10         10           10

The G965 Express north bridge is typically paired with Intel’s ICH8 south bridge, which is a feature-reduced version of the ICH8R common on mid-range and high-end boards. Losing the R, in this case, costs you support for multi-drive RAID arrays. AHCI support is also missing from the vanilla ICH8, and since Intel implements Native Command Queuing (NCQ) through AHCI, you lose that, as well.

We understand there’s little need for RAID support in a budget integrated graphics chipset, but we wish Intel hadn’t axed AHCI. If implemented well, Native Command Queuing can be a very good thing, and it’s a much smarter way to access a hard drive. We also wish that Intel hadn’t dropped ATA support from its ICH8 series south bridge chips. Don’t get us wrong—we’re eager to see the end of awkward IDE ribbons. However, Intel jumped the gun a little on this one. When the ICH8 was introduced, SATA optical drives were few and far between, and only a few models are widely available even now. Motherboard makers have had to resort to third-party storage controllers to retain ATA support, and the JMicron chips most commonly used have compatibility problems with older versions of Ghost and some boot images.

Intel makes up for the ICH8’s missing RAID, AHCI, and ATA support with a couple more SATA ports than the competition. However, that doesn’t mean motherboard manufacturers will bother implementing all of them; most of the G965 Express boards we’ve seen have only four SATA ports.

On the PCI Express front, the ICH8 serves up an additional six lanes to complement the 16 available at the north bridge. That leaves the chipset with plenty of connectivity options for expansion slots and onboard peripherals. At least one of those PCIe lanes will probably need to be dedicated to an onboard Gigabit Ethernet controller, since Intel doesn’t integrate one into the ICH8.

Update: Some of Intel’s datasheets incorrectly indicate that the ICH8 lacks support for AHCI, but Intel tells us that’s not the case. The ICH8 apparently does support AHCI, and thus NCQ as well, but an AHCI driver is required. That driver is supposedly built into Windows Vista, but we’re awaiting confirmation on whether an equivalent exists for Windows XP. In either case, our ICH8-equipped P5B-VM motherboard lacks the necessary BIOS switch to enable AHCI.

Asus’ P5B-VM motherboard
The G965 Express arrived at my Benchmarking Sweatshop riding Asus’ P5B-VM motherboard. This board is currently selling for between $90 and $138 online, and it’s a pretty typical budget Micro ATX board.

With few onboard peripherals to worry about, the P5B-VM’s layout is quite roomy. We do wish Asus had put the auxiliary 12V power connector up along the top edge of the board, though. That position would reduce cable clutter around the socket and rear chassis exhaust, and the power connector wouldn’t have to be any farther away from the socket.

Even with a 667MHz integrated graphics core, the G965 Express makes do with a relatively modest passive heatsink. The entire board is passively cooled, so it won’t add to overall system noise levels. You won’t have to worry about a tiny, whiny chipset fan succumbing to premature and sudden failure, either.

A low-profile heatsink on the south bridge leaves plenty of clearance for longer expansion cards, and the SATA ports are even low enough on the board to avoid conflict with double-wide graphics cards. Note that there are only four SATA ports, though—two of the ICH8’s six ports are left untapped on the P5B-VM, and we just hate to see wasted potential.

From this angle, you can also see that the board’s lowest expansion slot, a PCIe x4, is an open-ended slot that should be able to accommodate longer cards. Don’t get too excited about running a pair of PCIe graphics cards, though. Although the slot can handle longer cards, the motherboard battery bracket gets in the way.

There’s one more Serial ATA port on the backplane, this time of the eSATA variety. This port is connected to the board’s auxiliary JMicron JMB363 storage controller, which also powers the IDE port and an additional internal SATA port.

Note that, despite the G965’s support for both DVI and HDMI output, the P5B-VM has just a single VGA monitor output. Budget integrated graphics motherboards tend to cater to low-end systems, and DVI and HDMI outputs apparently aren’t a priority for those segments. Digital S/PDIF audio outputs didn’t make the cut, either.

Our testing methods
The G965 Express’ primary competition comes from integrated graphics chipsets from AMD and Nvidia, neither of which is compatible with Intel processors. That eliminates a strictly “apples-to-apples” chipset comparison, but we can still approach the issue from a platform perspective. Mmm… platformization.

You can get your hands on a motherboard based on AMD’s 690G or Nvidia’s GeForce 6150 SE and 6150 chipsets for between $80 and $100. Boards based on the G965 chipset cost a little more—starting at around $100—but they’re close enough. Things get more interesting on the processor front, where the Core 2 Duo E6400 can be had for as little as $217. That’s a little cheaper than what’s listed on Intel’s official processor price list, but only by $7. With an official price of $232, AMD’s Athlon 64 X2 5200+ looks like the best match for the E6400. If you look around, you can also find the 5200+ for just a smidge over $200, making it an even better deal.

We’ve tried to get our test systems as close as possible on price, but the Intel platform is more expensive, in part thanks to some recent and selective processor discounting on AMD’s part.

Integrated graphics are the raison d’être for the platforms we’ll be comparing today, so we’ve used their respective IGPs throughout our testing. In all cases, the IGPs were configured to use 256MB of system memory.

Since the GeForce 6150 SE is the newest member of the GeForce 6100 family, we’ve run it through our full suite of application and peripheral performance tests. Time constraints prevented us from giving our GeForce 6150 platform the same treatment. We’ve had to limit our GeForce 6150 testing to our application and graphics performance tests, but because it shares the same basic nForce 430 core logic as the single-chip GeForce 6150 SE, its I/O performance should be comparable.

Of course, we used the ForceWare 93.71 graphics drivers that magically transform the MCP61 from a GeForce 6100 to a GeForce 6150 SE. For the sake of brevity, we’ll be referring to the GeForce 6150 SE/nForce 430 and GeForce 6150/nForce 430 chipset combos simply as the GeForce 6150 SE and GeForce 6150, respectively.

Unfortunately, MSI’s AMD 690G-based K9AGM2 motherboard doesn’t support memory voltage control, so we were unable to give our Corsair DIMMs the 1.9V they require to run at their usual 5-5-5-12-1T timings. Tweaking options are usually few and far between on budget Micro ATX boards, so this wasn’t entirely surprising. The tightest timings we could wring from our DIMMs on the MSI board were 5-6-5-18-2T, which is a little loose by enthusiast standards, but quite reasonable for the budget memory typically found in systems with integrated graphics. We set the memory on our GeForce 6150 SE- and 6150-based Asus motherboards to match.

You’ll notice that we’ve also done most of our testing in Windows XP. AMD, Intel, and Nvidia all offer Vista drivers for their respective integrated graphics platforms, each of which can handle the operating system’s fancy Aero interface. However, a number of the applications in our chipset test suite aren’t yet compatible with Microsoft’s latest OS, so we’ll be kicking it old-school with XP.

All tests were run at least twice, and their results were averaged, using the following test systems.

Processor Core 2 Duo E6400 2.13GHz Athlon 64 X2 5200+ 2.6GHz
System bus 1066MHz (266MHz quad-pumped) HyperTransport 16-bit/1GHz
Motherboard Asus P5B-VM MSI K9AGM2 Asus M2N-MX Asus M2NPV-VM
Bios revision 0613 1.1B1 0302 0702
North bridge Intel G965 Express AMD RS690 Nvidia MCP61G Nvidia GeForce 6150
South bridge Intel ICH8 AMD SB600 Nvidia nForce 430
Chipset drivers Catalyst 7.2 ForceWare 11.09 ForceWare 9.35
Memory size 2GB (2 DIMMs) 2GB (2 DIMMs) 2GB (2 DIMMs) 2GB (2 DIMMs)
Memory type Corsair TWIN2X2048-6400PRO DDR2 SDRAM at 800MHz Corsair TWIN2X2048-6400PRO DDR2 SDRAM at 742MHz
CAS latency (CL) 5 5 5 5
RAS to CAS delay (tRCD) 6 6 6 6
RAS precharge (tRP) 5 5 5 5
Cycle time (tRAS) 18 18 18 18
Command rate 2T 2T 2T 2T
Audio codec Integrated ICH8/AD1988 with drivers Integrated SB600/ALC888 with Realtek HD 1.59 drivers Integrated MCP61G/AD1986A with drivers Integrated nForce 430/AD1986A with drivers
Graphics Integrated GMA X3000 with 14.27 drivers Integrated Radeon X1250 with Catalyst 7.2 drivers Integrated GeForce 6150 SE with ForceWare 93.71 drivers Integrated GeForce 6150 with ForceWare 93.71 drivers
Hard drive Western Digital Caviar RE2 400GB
OS Windows XP Professional
OS updates Service Pack 2

A special thanks goes out to NCIX for hooking us up with the Core 2 Duo E6400 we used for testing. I’ve been shopping at NCIX for years, and have always found them to have great prices and excellent service, especially when compared with other Canadian retailers.

Thanks to Corsair for providing us with memory for our testing. 2GB of RAM seems to be the new standard for most folks, and Corsair hooked us up with some of its 1GB DIMMs for testing.

Also, all of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

We used the following versions of our test applications:

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance
These integrated graphics processors commandeer a portion of system memory for their own use, and that makes our memory subsystem tests all the more interesting.

The results of these tests are mixed. In Sandra, the G965 turns in the least bandwidth of the lot. Intel bounces back in Cachemem’s bandwidth test, though, offering read bandwidth that easily outpaces that of the AMD-based competition. And in Cachemem’s latency test, the tables turn again, with AMD’s on-die memory controller providing much lower latency than the memory controller inside the G965 north bridge.

The following Cachemem latency graphs are a little indulgent, but they do a good job of highlighting access latency across various block and step sizes. Our Athlon 64 X2 5200+ runs out of on-chip cache after a block size of 1024KB, so you’ll want to pay more attention to the memory access latencies that follow with larger block sizes. The Core 2 Duo doesn’t run out of L2 cache until block sizes eclipse 2048KB, so you’ll want to shift your attention one block size over for the G965.

I’ve arranged the following graphs in order of highest to lowest latency with a common Z-axis to aid comparison.

The Athlon 64 platform definitely has a memory latency advantage, but the Core 2 Duo’s larger cache should help it mask the G965’s higher access latencies.

Cinebench rendering

The G965 falls off the pace a little in Cinebench’s rendering test, which tends to favor Athlon 64 processors. Intel doesn’t do particularly well in the OpenGL shading tests, either. There, it’s just dominated by Nvidia’s GeForce 6100s.

Sphinx speech recognition

But Intel strikes back with a vengeance in Sphinx, where it easily slips ahead of the competition.


WorldBench overall performance
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results alongside the results from some of our own application tests.

Intel holds its lead into WorldBench, where it beats the fastest AMD platform by a couple of points.

Multimedia editing and encoding

MusicMatch Jukebox

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Of course, results for WorldBench’s individual application tests are mixed. The G965 platform does well with Windows Media Encoder and VideoWave Movie Creator, but it’s a little slower in Premiere and MusicMatch Jukebox.

3D rendering

3ds max

The G965 consistently trails in WorldBench’s 3ds Max tests, as well, although you’d have to be crazy to do 3D modeling with integrated graphics.

Image processing

Adobe Photoshop

ACDSee PowerPack

Intel bounces back in WorldBench’s image processing tests, where the G965 platform manages to lead the field.

Multitasking and office applications

Microsoft Office


Mozilla and Windows Media Encoder

Results of the office and multitasking tests aren’t as rosy, though. The G965 doesn’t fare well with Mozilla, and that affects performance in two tests.

Other applications



Nero and WinZip round out the WorldBench suite, and the G965 Express leads the way in both tests. The margin of victory in Nero is particularly striking.

3D performance
Integrated graphics isn’t known for exceptional 3D performance, but can the GMA X3000’s unified shader architecture surprise?

Yes, at least to start. The G965 turns in an inspired performance in 3DMark06’s game tests, and that contributes to a very impressive overall score.

Digging down into the results of 3DMark06’s feature tests gives us a few clues as to why the G965 Express fares so well. The GMA X3000 boasts the highest single-texturing fill rate of the lot, although its multi-texturing fill rate falls short of the AMD 690G’s integrated Radeon X1250 despite the Intel chip’s clock speed advantage.

Pixel processing power is clearly X3000’s real strength, and it easily whips the competition in 3DMark06’s pixel shader test. However, without drivers capable of using the X3000’s hardware to process vertex loads, the G965 Express’ performance in 3DMark06’s vertex shader tests isn’t spectacular.

We could have limited our game testing to two- and three-year-old titles that might have been a better match for the horsepower of our chipsets’ integrated graphics cores. However, even casual gamers want to play new releases, so we rounded up a series of more recent titles to see how playable they were on the G965.

Things didn’t get off to a good start, with the G965 failing to run both Battlefield 2 and Oblivion. The games would load their respective splash screens—briefly—before dumping us back to the desktop. That left us with F.E.A.R., Guild Wars, and Lego Star Wars II. For the latter two, we used FRAPS to log frame rates over 60 seconds of gameplay. Average and low frame rates were then calculated, and we’ve presented the mean of the averages and the median of the low scores. With F.E.A.R., we used the game’s internal performance test, which provides average and low frame rates.
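The FRAPS aggregation described above can be sketched in a few lines; the session numbers below are hypothetical placeholders, not measured results:

```python
from statistics import mean, median

# Each 60-second FRAPS run yields an average and a low frame rate.
# We report the mean of the averages and the median of the lows.
sessions = [  # hypothetical logged runs, not actual benchmark data
    {"avg": 24.1, "low": 11},
    {"avg": 25.6, "low": 9},
    {"avg": 23.8, "low": 12},
]

reported_avg = mean(s["avg"] for s in sessions)
reported_low = median(s["low"] for s in sessions)
print(round(reported_avg, 1), reported_low)  # 24.5 11
```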

All games were configured with the same in-game detail levels, with the exception of Lego Star Wars II. The G965’s graphics options are rather limited in that game, and we weren’t presented with the options to enable bump mapping, bloom filtering, and plastic effects. We went ahead and enabled those effects on the 690G and GeForce 6150 chipsets.

Even with lower in-game detail levels, the G965 still gets creamed in Lego Star Wars II. Things don’t get much better in Guild Wars, where the X3000 struggles to keep up with the GeForce 6150 SE. At least the chipset’s performance in F.E.A.R. is encouraging, though the G965 still can’t catch the AMD 690G.

HQV DVD playback quality
The HQV benchmark is a DVD designed to test the image quality of televisions, monitors, and DVD players with a series of specific feature tests. It can also be a handy tool to evaluate how a graphics card’s video processor handles tasks like de-interlacing, motion correction, antialiasing, and film cadence detection. We tested HQV using WinDVD 8 under Windows XP and with Windows Media Player under Windows Vista. On the 690G, we tested with a newer version of AMD’s graphics driver that contains video playback enhancements that will appear in the Catalyst 7.3 driver release.

Enabling hardware acceleration in WinDVD 8 with the G965 Express caused DVD playback to stutter and stall, preventing us from actually watching any of HQV’s tests. To get a WinDVD score for the G965, we had to test with hardware acceleration disabled.

And what a difference disabling hardware acceleration makes. With the G965’s Clear Video features taken out of the equation, the chipset’s HQV scores are dismal at best. The chipset does much better in Windows Vista, where its newer drivers had no problems playing back the HQV benchmark. In fact, were it not for persistent jaggies in HQV’s film cadence tests, the G965 Express could have added another 40 points to its Vista playback score.

Incidentally, at this year’s Game Developers Conference, Intel showed the G965 running HQV’s “Weaving flag” scene, which tests a graphics processor’s ability to smooth jagged edges. In Intel’s demo, the G965 was run side-by-side with a Radeon X1600 discrete graphics card, with the former doing a much better job of eliminating jagged edges. In our testing—at least in Vista—the G965 executed the weaving flag test perfectly for 10 points. The AMD 690G scored a zero, as did the GeForce 6150 SE, with the GeForce 6150 managing a 5. Intel seems particularly adept at smoothing jagged edges, as the G965 scored well in all of HQV’s jaggies tests. Intel has work to do on other aspects of its video performance, though, as our overall HQV scores indicate.

Video playback
WMV HD testing was conducted in Windows Media Player 10 with the Terminator 3 DVD trailer. H.264 testing was done with QuickTime 7 and the Hot Fuzz movie trailer. We logged CPU utilization for the first minute of video playback and have presented an average of those results.

The G965 Express consumes the fewest CPU cycles with 720p content, but CPU utilization rises comparatively when higher resolution 1080p clips are used. 1080p WMV HD playback is also very choppy on the G965—bad enough that we wouldn’t consider it watchable. With CPU utilization sitting at 24% in that test, a lack of CPU resources clearly isn’t the issue, so it’s likely that the Clear Video processing engine is at fault.

Serial ATA performance
The Serial ATA disk controller is one of the most important components of a modern core logic chipset, so we threw each platform a selection of I/O-intensive storage tests.

We’ll begin our storage tests with IOMeter, which subjects our systems to increasing multi-user loads. Testing was restricted to IOMeter’s workstation and database test patterns, since those are more appropriate for desktop systems than the file or web server test patterns.

Despite the ICH8’s lack of command queuing, the G965 manages to largely keep pace with the GeForce 6150 SE when we look at IOMeter transaction rates.

The G965 is competitive in response times, as well.

CPU utilization is low across the board in IOMeter.

iPEAK multitasking
We developed a series of disk-intensive multitasking tests to highlight the impact of command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.

The G965 can’t decide whether it wants to be at the front or middle of the pack in our iPEAK tests. Finishing second in six of nine tests, the G965 never falls to last place. Our iPEAK multitasking workloads are a great test case for command queuing, so it’s likely the G965 would have fared better had it been paired with an ICH8R.

SATA performance
We used HD Tach 3.01’s 8MB zone test to measure basic SATA throughput and latency.

Serial ATA performance in HD Tach is pretty close between all three platforms. Even CPU utilization is within HD Tach’s +/- 2% margin of error in that test.

ATA performance
ATA performance was tested with a Seagate Barracuda 7200.7 ATA/133 hard drive using HD Tach 3.01’s 8MB zone setting.

Since it doesn’t have an integrated IDE controller, the G965 has to lean on an auxiliary JMicron JMB363 chip. Performance is competitive, though.

USB performance
Our USB transfer speed tests were conducted with a USB 2.0/Firewire external hard drive enclosure connected to a 7200RPM Seagate Barracuda 7200.7 hard drive. We tested with HD Tach 3.01’s 8MB zone setting.

The G965 rips through our USB tests, although its CPU utilization is a little higher than that of the others.

3D Audio performance

Audio codec chips and their associated drivers are responsible for handling positional audio, so the chipset tends not to affect performance in 3D audio tests. Here, we see performance aligning along codec lines—implementations from Analog Devices tend to consume fewer CPU resources than those from Realtek.

Recently, we discovered that Realtek’s current HD audio drivers don’t properly support EAX occlusions and obstructions. This renders some games, such as Battlefield 2, all but unplayable with EAX effects enabled, and it’s something to keep in mind if you’re considering using Realtek-based integrated audio.

Ethernet performance
We evaluated Ethernet performance using the NTttcp tool from Microsoft’s Windows DDK. The docs say this program “provides the customer with a multi-threaded, asynchronous performance benchmark for measuring achievable data transfer rate.”

We used the following command line options on the server machine:

ntttcps -m 4,0, -a

..and the same basic thing on each of our test systems acting as clients:

ntttcpr -m 4,0, -a

Our server was a Windows XP Pro system based on Asus’ P5WD2 Premium motherboard with a Pentium 4 3.4GHz Extreme Edition (800MHz front-side bus, Hyper-Threading enabled) and PCI Express-attached Gigabit Ethernet. A crossover CAT6 cable was used to connect the server to each system. The boards were tested with jumbo frames disabled.
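For reference, here is how the two invocations fit together. This is only a sketch: the receiver address below is a placeholder (our rig used a direct crossover link), and our reading of the switches is that `-m threads,cpu,ip` maps worker threads to a CPU and target address while `-a` enables asynchronous I/O.

```shell
# Placeholder address; substitute whatever the receiving NIC
# on the test system is actually configured as.
RECEIVER_IP="192.168.0.2"

# Sender side: ntttcps with four threads on CPU 0, async I/O.
SEND_CMD="ntttcps -m 4,0,$RECEIVER_IP -a"

# Receiver side mirrors the sender's thread/CPU mapping.
RECV_CMD="ntttcpr -m 4,0,$RECEIVER_IP -a"

# Print rather than run, since NTttcp is a Windows DDK tool.
echo "$RECV_CMD"
echo "$SEND_CMD"
```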

Our G965 board relies on a Realtek 8111B Gigabit Ethernet controller. When paired with a PCI Express interface, third-party Ethernet chips can provide exceptionally good throughput. However, the G965 does finish with higher CPU utilization than either AMD platform.

PCI Express performance
We used the same ntttcp test methods from our Ethernet tests to examine PCI Express throughput using a Marvell 88E8052-based PCI Express x1 Gigabit Ethernet card.

There’s plenty of PCIe bandwidth for our GigE networking card, although again, we see the G965 exhibiting slightly higher CPU utilization during the test.

PCI performance
To test PCI performance, we used the same ntttcp test methods and a PCI-based VIA Velocity GigE NIC.

The G965’s slightly higher CPU utilization continues when we look at PCI performance. Here, though, there’s more of a difference in throughput, with the Intel chipset leading the way.

Power consumption
We measured system power consumption, sans monitor and speakers, at the wall outlet using a Watts Up power meter. Power consumption was measured at idle and under a load consisting of a multi-threaded Cinebench 9.5 render running in parallel with Guild Wars. Our usual graphics load, the “rthdribl” high dynamic range lighting demo, wouldn’t run on the G965 Express.

Somewhat surprisingly, the AMD platforms have markedly lower idle power consumption. The Core 2 Duo E6400 can only drop its default 8x multiplier to a minimum of 6x through either the C1E halt state or SpeedStep, so the chip’s actual clock speed doesn’t fall all that much at idle.
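The arithmetic is easy to check: the E6400 runs on a 266MHz base clock (quad-pumped to a 1,066MHz front-side bus), so each multiplier step is worth only 266MHz. A quick sketch of the clock speeds across the multipliers in play:

```python
# Core 2 Duo E6400: 266MHz base clock, quad-pumped to a 1,066MHz FSB.
BASE_CLOCK_MHZ = 266

def core_clock(multiplier: int) -> int:
    """Effective core clock in MHz for a given multiplier."""
    return BASE_CLOCK_MHZ * multiplier

# 6x is the idle floor reachable via C1E or SpeedStep.
for m in (6, 7, 8):
    print(f"{m}x -> {core_clock(m)}MHz")
```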

However, when we crank up our system load, the G965 Express platform manages lower power consumption than any of the other chipsets.

On paper, the G965 Express chipset’s GMA X3000 graphics core is a marvel. It’s the only integrated graphics core with a unified shader architecture, it supports Shader Model 3.0 with 32-bit precision throughout, and it’s loaded with video output options. With a 667MHz core clock speed, the X3000 should offer blistering performance, as well. But by and large, it doesn’t. Sure, the GMA does well in 3DMark06. However, in actual games, the X3000 is consistently beaten by the Radeon X1250 in AMD’s 690G chipset, and it struggles to keep up with Nvidia’s older GeForce 6150 series IGPs. That’s if the X3000 can run the games at all. As we saw with Battlefield 2 and Oblivion, Intel still has some very basic compatibility issues to address.

The GMA X3000’s problems don’t end with 3D performance, either. Intel’s Clear Video processing suite also has issues, including choppy 1080p WMV HD playback and lower scores in HQV’s DVD playback tests than competing solutions.

Apart from a flaky integrated graphics core, the G965 Express chipset is a solid offering. Performance is good, in part thanks to the platform’s Core 2 Duo processor. Boards like the Asus P5B-VM can get you everything you need for an affordable price. Intel is also working to improve the GMA X3000’s 3D performance and video playback quality, and future drivers promise to take better advantage of the graphics core’s unified shader architecture, so it’s possible that element of the chipset will become more appealing over time.

If you’re a mainstream user buying into an integrated graphics platform, however, you don’t want to fuss around with driver updates, and you don’t want to have to wait for games to work. And that just kills it for the G965 Express. If you want to play games or watch video, you’re better off with an integrated graphics platform from AMD or Nvidia. Intel may have a superior graphics architecture on paper, and it may even have the better core logic chipset in silicon, but the performance and compatibility issues need to be resolved before we can recommend the G965 Express.

Comments closed
    • eitje
    • 12 years ago

    i never noticed before, but the SiSoft link in your methods section has .demon in it, and doesn’t resolve. all of your recent articles seem to have this mistake. just Fyi – I was checking, because I need to test out my new tablet PC. 🙂

    • Hattig
    • 12 years ago

    I don’t think I’ve seen so many “did not run on this board” statements in a review since reviews of VIA or SiS/XGI graphics chipsets several years ago. Couple that with ‘could not enable basic graphical effects’ in Lego Star Wars, and it’s a joke. If you pick the games you casually play carefully though … and low-end PCIe graphics cards that fix these issues are dirt cheap anyway.

    Intel clearly have a long way to go with their integrated graphics. It’s certainly not suitable for HTPC applications (poor HQV results, stuttering 1080P WMV, and in the case of this motherboard – no DVI or HDMI output) where quiet no-chipset-fan integrated graphics would be lovely. It’s all very well hoping or saying that improved drivers will come, but when will they? By the time AMD sorts out the ugly flag in HQV in their drivers? Alternatively you can get something that works straight away.

    A rare bad chipset from Intel. It’ll still be sold in the tens of millions though, OEMs and average consumers to the rescue! They won’t notice the poor quality because they won’t know that it could be better.

      • UberGerbil
      • 12 years ago

      One nice thing about fixing things in software, though — if Intel actually does it, all those millions of boards out in the hands of ignorant users will suddenly be a good deal, or at least a much better deal than it was when they bought it. They won’t magically get DVI, of course, but they might get better video quality and other improvements they might actually notice. Which is no substitute for having hardware that worked up to its potential in the first place, but it’s something.

      But yeah, if you’re looking for a fanless HTPC board…

    • nexxcat
    • 12 years ago

    Wow. From an embedded graphics performance standpoint, AMD’s decision to acquire ATI is looking better and better. However, we all know that the market doesn’t quite play that way, and in terms of overall performance, Intel is still king.

      • shank15217
      • 12 years ago

      How so? In every benchmark that counted, the IGP board with the Intel chip had really bad performance. Unfortunately, system makers will use these chips in consumer systems and the bar will be lowered instead of raised. Intel GPUs are awful; the best one was the 740 variant. Intel has nothing on AMD when it comes to GPU performance, or value for that matter.

        • SPOOFE
        • 12 years ago

        Value is defined partly by need, though. Most people do not need anything other than an integrated Intel chipset, which explains their dominance. Not to say that the discrete GPU market is small, not by any means, but not everyone plays games or does anything other than Office, Internet, and Solitaire.

    • stmok
    • 12 years ago

    Hmmm…Well, let’s hope the upcoming Intel G35 IGP performs at least on an equal footing with its AMD/Nvidia counterparts. The current solution is being held back by driver issues.

    • ThelvynD
    • 12 years ago

    What’s with the Digg link anyways? I’m not too keen on the whole Digg effect, etc. I know where to get good news and I’m sure everyone else here does too. I don’t feel that Digg is really a worthwhile site, and if some group doesn’t like something then they send out burial brigades after you. And a lot of the diggs there seem quite manipulated. That’s just my 2c on that.

      • nexxcat
      • 12 years ago

      If TR doesn’t get new readers by grabbing more people, then eventually, the old readers will all go away. They’re just experimenting with another way of getting new eyeballs to the site, and I don’t think that’s necessarily a bad thing.

      Look at the once-great… I’m not sure what happened, but they once had an extraordinarily impressive suite of benchmarks aimed squarely at people in mid-sized IT shops. Now? It’s just a forum site, with their last article hailing from 2005.

      • Inkling
      • 12 years ago

      I agree with about everything you’re saying… but the fact is that when an article makes the front page of Digg, it brings in hordes of new readers. Some of them like what they see and come back. We’re just looking for a broader audience for our material, and sites like Digg can be very effective to that end. We’ll entertain other ideas for broadening our reader base if you have some.

      • UberGerbil
      • 12 years ago

      Everything you say may be true — but how does that detract from TR in any way? You don’t have to visit Digg, and you don’t have to click on the link here at TR. Is the presence of the link really that bothersome to you? Would you be happier if it was smaller? Is it somehow worse than the ads, or would you like TR to get rid of those too?

        • ThelvynD
        • 12 years ago

        While I’m not complaining about getting new readers and such, it’s just that having that Digg box on the front page is a bit of a distraction. I like the clean layout that TR has, and looking down and seeing that Digg box just looks out of place. A link in the article, rather than one plastered onto the news front page, might be a better way to display it.

      • Klopsik206
      • 12 years ago

      I am sorry for the off-topic question, but can anyone explain or point me to information on what Digg is? (Cannot find such an explanation on their site…)

      Thanks for enlightening 😉

        • eitje
        • 12 years ago

        it’s just social networking.

    • Dissonance
    • 12 years ago

    Some of Intel’s datasheets incorrectly indicate that the ICH8 lacks support for AHCI, but Intel tells us that’s not the case. The ICH8 apparently does support AHCI, and thus NCQ as well, but an AHCI driver is required. That driver is supposedly built into Windows Vista, but we’re awaiting confirmation on whether an equivalent exists for Windows XP. In either case, our ICH8-equipped P5B-VM motherboard lacks the necessary BIOS switch to enable AHCI.

      • MadManOriginal
      • 12 years ago

      The Intel WinXP F6 RAID floppy disc has the install for both RAID and AHCI.

    • Anonymous Hamster
    • 12 years ago

    Please note that there are budget boards out there with the 965GZ version of the chip, which does not have the PCI Express lanes needed for a true 16x graphics slot! Instead, they only run 4 lanes to the 16x slot.

    • Bensam123
    • 12 years ago

    Perhaps I’m missing something again, but why don’t all motherboard makers leave their PCI-E slots open like the x4 so they can accommodate bigger cards? I thought PCI-E was backwards and forwards compatible from x1-x32?

    • Proesterchen
    • 12 years ago

    Personally, I couldn’t help laughing when I saw that you used a 700W PSU for these tests. I just love how that thing could basically power all of the tested systems together, and then some. *lol*


    • Oldtech
    • 12 years ago

    FYI: The DG965WH motherboard will not run when an Antec Neo series power supply is connected to it. Intel says they are aware of the problem.


    • MixedPower
    • 12 years ago

    Practically all modern GPUs ship with dual DVI ports and DVI-VGA adapters. Would it really be that hard and/or expensive to put a DVI connector on a board that supports it and include an adapter?

      • nonegatives
      • 12 years ago

      Why HDMI on a “budget” board? If I can afford a flat panel TV, I could spend a few dollars more for my motherboard.
      I also do not like that HDMI has no cable locking, I guess all peripheral connectors will soon be this way. USB and 6-pin IEEE1394 aren’t too bad, but the 4pin version is too flaky. SATA isn’t much better. I’m always afraid to move my computer too much while it is running because I might pull a cable out somewhere.

        • Lazier_Said
        • 12 years ago

        These boards don’t go to home assemblers weighing the cost of their board against their flat panel TV. They go to hole in the wall computer shops operating on a <10% margin and are then sold to hardware illiterate types who just want it to work.

    • demani
    • 12 years ago

    Time and again the integrated graphics do just the bare minimum. I actually wish someone would look at all the HTPC builds and even at some other projects and throw something better in there as onboard graphics. I realize that not everyone needs something better, but what if a 7600 or X1600 were the onboard card? You’d certainly get a lot more use out of that board, and in situations like an HTPC, where the feature set can be fixed since the task set is already defined and not likely to change significantly, it seems like the extra oomph could be put to good use (and keep things small). Hell, even the AppleTV has something that suits its task. At least let me use an MXM board to keep it low profile. I had hopes for this chipset, particularly that the drivers would be polished enough to let me make a 1U HTPC without losing much in image quality for video. Alas, no such luck.

    • Krogoth
    • 12 years ago

    G965 Express = designed for Vista’s Aero and DirectX 10. It is nothing more.

      • SPOOFE
      • 12 years ago

      Does it need to be anything more, for the market segment it’s obviously targeted at?

    • Shintai
    • 12 years ago

    I’m a little surprised it does so well in gaming. It’s a plain, simple IGP for the basics: support Aero and movies.

    It’s the corporate solution.

    Though it’s sad to see that GPU/IGP companies make such bad drivers these days. Amazing what you can get away with :/

    • Usacomp2k3
    • 12 years ago

    Seems like a pretty decent chip. If only the drivers were a little better, this would make a great HTPC chip, IMHO.

    • UberGerbil
    • 12 years ago

    Wow, that’s pretty disappointing. Usually Intel has their driver ducks in a row, but I suppose this is a bigger task than most (and it’s not like nVidia and ATI have been faultless on that front either). It will be interesting to see if driver updates allow it to catch up with the competition, or if a respin of the chipset will be required. It’s interesting that the Vista drivers (at least for video playback) seem more mature than the XP drivers.

    Nevertheless, I’m sure Intel will get their usual ton of OEM sales. And given the results on the non-gaming benchmarks (and even more “workstation” benches like photoshop), those sales will be justified.
