AMD’s 690G chipset

FOR YEARS, AMD’S ATHLON desktop processors have had to negotiate access to other system components through third-party chipsets from ATI, Nvidia, SiS, and VIA. That arrangement has largely been successful, at least in part because AMD was able to move the traditionally chipset-level memory controller right onto the processor die. This move allowed chipset makers to focus on peripherals and connectivity options, and it gave AMD control over the one chipset feature most likely to affect overall system performance.

With AMD seemingly content to stay out of the core logic game, its chipset partners were left to battle each other for market share. Then, on July 24 of last year, AMD announced its intent to acquire ATI. That changed everything.

In acquiring ATI, AMD gained control over not only one of its more aggressive chipset partners, but also one of the big two in PC graphics. With that asset now in its pocket, it was only a matter of time before AMD rolled out a new chipset with integrated graphics for Athlon processors. Today that chipset arrives as the AMD 690G, which packs a familiar SB600 south bridge paired with a new Radeon X1250 graphics core with four DirectX 9-class pixel pipelines. How well does the 690G stack up against the competition? Read on to find out.

Comparing the competition
Although both SiS and VIA make integrated graphics chipsets for Athlon processors, neither offers much in the way of pixel-pushing horsepower. The Radeon-bred graphics core in the 690G has superior capabilities and the backing of ATI Catalyst drivers with solid support for a variety of PC games. Thus, AMD’s most direct competition is Nvidia’s GeForce 6100 series chipsets and their associated nForce 400 south bridge I/O chips. Nvidia’s offerings on this front are, er, complicated, so I’ve included two of them here, for reasons I’ll explain later.

Here’s how our three contenders measure up in terms of north bridge features.

                             690G             GeForce 6150 SE   GeForce 6150
CPU interconnect             16-bit/1GHz HT   16-bit/1GHz HT    16-bit/1GHz HT
PCI Express lanes            24*              18                17
Pixel shader processors      4                2                 2
Textures per clock           4                2                 2
Shader model support         2.0b             3.0               3.0
Core clock speed             400MHz           425MHz            475MHz
Chipset interconnect         PCIe x4          NA                HyperTransport
Peak interconnect bandwidth  2GB/s            NA                8GB/s
Video outputs                DVI, HDMI, VGA   DVI*, VGA         DVI, VGA
TV encoder                   Y                Y*                Y
HDCP support                 Y                N                 N
Video processing             Avivo            N                 PureVideo

All three chips offer a 16-bit, 1GHz HyperTransport processor link, but the AMD 690G definitely has an edge when it comes to PCI Express lanes. Four of those lanes are dedicated to the chipset interconnect, though, so they can’t be used to power onboard peripherals or PCIe slots. A four-lane PCIe interconnect gives the 690G 2GB/s of bandwidth, which looks a little pokey next to the GeForce 6150’s 8GB/s HyperTransport interconnect. However, even Intel’s high-end desktop chipsets are perfectly happy using a 2GB/s DMI interconnect, so it’s unlikely the 690G will be starved for bandwidth.
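As a quick sanity check on those numbers, the interconnect math can be sketched as follows, assuming first-generation PCI Express's 250MB/s per lane per direction (the PCIe 1.x spec rate, not a figure published by AMD):

```python
# Rough interconnect bandwidth math, assuming first-generation PCI Express
# at 250MB/s per lane in each direction (PCIe 1.x spec rate; not a figure
# taken from AMD's documentation).
PCIE1_GBPS_PER_LANE = 0.25  # GB/s, per lane, per direction

def pcie_link_bandwidth(lanes, both_directions=True):
    """Peak bandwidth of a PCIe 1.x link in GB/s."""
    per_direction = lanes * PCIE1_GBPS_PER_LANE
    return per_direction * 2 if both_directions else per_direction

# The 690G's four-lane chipset interconnect: 1GB/s each way, 2GB/s total.
print(pcie_link_bandwidth(4))  # 2.0
```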

Of course, the real jewels of these chipsets are their integrated graphics processors. The 690G’s Radeon X1250 graphics core is derived from the Radeon X700—that is, from ATI’s previous generation of desktop GPU technology, before the Radeon X1000 series. It sports four pixel shader processors that meet DirectX 9’s Shader Model 2.0b spec. In the world of specifications one-upsmanship, Shader Model 2.0b support puts the Radeon X1250 slightly behind the GeForce 6100 family, which supports Shader Model 3.0. However, the differences between those two specifications have to do with esoteric things like flow control and program length in pixel shader programs—things that will almost certainly never become an issue for either of these integrated graphics cores, given their basic performance levels.

In fact, the Radeon X1250 should be faster than the GeForce 6100 family, even with its slightly slower 400MHz core clock speed, thanks to its four full pixel pipes. The X1250 has four pixel shader units, can apply one texture per clock in each pixel pipe, and can write one pixel per clock to the frame buffer.
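Plugging the spec table's clock speeds and per-clock texture rates into a quick calculation illustrates the gap. These are theoretical peaks, not measured rates:

```python
# Theoretical peak texel fill rates implied by the spec table: core clock
# (MHz) multiplied by textures applied per clock. Real-world throughput
# will be lower, but the relative gap is the point.
def peak_texel_rate(core_mhz, textures_per_clock):
    """Peak fill rate in Mtexels/s."""
    return core_mhz * textures_per_clock

radeon_x1250   = peak_texel_rate(400, 4)  # 1600 Mtexels/s
geforce_6150   = peak_texel_rate(475, 2)  #  950 Mtexels/s
geforce_6150se = peak_texel_rate(425, 2)  #  850 Mtexels/s
```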

By contrast, the GeForce 6100 series integrated GPUs decouple various stages of the traditional pixel pipeline, so their resources are configured somewhat differently. The key difference here is that they have just two pixel shader units running at either 475MHz (in the GeForce 6150) or 425MHz (in the 6150 SE). Those pixel shader units also handle texturing duties, so the 6100 family can apply a maximum of two textures per clock. The 6100-family IGPs have one ROP, as well, which considerably limits their ability to write pixels to the screen. Interestingly, though, the GeForce 6100 series graphics cores do enjoy an on-chip vertex shader, while the Radeon X1250 leans on the CPU to handle vertex processing.

Before the pixels processed by an integrated graphics core can be displayed on a screen, they have to make their way through a video output. The 690G offers plenty of options on this front, with native support for VGA, DVI, and HDMI output. DVI and HDMI output are independent, so you can run both at the same time. AMD also throws in a TV encoder for those who aren’t lucky enough to be running a high definition set. HDMI output isn’t supported by the GeForce 6100 series, and you only get a TV encoder and DVI output with the GeForce 6150. The GeForce 6150 SE is capable of powering either a TV encoder or a DVI output, but only through auxiliary display chips connected to its sDVO (Serial Digital Video Output) interface.

AMD’s RS690G north bridge

Given its wealth of video output options, it’s only fitting that the 690G is also equipped with an Avivo video processing engine. Avivo handles tasks like video scaling, decode acceleration, 3:2 pulldown detection, and other widgets that enhance video playback quality. Nvidia calls its video processing engine PureVideo; it offers many of the same features Avivo does. Only AMD’s graphics drivers are needed to enable the 690G’s Avivo capabilities, but you actually have to purchase PureVideo decoder software from Nvidia or supported third party apps like WinDVD, PowerDVD, or Nero ShowTime to unlock the GeForce 6150’s video processing engine. PureVideo isn’t supported on the GeForce 6150 SE, either.

Before we go on, I should explain why we’ve included two different GeForce 6100-series chipsets here. Nvidia launched the 6100 family a year and a half ago, so it’s getting a little long in the tooth. This past summer, Nvidia updated the 6100 line with a new single-chip solution dubbed the MCP61, and that’s where things get a little confusing. The MCP61 is essentially a GeForce 6100 north bridge and nForce 430 south bridge crammed into a single chip. But it’s not quite that simple. You see, Nvidia also integrated hardware Z-culling into the graphics core of this new chip. When combined with the latest 93.71 ForceWare graphics drivers, this capability magically transforms the GeForce 6100 graphics core into what Nvidia is calling the GeForce 6150 SE.

Phew. Got it?

So the GeForce 6150 SE is Nvidia’s newest competitor for the AMD 690G, but the GeForce 6150’s video processing capabilities also put it in the running.

South bridge I/O capabilities and other excitement
Moving to the south bridge, we’ve consolidated the GeForce 6150 SE and 6150 under the same nForce 430 umbrella. Both share the same basic nForce 430 core logic, albeit in different chip implementations, and their capabilities are essentially identical.

                        SB600        nForce 430
PCI Express lanes       4*           0
Serial ATA ports        4            4
Peak SATA data rate     300MB/s      300MB/s
Native Command Queuing  Y            Y
RAID 0/1                Y            Y
RAID 0+1/10             10           0+1
RAID 5                  N            Y
ATA channels            1            2
Max audio channels      8            8
Audio standard          AC’97/HDA    HDA
Ethernet                N            1000/100/10
USB ports               10           10

The SB600’s PCI Express lanes are tied up in the chipset interconnect, so they don’t allow for additional peripherals or PCIe x1 slots. Integrated graphics chipsets are most commonly found on budget Micro ATX motherboards that tend not to offer much in the way of auxiliary peripherals, both to keep costs down and because board real estate is limited, so there really isn’t a need for loads of PCIe connectivity.

Both the SB600 and nForce 430 offer four 300MB/s Serial ATA ports, but only the AMD chip supports the Advanced Host Controller Interface (AHCI). AHCI is licensed from Intel, and it provides a framework for Native Command Queuing (NCQ) implementations. Rather than licensing AHCI, Nvidia’s chipsets use the company’s own NCQ implementation.

The SB600 south bridge

RAID probably isn’t the most important feature for budget systems with integrated graphics, but if you’re looking to roll your own closet file server, cheap Micro ATX boards with onboard graphics certainly have their appeal. Nvidia noses ahead in the RAID department, but only because it offers support for RAID 5. We’ve found chipset RAID 5 performance to be rather poor, though, even if it does maximize the storage capacity of an array. If you’re looking to combine mirroring and striping, we actually prefer the SB600’s RAID 10 implementation to the RAID 0+1 offered by the nForce 430. RAID 0+1 arrays can only tolerate the failure of a single drive, but in some cases, a RAID 10 array can survive the failure of two drives.
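That two-drive failure claim is easy to verify by brute force. The sketch below enumerates every two-drive failure in a hypothetical four-drive array; the drive numbering and grouping are ours, purely for illustration:

```python
from itertools import combinations

# Toy model of a four-drive array. We assume drives 0-1 and 2-3 form the
# two inner groups in each scheme (a hypothetical layout for illustration):
# in RAID 10 each group is a mirror pair, in RAID 0+1 each is a stripe set.
DRIVES = {0, 1, 2, 3}
GROUPS = [{0, 1}, {2, 3}]

def raid10_survives(failed):
    # RAID 10 survives as long as every mirror pair keeps one working drive.
    return all(group - failed for group in GROUPS)

def raid01_survives(failed):
    # RAID 0+1 survives only if at least one whole stripe set is untouched.
    return any(not (group & failed) for group in GROUPS)

two_drive_failures = [set(c) for c in combinations(DRIVES, 2)]
print(sum(raid10_survives(f) for f in two_drive_failures))  # 4 of 6 survive
print(sum(raid01_survives(f) for f in two_drive_failures))  # 2 of 6 survive
```

So of the six possible two-drive failures, the RAID 10 layout survives four, while RAID 0+1 survives only the two cases where both failures land in the same stripe set.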

Onboard audio is an important consideration for budget systems. The SB600 and nForce 430 both support the “Azalia” spec, known formally as High Definition Audio, and the SB600 will also work with older AC’97 chips, giving motherboard makers the freedom to skimp even more on components (not that we’d recommend it).

With the SB600, motherboard makers are also in charge of networking. The chip doesn’t include an Ethernet controller, leaving motherboard manufacturers free to dip into the vast array of PCI- and PCIe-based GigE chips on the market. Some of those chips offer lower CPU utilization than others, and those with PCIe interfaces tend to provide better throughput than those stuck on the PCI bus, so the onus is on mobo makers to pick the right one. Nvidia, meanwhile, squeezes a Gigabit Ethernet controller into the nForce 430, and it even offers a checksum offload engine to reduce CPU utilization.

Rolling out on MSI’s K9AGM2
AMD expects 690G-based motherboards to hit the market starting in a couple of weeks. Asus will probably be first, followed by MSI, Foxconn, ECS, Gigabyte, and others. We’ve done our testing with MSI’s K9AGM2, and it’s quite a good little Micro ATX motherboard.

The AMD 690G north bridge is manufactured by UMC on a power-efficient 80nm process node, so it doesn’t require more than a tiny passive heatsink to keep cool. Our sample motherboard didn’t even come with a south bridge heatsink, despite the fact that the SB600 is built on a larger 130nm node by TSMC. Production boards will apparently come with a low-profile south bridge heatsink, though.

In addition to the AMD 690G chipset, the K9AGM2 sports ALC888 codec and RTL8111B Gigabit Ethernet chips from Realtek. Thankfully, the RTL8111B rides a PCI Express interface, so there’s no need to worry about sharing PCI bus bandwidth with other peripherals.

If you do want to add expansion cards, the K9AGM2 provides a couple of PCI options alongside PCI Express x1 and x16 slots.

Unfortunately, the board’s layout hasn’t been optimized for longer graphics cards with wide coolers. A GeForce 7900 GTX blocks access to not one, not two, but all four of the board’s Serial ATA ports. We were able to gain access to two of the ports using right-angle SATA cables, but even that required some creativity. Longer graphics cards with double-wide coolers rarely find their way into budget Micro ATX boards, so this is hardly a show-stopping problem. However, it’s something to consider if you’re looking to build a pint-sized gaming system for LAN parties using this mobo.

Moving to the port cluster reveals the AMD 690G’s ace in the hole—an HDCP-compliant HDMI video output port that also pipes out sound from the onboard audio. This HDMI output joins a standard VGA output, but MSI doesn’t take advantage of the chipset’s DVI output or integrated TV encoder. Indeed, few motherboard makers will take full advantage of the 690G’s video output capabilities, and those that do may only do so through riser cards with additional outputs.

Our testing methods
We’ll be comparing the AMD 690G’s performance to that of Nvidia’s GeForce 6150 and 6150 SE chipsets. Integrated graphics are the raison d’être for these chipsets, so we’ve used their respective IGPs throughout our testing. In all cases, the IGPs were configured to use 256MB of system memory.

Since the GeForce 6150 SE is the newest member of the GeForce 6100 family, we’ve run it through our full suite of application and peripheral performance tests. Time constraints prevented us from giving our GeForce 6150 platform the same treatment. We’ve had to limit our GeForce 6150 testing to our application and graphics performance tests, but because it shares the same basic nForce 430 core logic as the single-chip GeForce 6150 SE, its I/O performance should be comparable.

Of course, we used the ForceWare 93.71 graphics drivers that magically transform the MCP61 from a GeForce 6100 to a GeForce 6150 SE. We elected not to use Nvidia’s PureVideo decoder—which is necessary to unlock the non-SE GeForce 6150’s video processing engine—in testing because it’s an optional extra that costs between $20 and $50, depending on which version you want. That might not seem like a lot of money, but considering that the boards we’re testing today can be had for around $80, it bumps the price up quite a bit.

For the sake of brevity, we’ll be referring to the GeForce 6150 SE/nForce 430 and GeForce 6150/nForce 430 chipset combos as simply the GeForce 6150 SE and GeForce 6150, respectively.

Unfortunately, MSI’s AMD 690G-based K9AGM2 motherboard doesn’t support memory voltage control, so we were unable to give our Corsair DIMMs the 1.9V they require to run at their usual 5-5-5-12-1T timings. Tweaking options are usually few and far between on budget Micro ATX boards, so this wasn’t entirely surprising. The tightest timings we could wring from our DIMMs on the MSI board were 5-6-5-18-2T, which is a little loose by enthusiast standards, but quite reasonable for the budget memory typically found in systems with integrated graphics. We set the memory on our GeForce 6150 SE- and 6150-based Asus motherboards to match.

You’ll notice that we’ve also done all our testing in Windows XP. Both AMD and Nvidia offer Vista drivers for their respective integrated graphics platforms, each of which can handle the operating system’s fancy Aero interface. However, a number of the applications in our chipset test suite aren’t yet compatible with Microsoft’s latest OS, so we’ll be kicking it old-school with XP.

All tests were run at least twice, and their results were averaged, using the following test systems.

Processor               Athlon 64 X2 5200+ 2.6GHz (all systems)
System bus              HyperTransport 16-bit/1GHz
Motherboard             MSI K9AGM2       Asus M2N-MX       Asus M2NPV-VM
BIOS revision           1.1B1            0302              0702
North bridge            AMD RS690        Nvidia MCP61G     Nvidia GeForce 6150
South bridge            AMD SB600        (in MCP61G)       Nvidia nForce 430
Chipset drivers         Catalyst 7.2     ForceWare 11.09   ForceWare 9.35
Memory size             2GB (2 DIMMs)    2GB (2 DIMMs)     2GB (2 DIMMs)
Memory type             Corsair TWIN2X2048-6400PRO DDR2 SDRAM at 742MHz (all systems)
CAS latency (CL)        5                5                 5
RAS to CAS delay (tRCD) 6                6                 6
RAS precharge (tRP)     5                5                 5
Cycle time (tRAS)       18               18                18
Command rate            2T               2T                2T
Audio codec             Integrated SB600/ALC888 with Realtek HD 1.59 drivers /
                        Integrated MCP61G/AD1986A with drivers /
                        Integrated nForce 430/AD1986A with drivers
Graphics                Integrated Radeon X1250 with Catalyst 7.2 drivers /
                        Integrated GeForce 6150 SE with ForceWare 93.71 drivers /
                        Integrated GeForce 6150 with ForceWare 93.71 drivers
Hard drive              Western Digital Caviar RE2 400GB (all systems)
OS                      Windows XP Professional
OS updates              Service Pack 2

Thanks to Corsair for providing us with memory for our testing. 2GB of RAM seems to be the new standard for most folks, and Corsair hooked us up with some of its 1GB DIMMs for testing.

Also, all of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

We used the following versions of our test applications:

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests. Most of the 3D gaming tests used the Medium detail image quality settings, with the exception that the resolution was set to 640×480 in 32-bit color.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance
The Athlon 64’s on-die memory controller doesn’t leave much room for chipsets to affect memory subsystem performance. However, integrated graphics processors do commandeer a portion of system memory for their own use, and that makes things considerably more interesting.

The 690G hangs with the GeForce 6100s through most of our memory subsystem tests, and it even leads the way in the Cachemem latency test by a hair. However, the chipset’s Cachemem write bandwidth is much lower than that of either Nvidia chip.

The following Cachemem latency graphs are a little indulgent, but they do a good job of highlighting access latency across various block and step sizes. Our Athlon 64 X2 5200+ runs out of on-chip cache after a block size of 1024KB, so you’ll want to pay more attention to the memory access latencies that follow with larger block sizes.

I’ve arranged the following graphs in order of highest to lowest latency with a common Z-axis to aid comparison.

Memory access latencies are a little lower on the 690G, but they’re pretty close across most of the block and step sizes.

Cinebench rendering

The CPU dictates performance in Cinebench’s rendering and Cinema 4D shading tests, but that’s not the case in the OpenGL shading tests. In both OpenGL tests, the 690G lags behind the GeForce 6150s. The gap is particularly dramatic in the OpenGL hardware test.

Sphinx speech recognition

Sphinx performance doesn’t vary much between the chipsets, but the 690G does trail the GeForce 6150s.


WorldBench overall performance
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results alongside the results from some of our own application tests.

The 690G is locked in a dead heat with the GeForce 6150 in WorldBench, but both trail the 6150 SE by a few points. Digging deeper into our WorldBench results should give us a better idea of where the 690G did well, and where it went wrong.

Multimedia editing and encoding

MusicMatch Jukebox

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Things look promising for the 690G in WorldBench’s multimedia editing and encoding tests, and AMD even pulls off a win in Premiere.

3D rendering

3ds max

OpenGL performance continues to dog the AMD chip. The 690G is more than 60% slower than the GeForce 6150s in WorldBench’s OpenGL 3ds max test, but right in the thick of things in a similar DirectX test.

Image processing

Adobe Photoshop

ACDSee PowerPack

WorldBench’s image processing tests don’t favor one chipset over another by a significant margin.

Multitasking and office applications

Microsoft Office


Mozilla and Windows Media Encoder

The 690G proves faster in the Mozilla and multitasking tests. The AMD chipset sits at the back of the pack, however, in WorldBench’s Office XP test.

Other applications



The 690G continues to trail the GeForce 6150s in WorldBench’s WinZip and Nero tests, although the scores are very close.

3D performance
Integrated graphics isn’t known for exceptional 3D performance, but how does the 690G fare when we pit it against the GeForce 6150s?

Pretty well, at least in 3DMark06’s graphics tests. Despite not being able to participate in the Shader Model 3.0 tests, which contribute to 3DMark06’s overall score, the 690G still comes out ahead of the GeForce 6150s.

Part of the reason for the 690G’s strong showing in 3DMark06 is its considerable fill rate advantage over the GeForce 6150s. This advantage is sizable in the single-texturing test, but even more pronounced with multi-texturing, where the 690G comes close to doubling the texel fill rate of the GeForce 6150 SE.

The 690G doesn’t actually have a hardware vertex shader, so it uses the CPU to handle vertex processing. That pays dividends in 3DMark06’s complex vertex shader test, but the GeForce 6100 family’s on-chip vertex shader proves faster in the simple shader test.

Unfortunately, 3DMark06 has only one pixel shader test, and it returns the 690G to the back of the pack. The 690G’s integrated Radeon X1250 does have more pixel shader units than the GeForce 6150s. The pixel shader processors from ATI and Nvidia differ in terms of the amounts and kinds of operations they can perform in a clock cycle, though, so pixel shader unit count alone doesn’t determine performance.

We could have limited our game testing to two- and three-year-old titles that might have been a better match for the horsepower of our chipsets’ integrated graphics cores. However, even casual gamers want to play new releases, so we rounded up a series of recent titles to see how playable they were on the 690G. I’ll give you a hint: not very.

For Oblivion and Battlefield 2, we used FRAPS to log frame rates over 90 seconds of gameplay. Average and low frame rates were then calculated, and we’ve presented the mean of the averages and the median of the low scores. With F.E.A.R., we used the game’s internal performance test, which provides average and low frame rates.
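As a concrete illustration of that aggregation step (the frame rate numbers here are made up for the example, not our actual results):

```python
from statistics import mean, median

# Hypothetical per-run FRAPS results, purely for illustration: each run
# logs an average and a low (minimum) frame rate over 90 seconds of play.
run_averages = [31.2, 33.5, 32.1]
run_lows = [18.0, 21.0, 19.5]

reported_average = mean(run_averages)  # mean of the per-run averages
reported_low = median(run_lows)        # median of the per-run lows
print(round(reported_average, 1), reported_low)  # 32.3 19.5
```

Using the median for the lows keeps a single anomalous stutter in one run from dragging down the reported minimum.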

We had to run Oblivion at its lowest detail levels, and even at an 800×600 display resolution, the 690G didn’t average over 30 frames per second. At least it fared better than the GeForce 6150s, which struggled to hit 20 frames per second. What’s even more painful, however, is to see Oblivion’s once-gorgeous graphics reduced to absolute ugliness by the lowest detail settings.

Battlefield 2’s low detail level allowed us to run it at a reasonable resolution of 1024×768, and the 690G was able to average over 30 frames per second without dipping below 20. That puts it ahead of the GeForce 6150s by a decent margin—one you definitely notice when playing the game.

The 690G takes top honors in F.E.A.R., as well, this time by a substantial margin over the GeForce 6150s. Note that the 690G’s low frame rate is higher than the average for either of the GeForces.

Video playback
AMD is quick to hype the 690G’s Avivo features and wealth of video output options, but how does the chipset handle high-definition video playback? To find out, we rounded up a handful of movie trailers in WMV HD and H.264 formats at resolutions of 720p and 1080p. WMV HD testing was conducted in Windows Media Player 10 with the Terminator 3 DVD trailer. H.264 testing was done with QuickTime 7 and the Hot Fuzz movie trailer. We logged CPU utilization for the first minute of video playback and have presented an average of those results.

HD video playback clearly isn’t the 690G’s forte. In each instance, the AMD chipset uses more CPU time than either of the GeForce 6150s. The difference is especially apparent with higher resolution 1080p clips.

Serial ATA performance
The Serial ATA disk controller is one of the most important components of a modern core logic chipset, so we threw each platform a selection of I/O-intensive storage tests.

We’ll begin our storage tests with IOMeter, which subjects our systems to increasing multi-user loads. Testing was restricted to IOMeter’s workstation and database test patterns, since those are more appropriate for desktop systems than the file or web server test patterns.

The 690G’s transaction rate is flat until the number of concurrent I/O requests ventures beyond 32, while the GeForce 6150 SE’s performance starts to scale upward right off the bat.

IOMeter response times don’t favor the 690G, and there’s a notable difference in the performance of the GeForce 6150 SE and the AMD 690G between eight and 64 outstanding I/O requests.

Even with quicker response times and higher transaction rates, the GeForce 6150 SE doesn’t really consume more CPU cycles than the 690G.

iPEAK multitasking
We developed a series of disk-intensive multitasking tests to highlight the impact of command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.

The 690G may not kick its command queuing into gear until there are more than 32 I/O requests in the queue, but that doesn’t seem to matter through our first wave of iPEAK multitasking tests. With the exception of our dual file copy workload, the 690G delivers better performance with workloads that include compressed file creation and extraction.

When we switch to iPEAK workloads that include Outlook PST file import and export operations, the 690G stumbles, falling behind the GeForce 6150 SE in three of four tests.

SATA performance
We used HD Tach 3.01’s 8MB zone test to measure basic SATA throughput and latency.

Serial ATA performance is pretty tight in HD Tach, but the 690G is a little slower in the burst rate test. Don’t make too much of the 0.5% gap in CPU utilization; HD Tach’s margin of error in that test is +/- 2%.

ATA performance
ATA performance was tested with a Seagate Barracuda 7200.7 ATA/133 hard drive using HD Tach 3.01’s 8MB zone setting.

The 690G’s ATA transfer rates are a little slower than those of the 6150 SE in HD Tach’s read burst and write speed tests, but the rest of the scores are too close to call.

USB performance
Our USB transfer speed tests were conducted with a USB 2.0/Firewire external hard drive enclosure connected to a 7200RPM Seagate Barracuda 7200.7 hard drive. We tested with HD Tach 3.01’s 8MB zone setting.

USB performance was long a problem for ATI south bridge chips, but the now-AMD-branded SB600 gets it right. Thanks to that south bridge, the 690G is able to stay one step ahead of the GeForce 6150 SE in our USB performance tests. The 690G’s CPU utilization is higher, but keep in mind HD Tach’s +/- 2% margin of error in that test.

3D Audio performance

These days, 3D audio processing is largely handled by the sound drivers for third-party codec chips. On our MSI motherboard, the 690G is paired with Realtek’s ALC888 codec, and CPU utilization is higher than that of our GeForce 6150 SE board, which is using an Analog Devices codec. These results are about how Realtek and Analog Devices codecs usually stack up, regardless of which chipset is involved.

Recently, we discovered that Realtek’s current HD audio drivers don’t properly support EAX occlusions and obstructions. This renders some games, such as Battlefield 2, all but unplayable with EAX effects enabled, and it’s something to keep in mind if you’re considering using Realtek-based integrated audio.

Ethernet performance
We evaluated Ethernet performance using the NTttcp tool from Microsoft’s Windows DDK. The docs say this program “provides the customer with a multi-threaded, asynchronous performance benchmark for measuring achievable data transfer rate.”

We used the following command line options on the server machine:

ntttcps -m 4,0, -a

...and the same basic thing on each of our test systems acting as clients:

ntttcpr -m 4,0, -a

Our server was a Windows XP Pro system based on Asus’s P5WD2 Premium motherboard with a Pentium 4 3.4GHz Extreme Edition (800MHz front-side bus, Hyper-Threading enabled) and PCI Express-attached Gigabit Ethernet. A crossover CAT6 cable was used to connect the server to each system.

The boards were tested with jumbo frames disabled.

The 690G doesn’t have an on-chip Ethernet controller, so our MSI motherboard employs Realtek’s RTL8111B GigE controller for networking. That chip is every bit as fast as the Gigabit controller integrated in the GeForce 6150 SE. However, the Nvidia Gigabit controller does offer lower CPU utilization.

PCI Express performance
We used the same ntttcp test methods from our Ethernet tests to examine PCI Express throughput using a Marvell 88E8052-based PCI Express x1 Gigabit Ethernet card.

No problems here. Both chipsets offer comparable throughput with our PCIe Gigabit Ethernet card, although the GeForce 6150 SE’s CPU utilization is a little lower.

PCI performance
To test PCI performance, we used the same ntttcp test methods and a PCI-based VIA Velocity GigE NIC.

PCI performance problems plagued older SB400 series south bridge chips, but AMD seems to have things under control with the SB600. The 690G’s throughput isn’t quite as high as that of the GeForce 6150 SE here, but it’s pretty close. AMD does come out on top when we look at CPU utilization, likely because its lower throughput means it’s actually pushing fewer bits.

Power consumption
We measured system power consumption, sans monitor and speakers, at the wall outlet using a Watts Up power meter. Power consumption was measured at idle and under a load consisting of a multi-threaded Cinebench 9.5 render running in parallel with the “rthdribl” high dynamic range lighting demo.

Our 690G-based motherboard consumes slightly less power than the one based on the GeForce 6150 SE at idle, and slightly more under load. Power consumption hasn’t been a strong suit of Nvidia chipsets, but the new single-chip MCP61 seems to have reined things in, offering little advantage to AMD.

AMD has put together a very competent integrated graphics chipset in the 690G—one that will no doubt help the company push a holistic platform that consolidates processing, core logic, and graphics. With its low power consumption, Avivo video processing engine, and support for HDMI output with HDCP, the 690G seems particularly well suited to mainstream desktop systems tasked with multimedia playback. The 690G could also serve well in a dedicated home theater PC for the living room. We do wish that CPU utilization with high-definition video playback were lower, though.

One might also be tempted to suggest the 690G to the casual gaming crowd. After all, it does have a fancy Radeon X1250 graphics core derived from a one-time mid-range GPU. Despite being faster than either of Nvidia’s GeForce 6150 IGPs, though, it’s still woefully underpowered for recent titles. If you’re willing to turn Oblivion’s detail levels all the way down, then yes, you can play the game. But Oblivion at its lowest detail setting isn’t really Oblivion. Even Battlefield 2, which was released more than a year and a half ago, looks just awful with the in-game detail set to levels that are playable with the 690G’s integrated graphics.

Perhaps I was expecting too much from the 690G’s integrated graphics core, but I’m left feeling a little underwhelmed by the chipset as a whole. This is a very capable integrated graphics platform for Socket AM2, but apart from HDMI output, free Avivo video processing, and a graphics core that’s still too slow for real gaming, AMD hasn’t done much to distance the 690G from a GeForce 6150/nForce 430 chipset that was introduced a year and a half ago. Motherboards based on the 690G won’t be available for a few weeks, as well—round about the time Nvidia says it will reveal a new single-chip GeForce 7050 IGP that supports PureVideo, HDMI, and HDCP at the CeBIT show.

AMD has laid its cards on the table with the 690G, but this chipset isn’t a royal flush. Whether Nvidia can respond with something that beats it remains to be seen, but we won’t have to wait long to find out.