With AMD seemingly content to stay out of the core logic game, its chipset partners were left to battle each other for market share. Then, on July 24 of last year, AMD announced its intent to acquire ATI. That changed everything.
In acquiring ATI, AMD gained control over not only one of its more aggressive chipset partners, but also one of the big two in PC graphics. With that asset now in its pocket, it was only a matter of time before AMD rolled out a new chipset with integrated graphics for Athlon processors. Today that chipset arrives as the AMD 690G, which packs a familiar SB600 south bridge paired with a new Radeon X1250 graphics core with four DirectX 9-class pixel pipelines. How well does the 690G stack up against the competition? Read on to find out.
Comparing the competition
Although both SiS and VIA make integrated graphics chipsets for Athlon processors, neither offers much in the way of pixel-pushing horsepower. The Radeon-bred graphics core in the 690G has superior capabilities and the backing of ATI Catalyst drivers with solid support for a variety of PC games. Thus, AMD's most direct competition is Nvidia's GeForce 6100 series chipsets and their associated nForce 400 south bridge I/O chips. Nvidia's offerings on this front are, er, complicated, so I've included two of them here, for reasons I'll explain later.
Here's how our three contenders measure up in terms of north bridge features.
| | 690G | GeForce 6150 SE | GeForce 6150 |
| --- | --- | --- | --- |
| CPU interconnect | 16-bit/1GHz HT | 16-bit/1GHz HT | 16-bit/1GHz HT |
| PCI Express lanes | 24* | 18 | 17 |
| Pixel shader processors | 4 | 2 | 2 |
| Textures per clock | 4 | 2 | 2 |
| Shader model support | 2.0b | 3.0 | 3.0 |
| Core clock speed | 400MHz | 425MHz | 475MHz |
| Chipset interconnect | PCIe x4 | NA | HyperTransport |
| Peak interconnect bandwidth | 2GB/s | NA | 8GB/s |
| Video outputs | DVI, HDMI, VGA | DVI*, VGA | DVI, VGA |
All three chips offer a 16-bit, 1GHz HyperTransport processor link, but the AMD 690G definitely has an edge when it comes to PCI Express lanes. Four of those lanes are dedicated to the chipset interconnect, though, so they can't be used to power onboard peripherals or PCIe slots. A four-lane PCIe interconnect gives the 690G 2GB/s of bandwidth, which looks a little pokey next to the GeForce 6150's 8GB/s HyperTransport interconnect. However, even Intel's high-end desktop chipsets are perfectly happy using a 2GB/s DMI interconnect, so it's unlikely the 690G will be starved for bandwidth.
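Those interconnect figures check out with a little back-of-envelope arithmetic. The sketch below assumes first-generation PCI Express signaling (2.5GT/s per lane with 8b/10b encoding, so 250MB/s per lane per direction) and a 16-bit, 1GHz double-pumped HyperTransport link:

```python
# Back-of-envelope interconnect bandwidth, matching the figures in the table above.
# Assumes PCIe 1.x signaling (2.5 GT/s per lane, 8b/10b encoding) and a
# 16-bit, 1 GHz double-data-rate HyperTransport link.

def pcie_bandwidth_gbs(lanes: int) -> float:
    """Aggregate PCIe 1.x bandwidth in GB/s, both directions combined."""
    per_lane_mbs = 2500 * 8 / 10 / 8        # 2.5 GT/s, 8b/10b -> 250 MB/s per direction
    return lanes * per_lane_mbs * 2 / 1000  # two directions, MB/s -> GB/s

def ht_bandwidth_gbs(width_bits: int, clock_ghz: float) -> float:
    """Aggregate HyperTransport bandwidth in GB/s (DDR, both directions)."""
    per_direction = width_bits / 8 * clock_ghz * 2  # double data rate
    return per_direction * 2                        # two unidirectional links

print(pcie_bandwidth_gbs(4))    # 690G's four-lane chipset interconnect -> 2.0 GB/s
print(ht_bandwidth_gbs(16, 1))  # GeForce 6150's HyperTransport link -> 8.0 GB/s
```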
Of course, the real jewels of these chipsets are their integrated graphics processors. The 690G's Radeon X1250 graphics core is derived from the Radeon X700, that is, from ATI's previous generation of desktop GPU technology, before the Radeon X1000 series. It sports four pixel shader processors that meet DirectX 9's Shader Model 2.0b spec. In the world of specifications one-upsmanship, Shader Model 2.0b support puts the Radeon X1250 slightly behind the GeForce 6100 family, which supports Shader Model 3.0. However, the differences between those two specifications have to do with esoteric things like flow control and program length in pixel shader programs, things that will almost certainly never become an issue for either of these integrated graphics cores, given their basic performance levels.
In fact, the Radeon X1250 should be faster than the GeForce 6100 family, even with its slightly slower 400MHz core clock speed, thanks to its four full pixel pipes. The X1250 has four pixel shader units, can apply one texture per clock in each pixel pipe, and can write one pixel per clock to the frame buffer.
By contrast, the GeForce 6100 series integrated GPUs decouple various stages of the traditional pixel pipeline, so their resources are configured somewhat differently. The key difference here is that they have just two pixel shader units running at either 475MHz (in the GeForce 6150) or 425MHz (in the 6150 SE). Those pixel shader units also handle texturing duties, so the 6100 family can apply a maximum of two textures per clock. The 6100-family IGPs have one ROP, as well, which considerably limits their ability to write pixels to the screen. Interestingly, though, the GeForce 6100 series graphics cores do enjoy an on-chip vertex shader, while the Radeon X1250 leans on the CPU to handle vertex processing.
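Those pipeline differences add up to very different theoretical peak fill rates. The rough sketch below uses only the clock speeds and per-clock figures quoted above, and ignores memory bandwidth, which tends to limit integrated graphics long before these peaks matter:

```python
# Theoretical peak pixel fill rate from the specs quoted above:
# pixels written per clock (ROPs) multiplied by core clock speed.
# Real-world throughput is usually bound by shared system memory bandwidth.

def peak_fill_mpixels(pixels_per_clock: int, clock_mhz: int) -> int:
    """Peak fill rate in millions of pixels per second."""
    return pixels_per_clock * clock_mhz

x1250_fill    = peak_fill_mpixels(4, 400)  # four pixels/clock at 400 MHz
gf6150_fill   = peak_fill_mpixels(1, 475)  # one ROP at 475 MHz
gf6150se_fill = peak_fill_mpixels(1, 425)  # one ROP at 425 MHz

print(x1250_fill, gf6150_fill, gf6150se_fill)  # 1600 475 425
```

On paper, then, the X1250's four full pipes give it more than three times the pixel-writing throughput of either GeForce, despite its lower clock speed.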
Before the pixels processed by an integrated graphics core can be displayed on a screen, they have to make their way through a video output. The 690G offers plenty of options on this front, with native support for VGA, DVI, and HDMI output. DVI and HDMI output are independent, so you can run both at the same time. AMD also throws in a TV encoder for those who aren't lucky enough to be running a high definition set. HDMI output isn't supported by the GeForce 6100 series, and you only get a TV encoder and DVI output with the GeForce 6150. The GeForce 6150 SE is capable of powering either a TV encoder or a DVI output, but only through auxiliary display chips connected to its sDVO (Serial Digital Video Output) interface.
Given its wealth of video output options, it's only fitting that the 690G is also equipped with an Avivo video processing engine. Avivo handles tasks like video scaling, decode acceleration, 3:2 pulldown detection, and other widgets that enhance video playback quality. Nvidia calls its video processing engine PureVideo; it offers many of the same features Avivo does. Only AMD's graphics drivers are needed to enable the 690G's Avivo capabilities, but you actually have to purchase PureVideo decoder software from Nvidia or supported third party apps like WinDVD, PowerDVD, or Nero ShowTime to unlock the GeForce 6150's video processing engine. PureVideo isn't supported on the GeForce 6150 SE, either.
Before we go on, I should explain why we've included two different GeForce 6100-series chipsets here. Nvidia launched the 6100 family a year and a half ago, so it's getting a little long in the tooth. This past summer, Nvidia updated the 6100 line with a new single-chip solution dubbed the MCP61, and that's where things get a little confusing. The MCP61 is essentially a GeForce 6100 north bridge and nForce 430 south bridge crammed into a single chip. But it's not quite that simple. You see, Nvidia also integrated hardware Z-culling into the graphics core of this new chip. When combined with the latest 93.71 ForceWare graphics drivers, this capability magically transforms the GeForce 6100 graphics core into what Nvidia is calling the GeForce 6150 SE.
Phew. Got it?
So the GeForce 6150 SE is Nvidia's newest competitor for the AMD 690G, but the GeForce 6150's video processing capabilities also put it in the running.