Intel’s 915G and 925X Express chipsets

YOU MAY BE aware that Intel is introducing a new PC expansion spec called PCI Express, designed to replace the not-so-gracefully-aging PCI bus and its prodigal son, AGP. This move has been planned for some time now and needed for even longer. PCI is older than the hills and slower than Jessica Simpson counting her change. What you may not know is that Intel was not content just to replace the PCI bus. Instead, the company has undertaken to freshen up nearly the entire PC platform, with new specifications for everything from memory to storage, graphics, power, enclosures, cooling, processor sockets, and audio.

The intent of these wide, sweeping changes is clear: to inflict as much pain on the industry as possible in the shortest time window.

Err, sorry.

What I meant to say was that Intel clearly intends to clean up the last vestiges of the circa-1990s PC platform at once, weeding out weaknesses and pulling open bottlenecks. The marketing spin on all of this says it’s about enhancing the user experience and making the PC a better citizen in the “digital home,” where networked PCs replace VCRs and other such media devices. For once, I’m somewhat persuaded by the spin, because many of these changes should make computing smoother and easier, better suited to the playback of high-definition audio and video. However, this major overhaul of the PC isn’t just about making a better TiVo replacement. There’s much more to it than that.

We’ve tested the whole shebang, from the Intel 915 and 925X Express chipsets to new processors including the Pentium 4 model 560 at 3.6GHz. We’ve tested PCI Express graphics cards from ATI and NVIDIA, and we’ve benchmarked Maxtor’s impressive new MaXLine III Serial ATA hard drives with support for Native Command Queuing. Read on to learn more about what each of these changes means for you and to see how this first wave of next-generation PC hardware performs.

The heart of the matter
At the heart of Intel’s PC overhaul is that too-often overlooked component, the core logic chipset. This pair of chips acts as the traffic cop inside a personal computer, allowing all the devices to communicate and function together. Most of the major features you see on a checklist from Dell or HP are conferred by a system’s chipset, as well. Today, Intel is introducing a lineup of three new 900-series Express chipsets: the 925X, 915P, and 915G. I’ll give you a brief overview of this lineup’s new features, and then we’ll look at the new stuff in more depth. If you’re confused by some of the terminology, hang on, because we’ll be explaining much of it on the following pages.


A block diagram of the 925X Express chipset. Source: Intel.

The three 900-series chipsets have a lot in common, as one might expect. Their north bridge chips—or memory controller hubs (MCH), as Intel likes to call them—include a PCI Express X16 interface for graphics, replacing AGP, and a new memory controller capable of working with DDR2 memory.

As in the past, Intel has enabled and disabled features on its MCH chips to create three distinct products. The 925X is the high-end chip; it will have faster internal timings in its memory controller and support for ECC memory to enhance data integrity for workstations. The 915P is Intel’s mainstream chip; unlike the 925X, it retains support for DDR memory. And the 915G is essentially the 915P plus built-in graphics.

All three of these north bridge chips talk to the other chip in the set, the south bridge—or I/O Controller Hub (ICH) in Intel-speak—via a new, PCI Express-like link dubbed DMI (Direct Media Interface), which has a data rate of 1GB/s in each direction for a total of 2GB/s. Until now, Intel’s chipsets had been saddled with the Accelerated Hub interconnect running at 266MB/s, so this change is welcome.

The change is also necessary, because the I/O-oriented south bridge will now be doing lots more input and output. There are four flavors of the new ICH6 chip, and all of them share some common features, including four Serial ATA 150 ports, one ATA/100 port, eight USB 2.0 ports, Gigabit Ethernet, support for Intel’s new High Definition Audio, and four lanes of PCI Express expansion capacity. That features list represents an upgrade in almost every category save one: it’s down one ATA/100 port. Intel’s obviously ready to move the market away from ATA hard drives. Still, the ICH6 series retains support for all sorts of legacy I/O standards, including up to six PCI slots, just in case you’re a glutton for punishment.

As in the past, the ICH models with support for disk arrays, or RAID, get an “R” at the end of their names. Also, some models of the ICH will now come with 802.11g wireless networking capability; those models get a “W” attached to their names. So at the end of the day, you have four variants of the new ICH: the vanilla ICH6, ICH6R, ICH6W, and the super-deluxe ICH6RW. Make mine an RW, please.

That’s the 10,000-foot overview of the 915 and 925X Express series chipsets that bring all these new features to the PC for the first time. Now let’s talk about some of the important features in more detail.

PCI Express arrives
I wasn’t kidding when I said ye olde PCI bus is slow. In its most common implementation in desktop PCs, at 32 bits and 33MHz, the PCI bus has a theoretical peak bandwidth of 133MB/s, which is shared between all devices on the bus. To give you some perspective, a single Serial ATA hard drive interface runs at 150MB/s, and Gigabit Ethernet runs at roughly 125MB/s. Asking the PCI bus to host a SATA RAID array and a Gigabit Ethernet controller at once is like asking Alec Baldwin to read through F.A. Hayek’s The Road to Serfdom—you could practically watch his lips move.

Not only does PCI lack bandwidth, but its shared bus architecture requires arbitration between devices that want to transfer data and involves contention between upstream and downstream communications. PCI’s limitations have forced chipset makers to integrate ever more functionality into their chipsets, as Intel did when it hung a Gigabit Ethernet interface off the north bridge in its previous-generation 875P chipset.

At 33MHz and 32 bits, ye olde PCI is decidedly slow and wide. PCI Express, meanwhile, is the epitome of the new thinking in internal PC communications links, the “fast and narrow” approach, a more serialized way of transmitting data with lower pin counts and higher signaling rates. Despite the dorky name, PCI Express doesn’t actually share all that much with PCI, save some memory addressing and device initialization similarities so drivers and operating systems don’t need major plumbing changes to work with the new standard.

In fact, PCI Express is downright network-like on several levels. On the lowest, physical layer, PCI Express uses pairs of dedicated, unidirectional links to transfer data between devices. A pair of links in a PCI-E connection is known as a “lane,” and each lane offers 250MB/s of bandwidth in each direction, upstream and downstream. Because PCI Express lanes are point-to-point affairs, there are no worries about shared bandwidth, and because the lanes are bidirectional, there’s no contention between sending and receiving data.

The slowest possible PCI Express configuration is a PCI Express X1 slot, where a device gets 250MB/s of bandwidth in each direction, or 500MB/s in full duplex. However, like NIC teaming in an Ethernet network, PCI Express lanes can be teamed up to deliver more bandwidth between devices. For graphics, sixteen PCI Express lanes will connect a graphics card to the rest of the system for a total bandwidth of 8GB/s, full duplex. That’s a whopping leap over the current, PCI-derived standard for graphics, AGP 8X, which tops out at 2.1GB/s.
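
If you want to check the lane math, it’s simple enough to sketch out. Here’s a quick back-of-the-envelope calculation in Python, using only the figures quoted above (purely illustrative, not anything from Intel’s docs):

# Back-of-the-envelope PCI Express bandwidth math, using the per-lane figure
# quoted above: 250MB/s in each direction, per lane.
LANE_BW_MB_S = 250

def pcie_bandwidth(lanes):
    """Return (one-way, full-duplex) bandwidth in MB/s for a given link width."""
    one_way = lanes * LANE_BW_MB_S
    return one_way, one_way * 2

for width in (1, 4, 16):
    one_way, duplex = pcie_bandwidth(width)
    print(f"PCI-E X{width:<2}: {one_way:>4} MB/s each way, {duplex:>4} MB/s full duplex")

# For comparison: the shared 32-bit/33MHz PCI bus and AGP 8X.
pci_shared = 4 * 33.33            # 4 bytes per transfer * ~33MHz = ~133MB/s, shared
print(f"Legacy PCI: ~{pci_shared:.0f} MB/s, shared among all devices")
print("AGP 8X    : ~2100 MB/s")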

The similarities between PCI-E and a network don’t stop at the physical layer, either. PCI Express also employs a packet-based protocol for data transmission, and it uses packet header information to reserve bandwidth for delay-sensitive data streams with eight different traffic classes. These facilities should make PCI-E ideal for more than just dedicated connections between devices. PCI Express should become a standard for internal PC communications, just as AMD’s HyperTransport is now.


The PCI Express X16 (top) and X1 (bottom) slots sandwich
a pair of legacy PCI slots on Intel’s D915GUX motherboard

The PCI-E physical layer spec allows for X1, X2, X4, X8, X12, X16, and X32 lane widths, but the initial connector specs call only for X1, X4, X8, and X16 slots. X4 and X8 slots may make appearances in servers soon, but for desktop systems, expect to see X1 slots for expansion and X16 slots for graphics.

Obviously, PCI Express will bring more bandwidth to the PC platform, but more importantly, it establishes a new foundation for PC expansion standards. PCI on desktop PCs hasn’t changed radically since its inception, but PCI Express has the engineering headroom and a practical set of options for expansion, when needed.

DDR2 memory debuts
Intel’s engineers have given the 915 and 925X Express memory controllers the ability to work with memory conforming to the new DDR2 standard. Presently, the original DDR memory type generally tops out at 400MHz, but DDR2 memory starts there and goes up. The first round of DDR2 memory runs as fast as 533MHz, but DDR2 is expected to climb to 667 and 800MHz in the future.

DDR2 memory has been tweaked in various ways to allow for higher clock speeds and, one would hope, eventually more performance than DDR memory. Among other changes, DDR2 memory chips include on-die termination, higher densities, longer burst lengths, a different signaling scheme, and lower operating voltages (1.8V) than original-recipe DDR memory. Many of these changes could cause higher access latencies, but those should be offset by higher clock frequencies.
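
Converting those CAS latencies from clock cycles into nanoseconds shows the trade-off at work. This little sketch uses the module timings from our test configs later in the article (CAS 2 for DDR400, CAS 4 for DDR2-533); it’s simple arithmetic, not a measurement:

# Convert CAS latency from clock cycles to nanoseconds. DDR400 runs its I/O bus
# at 200MHz, DDR2-533 at 266MHz; the CAS values match our test configurations.
def cas_latency_ns(cas_cycles, bus_mhz):
    return cas_cycles / bus_mhz * 1000   # cycles divided by cycles-per-microsecond

print(f"DDR400 at CL2  : {cas_latency_ns(2, 200):.1f} ns")   # 10.0 ns
print(f"DDR2-533 at CL4: {cas_latency_ns(4, 266):.1f} ns")   # ~15.0 ns
# The DDR2 module gives up some absolute latency, but the higher clock rate
# buys more peak bandwidth, which is the trade-off described above.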


The 240-pin DDR2 DIMM (top) is notched in a different location
than the 184-pin DDR module (bottom)

The DDR2 spec also requires fine-pitch ball-grid array (FPBGA) packaging for DDR2 memory chips, so the TSOP chip package common on many DDR modules (save for our DIMM pictured above) won’t be present.

DDR2 memory is not backward compatible with DDR, so you’ll have to chuck your DIMMs if you’re looking to upgrade. DDR2 modules have 240 pins instead of 184, and they have a different notch placement to prevent confusion or inadvertent calamity.

At 533MHz, DDR2 modules will have a peak theoretical bandwidth capacity of 4.3GB/s. Since the 915 and 925X chipsets have dual-channel DDR2 memory controllers, that’s a total peak of 8.6GB/s. However, the Pentium 4’s front-side bus currently tops out at 6.4GB/s, so the CPU won’t get to enjoy the full benefits of DDR2 memory. More likely beneficiaries include PCI Express X16 graphics cards and the built-in graphics core inside the 915G Express chipset.
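
Those peak numbers fall straight out of the bus width and data rate. Here’s the arithmetic, for the curious (illustrative only):

# Peak theoretical bandwidth: a 64-bit (8-byte) DIMM interface multiplied by
# the effective data rate, in decimal GB/s.
BUS_WIDTH_BYTES = 8

def peak_gb_per_s(effective_mhz, channels=1):
    return BUS_WIDTH_BYTES * effective_mhz * channels / 1000

print(f"DDR2-533, single channel: {peak_gb_per_s(533):.1f} GB/s")      # ~4.3 GB/s
print(f"DDR2-533, dual channel  : {peak_gb_per_s(533, 2):.1f} GB/s")   # ~8.5 GB/s
print(f"Pentium 4 800MHz FSB    : {peak_gb_per_s(800):.1f} GB/s")      # ~6.4 GB/s
# The 8.6GB/s figure quoted above simply doubles the rounded 4.3GB/s number.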

More power requires… more power
Although the new motherboards Intel supplied us for review fit the ATX form factor, they have power connectors similar to those in the new BTX chassis standard. The 20-pin main ATX power connector has been upgraded to an EATX-style 24-pin connector like those found on server boards. Like some server boards, the Intel mobos were content to run off a 20-pin power connector if needed. Also, next to the four-pin ATX 12V connector on each board is a four-pin Molex connector—old-school hard drive style—for auxiliary power. Obviously, Intel is making provisions for its 100W Prescott Pentium 4 processors to get enough juice.


The Intel D915GUX motherboard’s DDR2 DIMM slots and 24-pin power connector

Fortunately, Intel is also taking steps to make sure those monster GeForce 6800 Ultra cards will get enough power. The power supply Intel shipped with the review equipment came with a distinctive new six-pin power connector, and NVIDIA’s new PCI Express-ready GeForce 6800GT came with a port for just such a plug.


The PCI Express version of the GeForce 6800GT uses a new six-pin power connector

This plug can replace the dual Molex connectors NVIDIA used on its GeForce 6800 Ultra card. For those of you who just bought a new 800W power supply, adapters from dual Molex connectors to the new six-pin plug should be available.

Serial ATA grows up
These new chipsets bring along with them a new Serial ATA host controller interface from Intel known as AHCI (Advanced Host Controller Interface). This specification adds some new capabilities to Serial ATA, including device hot plugging and a form of tagged command queuing officially known as Native Command Queuing. Both of these features are similar to those provided by the SCSI standard prevalent in the server and workstation world, but they’re now coming to the everyday desktop PC.

The 915 and 925X chipsets also support the ATAPI standard on their four Serial ATA ports, so they should be ready to host SATA optical drives.

The biggest news here is Native Command Queuing. NCQ puts some smarts in the hard disk drive’s control logic, allowing it to reorder the execution of requests in order to optimize for what’s happening with the hard drive mechanism itself—where the head is seeking across the drive and where the platter is spinning under the head. By queuing up multiple commands and executing them out of order, the hard drive may be able to grab data more efficiently than it could by simply executing commands one after another, minimizing the near-eternal (in computer time) delays caused by seek times and rotational latencies.
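
To make the reordering idea concrete, here’s a toy model in Python. It weighs only seek distance, while a real drive also accounts for rotational position as described above, and the request queue is made up purely for illustration:

# Toy model of command queuing: given a queue of outstanding requests (just
# logical block addresses here), compare total head travel for strict
# first-come, first-served ordering versus a nearest-request-first reordering.
def total_travel(start, order):
    travel, pos = 0, start
    for lba in order:
        travel += abs(lba - pos)
        pos = lba
    return travel

def reorder_nearest_first(start, pending):
    """Always service whichever pending request is closest to the head."""
    order, pos, pending = [], start, list(pending)
    while pending:
        nearest = min(pending, key=lambda lba: abs(lba - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

queue = [90000, 1000, 88000, 2500, 91500]   # requests in arrival order
print("FIFO head travel:     ", total_travel(0, queue))
print("Reordered head travel:", total_travel(0, reorder_nearest_first(0, queue)))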

The NCQ spec looks fairly robust, with all the sorts of provisions necessary to make such a feature work. Drives with NCQ can initiate DMA transfers through the host controller themselves, and they can aggregate interrupts, so only one interrupt is generated when multiple commands complete close together. NCQ has the potential to help performance substantially during periods of intensive disk activity, when multiple applications are making requests for data simultaneously. We’ll test that theory shortly.

To complement these SCSI-like features, the ICH6R south bridge has RAID support for its four SATA ports built in. Intel’s chipset RAID will do RAID level 1, or mirroring, and RAID 0, striping, but not both together in a nested array. RAID 0+1, RAID 10, and RAID 5 are not mentioned in Intel’s docs, unfortunately. However, the RAID controller can support a pair of independent, two-drive RAID arrays. The ICH6R also now supports the designation of a hot spare drive and automatic array rebuilds for RAID 1 arrays.

More impressive still is Intel’s Matrix RAID technology. Matrix RAID is the RAID type nearly every enthusiast has probably wanted, whether he knew it or not. This feature allows the user to create a pair of RAID arrays of different types across only two drives. Each drive can have two partitions. On each drive, partition 0 could be part of a RAID 0 array, and partition 1 could be part of a RAID 1 array. Thus, the user would get, effectively, a pair of RAID drives, one using striping for improved performance and the other using mirroring for data integrity. Put your OS and applications on the RAID 0 array for faster boot and load times, and store your critical data on the RAID 1 array so you won’t lose it if one of the drives crashes. Nifty, eh?
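
Here’s a rough sketch of that layout, just to make the partition-level arrangement concrete. This models the idea only; it’s not Intel’s on-disk metadata format or the actual driver logic:

# Rough sketch of the Matrix RAID arrangement: two drives, each with two
# partitions. Partition 0 on both drives forms a striped RAID 0 volume;
# partition 1 on both forms a mirrored RAID 1 volume.
drives = {0: {"p0": [], "p1": []},
          1: {"p0": [], "p1": []}}

def write_striped(blocks):
    # RAID 0: alternate blocks across the two drives' first partitions.
    for i, block in enumerate(blocks):
        drives[i % 2]["p0"].append(block)

def write_mirrored(blocks):
    # RAID 1: duplicate every block onto both drives' second partitions.
    for block in blocks:
        drives[0]["p1"].append(block)
        drives[1]["p1"].append(block)

write_striped(["os-0", "os-1", "os-2", "os-3"])   # OS and apps: speed
write_mirrored(["data-0", "data-1"])              # critical data: redundancy

for d, parts in drives.items():
    print(f"drive {d}: RAID 0 slice {parts['p0']}, RAID 1 slice {parts['p1']}")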

Audio gets more definition
In the annals of product naming, Intel’s new High Definition Audio distinguishes itself with the most vanilla name possible for the feature it represents. Still, it’s not confusing and involves no torturous capitalization tricks, so I’d best not complain too much.

High Definition Audio provides—wait for it—high-definition audio on the PC, built right into the chipset. This new specification aims to replace the current AC97 audio spec. HD Audio allows for up to eight channels of digital audio at up to 24 bits of precision and sample rates as high as 192KHz. That’s enough fidelity for PCs and PC-based devices to reproduce all of the major consumer electronics audio standards, including Dolby Digital Surround EX, DTS, and THX, given the proper software support.
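
For a sense of how much data that ceiling represents, here’s the raw arithmetic. The AC97 figures are a typical six-channel, 20-bit, 48KHz configuration that I’ve included for scale; they’re my assumption, not something from Intel’s materials:

# Raw data rate implied by HD Audio's ceiling: 8 channels of 24-bit samples
# at a 192KHz sample rate.
channels, bits, sample_rate = 8, 24, 192000
hd_bits_per_sec = channels * bits * sample_rate
print(f"HD Audio ceiling: {hd_bits_per_sec / 1e6:.1f} Mbit/s "
      f"({hd_bits_per_sec / 8 / 1e6:.1f} MB/s)")            # ~36.9 Mbit/s, ~4.6 MB/s

# For scale: a typical AC97 configuration (assumed: 6 channels, 20 bits, 48KHz).
ac97_bits_per_sec = 6 * 20 * 48000
print(f"Typical AC97:     {ac97_bits_per_sec / 1e6:.1f} Mbit/s")   # ~5.8 Mbit/s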

HD Audio also improves over AC97 from an I/O standpoint, with support for dynamic bandwidth allocation, flexible use of DMA streams for audio input or output, and a clock signal that’s generated on the south bridge chip itself, not on the codec chip (or chips).

Of course, standards for digital audio on the PC only sound as good as their implementations, and the Intel implementation on its D925XCV motherboard is fairly representative of what most motherboard makers seem to be doing, in several respects. Intel has chosen a Realtek ALC880 codec chip, which is the HD Audio successor to Realtek’s wildly popular ALC650-series codecs, found in what seems like every motherboard we’ve reviewed in the past year or so.

For output, the ALC880 can do digital-to-analog conversion for eight channels of audio at up to 24 bits and 192KHz, believe it or not, with a claimed signal-to-noise ratio of 100dB. Its S/PDIF output is limited to 24 bits and 96KHz. For recording, the ALC880 has three stereo analog-to-digital converters that peak at 24 bits and 96KHz; the claimed S/N ratio is 85dB.

In other words, the first implementations of HD Audio do indeed seem to deliver high-definition audio, at least in terms of precision and sample rates. This ain’t no SoundBlaster Audigy card, claiming 24 bits when the DAC can only do 16. Whether or not the ALC880, situated on a PC motherboard, really produces good sound is another story.
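
One way to put that claimed 100dB signal-to-noise ratio in context: the theoretical ceiling set by quantization noise is roughly 6.02 × N + 1.76 dB for N-bit samples. The formula is standard; applying it to the ALC880’s numbers is my own back-of-the-envelope comparison:

# Ideal quantization-limited signal-to-noise ratio for N-bit samples.
def ideal_snr_db(bits):
    return 6.02 * bits + 1.76

print(f"16-bit ideal SNR: {ideal_snr_db(16):.0f} dB")   # ~98 dB
print(f"24-bit ideal SNR: {ideal_snr_db(24):.0f} dB")   # ~146 dB
# The ALC880 claims 100dB on output, so its analog performance sits much
# closer to what good 16-bit gear achieves than to the 24-bit ceiling.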

I haven’t had time to conduct extensive listening tests to get a good subjective take on HD Audio, but I did listen to some MP3s on it with a decent pair of speakers, and I can at least say this: it doesn’t totally suck. That is, of course, more than one can say for an awful lot of built-in motherboard audio these days, so that’s something. But you probably won’t be prying my VIA Envy24HT-based PCI card with fancy DACs away from me any time soon.

The really good news here is that Intel has established an excellent new baseline for PC audio, much better than the AC97 stuff we’ve seen to date.

Integrated graphics gets faster, less Extreme
Apparently, someone in Intel marketing figured out that calling its uber-high-end Pentium 4 chip the Extreme Edition probably didn’t jibe with calling its integrated chipset video Intel Extreme Graphics. Accordingly, Intel’s new integrated graphics core has been given the more modest name of Graphics Media Accelerator 900.

Fortunately, the more modest name is paired up with a significantly beefed-up graphics core. The GMA 900 features four pixel pipelines running at 333MHz, as opposed to the single pipe of its predecessor in the 845G and 865G chipsets. Its peak fill rate of 1.3 Gtexels/s matches that of the GeForce FX 5200 Ultra. The extra memory bandwidth the 915G chipset gets from DDR2 533MHz memory should help boost performance, as well.

Read further down the spec sheet, and the GMA 900 starts to sound formidable, at least in the world of integrated graphics. It supports DirectX 9, OpenGL 1.4, and Pixel Shader 2.0, at least on paper. Intel chooses to offload vertex shader work to the CPU, but the Prescott processor includes some SSE3 instructions specifically designed to accelerate vertex shader calculations.

However, Intel only claims the GMA 900 has 1.5 times the performance of the 865G’s Extreme Graphics, so don’t expect miracles. I think I saw some pixel shader effects on it when running UT2004 benchmarks, but the GMA 900 crashed out of Far Cry and refused to run the excellent DX9 “rthdribl” demo. The GMA 900 may give ATI’s Radeon IGP chipsets a run for their money, but Intel needs to work on its drivers a little first. And gamers can forget about it, regardless.

Fortunately, though, Intel has bolstered the GMA 900 with support for HDTV, including 1080i and 720p resolutions and component outputs, in addition to VGA and DVI. In fact, our Intel 915G test board arrived with a PCI-E X16 riser card sporting a DVI output.

The LGA775 package and socket
Intel is launching a range of new processors for its 915 and 925X Express chipsets, all of which come in a new “land grid array” type package that has, oddly enough, no pins. It simply has 775 connector pads on its underside.


The Pentium 4 in LGA775 package (top)


The Pentium 4 in LGA775 package (bottom)

The LGA775 processors fit into a funky motherboard socket that has pins protruding from it. Like so:


The new socket for LGA775 processors


The socket with LGA775 processor clamped inside

This arrangement makes the processor vastly less susceptible to bent or broken pins. The question is whether it makes motherboards more prone to the same things.

So far, that’s not been my experience at all. Having dealt with my share of bent pins and damaged CPUs (tip: never drop an Athlon 64 while trying to shoehorn it into a small form factor box), I have to say that I’ve felt more comfortable dealing with the LGA775 stuff during testing. The processors are remarkably sturdy now, of course. You could play Tiddlywinks with the darned things. And CPUs generally cost quite a bit more than motherboards, anyhow.

But the motherboards don’t seem too terribly delicate. The pins are spaced closely enough together that they form a pretty solid surface, and the socket mechanism itself tends to protect them. I’d rather have the pins in there than out on the CPU. Still, I’m curious to see how these sockets wear over time, and how well some of the motherboard makers manage to handle their return policies now that it’s their turn to deal with bent pins.

Intel’s new models and numbers
In addition to everything else it’s introducing today, Intel is unleashing a new range of processors in the LGA775 package, including one truly new clock speed, the Pentium 4 “Prescott” processor at 3.6GHz. In fact, aside from the Pentium 4 3.4GHz Extreme Edition for LGA775, I believe all of the CPUs Intel will be supplying in the new package are based on the 90nm Prescott core.

As expected, Intel has assigned model numbers to these new processors, taking the emphasis off clock speeds, as AMD has already done. The new 3.6GHz version of the Prescott Pentium 4 gets a model number of 560. Here’s a table with all the numbers.


Intel’s processor numbering plans. Source: Intel.

Although older, Northwood-based processors tend to offer better performance and dissipate less heat at a given clock speed, the Prescott will apparently be Intel’s workhorse CPU going forward.

Cooling the Pentium 4 560
To aid in dissipating the 115-plus watts of heat generated by a 3.6GHz Prescott CPU, Intel supplied an interesting new cooler with our review equipment. Check it out:


Intel’s LGA775 CPU cooler

This cooler is designed to protect the processor from damage by chopping off your freaking fingers if they get too close.

I like it, though, because the BTX-style four-pin fan header on the motherboard allows the fan to ramp its speed up and down smoothly as needed. That approach is far less noticeable than the abrupt transitions between speeds common to multi-stage cooling fans, which can be annoying.


The underside of the cooler

PCI Express graphics gets real
Both ATI and NVIDIA are getting in on the action with PCI Express video cards. We were able to test one card from each company with Intel’s new PCI Express chipsets. Let’s have a look at them.


NVIDIA’s PCI Express GeForce 6800GT

The NVIDIA card is a GeForce 6800GT based on the NV40 chip plus NVIDIA’s HSI chip, which bridges between the GPU’s AGP interface and the motherboard’s PCI Express X16 connection. NVIDIA says this chip talks to the GPU at two times the speed of AGP 8X, so PCI Express data rates should be faster than with AGP 8X, despite the bridge chip.


Abit’s Radeon X600 XT

ATI’s Radeon X600 XT is the PCI Express version of the Radeon 9600 XT. Unlike NVIDIA, ATI has chosen to re-spin its Radeon chips with built-in, native PCI Express interfaces. Unfortunately for us, the X600 XT also has a higher memory clock speed than the 9600 XT, so direct comparisons between the 9600 XT and X600 XT will be a little bit iffy.

We’ve heard endless discussions about the potential performance impact of these two companies’ approaches to PCI Express. We may be able to settle some of the dispute with the test results in the following pages.

Test notes
First, a few notes on how we labeled our graphs. This is, uhm, a bit of a complex product launch, so we tested multiple processors on multiple chipsets in order to include all the relevant info. I believe we’ve covered it all, but listen up. The graphs are labeled for both the CPU tested and the chipset involved. The common thread among the Intel chipsets is the Pentium 4 3.4GHz Extreme Edition processor, which we’ve labeled as “Pentium 4 XE 3.4GHz”. Those scores are your best tool for comparing the chipsets against one another.

The Prescott CPUs are running at different speeds here. The Pentium 4 3.4E is the Prescott on Socket 478 with the older 875P chipset. The Pentium 4 560 is the 3.6GHz Prescott in the new LGA775 package.

You’ll notice that all of the new Intel chipsets and CPUs are highlighted in the graphs for easy reading. The other stuff is in darker colors.

Keep in mind that the X600 XT is running at a higher memory clock speed than the 9600 XT. We did our CPU and chipset testing with these Radeon cards, but we tested gaming with the GeForce 6800GT cards, as well, particularly because those cards run at the same speed on both AGP and PCI-E. Note also that the scores labeled “915G/GMA” come not from an external graphics card but from the 915G chipset’s built-in graphics.

Finally, we’ve updated the BIOS on our 875P platform (the Abit IC7-G motherboard) since our last set of CPU tests, and we found we got better performance out of it. We were able to use better memory timings, as well, so the 875P is a small amount faster all around.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least twice, and the results were averaged.

Our test systems were configured like so:

Athlon 64 system (VIA K8T800 Pro)
Processors: Athlon 64 3800+ 2.4GHz, Athlon 64 FX-53 2.4GHz
Front-side bus: HT 16-bit/800MHz downstream, HT 16-bit/800MHz upstream
Motherboard: MSI MS-6702E
BIOS revision: 3.0B10
North bridge: K8T800 Pro
South bridge: VT8237
Chipset drivers: 4-in-1 v.4.51, ATA 5.1.2600.220
Memory: 1GB (2 DIMMs) Corsair TwinX XMS3200LL DDR SDRAM at 400MHz
Memory timings: CAS 2, cycle time 6, RAS-to-CAS delay 3, RAS precharge 3
Graphics 1: Radeon 9600 XT 128MB AGP with CATALYST 4.6 drivers
Graphics 2: GeForce 6800GT 256MB AGP with 61.45 drivers

Pentium 4 system (Intel 875P)
Processors: Pentium 4 3.4GHz Extreme Edition, Pentium 4 3.4’E’GHz
Front-side bus: 800MHz (200MHz quad-pumped)
Motherboard: Abit IC7-G
BIOS revision: IC7_21.B00
North bridge: 82875P MCH
South bridge: ICH5R
Chipset drivers: INF Update 5.1.1.1002
Memory: 1GB (2 DIMMs) Corsair TwinX XMS3200LL DDR SDRAM at 400MHz
Memory timings: CAS 2, cycle time 6, RAS-to-CAS delay 3, RAS precharge 2
Graphics 1: Radeon 9600 XT 128MB AGP with CATALYST 4.6 drivers
Graphics 2: GeForce 6800GT 256MB AGP with 61.45 drivers

Pentium 4 system (Intel 925X Express)
Processors: Pentium 4 3.4GHz Extreme Edition, Pentium 4 560 3.6GHz
Front-side bus: 800MHz (200MHz quad-pumped)
Motherboard: Intel D925XCV
BIOS revision: CV92510A.86A.0159
North bridge: 925X MCH
South bridge: ICH6R
Chipset drivers: INF Update 6.0.0.1014
Memory: 1GB (2 DIMMs) Micron DDR2 SDRAM at 533MHz
Memory timings: CAS 4, cycle time 12, RAS-to-CAS delay 4, RAS precharge 4
Graphics 1: Radeon X600 XT 128MB PCI-E with CATALYST 4.6 drivers
Graphics 2: GeForce 6800GT 256MB PCI-E with 61.45 drivers

Pentium 4 system (Intel 915G Express)
Processor: Pentium 4 3.4GHz Extreme Edition
Front-side bus: 800MHz (200MHz quad-pumped)
Motherboard: Intel D915GUX
BIOS revision: EV9150A.86A.2029
North bridge: 82915G MCH
South bridge: ICH6R
Chipset drivers: INF Update 6.0.0.1014
Memory: 1GB (2 DIMMs) Micron DDR2 SDRAM at 533MHz
Memory timings: CAS 4, cycle time 12, RAS-to-CAS delay 4, RAS precharge 4
Graphics 1: Radeon X600 XT 128MB PCI-E with CATALYST 4.6 drivers; integrated Graphics Media Accelerator 900 with 6.14.10.3181 drivers
Graphics 2: GeForce 6800GT 256MB PCI-E with 61.45 drivers

Common to all systems
Hard drive: Maxtor MaXLine III 250GB SATA 150
Audio: Integrated
OS: Microsoft Windows XP Professional
OS updates: Service Pack 1, DirectX 9.0b

All tests on the Intel systems were run with Hyper-Threading enabled, unless otherwise specified.

Thanks to Corsair for providing us with memory for our testing. If you’re looking to tweak out your system to the max and maybe overclock it a little, Corsair’s RAM is definitely worth considering.

The test systems’ Windows desktops were set at 1152×864 in 32-bit color at an 85Hz screen refresh rate, with the exception of the 915G with GMA graphics, which ran at 1024×768 in 32-bit color at 85Hz. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance
So how does DDR2 memory handle, and does the 925X chipset offer much performance advantage over the 915G?

Well, well. The old 875P chipset with DDR400 memory comes out on top in the memory bandwidth sweeps.

Linpack gives us a visual sense of how fast the CPU can crunch matrices of various sizes, from those stored in the on-chip cache to those that only fit into main memory. Unfortunately, in the case of the Pentium 4 Extreme Edition chips, their 2MB L3 caches can hold any data set Linpack throws at them, so we don’t get a look at main memory performance. The Pentium 4 560, though, nicely outpaces the 875P system in Linpack, showing us a hint of better performance from DDR2 memory once we get into data sizes larger than its 1024K L2 cache.

Here’s a surprise. For some reason, I half expected DDR2 memory to have higher access latencies, but that’s not the case. The Pentium 4 Extreme Edition takes longer to get to memory on the 875P chipset than on the 915G and 925X. Then again, it’s all very close, and the tables are turned with the Prescott chips, where the 875P shows slightly lower access latencies. No surprise that the AMD systems are fastest, though, in both overall memory bandwidth and access latencies. The Athlon 64’s built-in memory controller is very tough to beat.

Memory performance (continued)
Don’t let the 3D graphs scare you. The graphs are indulgent, but they’re useful, too. I’ve arranged them manually in a very rough order from worst to best, for what it’s worth. Shorter bars are generally better. I’ve also colored the data series according to how they correspond to different parts of the memory subsystem. Yellow is L1 cache, light orange is L2 cache, and orange is main memory. The red series, if present, represents L3 cache. Of course, caches sometimes overlap, so the colors are just an interesting visual guide.

Ok, so the order from highest to lowest latency is totally a rough estimate. Don’t pay much attention to the order in which the graphs are presented. Instead, look at how much higher the memory latencies are on the 900-series chipsets at the highest step sizes, 2048 and 4096. Obviously, the 915/925 memory controller behaves very differently with DDR2 than the 875P does with DDR. Although our single sample point on the last page showed decent latencies for the DDR2 chipsets, the reality is that it depends very much on how memory is being accessed. It’s hard to say which chipset is generally quicker at accessing memory, based on these results. The one thing we can say with certainty is that AMD’s integrated memory controller is very, very quick.

GeForce 6800GT gaming
We’ll get right down to gaming with the GeForce 6800GT cards. This may be a better chipset comparison than the Radeon gaming tests because of the different clock speeds between Radeon X600 XT and 9600 XT. Remember, though, that in this case the GPU is talking to PCI Express through a bridge chip.

We’ve kept the game resolution low in order to give the chipsets and PCI Express a chance to work their magic. At higher resolutions, we’d run into the limitations of the GeForce 6800GT card’s fill rate, memory bandwidth, and pixel shader performance, obscuring the impact of PCI-E and the new chipsets. I’ve tested both UT2004 and Far Cry at medium and high quality settings, to see if larger texture sizes or additional vertex and command traffic will strain the AGP bus and allow PCI Express to excel.

Unreal Tournament 2004

Far Cry

Quake III Arena

Comanche 4

Splinter Cell

In all five of the games we tested with the GeForce 6800GT, we see no performance advantage to the 915/925 chipsets and PCI Express. The 875P chipset is ever so slightly faster than the 925X most of the time, and only in Quake III Arena does the 875P separate itself by any real margin.

Radeon 9600 XT/X600 XT and GMA 900 gaming
Again, we’ve kept the resolution low with the Radeon cards. We want to highlight the impact of PCI-E and chipset differences, and we also want to minimize the impact of the higher memory clocks on the X600 XT versus the 9600 XT.

I’ve also included performance numbers for the Intel 915G chipset’s Graphics Media Accelerator 900 in the results below. Matching it up against the Radeon X600 isn’t really a fair fight, but you should get an idea of its performance. Unfortunately, Comanche 4 wouldn’t run on the GMA 900 because of the chip’s lack of hardware vertex shaders. Also, Far Cry crashed immediately on initializing the graphics engine on the GMA 900, so I don’t have any results for it.

Unreal Tournament 2004

Far Cry

Quake III Arena

Comanche 4

Splinter Cell

The 915G and 925X look a little better here. Is it because of the X600 XT’s slight memory clock speed advantage, or because of the X600 XT’s native PCI Express interface? Hard to say. But let’s try something a little different . . . .

3D image downloading
Back when we wrote an earlier article on the subject, the folks at Serious Magic devised a test for measuring the speed of downloading an image from the graphics card back into main system memory. At the time, the graphics drivers for Direct3D were abysmally slow at getting data back off the card, but both ATI and NVIDIA updated their drivers to fix the problem, and transfer speeds went up. Now, with PCI Express, transfer speeds should go up again. Let’s see what happens.

Both the ATI and NVIDIA cards show improvement in pulling data down from graphics memory via PCI Express, but the Radeon X600 XT is clearly the faster of the two. The GeForce 6800GT is only about 14MB/s faster via PCI-E than via AGP 8X, while the Radeon is twice as fast via PCI Express as it is via AGP, and significantly faster overall. This synthetic benchmark looks like a vindication of sorts for ATI’s native PCI Express implementation, but it still raises some intriguing questions. First, why are both cards’ data transfer rates so low, relatively speaking? PCI Express should provide 4GB/s of bandwidth for data transfers from the video card to main memory. Second, why is the ATI card almost exactly twice as fast at pulling data back from the video card via PCI Express as it is via AGP 8X? I don’t have any answers, but I am curious to learn why that might be.

Sphinx speech recognition
Ricky Houghton first brought us the Sphinx benchmark through his association with speech recognition efforts at Carnegie Mellon University. Sphinx is a high-quality speech recognition routine that needs the latest computer hardware to run at speeds close to real-time processing. We use two different versions, built with two different compilers, in an attempt to ensure we’re getting the best possible performance.

Prescott loves Sphinx, and Sphinx loves the Prescott. The 925X system with the Prescott 3.6GHz takes the top spot by a hair, but the Prescott 3.4GHz on the 875P offers generally the same performance.

LAME MP3 encoding
We used LAME to encode a 101MB 16-bit, 44KHz audio file into a very high-quality MP3. The exact command-line options we used were:

lame --alt-preset extreme file.wav file.mp3

Chipsets don’t tend to make a lot of difference for MP3 encoding, as you can see.

DivX video encoding
This new version of XMPEG includes a benchmark feature, so we’re reporting scores in frames per second now.

Here’s another test where, as with Sphinx, the Pentium 4 Prescott at 3.6GHz looks to push the 925X over the top. The 875P is faster with like processors.

Lightwave rendering
NewTek’s Lightwave is another popular 3D animation package that includes support for multiple processors and is highly optimized for SSE2. Lightwave can render very complex scenes with realism, as you can see from the sample scene, “A5 Concept,” below.

We’ve tested the processors with one and two rendering threads to see if Hyper-Threading helps.

Here’s another test where chipsets just don’t matter much. The Prescott at 3.6GHz manages to beat out the Athlon 64 3800+ in both cases, though.

POV-Ray rendering
POV-Ray is the granddaddy of PC ray-tracing renderers, and it’s not multithreaded in the least, because it’s designed to be a cross-platform application. POV-Ray also relies more heavily on x87 FPU instructions to do its work, and it contains only minor SIMD optimizations.

Again, chipsets aren’t much of a factor, but look at the nice performance increase for the Prescott at 3.6GHz. The Pentium 4 560 is much faster than the 3.4E. Then again, the Athlon 64 is far and away the fastest.

Cinebench 2003 rendering and shading
Cinebench is based on Maxon’s Cinema 4D modeling, rendering, and animation app. This revision of Cinebench measures performance in a number of ways, including 3D rendering, software shading, and OpenGL shading with and without hardware acceleration. Cinema 4D’s renderer is multithreaded, so it takes advantage of Hyper-Threading, as you can see in the results.

Our final rendering test is another CPU showcase; chipsets don’t matter much. The Pentium 4 chips all do well in Cinebench rendering, and the Pentium 4 560 manages to get fairly close to the Extreme Edition chips in overall performance.

ScienceMark
ScienceMark is optimized for SSE, SSE2, 3DNow! and is multithreaded, as well. In the interest of full disclosure, I should mention that Tim Wilkens, one of the originators of ScienceMark, now works at AMD. However, Tim has sought to keep ScienceMark independent by diversifying the development team and by publishing much of the source code for the benchmarks at the ScienceMark website. We are sufficiently satisfied with his efforts, and impressed with the enhancements to the 2.0 beta revision of the application, to continue using ScienceMark in our testing.

ScienceMark’s problem-solving tests show us what we’ve come to expect by now—that the performance differences between the 875P, 915G, and 925X are fairly minor, and that the 915G and 925X tend to trail the 875P slightly when chipsets do matter.

The Prescott Pentium 4s put on a show in DGEMM, displaying their improved SSE2 performance. Again, chipsets aren’t much of a factor.

Audio performance
Now we’ll dive into the south bridges, where the new ICH6 can get more of a workout. We’ve cut the contenders down to three here, because the 915G and 925X share the same south bridge chips.

RightMark3D measures CPU utilization with a number of audio tasks. We can see how well Intel’s High Definition Audio implementation works.

Overall, the 925X does pretty well in these tests, showing CPU utilization generally similar to the 875P’s AC97 audio engine. The VIA audio controller on the K8T800 Pro uses much more CPU time, consistent with what we’ve seen from it in the past.

However, there are several mitigating factors here. First, I don’t believe RightMark3D tests with any sort of high-definition audio streams that require mixing of high-bit-rate audio data, so it’s not really a torture test we’re seeing. Second, I’m always wary of CPU utilization tests that report numbers with Hyper-Threading enabled. Generally, software doesn’t seem to get a very accurate representation of CPU utilization with HT turned on, because one of the two logical CPUs will be sitting idle. Then again, I’m not sure it’s accurate not to have it turned on, either. Just something to keep in mind.

USB performance
We used HD Tach to measure USB transfer rates to a Maxtor DiamondMax D740X hard drive in a USB 2.0 drive enclosure.

The 925X achieves faster read speeds and lower CPU utilization than the 875P chipset. The K8T800 Pro with the Athlon 64 is faster yet, but at the price of much higher CPU utilization. Out of curiosity, I turned off Hyper-Threading and re-ran the test on the 925X. CPU utilization was then reported at 19.4%, still much lower than the K8T800 Pro system.

Disk I/O performance
Here we get to see whether Native Command Queuing has any measurable benefits. I used Iometer with both workstation and database access patterns to simulate real-world disk loads. Note that there are two sets of results for the 925X. One of them is without Native Command Queuing, using the built-in Microsoft disk driver in WinXP. The other uses Intel’s Application Accelerator for RAID 4.0 driver, which enables Native Command Queuing support—and not surprisingly, that’s the result labeled “NCQ” in the graphs.

In all cases, we’re using Maxtor’s MaXLine III SATA 150 drive that features a 16MB buffer. This is a pre-production drive with NCQ support.

Without NCQ, the 925X chipset is very closely comparable to the 875P and K8T800 Pro chipsets. But with both access patterns, Native Command Queuing shows higher transaction rates, lower response times, and only negligible spikes in CPU utilization (below about 3%) versus the non-NCQ configs.

Here is a feature that folks should be lining up for. Hard drives are the slowest components in a modern PC, and the 925X with Native Command Queuing delivers SCSI-like performance in a Serial ATA drive. We’ll have to test RAID with NCQ soon.

Conclusions
Those of you who were looking for earth-shaking performance differences out of Intel’s new chipsets may be disappointed, but realistically, most of the changes are not of the sort easily measurable via common benchmarks or applications. No, the 915G and 925X chipsets aren’t really faster in gaming with PCI Express graphics cards, but we saw the same thing back when AGP 8X arrived. That doesn’t mean we don’t need a better, faster path to the graphics card; it just shows that game developers tend to write their applications with the limitations of their target hardware in mind. As of right now, that target is probably a graphics card with 128MB of memory, AGP 4X, and a DirectX 8-class GPU. Depressing, but true. Applications that take advantage of PCI Express in a big way will come along sooner or later.

As for DDR2 memory, at 533MHz, it’s a little disappointing, because it isn’t really faster than DDR400. However, remember that we were testing with first-gen Micron DIMMs with relatively conservative timings. We may see better performance yet out of fancy performance DIMMs like the Kingston HyperX or Corsair XMS2 stuff. If not, well, DDR2 probably won’t be worth the price premium for a while yet. I have here an Abit motherboard based on the 915P chipset with DDR400 memory support. I’m curious to see how it performs. Boards like it may be the best choice for those looking to get into a PCI Express system right away.

Obviously, the biggest performance win of them all with the new chipsets is Serial ATA with Native Command Queuing. Its performance alone would be enough to sway me away from the older Pentium 4 platform and perhaps from an AMD-based one, as well. We’ll have to measure it more thoroughly in time, but based on what we’ve seen so far, I expect NCQ will cut boot times, among other things. It’s just the right thing to do, and now we can have it, complete with RAID 0 and 1, without paying for SCSI. We can even have data integrity and extra performance with two drives, thanks to Matrix RAID.

The 915G’s integrated graphics seems to be an improvement, but the graphics driver needs work in order to make the GMA’s claim of DirectX 9 support seem credible. For what it will be asked to do, the GMA 900 should be just fine. Just don’t ask it to run Far Cry.

The rest of the changes to the PC platform are a little harder to quantify. I need to play with High Definition Audio a little more using a proper 5.1 or 7.1 surround sound system and a high-quality audio source before I feel qualified to pronounce it a complete success, but it’s at least decent. I’m a little shocked at how capable Realtek’s ALC880 codec turned out to be. Eight channels of 24-bit, 192KHz audio is a heckuva new baseline for PC audio capabilities.

So what should we make of the whole package, including Intel’s new LGA775 Pentium 4 Prescott processors? Well, right now, the AMD64 platform seems to have the lead in terms of overall CPU performance, gaming performance, and memory performance, despite the arrival of PCI Express and DDR2 memory. Athlon 64 systems also have power consumption and thermal characteristics superior to anything based on an LGA775 Prescott. Also, the Athlon 64 unambiguously has support right now for 64-bit operating systems and applications as they become available. All in all, no small set of advantages.

AMD also seems to have a big advantage in terms of product availability at the high end of the market. As of today, I couldn’t find a single Pentium 4 3.4E listed for sale on PriceWatch, and here we are reviewing a 3.6GHz model. Craziness. Intel needs to launch silicon, not paper.

However, with the 915 and 925X Express chipsets, Intel has innovated mightily in ways that deliver a better overall user experience and a better overall PC platform. Of course, everyone will benefit from some of these changes, including Athlon 64 buyers, once competent PCI Express chipsets arrive for the Athlon 64. But Intel’s implementation of all these new technologies is here now, seems reasonably solid, and is poised to become the new PC platform standard over the next six to twelve months. Taken together, all these improvements add up to a pretty compelling argument for 915/925X-based systems, assuming they’re sufficiently available. I’m cautiously optimistic, and I’m intrigued to start reviewing new 915/925X motherboards, higher performance DDR2 DIMMs, and PCI Express graphics cards. That optimism may turn into an all-out recommendation, especially if Intel can turn on its 64-bit extensions and get its CPU heat problems reined in a bit.

Comments closed
    • Wintermane
    • 16 years ago

    In the end all that will matter is does the extra die space taken up by that rather large pci express interface wind up being cheaper or more spendy then using a bridge chip incorped into the chip package as nvidia does.

    Performance wise at least for now it didnt matter at all wich way you did it.

    The only performance gains we will see from pcie right now is more from the better cooling and power available in the new motherboard designs and the abiity to thus MAYBE overclock it more.

    Oh on ddr2 concider the fact that already people are pushing normal ddr2 to ddr 722 mhz.. the real test of ddr2 systems will be on overclock friendly boards and in situations where memory bandwidth is key. I mean why the heck else would you care about new memory unless you needed more bandwidth?

    • Koly
    • 16 years ago

    The Russians (?) at X-bit labs did a great review of the new platform

    http://www.xbitlabs.com/articles/cpu/display/lga775.html and at least somebody had the courage to write a clear conclusion, I enjoyed it very much: "I am only upset that the new chipsets are incompatible with the widely spread cool AGP 8x graphics solutions available in the today’s market." "A definite drawback of the new i925/i915 chipsets is DDR2 SDRAM support." "In fact, DDR2 support is a serious slow-down for i925/i915, so that these chipsets show pretty low performance compared with their predecessors." "All in all I have to say that such a great lot of innovations Intel introduced in its new i925/i915 chipsets deprive the users of the upgrade opportunities completely." I especially like this one: "This way, the launching of the new i925/i915 solutions is not just a technological breakthrough but also a perfect way of getting more money from the users." In the conclusion (http://www.xbitlabs.com/articles/cpu/display/lga775_28.html) they also drew a nice graph comparing 925X+DDR2 vs. 875+DDR performance and I have to say the difference is even bigger than I thought at first glance reading the various reviews.

    • indeego
    • 16 years ago

    Essentially no features/benefits for business users. Perhaps the support for dual DVI, but that was possible long ago, with old technology, just no demand brought it to popularity.

    The audio solution isn’t important.
    The form factor change is insignificant. It may be worse, because companies don’t like changing the layout of PC’s from generation to generation. The power draw on these chips is just silly.
    The performance doesn’t really get me going, although speech recognition speed is nice, accuracy is abyssmal for any real attempt.

    Disappointing.

    • Sargent Duck
    • 16 years ago

    I believe it was Anandtech (or maby Toms…..) measured the temperature of a 3.6ghz Prescott. It was in the 60’s somewhere. Intels own motherboard monitor started issuing warnings about the temperature (http://www.tomshardware.com/motherboard/20040619/socket_775-03.html)

    • Koly
    • 16 years ago

    Ok, so I’ll take a guess how Ati could have designed the PCI-E interface on X600 in a very simple, economic and effective way. I think the most simple thing they could have done is keeping the AGP controller unchanged, while designing an extremely short AGP bus connecting to an PCI-E interface inside the chip. My guess is that the most logical way would be to clock the ultrashort AGP bus at core clock, i.e. 500MHz. This would lead to 8.5x increase in AGP bandwidth (>18GB/s) and negligable latency. The inner bandwidth of the resulting PCI-E interface controller would be much bigger than the PCI-Ex16 interconnect’s, which is 8GB/s. This would be a much better solution than nVidia’s, where the real chip-to-chip AGP bus is running only on twice the nominal speed, so it would limit the whole interconnect to 4.3GB/s, less than the PCI-Ex16 specification, while introducing additional latencies.

      • BooTs
      • 16 years ago

      Wow.. thats some very blind and self-indulgent speculation. Good luck with that.

    • ludi
    • 16 years ago

    .kill

    • shaihulud
    • 16 years ago

    i do wonder how come csa has been removed from the mch? it is one of the best complementing additions to have with an intel system, specially, since one of their flagship motherboards is targetting enthusiast. i do not comprehend why nor understand why. unless, it was due to size and limited constraint of the new mch (perhaps because of the pci-eg?)

      • Proesterchen
      • 16 years ago

      CSA has been cut because of simplification in the northbridge (hub-link & CSA [effectively another hub-link] out, 16 lanes PEG + 4 lanes for DMI in) and it being no longer needed with the ability to connect a PCI-Express GbLAN controller through the southbridge without bandwidth constraints.

      • BooTs
      • 16 years ago

      q[

    • indeego
    • 16 years ago

    Why do chipset versions differ from what is available on Intel’s site since May? http://downloadfinder.intel.com/scripts-df/filter_results.asp?strOSs=44&strTypes=DRV%2CUTL&ProductID=1765&OSFullName=Windows*+XP+Professional&submit=Go%21 Also note new graphics drivers released recently on that same page for that chipset.

      • Damage
      • 16 years ago

      For the 875P, I checked Intel’s site on Wednesday of last week and the version I used was still current. (I was surprised, too, but so it said.)

      For the 9xx-series chipsets, I used the drivers supplied and recommended by Intel.

    • malebolgia
    • 16 years ago

    Great article, though I wonder when AMD will reduce the prices on its processors.

    • indeego
    • 16 years ago

    nevermind.

    • Wintermane
    • 16 years ago

    The point of the new stuff is…

    pcie… The main point is more power to the card and better placement of the card and better cooling of it. The bandwidth is just future proofing and it does effect performance a bit.

    4 sata hd connections is a rather good addition and the new raid option is sweet for those who dont wana stuff 4 hds into the box yet want both streaming and mirroring.

    In case anyone missed the point intel doesnt expect everyone to run out and get one right now they have get the ball rolling and prolly dont expect over much usage this year or even into middle of next year.

    On ddr2 wanting ddr1 is why the 915 exists untill ddr2 gets alot cheaper and better performing I doubt anyone least of all intel expects a huge number of 925 boards to run amuk in the world.

    Oh and on the floppy if you look carefully most mass produced oem computers now come with card readers as standard and its very likely they will or already do drop the floppy.

      • anand
      • 16 years ago

      DDR2 modules aren’t going to get cheaper until more of it is sold. And no one is going to buy it until there are motherboards that support it.

      Anyway, this is Intel we’re talking about. Even if the enthusiast crowd doesn’t jump on this new stuff, the OEMs will, thus a market for DDR2 modules will be created and the price will go down.

      Remember everyone that DDR1 wasn’t exactly compelling when it first came out either.

    • Hattig
    • 16 years ago

    Right. Great Review, thanks.

    My thoughts:

    1) ICH6 is pretty nice, and probably right now the best southbridge available on the market. This won’t last long, of course, but 7.1 audio with low CPU utilisation, SATA with NCQ and all that is great. Shame that the SATA RAID doesn’t support mirroring and striping at the same time though.

    2) Prescott 3.6GHz on i925X is a very poor choice for a gamer compared to an Athlon 64 3800+ … if you are going to spend the money, then get the faster chip. The 3800+ more than deserves its rating. It is also available to buy right now. In other uses the 3800+ and 3.6GHz Prescott are roughly equal, depending on the application.

    3) Intel need to stop with the paper launches. It is getting bad for them in enthusiast circles, and they’ve now lost all the goodwill that Northwood got them.

    4) PCIe is good. DDR2 will be good one day. They might not be performing great wonders right now, but they have tonnes of headroom for the future, and should be around for the next 4 years or so, 10 years maybe for PCIe, although by then x1 won’t be on motherboards, it will be min x4.

    • espetado
    • 16 years ago

    q[

    • wierdo
    • 16 years ago

    Hmm, personal impressions:

    1. coolio, Raid Matrixication is neato.

    2. Nice new integrated audio specs, shame about realtek implementing it, %99 certain it’ll ruin the whole potential because of that, but hopefully some respectable mobo makers will be willing to pay the extra pennies for some models with beefier audio.

    3. Serial ATA getting better, me like 😛

    4. I disagree about the pins on the mobo thing, I’d rather bend the pins on the cpu and replace it with another rather than have to gut the mobo out the case, even it cost more it would be worth it, and I doubt it’ll even cost more in my particular case.

    5. DDR2 doesn’t look very interesting, I hope it doesn’t become popular, not worth the money or the migration investment, might be better sticking to DDR and looking forward to DDR3 or something instead imho.

    6. PCI-E is out finally, neato, it’s a good idea imo, though I’m not sure if I’ll need it personally yet, seems AGP and old PCI do the job for most things. Perhaps if I get into gig-ethernet then things might get interesting.

    7. New power connector again? Well… at least it’s not a new standard exactly, wish they did that from the beginning though, these power supplies are gonna become as annoying to replace as video cards or something 😉

    Ok I’m gonna stop before I have to publish this as a thesis 😛

    • fyo
    • 16 years ago

    ATI X600 uses a BRIDGE CHIP!

    Only the X800 features a true native PCI-E interface. The X300 and X600 are just the old chips with a bridge chip added on. NVIDIA recently showed die photos of the X600 to prove this.

    -fyo

      • Koly
      • 16 years ago

      X300 and X600 are single chip designs, they certainly don’t use bridge chips ala nVidia. What nVidia was trying to propose is that these chips have an integrated bridge into the die. A low resolution X-ray picture is nice, but proves nothing. The difference in 3D Image Download benchmarks suggest that they were probably wrong and Ati’s chips seem really native. We can only wait for some additional benchmarks to confirm or disprove it.

        • fyo
        • 16 years ago

        There is a bridge chip – but, yes, it is incorporated in the same die. So what? That doesn’t make a native solution.

        That download bandwidth is higher doesn’t suggest a darned thing. Neither NVIDIA nor ATI optimize downstream bandwidth to any significant extent (I’ve *heard* that the NVIDIA’s Quadro boards are much better optimized in this respect), so any difference is much more likely to be the result of varying degrees of (lack of) optimization.

        The most plausible explanation for the fact that the X600’s PCI-E downstream bandwidth is pretty much exactly double the 9600’s is also that the AGP is still being used, just run at double the frequency.

        These are low- to mid-range parts and have been available to board makers for many months – reportedly before ATI got the X800 anywhere near finished. Another reason to believe the bridge chip scenario.

        It just plain makes more sense! In terms of engineering, economics, performance – as well as the die photos.

        -fyo

          • Proesterchen
          • 16 years ago


            • fyo
            • 16 years ago

            The die photo certainly makes a *very* compelling case.

            Then there are the other reasons (as stated above): It fits with the Ockham’s razor argument (it’s the simplest solution [that fits the facts]).

            Tell me, why don’t YOU believe there’s a bridge? ATI only claimed a native solution for their next-generation product, and as we all know, the X300 and X600 don’t fit that bill. Only the X800 does.

            Can YOU provide PROOF that ATI claimed that the X300 and X600 would also be native solutions?

            -fyo

            • Koly
            • 16 years ago

            I am a physicist, so maybe a word about Occam’s razor. In modern science it is used as an additional argument when one has two or more theories that explain the facts well. It is not really convincing on its own; it is only a guide to what might be closer to the truth. Put simply, complicated explanations are regarded as less probable than simpler ones. That does not mean that the more complicated theory cannot be true; only an experiment can decide.

            To argue with Occam’s razor in a discussion which has nothing to do with science is quite stupid. We are not constructing theories about Ati’s chip, we are taking guesses. Moreover, you take a quite improbable theory from the competitor and declare it to be the truth, while trying to dismiss others by invoking Occam’s razor. Huh? You came up with a conspiracy theory, so you should prove it.

            • fyo
            • 16 years ago

            I am a physicist as well. So what?

            As for your “conspiracy theory” argument, well, that’s just plain silly. That, by definition, requires a *conspiracy*. I am not claiming that anyone conspired with anyone.

            There exists a body of undisputed hard, factual evidence. One such piece of evidence is the set of benchmarks in the article these comments are attached to. Another is the die photo (although the interpretation of that is certainly not undisputed).

            There is nothing in this body of evidence that in any way points to the existence of a “true native PCI-E controller” (that is, one without an AGP bridge layer, regardless of whether it is on- or off-die).

            Simple economics, the timing of the availability of PCI-E’ified 9600 cards to allow board manufacturers to verify implementations and the die photo all lead *me* to conclude that the (by far) simplest theory that fits within the observables is that an AGP/PCI-E bridge layer exists in the X600.

            While certainly not a set-in-stone argument, the principle of Ockham’s Razor is what I used to reach my conclusion. I had no prior reason to believe one thing or the other. I simply let the available evidence convince me.

            A subsequent discussion, which I predict will be brought up by ATI if they are sufficiently pressed on the matter, has arisen as to what actually constitutes a “native PCI-E controller”, with a (to me) surprising number of people seemingly arguing that if the bridge layer is on the same die as the GPU core, then the implementation is native. If, on the other hand, two separate dies are used (regardless of how they are packaged), it is *not* native.

            To me, this line of reasoning makes absolutely no sense. To me, a native implementation <=> no bridge layer.

            Sincerely,

            fyo

            • Sargent Duck
            • 16 years ago

            Well, gee. Let’s all hop on the physics bandwagon, shall we? I’m a physicist too! (Well, not really. A couple of years in university.)

          • highlandr
          • 16 years ago

          The thing is, there is no bridge …

      • Aphasia
      • 16 years ago

      And you take everything you hear at face value, especially from competing companies. Have you actually looked into how much effect a bridge would have? Have you heard ATI engineers saying it’s a bridge?

      No, you’ve seen an X-ray, and unless you are an ATI engineer saying, “yeah, that’s a bridge,” so what?

      WHO CARES.

      But yeah, AGP 8x really gave an improvement over 4x. Just like a non-top-performance part will get an incredible gain whatever it uses. But no, PCI Express x16 is the magic that will make everything go faster.

      • barawn
      • 16 years ago

      That was retarded NVIDIA FUD, and anyone with any EE knowledge knows it. It’s not like AGP couldn’t run at “AGP 48X” or something like that – it’s just that the high clockspeed would never work given the trace length issues. By integrating the controller on-chip, since the trace lengths are controlled, they can easily increase the “effective AGP” bus frequency so that it can provide more than enough PCI-Express bandwidth.

      This is like saying that integrating the L2 cache of a processor on-die won’t get performance boosts compared to off-chip cache, because all they did was bolt L2 cache onto the processor die. Yes, of course there’s an AGP to PCI-Express conversion section. Duh. But the benefit of doing it on die is that you can do it much, much faster than off-chip, like NVIDIA did.

      But then again, the download speeds speak for themselves.

        • fyo
        • 16 years ago


          • PantherX
          • 16 years ago

          You don’t see a difference in redesigning a chip with a bridge INSIDE it as opposed to an add-on chip?

            • fyo
            • 16 years ago

            There *could* be a difference, but you are assuming a lot of things not in evidence.

            For one, you are assuming that the still-present AGP interface can actually run at PCI-E x16 speeds. This may or may not be the case, but by all appearances, ATI is only running it at *twice* the old speed (i.e. “AGPx16”).

            Another assumption is that the bottleneck is really with the connection to the bridge chip. I don’t see why this would necessarily be the case. While it is certainly the case that it is much easier to scale the link to high frequencies if the two components are on the same die, there is no reason to believe that the frequencies are high enough for this to be a problem. Since this is basically an AGP bus, certainly “normal” AGPx8 speeds are attainable. It seems reasonable that since the chips are located right next to each other with well-defined connections, it should be much simpler to increase the frequency than would be the case for a normal AGP bus scenario, where the other end is all the way on the North Bridge (through a much less well-defined (in terms of trace lengths and resistances) AGP slot interface).

            Would you agree, at least, that having a single PCI-E interface directly connected to the core (both on the same die, of course) and having a PCI-E interface connected via an AGP interface to the core (all on the same die) is not the same and will not necessarily perform identically?

            -fyo

            • Koly
            • 16 years ago

            A) I don’t think you understand what the word “chip” means. Extremely simplified, it is a piece of silicon with some transistors in it. The X600 doesn’t have a bridge chip because it is obviously a one-chip design. Stop talking about a bridge chip; you look silly.

            B) If the bridge is integrated on die (on the chip), you cannot tell how it was done. They have modified the 9600 core, but how? They could have removed the whole AGP interface controller and designed a whole new PCI-E one. They could also have slightly modified the AGP interface and bolted on an additional PCI-E part. Is this a native PCI-E solution? Of course it is. As already pointed out, the bandwidth from what used to be the AGP part to the PCI-E part can be anything, 100GB/s or whatever. IT IS ON DIE, so it is easy. The whole thing is a NATIVE PCI-E INTERFACE CONTROLLER, maybe with some inner traces of the previous design.

            If you don’t believe me, think about two different examples. Only a few years ago, the (CPU) cache memory was a separate chip on the motherboard, typically 500kB-1MB in size. Even by today’s standards, that was large. However, CPU performance rose rapidly when the cache was integrated on the CPU die, even though its size was much smaller. Example two: think about the Athlon 64’s integrated memory controller. It is integrated on the chip and therefore memory latencies are dramatically lower. It could be very similar to the memory controllers found in northbridges, but integrated on die it can run at the CPU clock with enormous bandwidth and minimal latency between the CPU and the controller.

            Yes, Ati could have done a poor job by adding a PCI-E interface on top of the AGP one without any further modifications, but that would be really stupid. However, even in that case it would be a native PCI-E solution, only an inefficient one.

            • Kurlon
            • 16 years ago

            Actually, you can design multiple pieces of discrete logic and include them on one die. So one can say it’s a separate “chip” in the sense that it’s a separate block that could be engineered independently of the rest of the “chip”. The main question is whether or not the extra block shown by the X-rays is a PCI-E bridge which wires up as AGP to the main core, or whether PCI-E just requires more transistors and shows up looking that way. While WE can’t tell from an X-ray, rest assured that both Nvidia and ATi have the tools to inspect each other’s cores and know which explanation is correct.

            • Koly
            • 16 years ago

            A chip is a physical piece of silicon. A chip cannot consist of several chips. Yes, a part of the chip can be manufactured separately as a different chip, but that’s not the point. You are right that this is a bit of semantics; the point is whether Ati used the bridge on the 9600 without modifying anything and kept running the AGP part at its original spec or not. I was only a little annoyed that fyo kept talking about a “bridge chip”, even after I pointed out to him that there is no bridge chip.

            • fyo
            • 16 years ago

            We certainly agree on this (the chip vs die issue) – also without getting into MCMs and the like. I clearly mis-wrote.


            • Koly
            • 16 years ago

            I am glad we can settle the “chip” issue. I considered my #33 post clear enough, but whatever, let’s forget it.

            You are right, we disagree on what to consider a “native” controller. For me, if the controller is on die, it’s native, because it’s easy to modify it to work efficiently when it’s not limited by an interconnecting bus. It only carries some traces of the previous design, and I would guess many units on various chips contain traces of older designs. Why change it, if it works well?

            Here:

            http://www.xbitlabs.com/articles/cpu/display/lga775_9.html is another example showing that the X600 solution works better than nVidia’s bridge chip. The X600 has twice the bandwidth of AGP8x in both the read and write tests, and that’s exactly what one would expect from an efficient native solution. The numbers are quite far from the theoretical peak bandwidths, but that’s probably similar to measurements of main memory bandwidth, where the numbers can be much smaller too. Edit: I looked at the graphs one more time and now I see that the read PCI-E bandwidth of the X600 is less than twice that of AGP8x, but I think the tendency is still clear.

            • PantherX
            • 16 years ago

            https://techreport.com/reviews/2004q2/intel-9xx/index.x?pg=13 I think this shows which implementation is better.

            • fyo
            • 16 years ago

            Let me start out by saying that my use of the word chip was completely incorrect. There clearly is not a bridge CHIP; that is not what I meant to imply. My focus at the time was solely the distinction between a native and a non-native implementation. It seems, however, that we most certainly do not agree on what constitutes that!


            • Proesterchen
            • 16 years ago

            Directly from the horse’s mouth:

            http://www.theinquirer.net/?article=16735

            • vortigern_red
            • 16 years ago

            http://www.theinquirer.net/?article=16735 You seem to be taking NV’s word over ATI’s and telling everyone else that it is gospel truth because it’s what you want to believe. I don’t know whether it’s a bridge any more than you do! But you are mistaking NV PR for facts. That’s a big mistake; ask any 5800 Ultra owners!

          • barawn
          • 16 years ago

          It’s extremely rude to quote someone without including either ellipsis to indicate a portion of a sentence removed or by making sure to include the entire sentence. You neglected the “It’s not like” before “AGP couldn’t run at “AGP 48X””, which changes the entire meaning of the sentence.


      • ludi
      • 16 years ago

      Oh yes indeed, that’s the biggest molehill I’ve ever seen!

      Meanwhile, for the NON-pedantic among us: whatever the ATi chip actually is, it seems to work well, and the single-chip solution is cheaper to implement.

      • BooTs
      • 16 years ago

      why not just post a big NVIDIA ad in the comments space? That will show ATI! That will make them look bad alright!

    • tfp
    • 16 years ago

    A few things I would like to know:

    1. Does the CPU voltage drop by a bunch when the 775 Prescott is under heavy load? It seems to on the 478 socket.

    2. Does the 775 allow for more stable overclocking? Probably a lot of that would depend on the motherboard.

    3. How well would the Prescott work with DDR2 and an FSB that matched the RAM (1066)? I thought async was always a little worse than synchronous. Would the bandwidth make up for the latency a bit? (Not that it supports that bus speed yet.)

    tfp

      • bhtooefr
      • 16 years ago

      Well, the Intel boards aren’t easily overclockable. Wait for reviews of the Asus and ABit 915/925 boards for overclocking.

        • Shining Arcanine
        • 16 years ago

        I thought that the anti-overclock mechanism was in the CPU…

          • tfp
          • 16 years ago

          Yeah, I think I read about that too. It might take a non-Intel chipset to find out…

          • bhtooefr
          • 16 years ago

          I just read (although it was Tom’s Hardware, so it might not be legit) that the 9xx chipsets actively limit the FSB to 110% of normal (880MT/s quad-pumped on today’s chips) and will crash the system if it’s set higher. The A stepping supposedly allowed all of this to be disabled by flipping a bit, but the B stepping is much harder, and (supposedly) only Asus and Gigabyte have figured out how to disable the FSB check so far.

          My comment was referring to the fact that Intel boards don’t offer many overclocking options, and don’t overclock well anyway. EXTREMELY stable at stock, very poor overclocking.
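
          As a sanity check, that 880 figure follows from the stock 200MHz base clock, quad-pumped, plus the reported 10% allowance; the numbers in this little sketch are just those assumptions, nothing more:

              base_mhz  = 200               # stock Pentium 4 FSB base clock (MHz)
              effective = base_mhz * 4      # quad-pumped: the "800MHz" bus
              cap       = effective * 1.10  # the reported 110% ceiling
              print(cap)                    # 880.0 -- the "880 QPB" figure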

    • PerfectCr
    • 16 years ago

    Well, I for one am underwhelmed. Sure, it has great prospects for “the future,” but it offers almost no real-world gains NOW. Why should I pay a premium to have it NOW if I am not going to see any benefit? Intel’s PR machine can’t convince me. Sorry.

    Makes me even more happy that I invested in an Athlon 64 back in January!

    • Chryx
    • 16 years ago

    My take is thusly.

    the new ICH (it’s a southbridge goddamnit) is HOT stuff… the RAID + high quality sound onboard is a big step up.
    PCI-E is fledgling but has headroom,
    Prescott is still utterly meh.

    and am I the only one that’s noticed that Prescott on LGA775 has …

      • Thresher
      • 16 years ago

      I’m totally with you on this.

      Matrix RAID sounds impressive as hell. I wonder if anyone on the AMD side is working on something similar.

      Maybe I didn’t read carefully enough, but will this encode to Dolby Digital on the fly like the nForce 2 MCP-T?

    • WaltC
    • 16 years ago

    What a good, thorough, well-written and informative review! Thanks! Your efforts are much appreciated.

    I especially appreciated seeing the low-res graphics tests, and indeed everything about this review was revelatory as to cpus and core logic, as opposed to 3d graphics, and covered the bases in exemplary fashion. Many other sites simply do 3d-card comparisons and call them cpu & core-logic reviews, and I hope those sites will look to this review as a model to emulate for cpu & core-logic reviews, as the community will certainly benefit.

    I really enjoyed revisiting Silicon Magic…;) What surprised me in referencing your link was that we were looking at things like this as recently as a bit less than two years ago. Seems like five…;)

    As to your questions raised there, I’ll hazard a couple of guesses:

    *As noted, the core logic retains support for PCI in addition to its support for PCIe. I wonder which of the motherboard-integrated peripherals are hanging off the PCIe bus and which are still PCI. The core logic must in some way arbitrate between the two (a switch?), which I imagine would slow things down from their independent theoretical peaks (I wonder if both buses might be slowed a bit due to their co-existence?). So there might be some bus contention between the PCI and PCIe buses as far as core-logic arbitration and management goes.

    *If I understood your question correctly, I thought that the theoretical unidirectional transfer speed of AGP 8x was 2GB/s, and 4GB/s for PCIe x16. Would this explain the doubling of ATi’s PCIe Silicon Magic transfers for PCIe over AGP 8x?

    *I can’t recall much about the SM software, and wonder as to methods it uses to transfer files across the bus, as well as the sizes of the files it transfers, in terms of the top-end transfer speed numbers the software indicates in mbs/sec.

    Most of the high-end 3D cards for PCIe will have their own dedicated local RAM pools of up to 256MB to texture from, which in terms of theoretical transfer speed will be ~12x the speed of AGP 8x, 6x the speed of PCIe unidirectional, and at least 3x the speed of PCIe duplex (at ~8GB/s). Also, there won’t be any issues of bus contention and arbitration to concern a 3D GPU texturing from its own local RAM, as might affect the current PCIe/PCI core-logic and bus implementations.

    I’ve always seen PCIe as much more of a boon to IGPs than for high-end, standalone 3d cards, which can texture much faster from their local ram than from PCIe across the motherboard bus. Where I can imagine that PCIex16 might be a boon to these cards is in the fact that duplex operation of the bus might serve to refresh/replace the 3d-card’s onboard texture caches much more quickly than is currently possible with AGP x8. I would still expect high-end 3d cards to continue to directly texture out of local memory, though, since it’s still much faster than even PCIex16 duplex operation.

    Just some guesses, and thanks again for a most informative review!
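
    A rough sketch of the theoretical peak numbers being juggled in the comment above; the local-VRAM figure is an assumed ballpark for a high-end card of the day, not something taken from the article:

        # Back-of-the-envelope theoretical peaks, in GB/s (assumptions, not measurements)
        agp_8x        = 0.0667 * 8 * 4     # 66MHz base clock, 8x strobed, 32-bit bus ~= 2.1
        pcie_x16_one  = 16 * 0.25          # 250MB/s per lane, per direction          =  4.0
        pcie_x16_both = 2 * pcie_x16_one   # both directions combined                 =  8.0
        local_vram    = 24.0               # assumed local memory pool on a high-end card

        print(f"AGP8x {agp_8x:.1f}, PCIe x16 one-way {pcie_x16_one:.1f}, "
              f"duplex {pcie_x16_both:.1f}, VRAM/AGP8x ~{local_vram / agp_8x:.0f}x")

    On those assumptions, local memory stays roughly an order of magnitude ahead of even duplex PCIe x16, which is the point about high-end cards continuing to texture locally.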

    • Koly
    • 16 years ago

    This product launch is a bad joke. It is the paper launch of paper launches. LGA775 CPUs, PCI Express graphics cards, DDR2 memory… OK, if I could buy this and pay the horrendous price, what would I get? NCQ? Hahaha.

    This is not going to be a standard platform in a year, because in a year or so dual-core Pentium Ms will start to replace Prescotts. Prescott is dead. LGA775 is stillborn. Four months after the launch of the 3.4GHz Prescott you can’t buy one. According to The Inquirer (http://www.theinquirer.net/?article=16638), this will not change until the end of August, and I find that very probable. Put it all together, and I doubt there will ever be a Prescott faster than 4GHz.

    DDR2 is a joke. I don’t see any of the reviewers having the courage to say it, but somebody has to: 925X+DDR2 is not equal to 875+DDR, and not a bit faster; it is clearly and systematically slower. For twice the price of the memory, as rumoured.

    PCI Express graphics is a joke. No, it’s not a case of game developers being behind the curve; there will be no games stressing the bandwidth of AGP8x for a few years yet. Check the benchmarks here:

    http://www.sudhian.com/showdocs.cfm?aid=554 Current games barely stress AGP2x, and especially newer ones like Far Cry or Unreal Tournament 2004 have identical scores at AGP2x and AGP8x. “It’s a solution to a problem no one has” says everything. PCI Express is a great thing for replacing PCI devices, but AGP should have been left untouched for a while and only very slowly replaced over a few years. The AGP incompatibility of the new Intel chipsets is one of the most arrogant and annoying moves I have ever seen from a hardware company. Intel wants us (again) to throw out our current systems and buy a nice new “next generation” thing with less performance, more heat, and very probably more noise. And in a year or a year and a half, to throw that out and buy the next “next generation.” Intel, thank you very much, but you are dead to me.

    I am very surprised to see a quite positive conclusion from Damage. There is very little positive here. A system based on a Prescott processor, DDR2 memory, and PCI-E graphics is an extremely unconvincing option, and Intel wanting to limit the number of choices as much as possible is a very, very bad thing. I don’t want to spend money when Intel thinks I should, and I won’t.

      • Proesterchen
      • 16 years ago

      I agree with the general gist of your post – DDR2 slow & expensive, PCI-E only useful for PCI replacement, and the whole Prescott/LGA775 mess.

      On the other hand, it’s a really strong showing from the Intel PR department, convincing most of the reviewers that despite all the obvious downsides, there’s something positive to end their reviews with. 😕

      I bought two …

        • Koly
        • 16 years ago

        You are right, Intel’s biggest strength is obviously its PR department. I imagine it’s hard to write something really nasty when the company supplies you with a couple of CPUs, a motherboard, a couple of graphics cards, some memory, and a hard drive (did I forget something?), all unavailable at the moment.

      • blitzy
      • 16 years ago

      While the AGP bus is not in need of an overhaul just yet, I think we can all agree that the PCI bus is… and hence, if you’re going to overhaul the main expansion bus, why not keep the graphics bus under the same umbrella? It seems logical to me.

      I like the prospect of high-quality audio, better PSU efficiency, and NCQ… Some of the other potential potholes that Intel seems to be steering toward don’t really affect me, since Intel is usually more expensive (less cost-effective) than AMD (assuming AMD also adopts the aforementioned standards).

        • Koly
        • 16 years ago

        One thing I cannot swallow is that the PCI bus is still there; the PCI slots are not going anywhere, only being joined by PCI-E slots, while the strongest point of the PCI architecture, AGP, where there is little reason for any change, is gone.

        What they should do is give us an OPTION: buy a motherboard with AGP or buy it with PCI-E. A slow, painless migration would then follow. If you are a gamer, your video card could cost as much as their stupid CPU plus their motherboard, and they want us to throw it out, because their name is Intel.

      • protomech
      • 16 years ago

      I guess there will be some that cling to their floppy drives, AT cases, power supplies, keyboards and AGP 2x.

      Yes, it’s a new platform. And it’s just like every new platform that has come along in computing history. Parts will continue to be available for some time in the old specification, some transition parts will be released (slocket adapters, PATA to SATA bridge devices), and then eventually you’ll do a complete system upgrade to the new platform.

      You’re not going to see a performance increase with DDR2 vs DDR1 when the processor FSB is at 200×4 MHz. Maybe in a year or so, when the FSB is 266×4 or 300×4. Maybe not, maybe the market will reject DDR2, and we’ll sit at DDR400 for another couple of years until JEDEC specs out DDR500. Or another memory technology will become more popular, or Intel moves to a 256-bit memory system and doubles motherboard sizes. Or not.

      It’s not that the game developers are behind the curve. It’s that existing games target only the local video memory available, and game art is tuned toward the lowest common denominator (as it should be). It’s up to the platform developers to raise the lowest common denominator. When the lowest common denominator rides on an 8 GB/s channel rather than a 1 GB/s asynchronous channel, perhaps we will see not just “faster, more resolution, bigger textures” but some new ways of applying the technology that is available.

      Raising the integrated graphics core from a DX7-class featureset to a DX9-class featureset is pretty significant too. Non-enthusiasts may not care much whether their game or application runs at 15 fps vs 150 fps, but they do care whether it runs at all.
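
      To put the FSB-versus-memory point above into numbers, here is a minimal sketch; the DDR2-533 figure is an assumption about typical modules for these boards, not something stated in the review:

          # Peak bandwidths in GB/s: a quad-pumped 200MHz FSB is already saturated
          # by dual-channel DDR400, so DDR2's extra headroom has nowhere to go yet.
          fsb           = 0.200 * 4 * 8   # 200MHz x 4 transfers x 8 bytes  = 6.4
          ddr400_dual   = 2 * 0.400 * 8   # dual-channel DDR400             = 6.4
          ddr2_533_dual = 2 * 0.533 * 8   # dual-channel DDR2-533 (assumed) ~ 8.5
          print(fsb, ddr400_dual, ddr2_533_dual)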

        • Koly
        • 16 years ago

        DDR2 is in a bad position, because DDR has scaled much higher than expected and there is still some room left. DDR2 needs to be run much faster to bring better performance, not to mention that the CPU FSB has to be clocked appropriately too. As far as I know, Intel’s plan is to introduce a 1066MHz FSB by the end of this year and get it into the mainstream next year. That could bring some performance increase, maybe bringing it on par with DDR400 systems, maybe a little faster. To really shine, it has to be clocked faster still, but I don’t see a faster FSB on Intel’s roadmaps, and maybe the CPU would not even take advantage of it. AMD will surely wait; the high-latency memory would probably hit them even harder, and they would have to introduce (yet another!) new socket. I guess AMD would be much happier with a faster DDR JEDEC standard.

        There is a big moral in the fate of DDR2 in the graphics industry. After a short and unsuccessful experiment on the FX5800, nVidia abandoned it in favour of DDR, while I think Ati never implemented it at all. The jump is from high-speed DDR to GDDR3, and, seeing GDDR3’s immediate success, my guess is that a main-memory variant of it could be the real successor to DDR, not DDR2.

      • AGerbilWithAFootInTheGrav
      • 16 years ago

      I totally agree, and I have to be critical of TR’s conclusion – almost recommended???

      I imagine that for the same $$$ as the new platform you could build a dual Opteron 248 system with similar specs…

      The new platform is, firstly, unavailable (as mentioned), and secondly almost 2x the price.

      * Reading it again, that sounds too harsh on TR, but it’s still a lot of dosh for not much in real life, i.e. the same or more is available for less, which can’t really be worth recommending.

      • Shining Arcanine
      • 16 years ago

      Have you ever once considered that dual-core Pentium M processors will be made for LGA775?

      Have you noticed that the new chipsets put DDR2 on par with DDR?

      Of course there are no games stressing PCI Express; the application that will stress it is HDTV.

      The performance actually increases slightly with the new platform. Not to mention the heat and noise will be taken care of by future processors…

        • Koly
        • 16 years ago

        I find it very improbable that a dual-core P-M would run in LGA775. Intel changes sockets with almost every minor CPU architecture change (like now); it would be a big surprise (though a positive one) if they were able to keep the socket in this case. I am sceptical.

        The difference between 925X+DDR2 and 875+DDR in most reviews is very tiny, mostly around or under 1%, but this is systematically in favor of 875.

        If HDTV will use PCI-E bandwidth, that’s great, but why should a gamer have no option to keep his card with an upgrade?

        I don’t know what you mean by performance increase with the new platform. You mean a 3.6GHz Prescott in 925X is a tiny bit faster than a 3.4GHz one in 875? Now that’s great…

        I am very curious to see the solution for noise and heat. I doubt very much it could be anything other than a desktop Pentium M.

    • Zenith
    • 16 years ago

    Ahhhh, why use those Maxtors for native command queuing? Should use a sexy Seagate 7200.7 SATA drive for NCQ! 😀

      • Kurlon
      • 16 years ago

      To hell with that slow drive, I want to see a 72GB Raptor flex its NCQ muscle!

        • hmmm
        • 16 years ago

        The rev. 2 Raptors supposedly supported TCQ, so will there be a firmware update or something to let them do NCQ? That’d be nice.

          • Kurlon
          • 16 years ago

          NCQ == TCQ

          Tagged Command Queuing is the term pulled over from the SCSI world; NCQ is what the SATA makers have settled on to market the functionality TCQ provides.
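
          A toy sketch of what that queuing buys, purely illustrative (real NCQ firmware also weighs rotational position, not just seek distance, and the LBAs here are made up):

              # With several requests outstanding, the drive can service them in an
              # order that minimizes head travel instead of strict arrival order.
              def head_travel(start, lbas):
                  pos, total = start, 0
                  for lba in lbas:
                      total += abs(lba - pos)
                      pos = lba
                  return total

              queued = [900, 120, 870, 150, 500]       # outstanding request LBAs
              fifo   = head_travel(0, queued)          # arrival order       -> 3500
              ncq    = head_travel(0, sorted(queued))  # elevator reordering ->  900
              print(fifo, ncq)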

        • Shining Arcanine
        • 16 years ago

        I agree, I’d like to see them use 72GB Raptors as well.

    • spuppy
    • 16 years ago

    Are you using a new camera, Scott? The pics look better than ever!

      • Damage
      • 16 years ago

      Thanks, man. No, still my G3, but I am learning to use it right. 🙂

    • Spotpuff
    • 16 years ago

    So let’s summarize:
    1) Little, no, or even negative performance gains on the CPU
    2) massive upgrade costs for RAM and vid cards
    3) no significant gains for the video card
    4) no significant gains in RAM performance
    5) Shifting the RMA burden from Intel to mobo manufacturers
    Hurray?

    • Perezoso
    • 16 years ago

    I’m still reading…

      • Damage
      • 16 years ago

      doh! thanks.

        • blitzy
        • 16 years ago

        The VIA audio controller on the K8T800 Pro uses much more CPU time, consistent with what we’ve seen from in in the past.

        from pg 19

    • Convert
    • 16 years ago

    Very nice review indeed.

    The PCI-E benches were the most interesting of all.

    • danny e.
    • 16 years ago

    It’s nice to see Intel doing a few things right, and nice to see some of the old designs finally begin to die off.

    When will the dang floppy die? I am not going to burn a CD just to put a few SATA drivers on it. It’s time to come up with another cheap alternative to the floppy that holds around 100-200MB. Not really necessary, actually… but I just want the floppy to die.

    die floppy die.

      • Ryszard
      • 16 years ago

      A bootable ROM with some flash memory. I want motherboards to come equipped with a flash memory chip soldered to the board that provides a bootable DOS environment, with tools for preparing your hardware and drivers for the disk controllers on the board.

      You boot, and the DOS environment asks you if you’re setting up the motherboard with an OS for the first time. If yes, it presents you with a list of the built-in SATA drivers it has in its memory. If you don’t like what’s on the list, say because you’re using your own disk controller card, you choose the option to augment the list with something from the CD that’s provided with the controller. It copies that into its flash memory.

      The DOS environment reboots the board and turns itself into a floppy drive as far as the OS setup program is concerned, which can find all the disk controller drivers it needs.

      Once OS setup is done, you enter the BIOS and turn off that option ROM.

      Something along those lines, to get rid of the only frickin’ need for a floppy drive in this day and age. Begone you abhorrent spectacle of legacy computing.

        • Buub
        • 16 years ago

        Re: DOS system in ROM

        You can currently do all that with a bootable CD or DVD. And, those are both much easier to upgrade. I don’t see any need for this to be in a ROM on the motherboard, since current technologies do this and more, much more easily.

        • indeego
        • 16 years ago

        Better yet, just have a network-aware OS that can get the drivers online while installing. You can always override it, but I wonder why XP doesn’t do this better, like, say, oh, ahem, Linux.

        There’s always PXE/network booting too.

      • MagerValp
      • 16 years ago

      USB Flash memory seems to be fairly popular with geeks and non-geeks alike, and at 64..2048 MB they’re a decent floppy alternative. Do mobos support booting off them though?

        • eckslax
        • 16 years ago

        I’m pretty sure that I heard they were bootable. I’ll have to try it sometime.

          • bhtooefr
          • 16 years ago

          Some boards made in the last couple of years support USB booting. Just look in your boot order in the BIOS for “USB Devices” or something like that (it could be under “Removable Drives”). I know that the Dell Optiplex GX260 DOES support USB boot (I noticed that in the BIOS when I was checking something on one at my college).

      • Klopsik206
      • 16 years ago

      Yes, danny e.
      I WANT THE FLOPPY TO DIE AS WELL.
      It’s really incredible that it has survived this long.

      I also want the old printer port and COM ports to die; it would be nice to squeeze in 4 extra USB ports instead.

      Actually I envy Apple a bit, as they made the brave decision to drop support for all that ancient stuff a while ago.

        • Kurlon
        • 16 years ago

        Oy! RS232 is far FAR from dead! You can kill com ports after I’m dead and buried, but for now, anyone working in IT, especially networking, needs ’em.

          • Klopsik206
          • 16 years ago

          What do you need them for that can’t be done over USB?
          Just curious…
          Klopsik

            • Kurlon
            • 16 years ago

            I have yet to use a USB serial dongle that is 100% reliable, whereas a good ol’ 16550 UART gets the job done every time.

            • nexxcat
            • 16 years ago

            Does USB have a pair of screws to keep the cable from falling off and leaving my Oracle servers waiting at the dreaded ok prompt?

            • barawn
            • 16 years ago

            USB requires the host to have available memory to initialize a few tables. This means that the processor can’t access USB until it’s initialized its on-chip cache (if it has any!!) or system memory (again, if it has any!).

            RS232 has no such limitation. This means that RS232 can fundamentally always provide a lower-level diagnostic than USB can.

            Given that USB->serial dongle cables are ‘flaky’ to be polite, I would much rather the motherboard manufacturers burn the tiny amount of space needed to provide a 2×5 header for RS232 so I can use it when I need it.

      • indeego
      • 16 years ago

      The floppy is almost dead where I work. None of our desktops and laptops have them; a floppy drive is only available via a USB port and a request. Most people use USB keys or CD-Rs to transfer data on the go.

    • danny e.
    • 16 years ago

    i for one welcome our alien overlords

      • danny e.
      • 16 years ago

      go to bed, dork.
