
Ethernet performance
We evaluated Ethernet performance using the NTttcp tool from Microsoft's Windows DDK. The docs say this program "provides the customer with a multi-threaded, asynchronous performance benchmark for measuring achievable data transfer rate."

We used the following command line options on the server machine:

ntttcps -m 4,0,192.168.1.25 -a
...and the same basic thing on each of our test systems acting as clients:
ntttcpr -m 4,0,192.168.1.25 -a
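
If you want to repeat the runs without babysitting them, a simple script can kick off the client and scrape the tool's output. The sketch below is just that, a sketch: it assumes ntttcpr is on the PATH, and the regex that pulls a throughput figure out of the report is an illustrative guess, since the summary format differs between NTttcp builds.

import re
import subprocess

# Client-side invocation from our testing: four threads on CPU 0,
# targeting the server at 192.168.1.25, with asynchronous I/O (-a).
CLIENT_CMD = ["ntttcpr", "-m", "4,0,192.168.1.25", "-a"]

def run_once():
    """Run the NTttcp client once and return its raw text report."""
    result = subprocess.run(CLIENT_CMD, capture_output=True, text=True)
    return result.stdout

def extract_mbps(report):
    """Pull a Mbit/s figure from the report, if one is present.
    Adjust the pattern to match the output of your NTttcp build."""
    match = re.search(r"([\d.]+)\s*Mbit", report)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    for i in range(3):  # three back-to-back runs
        print(extract_mbps(run_once()))
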
Our server was a Windows XP Pro system based on Asus' P5WD2 Premium motherboard with a Pentium 4 3.4GHz Extreme Edition (800MHz front-side bus, Hyper-Threading enabled) and PCI Express-attached Gigabit Ethernet. A crossover CAT6 cable was used to connect the server to each system.

The boards were tested with jumbo frames disabled.

We've always liked how the integrated Gigabit Ethernet controllers in Nvidia chipsets have largely kept motherboard makers from skimping on components and equipping boards with either slow PCI-based GigE chips or ones with brutally high CPU utilization. Of course, that was when Nvidia's GigE implementation offered throughput comparable to the best auxiliary Gigabit chips on the market, with lower CPU utilization. That's just not the case with the 790i SLI, whose Ethernet throughput hits a wall at around 830Mbps, a good 100Mbps short of the Marvell 88E8056 used on our X38 and P35 boards. What's more, the Marvell chip offers lower CPU utilization than the 790i SLI.

PCI Express performance
We used ntttcp to test PCI Express Ethernet throughput using a Marvell 88E8052-based PCI Express x1 Gigabit Ethernet card.

Throughput isn't a problem for the Ultra here, but its CPU utilization is slightly higher than that of the other chipsets.

PCI performance
To test PCI performance, we used the same ntttcp test methods and a PCI VIA Velocity GigE NIC.

We've seen much better PCI throughput from our X38 and P35 motherboards in previous reviews, so we're hesitant to make too much of these results. The 790i clearly doesn't have a problem here, but something about the system configuration we used for this latest round of testing is hampering PCI Gigabit Ethernet throughput with Intel chipsets.

Pay no attention to the higher CPU utilization of the nForce chipsets. They're pushing a lot more data, so significantly higher CPU utilization is to be expected.
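
If you want to put the two on a common footing, normalizing CPU utilization by the amount of data actually moved does the trick. A minimal sketch, using made-up numbers purely for illustration rather than our measured results:

def cpu_per_gbps(cpu_percent, throughput_mbps):
    """CPU utilization per gigabit per second of traffic moved.
    Lower is better: it rewards controllers that move more data
    for the same processor cost."""
    return cpu_percent / (throughput_mbps / 1000.0)

# Hypothetical example: a controller at 20% CPU pushing 990Mbps is
# more efficient than one at 15% CPU pushing only 600Mbps.
print(cpu_per_gbps(20, 990))   # ~20.2% per Gbps
print(cpu_per_gbps(15, 600))   # 25.0% per Gbps
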