South bridge I/O
Now we'll switch gears and test the south bridge portion of the nForce4, the MCP. I turned off Hyper-Threading on the Pentium 4 system in all of the I/O tests, in the hopes of getting more accurate CPU utilization results.
We evaluated Ethernet performance using the NTttcp tool from the Microsoft Windows DDK. We used the following command line options on the server machine:
ntttcps -m 4,0,192.168.1.25 -a

...and the same basic thing on each of our test systems acting as clients:

ntttcpr -m 4,0,192.168.1.25 -a

We used an Abit IC7-G-based system as the server for these tests. It has an Intel NIC in it that's attached to the north bridge via Intel's CSA connection, and it's proven very fast in prior testing. The server and client talked to each other over a Cat 6 crossover cable.
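The core of what NTttcp measures can be sketched in a few lines: one side streams data over a TCP socket for a fixed interval while the other counts received bytes, and throughput falls out of the byte count and elapsed time. The sketch below runs both ends over loopback in a single process; the port number, buffer size, and duration are illustrative choices, not NTttcp's actual defaults.

```python
import socket
import threading
import time

PORT = 50007        # arbitrary test port (illustrative, not NTttcp's)
CHUNK = 64 * 1024   # 64 KB sends per call
DURATION = 1.0      # seconds to stream data

def receiver(results):
    # Receiving side: count bytes until the sender closes the connection.
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", PORT))
        s.listen(1)
        conn, _ = s.accept()
        total = 0
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        conn.close()
        results["bytes"] = total

results = {}
t = threading.Thread(target=receiver, args=(results,))
t.start()
time.sleep(0.1)  # give the listener a moment to come up

# Sending side: stream zero-filled buffers for a fixed wall-clock interval.
payload = b"\0" * CHUNK
start = time.time()
with socket.create_connection(("127.0.0.1", PORT)) as c:
    while time.time() - start < DURATION:
        c.sendall(payload)
elapsed = time.time() - start
t.join()

mbps = results["bytes"] * 8 / elapsed / 1e6
print(f"Throughput: {mbps:.0f} Mbps over loopback")
```

Loopback numbers say nothing about a real NIC, of course; the point is only to show the shape of the measurement that the benchmark automates across multiple threads and a real network link.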
We tested the nForce4 SLI Intel Edition several different ways, with and without NVIDIA's Firewall enabled, and with and without ActiveArmor acceleration. We also tested it with the firewall included in Microsoft's Windows XP Service Pack 2, just to see how it compared. We hoped to see ActiveArmor acceleration offload some of the TCP packet handling and deliver lower CPU utilization. The results, however, didn't quite turn out as we expected.
We checked with NVIDIA about this problem, and they provided us with a driver update that was supposed to correct it. With that new 4.82 Ethernet driver, we got the following results.
Our next task was to test jumbo frame sizes, a provision in Gigabit Ethernet implementations that could potentially improve throughput and lower CPU utilization. When we tested jumbo frames back in our nForce4 Ultra review, you may recall that ActiveArmor stumbled badly, producing very low throughput. Our hope was to find that NVIDIA had fixed this problem. Instead, we found that in the original 4.73 drivers NVIDIA gave us for testing the Intel Edition and in the 4.75 production drivers for the nForce4 for AMD, the web-style configuration interface would not allow us to turn on jumbo frames, even with an active GigE connection.
Fortunately, the updated 4.82 Ethernet drivers also corrected this problem. We turned on frame sizes up to 9K bytes for the server and the clients. (None of them had jumbo frames enabled by default.) Here's what we found.
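The reason jumbo frames can lower CPU utilization is simple arithmetic: at a given line rate, bigger frames mean fewer packets per second, and thus fewer per-packet interrupts and protocol-processing passes for the host CPU. A back-of-envelope calculation (ignoring headers and the inter-frame gap, so these are rough upper bounds) makes the gap concrete:

```python
# Rough packet rate needed to saturate gigabit Ethernet at
# standard vs. jumbo frame sizes. Headers and the inter-frame
# gap are ignored, so these are approximate upper bounds.
LINK_BPS = 1_000_000_000  # 1 Gbps

def packets_per_second(frame_bytes):
    return LINK_BPS / (frame_bytes * 8)

standard = packets_per_second(1500)  # ~83,000 frames/s
jumbo = packets_per_second(9000)     # ~14,000 frames/s

print(f"1500-byte frames: {standard:,.0f}/s")
print(f"9000-byte frames: {jumbo:,.0f}/s")
print(f"Per-packet work drops roughly {standard / jumbo:.0f}x")
```

A 9K frame size cuts the per-packet workload by about a factor of six, which is why jumbo frames tend to show up in benchmarks as lower CPU utilization rather than dramatically higher throughput.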
Unfortunately, though, NVIDIA's shipping drivers got a couple of things wrong. Add to that the fact that we have seen comparable or superior combinations of Gigabit Ethernet throughput and CPU utilization on recent motherboards, including the Gigabyte K8T890 board with a Marvell PCI Express chip, and the nForce4 SLI's ActiveArmor GigE isn't especially appealing.
In our limited testing time, we also saw the nForce4's GigE implementation exhibit some basic problems with link negotiation and the like. At the end of the day, if I had a motherboard with two Ethernet ports (as many high-end boards do these days) and I were only going to use one, I would choose the non-NVIDIA Ethernet implementation, provided it were a PCI Express chip. NVIDIA seems to be making progress on fixing some basic problems, like the one with jumbo frames, but their Gigabit Ethernet just isn't as bulletproof as it should be. In this age of PCI-E connectivity and built-in Windows firewalls, one wonders whether ActiveArmor still has a place.