
Performance testing
The Gigabyte Z170X-Gaming 7 that we just reviewed is fitted with dual Gigabit Ethernet controllers: one an Intel I219-V, and the other a Killer E2400. This arrangement makes it a perfect board for some side-by-side testing.

In fact, we used the exact same hardware setup as in the Gaming 7 review. All of the testing shown below was carried out with the test system running Windows 8.1 Professional 64-bit, and we used the following driver versions:

  • Killer E2400: Killer Suite 1.1.56.1590, or standard driver 9.0.0.31
  • Intel I219-V: 20.2

Ethernet throughput
First things first: let's see how the Killer E2400 performs in everyday networking tasks. We'll kick things off with a simple throughput test. We evaluated Ethernet performance using version 5.31 of the NTttcp tool from Microsoft. According to its documentation, NTttcp is "used to profile and measure Windows networking performance" and is "one of the primary tools Microsoft engineering teams leverage to validate network function and utility." Sounds like a great place to start.

We used the following command-line options on the server machine (the receiver):

ntttcp.exe -r -m 4,*,192.168.1.50 -a

and the same basic settings on our client system (the sender):

ntttcp.exe -s -m 4,*,192.168.1.50 -a
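For those keeping score, here's how we understand those switches; the -m argument takes the form threads,CPU,IP:

rem -r / -s           run as receiver (server) or sender (client)
rem -m 4,*,<address>  four worker threads on any available core (*), bound to the given IP
rem -a                use asynchronous (overlapped) I/O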

These tests were run three times, and we're reporting the median result. The CPU usage numbers were taken directly from the NTttcp output, and the throughput results were derived from the utility's reported throughput in MB/s—scientifically speaking, we multiplied them by eight.
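To make that conversion concrete with a made-up reading: 117.5 MB/s works out to 117.5 × 8 = 940 Mbps, which is right at the practical ceiling of Gigabit Ethernet.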

Our server was a Windows 10 Pro system based on Asus' M5A99FX PRO R2.0 motherboard with an AMD FX-8300 CPU. A crossover Cat6 cable was used to connect the server to the test system.

For the Killer E2400, we gathered three sets of results: one with the full Killer Suite installed and Bandwidth Control enabled, one with the full Killer Suite installed and Bandwidth Control disabled, and one with just the standard drivers installed. All three configurations produced results that were within the run-to-run variance of these tests, so we've reported just one result for the Killer.

The synthetic NTttcp throughput test doesn't reveal any meaningful difference between the Intel and the Killer NICs. Even CPU usage is comparable. So far, so good.

Network file-transfer performance
With the synthetic NTttcp throughput test out of the way, it was time to check on file transfer performance. For this test, we turned to a Gentoo Linux install that was set up on the same test server used above. We fired up the vsftpd FTP server and created our two tests. The "small" file batch consists of 1.2GB of high-bitrate MP3s, while the "large" file is an 11.7GB tar file created from three separate movie files.
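Incidentally, the "large" file is trivial to reproduce with plain old tar on the server; the movie file names here are placeholders, of course:

tar cf test.tar movie1.mkv movie2.mkv movie3.mkv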

Our test system was connected to the server using a crossover Cat6 cable. The standard Windows FTP program was used for transferring the "large" file. For the "small" file batch, we used NcFTP Client 3.2.5 for Windows because of its easy-to-use recursive mode, which can grab whole directory trees.

For the "large" file test, we used the following ftp command to download the file:

ftp -s:ftpcommands.txt -A 192.168.1.40

...with the following ftpcommands.txt file:

get test.tar
quit
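As a refresher on ftp.exe's switches: -s: feeds the program a script of commands to run non-interactively, and -A logs in anonymously, so no credentials were needed on the test server.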

For the "small" files test, we calculated the transfer times by taking a timestamp before and after the NcFTP transfer, like so:

echo %time%
cmd /c ncftpget -R -V ftp://192.168.1.40/music
echo %time%
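For anyone who'd rather capture those timestamps on disk than scrolling past in the console, the same approach drops into a small batch file readily enough. A minimal sketch, with the log file name being our own placeholder:

rem timed-get.bat: log wall-clock timestamps around the recursive FTP grab
echo Start: %time% >> transfer.log
cmd /c ncftpget -R -V ftp://192.168.1.40/music
echo End:   %time% >> transfer.log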

These tests were run three times, and we reported the median result.

Let's see how the Killer's performance stacks up in this real-world transfer test.

The Killer pulls out a win in the "small" files test, though the margin is less than two seconds. Meanwhile, transfer times for our single "large" file were incredibly close between the competing network controllers.

To measure CPU load during the file transfer tests we used the typeperf utility, with a sampling interval of five seconds, collecting a total of 100 samples, like so:

typeperf "\Processor(_Total)\% Processor Time" -si 5 -sc 100
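For reference, -si sets the sampling interval in seconds and -sc the total sample count. If you'd rather log the samples than watch them scroll by, typeperf can write straight to a CSV file; the output file name below is our own placeholder:

typeperf "\Processor(_Total)\% Processor Time" -si 5 -sc 100 -o cpu.csv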

Unlike the transfer times, the CPU load numbers do show a difference between our two network controllers. The Killer uses more CPU cycles than the Intel GigE controller: 2 percentage points more in the "small" files test, and 4 percentage points more in the "large" file test. This added utilization is probably a result of the Killer driver's focus on low latency, which comes at the cost of generating a larger number of interrupts.

Once again, enabling or disabling Bandwidth Control in the Killer Suite had such a minimal impact on the results that any differences fell within the run-to-run variance of the tests themselves. The same was true of the driver-only setup.

Let's dig a little deeper now with some netperf request/response testing.