Revisiting the Killer NIC, eight years on

Onboard Gigabit Ethernet: we don’t think about it too much. We’ve had it for a long time, and for the most part, it just works. The folks behind the Killer Networking products first burst onto the scene trying to change that, and they’re still at it. It’s been just over eight years since we first took an in-depth look at a Killer NIC. Now that a good number of gaming-focused Z170-based boards and laptops include Gigabit Ethernet (and wireless networking) powered by Killer, it’s the perfect time to do some fresh testing.

A few weeks ago, I visited the Killer Networking folks at Rivet Networks. While I was there, I got a chance to pick the brains of Killer CEO Mike Cubbage and Chief Marketing Officer Bob Grim. Cubbage is one of the co-founders of Bigfoot Networks, and he’s been with the Killer Networking team from the beginning, through Qualcomm’s purchase of Bigfoot in 2011 and during the team’s time as part of the Big Q. He’s also responsible for taking Killer independent again with Rivet Networks.

Like Cubbage, Grim is one of the founders of Bigfoot Networks. He served as the company’s vice-president of marketing and sales. In late 2007, he left Bigfoot for AMD, where he ran a number of marketing and sales teams. As of this month, though, he’s made his way back to the Killer team to help with marketing and business development.

I spent my time at Rivet Networks asking lots of questions about the hardware and software that makes up Killer’s current products, the team’s success in getting motherboard and laptop design wins, and how the Killer products have changed since the days of the original Killer NIC. The company also gave me a demo of the Killer traffic-prioritization technology, as well as a look at DoubleShot Pro—a solution in which Killer’s wired and wireless controllers work together to shuttle low-priority traffic over Wi-Fi and high-priority packets over Ethernet.

As TR’s motherboard guy, I came away from the visit eager to do some in-depth testing of the Killer E2400 Gigabit Ethernet controller that we’ve seen on the last two Z170 boards we reviewed: the Gigabyte Z170X-Gaming 7 and the MSI Z170A Gaming M5. Our test subject in this case is the Z170X-Gaming 7. With its twin GigE interfaces—one Killer-powered and one Intel-powered—it’s the perfect candidate for some side-by-side testing. But before we get to that, I’ll discuss what I learned from my visit.

Killer’s hardware
For those not familiar with the Killer story, here’s the Cliff’s Notes version. Bigfoot Networks—the company that created the Killer NIC—arrived on the scene in 2006. Bigfoot wanted to bring innovation to consumer networking with a series of gaming-focused NICs. Early Killer cards had dedicated network hardware built around a Freescale PowerPC system-on-a-chip with 64MB of dedicated memory. The cards ran a custom embedded Linux distribution.

This hardware could operate in a mode that bypassed the Windows networking stack entirely to purportedly speed up packet processing and reduce latency. Bigfoot even offered a software developer’s kit that allowed end users to write their own applications for the Killer NIC. While a handful of interesting apps were produced, no killer app emerged.

Part of the problem may have been the card’s price tag—$279, to be exact. Subsequent iterations of Killer hardware brought the price down to a more palatable $79, but Killer was still asking buyers to fork over money for a component most folks were used to getting for free on their motherboards. In the end, the dedicated hardware of the Killer NIC simply cost more than the onboard GigE solutions that relied on the standard combination of a controller and a PHY. That solution never led to the mass-market adoption for which Bigfoot was hoping.

The pivotal moment in the life of Killer’s tech, I’m told, came about with the release of Intel’s Nehalem-based Core i7 processors in 2008. Suddenly, the performance of the dedicated hardware solution could be matched by moving Killer’s network processing to the host CPU. At this point, Bigfoot began turning the Killer technology into an intelligent software layer focused on traffic classification and prioritization, built on top of a network driver tweaked for low latency. Using hardware to bypass the operating system’s network stack reverted to the realm of high-frequency traders.

So if the intelligence has moved back into software, what’s the hardware behind current Killer NICs? Given that Bigfoot Networks was eventually acquired by Qualcomm, it should come as no surprise that Qualcomm’s Atheros division provides the Gigabit Ethernet controllers that serve as the foundation for current Killer solutions.

Despite being a separate company today, Rivet Networks still maintains strong ties to Qualcomm. In fact, it’s one of Qualcomm Atheros’ authorized design centers. That status gives the Killer folks access to detailed parameters of the Atheros chip that they’re using, so Killer’s driver developers can tune the software’s behavior to suit their main goal: low-latency operation. The development team can also pass ideas back to the Atheros engineers for changes or additions to the underlying Ethernet controller.

Also, the Killer Networking team is now working exclusively with motherboard and laptop makers to get design wins for their Ethernet and Wi-Fi controllers. That means we won’t be seeing any new stand-alone Killer network cards. Rivet says it only plans to make one Killer product offering available at a time, so we should see the most recent E2400 controller replace the older E2200 in motherboards over the next six months or so.

Killer’s software stack
So modern Killer Networking solutions put the secret sauce in the software stack. What does the recipe look like?

At the 30,000-foot view, the Killer Networking software stack—the “Killer Suite”—is made up of three components. The Killer driver sits closest to the hardware. Above that is the Killer Windows service, and atop that is the Killer Network Manager software. For those who just want to use the Killer NIC as a standard Ethernet controller without all of the Killer components, there is a driver-only package available.

First, the driver. One major difference between the Killer driver and the equivalent Qualcomm Atheros driver is the threshold each one uses for sending out a packet. Killer tells us its driver has been tweaked to minimize latency, so as soon as it gets any amount of data to send, it puts that data straight onto the wire. In contrast, a driver that doesn’t prioritize latency may hold off on sending to do a couple of things. Such a driver might wait to combine multiple small payloads into a single packet if the destination is the same, or it may queue up multiple sends at a time to minimize the number of interrupts taken. Games usually send out data in 128-byte chunks or less, so Killer’s driver should minimize the amount of time that game data spends in the network stack. In fact, the Killer Networking folks claim the E2400’s latency performance beats the competition by up to 50% during single-application usage.
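
To make that tradeoff concrete, here’s a minimal Python sketch of the same idea one layer up the stack: disabling Nagle’s algorithm with TCP_NODELAY tells the OS to push small writes out immediately instead of coalescing them. This is only an application-layer analogy of my own, not Killer’s driver code, but it illustrates the same latency-versus-efficiency decision.

import socket

# Loopback demo: a client socket with Nagle's algorithm disabled, so small
# writes go out immediately rather than being coalesced into larger packets.
# This illustrates the tradeoff described above; it is not Killer's code.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # don't wait to coalesce
client.connect(server.getsockname())
conn, _ = server.accept()

client.sendall(b"\x00" * 128)                 # a typical sub-128-byte game update
print(len(conn.recv(256)), "bytes received")  # -> 128 bytes received

for s in (client, conn, server):
    s.close()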

Killer’s Network Manager software is based around detection, classification, and prioritization of network traffic. It automatically assigns priorities to different types of network traffic in the system. Take traffic from torrents, for instance. Those packets are high-bandwidth but latency-insensitive. We don’t want this traffic to monopolize bandwidth to the detriment of latency-sensitive applications, like games and VoIP clients. Killer’s default traffic priorities are assigned as follows, with priorities decreasing as you move to the right:

Games  →  real-time video & voice  →  browser traffic  →  everything else

These default priorities can be augmented with custom profiles for applications of your choice using the Network Manager interface.

Killer says network traffic is classified by a combination of static rules—port X means traffic of type M—and heuristics that look at the network activity from each running process. In the case of a web browser, the currently active tab determines the priority—if you’re watching something that streams video to you, like YouTube, the browser will have a higher priority than it would if you’re reading this article.
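
As an illustration of how rules-plus-heuristics classification can work, here’s a toy Python sketch. To be clear, this is my own example, not Killer’s actual logic; the port numbers, process names, and priority tiers are all illustrative assumptions.

# Toy illustration of rules-plus-heuristics traffic classification.
# This is NOT Killer's implementation; ports and process names here are
# examples chosen for illustration only.
PRIORITY = {"game": 1, "realtime_av": 2, "browser": 3, "bulk": 4}

STATIC_PORT_RULES = {
    27015: "game",         # e.g. Source-engine games
    3478:  "realtime_av",  # e.g. STUN, used by many VoIP clients
}

def classify(process_name: str, remote_port: int, is_streaming_video: bool) -> int:
    """Return a priority tier (1 = highest) for a flow."""
    if remote_port in STATIC_PORT_RULES:
        return PRIORITY[STATIC_PORT_RULES[remote_port]]
    if process_name in ("chrome.exe", "firefox.exe"):
        # Heuristic: a browser tab streaming video gets real-time priority;
        # plain page loads get ordinary browser priority.
        return PRIORITY["realtime_av" if is_streaming_video else "browser"]
    return PRIORITY["bulk"]

print(classify("hl2.exe", 27015, False))       # -> 1
print(classify("chrome.exe", 443, True))       # -> 2
print(classify("utorrent.exe", 51413, False))  # -> 4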

Killer refines its default profiles and makes those improvements available as downloads. To update the rules and heuristics, simply click the “Download Latest App Priorities” button in the Network Settings screen of the Killer Network Manager.

The Network Settings screen also houses the one piece of required setup that the Killer software needs. You have to tell it your upstream and downstream speeds, so that it knows how much total bandwidth it has to play with. If you want to disable the Killer software’s Bandwidth Control functionality, you can do so from this screen as well.
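
Knowing the link capacity is what makes any software shaper useful: it can only hold bulk traffic back to leave headroom for latency-sensitive packets if it knows how big the pipe is. Here’s a conceptual Python sketch of that idea using a simple token bucket. The 80% cap and the mechanism are my own assumptions for illustration, not how Killer’s Bandwidth Control is actually implemented.

import time

# Conceptual sketch only: cap bulk traffic at 80% of a user-supplied
# downstream rate with a token bucket, leaving headroom for
# latency-sensitive flows. Not Killer's algorithm.
class TokenBucket:
    def __init__(self, rate_bytes_per_s: float):
        self.rate = rate_bytes_per_s
        self.tokens = rate_bytes_per_s
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

downstream_mbps = 100                        # what the user enters in Network Manager
bulk_limiter = TokenBucket(downstream_mbps * 1_000_000 / 8 * 0.8)
print(bulk_limiter.allow(1500))              # one full-size Ethernet frame -> True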

One last feature of the Killer Network Manager that we haven’t touched on yet is its built-in monitoring. Click over to the performance screen and you’ll see the top five applications by total traffic, as well as stats on upload and download usage for the past two minutes.

Unfortunately, the user can’t configure how many minutes of data are shown for the upload and download stats, nor can one export the data. You can reset the top-five applications data using a button back on the Applications page, though.

The Killer suite of software is only available for Windows. That exclusivity isn’t surprising given Killer’s gaming focus. For the Linux users out there, the Killer NICs work with the existing alx Ethernet driver. Support for the latest Killer E2400 hasn’t been merged upstream yet, though, so you’ll have to patch the driver to add the necessary PCI ID.
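
If you’re on Linux and want to check whether your kernel’s alx driver already claims your particular Killer controller before patching anything, one quick way is to compare the module’s PCI aliases against the ID that lspci reports. A small sketch of mine, assuming a typical distribution with modinfo available:

import subprocess

# List the PCI IDs the installed alx module claims. Compare against the
# vendor:device ID that "lspci -nn" shows for the Killer NIC; if that device
# ID isn't among these aliases, the driver still needs the PCI-ID patch
# mentioned above.
aliases = subprocess.run(["modinfo", "-F", "alias", "alx"],
                         capture_output=True, text=True, check=True).stdout
print(aliases)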

Now that we’ve looked at Killer’s full hardware and software stack, let’s get to testing it.

 

Performance testing
The Gigabyte Z170X-Gaming 7 that we just reviewed is fitted with dual Gigabit Ethernet controllers: one an Intel I219-V, and the other a Killer E2400. This arrangement makes it a perfect board for some side-by-side testing.

In fact, we’re using the exact same hardware setup as we used in the Gaming 7 review. All of the testing shown below was carried out with the test system running Windows 8.1 Professional 64-bit, and we used the following driver versions:

  • Killer E2400: Killer Suite 1.1.56.1590, or standard driver 9.0.0.31
  • Intel I219-V: 20.2

Ethernet throughput
First things first: let’s see how the Killer E2400 performs in everyday networking tasks. We’ll kick things off with a simple throughput test. We evaluated Ethernet performance using version 5.31 of the NTttcp tool from Microsoft. The website states that the program is “used to profile and measure Windows networking performance,” and that it is “one of the primary tools Microsoft engineering teams leverage to validate network function and utility.” Sounds like a great place to start.

We used the following command-line options on the server machine (the receiver):

ntttcp.exe -r -m 4,*,192.168.1.50 -a

and the same basic settings on our client system (the sender):

ntttcp.exe -s -m 4,*,192.168.1.50 -a

These tests were run three times, and we’re reporting the median result. The CPU usage numbers were taken directly from the NTttcp output, and the throughput results were derived from the utility’s reported throughput in MB/s—scientifically speaking, we multiplied them by eight.
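
For clarity, that conversion is nothing fancier than the snippet below; the throughput figure in it is a placeholder, not one of our results.

# NTttcp reports throughput in MB/s; multiplying by eight gives megabits
# per second, which is how we present the results.
reported_mb_per_s = 112.0                     # example value only, not a measured result
print(f"{reported_mb_per_s * 8:.0f} Mbps")    # -> 896 Mbps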

Our server was a Windows 10 Pro system based on Asus’ M5A99FX PRO R2.0 motherboard with an AMD FX-8300 CPU. A crossover Cat6 cable was used to connect the server to the test system.

For the Killer E2400, we gathered three sets of results: one with the full Killer Suite installed and Bandwidth Control enabled, one with the full Killer Suite installed and Bandwidth Control disabled, and one with just the standard drivers installed. All three configurations produced results that were within the run-to-run variance of these tests, so we’ve reported just one result for the Killer.

The synthetic NTttcp throughput test doesn’t reveal any meaningful difference between the Intel and the Killer NICs. Even CPU usage is comparable. So far, so good.

Network file-transfer performance
With the synthetic NTttcp throughput test out of the way, it was time to check on file transfer performance. For this test, we turned to a Gentoo Linux install that was set up on the same test server used above. We fired up the vsftpd FTP server and created our two tests. The “small” file batch consists of 1.2GB of high-bitrate MP3s, while the “large” file is an 11.7GB tar file created from three separate movie files.

Our test system was connected to the server using a crossover CAT6 cable. The standard Windows FTP program was used for transferring the “large” file. For the “small” file batch, we used the NcFTP Client 3.2.5 for Windows because of its easy-to-use recursive mode that can grab whole directory trees.

For the “large” file test, we used the following ftp command to download the file:

ftp -s:ftpcommands.txt -A 192.168.1.40

..with the following ftpcommands.txt file:

get test.tar
quit

For the “small” files test, we calculated the transfer times by taking a timestamp before and after the NcFTP transfer, like so:

echo %time%
cmd /c ncftpget -R -V ftp://192.168.1.40/music
echo %time%
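
Turning those two timestamps into a transfer time is straightforward. Here’s a small Python sketch of the subtraction, using made-up timestamps in cmd.exe’s default %time% format rather than actual results:

from datetime import datetime

# cmd.exe's %time% looks like "14:03:27.59" on an English locale; parse the
# two stamps and subtract them to get the elapsed transfer time in seconds.
def parse(stamp: str) -> datetime:
    return datetime.strptime(stamp.strip(), "%H:%M:%S.%f")

start, end = "14:03:27.59", "14:05:02.11"    # example values, not real results
elapsed = (parse(end) - parse(start)).total_seconds()
print(f"transfer took {elapsed:.2f} s")      # -> transfer took 94.52 s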

These tests were run three times, and we reported the median result.

Let’s see how the Killer’s performance stacks up in this real-world transfer test.

The Killer pulls out a win in the “small” files test, though the difference is less than two seconds. Our single “large” file produced incredibly close transfer times from the competing network controllers.

To measure CPU load during the file-transfer tests, we used the typeperf utility with a sampling interval of five seconds, collecting a total of 100 samples, like so:

typeperf "\Processor(_Total)\% Processor Time" -si 5 -sc 100
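
typeperf writes its samples out in CSV form, so averaging the 100 samples into a single CPU-load number is simple once the output is captured to a file (typeperf’s -o option does that). A minimal sketch, assuming the samples landed in cpu.csv:

import csv

# Average the "% Processor Time" samples typeperf collected. Column 0 is the
# timestamp and column 1 is the counter value; the header row and any blank
# samples are skipped.
with open("cpu.csv", newline="") as f:
    rows = list(csv.reader(f))

samples = [float(row[1]) for row in rows[1:] if len(row) > 1 and row[1].strip()]
print(f"average CPU load over {len(samples)} samples: {sum(samples) / len(samples):.1f}%")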

Unlike transfer times, the CPU load numbers do show a difference between our two network controllers. We see the Killer using more CPU cycles than the Intel GigE controller: 2 percentage points more in the “small” files test, and 4 percentage points more in the “large” file test. This added utilization is probably a result of the Killer driver’s focus on low latency, which comes at the cost of generating a larger number of interrupts.

Once again, enabling or disabling Bandwidth Control in the Killer Suite had such a minimal impact on the results that any differences fell within the run-to-run variance of the tests themselves. The same was true when using the driver-only setup.

Let’s dig a little deeper now with some netperf request/response testing.

 

Network round-trip latency
Netperf’s request/response tests measure the number of “transactions” completed over a given period of time. A “transaction” is defined as the exchange of a single request and a single response. Netperf supports request/response testing for both TCP and UDP, and it can be configured to use a custom request and response size.

For this test, we swapped between the Intel and the Killer GigE controllers. The other hardware and software on the system remained the same. Thus, any differences in the average round trip latency that we see in this testing should be due to the NIC in use and its driver.

Netperf is usually distributed as source code, so pre-built Windows binaries are generally only made available by third parties, and not all versions of the software are easy to come by. For this test, we used a third-party pre-built netperf 2.4.5 binary for Windows. On our Linux server, we built netperf 2.4.5 from source.

We ran the following command on our test system:

netperf.exe -l 30 -t TCP_RR -L 192.168.1.25 -H 192.168.1.40 -c -- -r size,size

..with the server set up to listen on the following IP address:

netserver -L 192.168.1.40

Once again, our test system was connected to the server using a crossover CAT6 cable.

Netperf reports the number of transactions performed per second over the duration of the test, which we inverted to turn into an average round trip latency. The CPU usage numbers were taken directly from the Netperf output. These tests were run three times, and we reported the median result.
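
The inversion is simple: one netperf transaction is one request plus one response, so the mean round trip is just the reciprocal of the transactions-per-second figure. For example (the rate below is a placeholder, not one of our measurements):

# Convert netperf's transactions/sec into an average round-trip latency.
trans_per_sec = 12_500.0                   # example value, not a measured result
rtt_us = 1_000_000.0 / trans_per_sec       # microseconds per round trip
print(f"{rtt_us:.1f} microseconds")        # -> 80.0 microseconds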

First up, some TCP round trip latency tests:

With a request and response size of just one byte, we are effectively finding the minimum time it takes to get a TCP packet out on the wire and to receive the response from the server. The Killer’s results are impressive. The Intel I219-V’s minimum round-trip latency is 38.3 microseconds, or 71%, longer than the Killer E2400’s.

Obviously, both NICs are using the same underlying Windows networking stack, so we’ve now found a test where the low-latency tuning that the Killer folks have done can be measured. But, what does the CPU usage look like for this test?

Impressively, the Killer is no hungrier for your precious CPU cycles in this test than the Intel controller.

Round trip latency for a single byte test is interesting for finding the minimum path length through the network and driver stack, but it’s somewhat academic. Let’s see what happens when we increase the size.

With a 32-byte payload, the results are even more impressive. Here, the Killer has a round-trip latency that is under one-third of what the Intel controller achieves.

And it does so using a comparable amount of host CPU cycles.

Once we reach sizes of 128 bytes, the Killer’s lead shrinks to 4.7%. CPU usage between the two competing connectivity solutions is still comparable, though.

When sending and receiving 512 bytes, the Killer’s round-trip latency is 12.5% less than the Intel’s. CPU usage for the Killer looks a little higher, but only by 0.3%.

Using the standard drivers without the rest of the Killer software stack, or enabling or disabling Bandwidth Control in the Killer Suite, resulted in no meaningful differences in performance or CPU usage.

Most online multiplayer games rely on UDP instead of TCP, though. With that fact in mind, we ran through these same tests using netperf’s UDP request/response mode.

Similar to last time, we ran the following command on our test system:

netperf.exe -l 30 -t UDP_RR -L 192.168.1.25 -H 192.168.1.40 -c -- -r size,size

..and once again, the server was set up to listen on the following IP address:

netserver -L 192.168.1.40

These results don’t perfectly mirror the average round-trip latencies we saw for TCP packets, but the trend is the same for UDP datagrams. The Killer outperforms the Intel controller by a wide margin at small request-and-response sizes, and the margin shrinks as the size increases. The Killer’s CPU usage is higher than Intel’s, but not by more than 1% in these tests.

Since most games rely on UDP datagrams of 128 bytes or less, the above results show the Killer NIC is doing its part to minimize client-side latency. Once your network traffic leaves your home router, however, that data has to fend for itself out on the Internet. With that in mind, your online gaming experience depends on more than which Gigabit Ethernet controller you’re using. That said, minimizing the time game data spends in the depths of your PC obviously can’t hurt.

Now that we’ve thoroughly exhausted our network performance tests, let’s look at what else Killer’s software suite can do for us.

 

Network multitasking
Killer’s packet-prioritization software is supposed to ensure good performance when multiple applications are sending and receiving data at once, even when mixing latency-sensitive tasks like online gaming with bandwidth-hungry endeavors like bulk downloads. So long as Bandwidth Control is enabled in the Killer Network Manager software, it does this work automatically without the need for user-created custom profiles.

We put Bandwidth Control to the test by playing Valve’s Team Fortress 2 while downloading multiple Linux ISO images at the same time. We used Team Fortress 2‘s optional console to display the current ping times, which gave us a meaningful way to quantify the gaming experience, regardless of how badly I was playing.

To get a baseline, we started testing with Bandwidth Control disabled. The first step was to get in some good practice with TF2. After thoroughly enjoying myself establishing the appropriate preconditions for the test, I quickly switched over to my BitTorrent client and started downloading Live ISO images from Canonical and the Fedora Project.

Ping times immediately spiked from the roughly 40-ms range that I was seeing during unhindered gameplay. It wasn’t long before pings were up around 700 ms. Needless to say, this development turned my gaming session into an unplayable, stuttery mess.

Enabling Bandwidth Control in the Killer Network Manager immediately changed all that. Ping times returned to a much more manageable 50-ms range, and gameplay was pleasant once again. My downloads continued in the background, and I was happily back to “testing” for this article.

Running the same test of Team Fortress 2 gameplay with torrents downloading in the background on the Intel network controller gave the same results as using the Killer with Bandwidth Control disabled—an unplayable experience that left me spending more time waiting to respawn than actually playing the game.

This test is a fairly extreme example of running two applications concurrently, but it’s not that far-fetched. Team Fortress 2 needs low latency for the best possible gameplay experience, and the torrent downloads want every bit of my downstream link. It does show that the traffic prioritization functionality in Killer’s software works as advertised. Each application was automatically detected, and traffic was prioritized appropriately. The only manual step that I took was telling the Killer Network Manager my upload and download speeds.

Killer says its underlying technology is unique in the way that it detects apps with heuristics. When a new game comes out, Killer’s methods automatically recognize it on day one without needing to be told explicitly that a new game is installed. If only the same thing could be said for CrossFire and SLI profiles.

For one last quick test, I kept the torrents downloading but switched over to YouTube. With Bandwidth Control enabled, I started watching an episode of The TR Podcast in 1080p HD. My viewing experience was flawless, with no occurrences of buffering. With Bandwidth Control disabled, I started seeing instances of buffering—no great shock.

All of this goodness aside, one scenario where the Killer software’s bandwidth-control smarts won’t help you is if another device on your network is monopolizing the link, either upstream or downstream. If another member of your household kicks off some bandwidth-intensive process—say, for example, they decide it’s time to download that 18GB game from Steam while you’re fragging on a Killer-equipped PC—your online experience will suffer regardless. To help with that situation, you’ll have to head over to the quality-of-service (QoS) settings in your router.

Conclusions
Killer Networking hardware is appearing in more and more motherboards and laptops. It’s a long way from the original Killer NICs that polarized so many in the PC hardware world. If you want the features of Killer’s networking stack, you no longer have to pay for an add-in PCI or PCIe network card with dedicated network processing hardware.

The real question is whether your next device should have a Killer NIC baked in. The company’s software suite does offer some impressive features, like automatic prioritization of game traffic and other latency-sensitive packets. If those features sound valuable for your needs, or you think that it’s something you’d like to try out, we have no qualms about recommending a motherboard with Killer Networking onboard.

Our testing showed that the Killer E2400 is a capable Gigabit Ethernet controller, though it did use more CPU time under some network loads compared to an Intel NIC. The Killer E2400 often delivered lower packet latency in exchange for the extra CPU cycles, and its local prioritization voodoo worked as advertised for us, too.

We didn’t experience any issues with system stability or crashes, either with the full Killer suite or the company’s plain driver—instability being one bit of conventional wisdom that some folks cite as a reason to avoid Killer hardware. If you’re wary of an otherwise-ideal motherboard or laptop just because it happens to have Killer-powered networking on board, you can probably relax. Not only can you disable the Bandwidth Control feature of Killer’s software, but you can also forgo the company’s software suite entirely and just install a plain driver package. If all you want is a basic GigE controller with no frills, the Killer NIC can play that role, too.
