Revisiting the Killer NIC, eight years on

Onboard Gigabit Ethernet: we don’t think about it too much. We’ve had it for a long time, and for the most part, it just works. The folks behind the Killer Networking products first burst onto the scene trying to change that, and they’re still at it. It’s been just over eight years since we first took an in-depth look at a Killer NIC. Now that a good number of gaming-focused Z170-based boards and laptops include Gigabit Ethernet (and wireless networking) powered by Killer, it’s the perfect time to do some fresh testing.

A few weeks ago, I visited the Killer Networking folks at Rivet Networks. While I was there, I got a chance to pick the brains of Killer CEO Mike Cubbage and Chief Marketing Officer Bob Grim. Cubbage is one of the co-founders of Bigfoot Networks, and he’s been with the Killer Networking team from the beginning, through Qualcomm’s purchase of Bigfoot in 2011 and during the team’s time as part of the Big Q. He’s also responsible for taking Killer independent again with Rivet Networks.

Like Cubbage, Grim is one of the founders of Bigfoot Networks. He served as the company’s vice-president of marketing and sales. In late 2007, he left Bigfoot for AMD, where he ran a number of marketing and sales teams. As of this month, though, he’s made his way back to the Killer team to help with marketing and business development.

I spent my time at Rivet Networks asking lots of questions about the hardware and software that makes up Killer’s current products, the team’s success in getting motherboard and laptop design wins, and how the Killer products have changed since the days of the original Killer NIC. The company also gave me a demo of the Killer traffic-prioritization technology, as well as a look at DoubleShot Pro—a solution in which Killer’s wired and wireless controllers work together to shuttle low-priority traffic over Wi-Fi and high-priority packets over Ethernet.

As TR’s motherboard guy, I came away from the visit eager to do some in-depth testing of the Killer E2400 Gigabit Ethernet controller that we’ve seen on the last two Z170 boards we reviewed: the Gigabyte Z170X-Gaming 7 and the MSI Z170A Gaming M5. Our test subject in this case is the Z170X-Gaming 7. With its twin GigE interfaces—one Killer-powered and one Intel-powered—it’s the perfect candidate for some side-by-side testing. But before we get to that, I’ll discuss what I learned from my visit.

Killer’s hardware

For those not familiar with the Killer story, here’s the CliffsNotes version. Bigfoot Networks—the company that created the Killer NIC—arrived on the scene in 2006. Bigfoot wanted to bring innovation to consumer networking with a series of gaming-focused NICs. Early Killer cards had dedicated network hardware built around a Freescale PowerPC system-on-a-chip with 64MB of dedicated memory. The cards ran a custom embedded Linux distribution.

This hardware could operate in a mode that bypassed the Windows networking stack entirely to purportedly speed up packet processing and reduce latency. Bigfoot even offered a software developer’s kit that allowed end users to write their own applications for the Killer NIC. While a handful of interesting apps were produced, no killer app emerged.

Part of the problem may have been the card’s price tag—$279, to be exact. Subsequent iterations of Killer hardware brought the price down to a more palatable $79, but Killer was still asking buyers to fork over money for a component most folks were used to getting for free on their motherboards. In the end, the Killer NIC’s dedicated hardware simply cost more than onboard GigE solutions built from the standard combination of a controller and a PHY, and it never achieved the mass-market adoption Bigfoot was hoping for.

The pivotal moment in the life of Killer’s tech, I’m told, came about with the release of Intel’s Nehalem-based Core i7 processors in 2008. Suddenly, the performance of the dedicated hardware solution could be matched by moving Killer’s network processing to the host CPU. At this point, Bigfoot began turning the Killer technology into an intelligent software layer focused on traffic classification and prioritization, built on top of a network driver tweaked for low latency. Using hardware to bypass the operating system’s network stack reverted to the realm of high-frequency traders.

So if the intelligence has moved back into software, what’s the hardware behind current Killer NICs? Given that Bigfoot Networks was eventually acquired by Qualcomm, it should come as no surprise that Qualcomm’s Atheros division provides the Gigabit Ethernet controllers that serve as the foundation for current Killer solutions.

Despite being a separate company today, Rivet Networks still maintains strong ties to Qualcomm. In fact, it’s one of Qualcomm Atheros’ authorized design centers. That status gives the Killer folks access to detailed parameters of the Atheros chip that they’re using, so Killer’s driver developers can tune the software’s behavior to suit their main goal: low-latency operation. The development team can also pass ideas back to the Atheros engineers for changes or additions to the underlying Ethernet controller.

Also, the Killer Networking team is now working exclusively with motherboard and laptop makers to get design wins for their Ethernet and Wi-Fi controllers. That means we won’t be seeing any new stand-alone Killer network cards. Rivet says it only plans to make one Killer product offering available at a time, so we should see the most recent E2400 controller replace the older E2200 in motherboards over the next six months or so.

Killer’s software stack

So modern Killer Networking solutions put the secret sauce in the software stack. What does the recipe look like?

At the 30,000-foot view, the Killer Networking software stack—the “Killer Suite”—is made up of three components. The Killer driver sits closest to the hardware. Above that is the Killer Windows service, and atop that is the Killer Network Manager software. For those who just want to use the Killer NIC as a standard Ethernet controller without all of the Killer components, there is a driver-only package available.

First, the driver. One major difference between the Killer driver and the equivalent Qualcomm Atheros driver is the threshold each one uses for sending out a packet. Killer tells us its driver has been tweaked to minimize latency: as soon as it gets any amount of data to send, it puts that data straight onto the wire. In contrast, a driver that doesn’t prioritize latency may hold off on sending, either to combine multiple small payloads bound for the same destination into a single packet or to queue up several sends at once and reduce the number of interrupts taken. Games usually send out data in 128-byte chunks or less, so Killer’s driver should minimize the amount of time that game data spends in the network stack. In fact, the Killer Networking folks claim the E2400’s latency performance beats the competition by up to 50% in single-application usage.
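
To make that tradeoff concrete, here is a minimal application-level sketch in Python (not Killer’s driver code, and the server address is hypothetical): Nagle’s algorithm normally coalesces small writes into fewer packets, while setting TCP_NODELAY pushes each small payload onto the wire immediately, which is the behavior a latency-focused driver chases further down the stack.

import socket

# Not Killer's driver code: an application-level illustration of the same
# coalesce-versus-send-now tradeoff. TCP_NODELAY disables Nagle's algorithm,
# so each small write goes out immediately instead of being batched.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # favor latency over fewer packets
sock.connect(("192.168.1.40", 27015))  # hypothetical game-server address; assumes something is listening there
sock.sendall(b"\x00" * 128)            # a typical sub-128-byte game update leaves right away
sock.close()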

Killer’s Network Manager software is based around detection, classification, and prioritization of network traffic. It automatically assigns priorities to different types of network traffic in the system. Take traffic from torrents, for instance. Those packets are high-bandwidth but latency-insensitive. We don’t want this traffic to monopolize bandwidth to the detriment of latency-sensitive applications, like games and VoIP clients. Killer’s default traffic priorities are assigned as follows, with priorities decreasing as you move to the right:

Games  →  real-time video & voice  →  browser traffic →  everything else

These default priorities can be augmented with custom profiles for applications of your choice using the Network Manager interface.

Killer says network traffic is classified by a combination of static rules—port X means traffic of type M—and heuristics that look at the network activity from each running process. In the case of a web browser, the currently active tab determines the priority—if you’re watching something that streams video to you, like YouTube, the browser will have a higher priority than it would if you’re reading this article.
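
Killer doesn’t publish its rule set or heuristics, but as a rough illustration of what “port X means traffic of type M” plus a per-process fallback could look like, here is a hypothetical sketch; the ports, process names, and priority values are invented for the example.

# Hypothetical classifier sketch -- not Killer's actual rules or heuristics.
PORT_RULES = {                      # static rules: port X means traffic of type M
    27015: "game",                  # e.g. a Source-engine game server port
    5060: "voice",                  # SIP signaling
    443: "browser",                 # HTTPS
}
PROCESS_HINTS = {                   # fallback based on the process generating the traffic
    "hl2.exe": "game",
    "utorrent.exe": "bulk",
}
PRIORITY = {"game": 1, "voice": 2, "browser": 3, "bulk": 4}  # lower number = higher priority

def classify(remote_port, process_name):
    traffic_type = PORT_RULES.get(remote_port) or PROCESS_HINTS.get(process_name, "bulk")
    return PRIORITY[traffic_type]

print(classify(27015, "hl2.exe"))       # 1: game traffic jumps the queue
print(classify(12345, "utorrent.exe"))  # 4: bulk traffic waits its turn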

Killer refines its default profiles and makes those improvements available as downloads. To update the rules and heuristics, simply click the “Download Latest App Priorities” button in the Network Settings screen of the Killer Network Manager.

The Network Settings screen also houses the one piece of required setup that the Killer software needs. You have to tell it your upstream and downstream speeds, so that it knows how much total bandwidth it has to play with. And, as shown above, if you want to disable the Killer software’s Bandwidth Control functionality, you can do so from this screen.

One last feature of the Killer Network Manager that we haven’t touched on yet is its built-in monitoring. Click over to the performance screen and you’ll see the top five applications by total traffic, as well as stats on upload and download usage for the past two minutes.

Unfortunately, you can’t configure how many minutes of data are shown for the upload and download stats, nor can you export the data. You can reset the top-five-applications data using a button back on the Applications page, though.

The Killer suite of software is only available for Windows. That exclusivity isn’t surprising given Killer’s gaming focus. For the Linux users out there, the Killer NICs work with the existing alx Ethernet driver. Support for the latest Killer E2400 hasn’t been merged upstream yet, though, so you’ll have to patch the driver to add the necessary PCI ID.

Now that we’ve looked at Killer’s full hardware and software stack, let’s get to testing it.

 

Performance testing

The Gigabyte Z170X-Gaming 7 that we just reviewed is fitted with dual Gigabit Ethernet controllers: one an Intel I219-V, and the other a Killer E2400. This arrangement makes it a perfect board for some side-by-side testing.

In fact, we’re using the exact same hardware setup as we used in the Gaming 7 review. All of the testing shown below was carried out with the test system running Windows 8.1 Professional 64-bit, and we used the following driver versions:

  • Killer E2400: Killer Suite 1.1.56.1590, or standard driver 9.0.0.31
  • Intel I219-V: 20.2

Ethernet throughput

First things first: let’s see how the Killer E2400 performs in everyday networking tasks. We’ll kick things off with a simple throughput test. We evaluated Ethernet performance using version 5.31 of the NTttcp tool from Microsoft. The website states that this program is “used to profile and measure Windows networking performance, NTttcp is one of the primary tools Microsoft engineering teams leverage to validate network function and utility.” Sounds like a great place to start.

We used the following command-line options on the server machine (the receiver):

ntttcp.exe -r -m 4,*,192.168.1.50 -a

and the same basic settings on our client system (the sender):

ntttcp.exe -s -m 4,*,192.168.1.50 -a

These tests were run three times, and we’re reporting the median result. The CPU usage numbers were taken directly from the NTttcp output, and the throughput results were derived from the utility’s reported throughput in MB/s—scientifically speaking, we multiplied them by eight.
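
For anyone who wants to reproduce that arithmetic, the conversion really is as simple as it sounds: multiply NTttcp’s MB/s figure by eight and take the median of the three runs. A quick sketch with made-up numbers:

from statistics import median

# Illustrative values only, not our measured data: NTttcp reports MB/s,
# and we report the median of three runs converted to Mbps.
runs_mb_per_s = [112.1, 111.8, 112.0]
print(f"median throughput: {median(8 * x for x in runs_mb_per_s):.0f} Mbps")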

Our server was a Windows 10 Pro system based on Asus’ M5A99FX PRO R2.0 motherboard with an AMD FX-8300 CPU. A crossover Cat6 cable was used to connect the server to the test system.

For the Killer E2400, we gathered three sets of results: one with the full Killer Suite installed and Bandwidth Control enabled, one with the full Killer Suite installed and Bandwidth Control disabled, and one with just the standard drivers installed. All three configurations produced results that were within the run-to-run variance of these tests, so we’ve reported just one result for the Killer.

The synthetic NTttcp throughput test doesn’t reveal any meaningful difference between the Intel and the Killer NICs. Even CPU usage is comparable. So far, so good.

Network file-transfer performance

With the synthetic NTttcp throughput test out of the way, it was time to check on file transfer performance. For this test, we turned to a Gentoo Linux install that was set up on the same test server used above. We fired up the vsftpd FTP server and created our two tests. The “small” file batch consists of 1.2GB of high-bitrate MP3s, while the “large” file is an 11.7GB tar file created from three separate movie files.

Our test system was connected to the server using a crossover CAT6 cable. The standard Windows FTP program was used for transferring the “large” file. For the “small” file batch, we used the NcFTP Client 3.2.5 for Windows because of its easy-to-use recursive mode that can grab whole directory trees.

For the “large” file test, we used the following ftp command to download the file:

ftp -s:ftpcommands.txt -A 192.168.1.40

..with the following ftpcommands.txt file:

get test.tar

quit

For the “small” files test, we calculated the transfer times by taking a timestamp before and after the NcFTP transfer, like so:

echo %time%

cmd /c ncftpget -R -V ftp://192.168.1.40/music

echo %time%

These tests were run three times, and we reported the median result.
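
Turning the pair of echoed %time% stamps into an elapsed time is a small exercise. Here is one way to do it, assuming the default English-locale H:MM:SS.cc format and no midnight rollover; the stamps below are examples rather than our measurements.

# Convert two echoed %time% stamps (H:MM:SS.cc) into an elapsed time in seconds.
def to_seconds(stamp):
    hours, minutes, seconds = stamp.strip().split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

start, end = "14:02:11.03", "14:05:37.88"   # example stamps, not measured results
print(f"elapsed: {to_seconds(end) - to_seconds(start):.2f} s")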

Let’s see how the Killer’s performance stacks up in this real-world transfer test.

The Killer pulls out a win with the “small” files test, though it is less than two seconds’ difference. Our single “large” file produced incredibly close transfer times between the competing network controllers.

To measure CPU load during the file transfer tests we used the typeperf utility, with a sampling interval of five seconds, collecting a total of 100 samples, like so:

typeperf "\Processor(_Total)\% Processor Time" -si 5 -sc 100
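
As an aside, adding -o cpu.csv to that command writes the samples to a CSV file (a header row, then one "timestamp","value" row per sample), which makes averaging them trivial:

import csv

# Average the CPU samples typeperf logged. With -o cpu.csv, the file holds a
# header row followed by one "timestamp","value" row per five-second sample.
with open("cpu.csv", newline="") as f:
    rows = list(csv.reader(f))

samples = [float(row[1]) for row in rows[1:] if len(row) > 1 and row[1].strip()]
print(f"average CPU load: {sum(samples) / len(samples):.1f}%")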

Unlike transfer times, the CPU load numbers do show a difference between our two network controllers. The Killer uses more CPU cycles than the Intel GigE controller: 2 percentage points more in the “small” files test, and 4 percentage points more in the “large” file test. This added utilization is probably a result of the Killer driver’s focus on low latency, which comes at the cost of generating a larger number of interrupts.

Once again, enabling or disabling Bandwidth Control in the Killer Suite had such a minimal impact on the results that any differences fell within the run-to-run variance of the tests themselves. The same was true when using the driver-only setup.

Let’s dig a little deeper now with some netperf request/response testing.

 

Network round-trip latency

Netperf’s request/response tests measure the number of “transactions” completed over a given period of time. A “transaction” is defined as the exchange of a single request and a single response. Netperf supports request/response testing for both TCP and UDP, and it can be configured to use a custom request and response size.

For this test, we swapped between the Intel and the Killer GigE controllers. The other hardware and software on the system remained the same. Thus, any differences in the average round trip latency that we see in this testing should be due to the NIC in use and its driver.

Netperf is distributed as source code, so pre-built Windows binaries are generally only available from third parties, and not all versions of the software are easy to come by. For this test, we used a pre-built netperf 2.4.5 binary for Windows from this source. On our Linux server, we built netperf 2.4.5 from source.

We ran the following command on our test system:

netperf.exe -l 30 -t TCP_RR -L 192.168.1.25 -H 192.168.1.40 -c -- -r size,size

..with the server set up to listen on the following IP address:

netserver -L 192.168.1.40

Once again, our test system was connected to the server using a crossover CAT6 cable.

Netperf reports the number of transactions performed per second over the duration of the test, which we inverted to obtain an average round-trip latency. The CPU usage numbers were taken directly from the netperf output. These tests were run three times, and we reported the median result.
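
The inversion itself is one line of arithmetic: a transaction is one request plus one response, so the average round trip is simply the reciprocal of the transaction rate. A sketch with an illustrative rate:

# One netperf "transaction" = one request + one response, so the average
# round-trip latency is just the reciprocal of the transaction rate.
def round_trip_latency_us(transactions_per_second):
    return 1_000_000 / transactions_per_second

print(f"{round_trip_latency_us(20_000):.1f} us per round trip")  # illustrative rate, not a measured result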

First up, some TCP round trip latency tests:

With a request and response size of just one byte, we are effectively finding the minimum time it takes to get a TCP packet out on the wire and to receive the response from the server. The Killer’s results are impressive: the Intel I219-V’s minimum round-trip latency is 38.3 microseconds, or 71%, longer than the Killer E2400’s.

Obviously, both NICs are using the same underlying Windows networking stack, so we’ve now found a test where the low-latency tuning that the Killer folks have done can be measured. But, what does the CPU usage look like for this test?

Impressively, the Killer is no hungrier for your precious CPU cycles in this test than the Intel controller.

Round-trip latency for a single-byte test is interesting for finding the minimum path length through the network and driver stack, but it’s somewhat academic. Let’s see what happens when we increase the size.

With a 32-byte payload, the results are even more impressive. Here, the Killer has a round-trip latency that is under one-third of what the Intel controller achieves.

And it does so using a comparable amount of host CPU cycles.

Once we reach sizes of 128 bytes, the Killer’s lead shrinks to 4.7%. CPU usage between the two competing connectivity solutions is still comparable, though.

When sending and receiving 512 bytes, the Killer’s round-trip latency is 12.5% less than the Intel’s. CPU usage for the Killer looks a little higher, but only by 0.3%.

Using the standard drivers without the rest of the Killer software stack, or enabling or disabling Bandwidth Control in the Killer Suite, made no meaningful difference in performance or CPU usage.

Most online multiplayer games rely on UDP instead of TCP, though. With that fact in mind, we ran through these same tests using netperf’s UDP request/response mode.

Similar to last time, we ran the following command on our test system:

netperf.exe -l 30 -t UDP_RR -L 192.168.1.25 -H 192.168.1.40 -c -- -r size,size

..and once again, the server was set up to listen on the following IP address:

netserver -L 192.168.1.40
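
For the curious, here is a bare-bones sketch (not netperf itself) of what a single UDP “transaction” boils down to: fire a 128-byte datagram at a UDP echo service, listening at a hypothetical address and port, and time the reply.

import socket, time

# Minimal UDP request/response timing sketch -- assumes a UDP echo service is
# listening at the hypothetical address below; this is not netperf itself.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
payload = b"\x00" * 128                        # the game-like 128-byte case

start = time.perf_counter()
sock.sendto(payload, ("192.168.1.40", 7007))   # request
sock.recvfrom(2048)                            # response
elapsed_us = (time.perf_counter() - start) * 1_000_000
print(f"round trip: {elapsed_us:.1f} us")
sock.close()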

These results don’t perfectly mirror the average round-trip latencies we saw for TCP packets, but the trend is the same for UDP datagrams. The Killer outperforms the Intel controller by a wide margin at small request-and-response sizes, and the margin shrinks as the size increases. The Killer’s CPU usage is higher than Intel’s, but by no more than 1% in these tests.

Since most games rely on UDP datagrams of 128 bytes or less, the above results show the Killer NIC is doing its part to minimize client-side latency. Once your network traffic leaves your home router, however, that data has to fend for itself out on the Internet. With that in mind, your online gaming experience depends on more than which Gigabit Ethernet controller you’re using. That said, minimizing the time game data spends in the depths of your PC obviously can’t hurt.

Now that we’ve thoroughly exhausted our network performance tests, let’s look at what else Killer’s software suite can do for us.

 

Network multitasking

Killer’s packet-prioritization software is supposed to ensure good performance when multiple applications are sending and receiving data at once, even when mixing latency-sensitive tasks like online gaming with bandwidth-hungry endeavors like bulk downloads. So long as Bandwidth Control is enabled in the Killer Network Manager software, it does this work automatically without the need for user-created custom profiles.

We put Bandwidth Control to the test by playing Valve’s Team Fortress 2 while downloading multiple Linux ISO images at the same time. We used Team Fortress 2’s optional console to display current ping times, which gave us a meaningful way of quantifying the gaming experience, regardless of how badly I was playing.

To get a baseline, we started testing with Bandwidth Control disabled. The first step was to get in some good practice with TF2. After thoroughly enjoying myself establishing the appropriate preconditions for the test, I quickly switched over to my BitTorrent client and started downloading Live ISO images from Canonical and the Fedora Project.

Ping times immediately spiked from the roughly 40-ms range that I was seeing during unhindered gameplay. It wasn’t long before pings were up around 700 ms. Needless to say, this development turned my gaming session into an unplayable, stuttery mess.

Enabling Bandwidth Control in the Killer Network Manager immediately changed all that. Ping times returned to a much more manageable 50-ms range, and gameplay was pleasant once again. My downloads continued in the background, and I was happily back to “testing” for this article.

Running the same test of Team Fortress 2 gameplay with torrents downloading in the background on the Intel network controller gave the same results as using the Killer with Bandwidth Control disabled—an unplayable experience that left me spending more time waiting to respawn than actually playing the game.

This test is a fairly extreme example of running two applications concurrently, but it’s not that far-fetched. Team Fortress 2 needs low latency for the best possible gameplay experience, and the torrent downloads want every bit of my downstream link. It does show that the traffic prioritization functionality in Killer’s software works as advertised. Each application was automatically detected, and traffic was prioritized appropriately. The only manual step that I took was telling the Killer Network Manager my upload and download speeds.

Killer says its underlying technology is unique in the way that it detects apps with heuristics. When a new game comes out, Killer’s methods automatically recognize it on day one without needing to be told explicitly that a new game is installed. If only the same thing could be said for CrossFire and SLI profiles.

For one last quick test, I kept the torrents downloading but switched over to YouTube. With Bandwidth Control enabled, I started watching an episode of The TR Podcast in 1080p HD. My viewing experience was flawless, with no occurrences of buffering. With Bandwidth Control disabled, I started seeing instances of buffering—no great shock.

All of this goodness aside, one scenario where the Killer software’s bandwidth-control smarts won’t help you is if another device on your network is monopolizing the link, either upstream or downstream. If another member of your household kicks off some bandwidth-intensive process—say, for example, they decide it’s time to download that 18GB game from Steam while you’re fragging on a Killer-equipped PC—your online experience will suffer regardless. To help with that situation, you’ll have to head over to the quality-of-service (QoS) settings in your router.

Conclusions

Killer Networking hardware is appearing in more and more motherboards and laptops. It’s a long way from the original Killer NICs that polarized so many in the PC hardware world. If you want the features of Killer’s networking stack, you no longer have to pay for an add-in PCI or PCIe network card with dedicated network processing hardware.

The real question is whether your next device should have a Killer NIC baked in. The company’s software suite does offer some impressive features, like automatic prioritization of game traffic and other latency-sensitive packets. If those features sound valuable for your needs, or you think that it’s something you’d like to try out, we have no qualms about recommending a motherboard with Killer Networking onboard.

Our testing showed that the Killer E2400 is a capable Gigabit Ethernet controller, though it did use more CPU time under some network loads compared to an Intel NIC. The Killer E2400 often delivered lower packet latency in exchange for the extra CPU cycles, and its local prioritization voodoo worked as advertised for us, too.

We didn’t experience any issues with system stability or crashes, either with the full Killer suite or the company’s plain driver—instability being one bit of conventional wisdom that some folks cite as a reason to avoid Killer hardware. If you’re wary of an otherwise-ideal motherboard or laptop just because it happens to have Killer-powered networking on board, you can probably relax. Not only can you disable the Bandwidth Control feature of Killer’s software, but you can also forgo the company’s software suite entirely and just install a plain driver package. If all you want is a basic GigE controller with no frills, the Killer NIC can play that role, too.

Comments closed
    • bellyfuz
    • 4 years ago

    I disabled mine. It doesn’t do Wake on Lan. Stuck in a generic Chinese Intel card that does……… Plus Qualacomm drivers are the worst NIC drivers I’ve ever seen…..

    • sparkman
    • 4 years ago

    Summary:

    1) Microseconds don’t matter. Milliseconds do. The Killer NIC only gives you microseconds. On your video card, a 1 frame per second speedup would have a bigger impact on your game than the Killer NIC does.

    2) Unless you download/stream a lot while playing games at the same time on the same PC. Then, the Killer NIC might be nice.

    • koaschten
    • 4 years ago

    Why do I have a feeling of the test being flawed due to the test setup?
    How possible is it that
    – most of the traffic while you are gaming comes from a roommate watching netflix, uploads to dropbox or watches twitch.tv?
    – you downloading a linux iso while gaming?

    In the first case, the killer NIC does crap for your latency. Therefor zero value. It needs just to die, because it’s only advertising, no general positive net.

    • TheMonkeyKing
    • 4 years ago

    So how was this measured with QoS enabled on a very good router as compared to the NICs traffic thing enabled?

      • EndlessWaves
      • 4 years ago

      Yeah, the test needed to be taken one step further. Given the conclusion that killer NICs increase CPU use compared to an Intel NIC but provide good QoS we need the following to make a sensible buying decision:

      1. How do they compare to router QoS?

      2. How do they compare to other NICs in terms of CPU use? Is the high CPU usage unusual or is it the Intel drivers that are particularly good?

      If it’s higher CPU usage and the QoS is nothing special then it’s minor negative mark against a motherboard using it. If the CPU usage is comparable to other NICs and the QoS is useful on top of router QoS then it’s unobjectionable or worth having.

        • DoomGuy64
        • 4 years ago

        The QoS software is not hardware accelerated, so it’s obviously going to cause a small CPU hit. The CPU usage isn’t particularly large either, as the bench shows that the difference is negligible, and the benefit outweighs the hit.

        You’re essentially getting cFosSpeed for free, and a pre-tuned driver for latency reduction. Intel might give similar results with custom settings and cFosSpeed, but killer nic gives this to you OOTB with no additional purchases.

        As for router QoS, most firmwares are notoriously bad, and so is the performance on open source firmware. You’d need a router with a dual core and 256+mb ram, the knowledge of how to set it up manually, and pray you’re lucky enough the firmware doesn’t have any nasty bugs. I’m all for router QoS, but the ease of use, performance, and stability just isn’t there yet. Killer nic has a market here, simply because nobody else has done QoS right. Working QoS is enough to justify it’s existence in the market, because nobody else offers a decent OOTB experience.

          • EndlessWaves
          • 4 years ago

          2% more CPU use is much, much bigger than 0.05ms latency reduction. Intel have several instances where the gap between one CPU model and the next is 3%.

          Not that it is very significant and I’m sure it’ll be unnoticeable almost all of the time but it’s a number that’s big enough that you might plausibly notice it once in a blue moon – unlike the latency change.

          The latency improvements of the drivers are irrelevant, it’s all about whether single computer QoS is enough to put up with the driver’s higher CPU use that results from the pointless quest for tiny fractions of a millisecond time savings.

            • gkdiamond
            • 4 years ago

            How would you “plausibly notice it”? If task are being completed faster, as they were in the tests, and it only uses a bit more CPU time why would it matter. It’s not like it is demanding 100% CPU time to do the tasks faster.

    • Krogoth
    • 4 years ago

    Anybody who works with networking knows that the NIC hasn’t been an issue for over 20 years.

    Network issues are almost always a problem with your medium, switch, router, gateway or ISP. Internet latency is dictated primarily by geological location of the server and clients.

    CPU utilization hasn't been an issue since Pentium 4 and Athlon XP days for non-servers. 40Gbps-100Gbps Ethernet maybe somewhat taxing for CPU but it is only available on server-tier NICs and the proper medium for isn't cheap so its a moot point for 98% of the market. They are cheaper alternatives for point to point stuff over short runs (Thunderbolt/USB 3.1)

    Killer NIC has always been snake's oil since day #1 for people who don't know about networking 101. You can replicate all of its gimmicks with pure software solutions on a decent router.

      • TruthSerum
      • 4 years ago

      That’s basically what the article… says… it’s all about the s/w optimization now.

      Back in the day it was gimmick h/w doing ‘that job’.

      But I don’t see how it’s snake oil now, if the cost is roughly the same as other fancy nics,
      and you get a user-friendly (more than @ the router, usually) prioritization schema.

      It’s not “worth” going out of your way for, but at the same time it seems to be fairly solid here.

    • Waco
    • 4 years ago

    I might have missed it…was interrupt moderation disabled on the Intel NIC? It makes a massive difference in latency…

      • loophole
      • 4 years ago

      No, you didn’t miss it, I just failed to spell it out clearly – we took the default out-of-the-box driver behavior.

        • Waco
        • 4 years ago

        I’d bet good money a lot of that (already miniscule) latency goes out the window on the Intel NIC / stack when you disable interrupt moderation in the driver.

    • LocalCitizen
    • 4 years ago

    perhaps it’s time they expand their product line to include “gaming” routers?
    not that i would buy one, but it still seem like a natural extension

      • bhtooefr
      • 4 years ago

      Really, the consumer market could really use a good router, with easy to configure QoS.

      Killer’s secret sauce, when it comes down to it, is easy to configure QoS.

        • Deanjo
        • 4 years ago

        There are bunch of good consumer routers already out there that can do good QoS such as Asus’s line.

          • Krogoth
          • 4 years ago

          There has been decent routers that can do QoS for over 15 years, although some of them required third-party firmware.

            • Deanjo
            • 4 years ago

            Yes there are however those 3rd party firmware solutions are not the simplest to configure. The Asus QoS UI is fairly straightforward enough.

      • Bensam123
      • 4 years ago

      Yeah… probably more so then a killer nic or in tandem with one. DDWRT is the best solution as far as that goes right now.

    • HisDivineOrder
    • 4 years ago

    "As of this month, though, he's made his way back to the Killer team to help with marketing and business development."

    Is that yet ANOTHER confirmed AMD defection? AMD is losing employees left and right.

    • HisDivineOrder
    • 4 years ago

    "We've had [Gigabit ethernet] for a long time, and for the most part, it just works. The folks behind the Killer Networking products first burst on to the scene trying to change that, and they're still at it."

    Love it. That's precisely right. They tried to change it from "just working" to something more complicated and guess what? Most everybody ignored them except MSI.

    • DrCR
    • 4 years ago

    Ironically this article is inducing me to do some homework on how to setup QOS in DD-WRT.

    • odizzido
    • 4 years ago

    Looks like this offers decent QoS. The 0.3ms latency reduction is almost pointless though. I can see if two people have 1 shot weapons and fire at the same time the person who gets the shot off 0.3ms sooner will have a tiny advantage…but it’s such a small difference. Going to a 120hz monitor can cut 8ms off the max time it can take to get the information to know to shoot in the first place.

      • meerkt
      • 4 years ago

      In my multiplayer gaming I insist on only playing games that track game event to a sub-ms accuracy. If they don’t, I just don’t play them multiplayer.

      (Disclaimer: Last multiplayer game I played was UT99.)

    • Meadows
    • 4 years ago

    Do we yet know the advantages of this compared to pure software solutions like, say, cFosSpeed?

    I’d be interested in a comparison as I own a licence to the latter. It also offers cooperative traffic shaping on your home LAN, meaning if you have it installed on multiple computers, not one of them will be able to “monopolise” the available bandwidth of the household to the detriment of the others. Purportedly, at least; I have a second licence for another computer but I mainly use just this one so I can’t do meaningful tests.

      • LostCat
      • 4 years ago

      I loved cFosSpeed. I’d suspect the impact is fairly similar. I refused to use a NIC without cFS before I got this thing.

      • meerkt
      • 4 years ago

      Haven’t tried them, but I think other alternatives are:
      http://seriousbit.com/netbalancer/
      http://www.netlimiter.com/

        • Meadows
        • 4 years ago

        NetLimiter doesn’t do priority settings.

        NetBalancer on the other hand does, but while it looks similar, it does not offer lifetime free updates like cFosSpeed does and still costs nearly three times as much! Rather brave pricing, if you ask me.

          • meerkt
          • 4 years ago

          There’s a “priority” setting. I don’t know how fine-grained the control is:
          http://www.netlimiter.com/docs/basic-concepts/rules

            • Meadows
            • 4 years ago

            They should’ve included it in the product bullet points then, as that’s where my impression was from.

            • meerkt
            • 4 years ago

            Another thing they should do is get rid of the offensive-ridiculous “pay extra $8 to be able to redownload the full version for a whole year!”.

    • Meadows
    • 4 years ago

    [quote<]"We see the Killer using more CPU cycles compared to the Intel GigE controller: 2% more for the "small" files test, and 4% for the "large" file test."[/quote<] Surely you mean 14% more for the small files test and 93% more for the large file test, no? Unless it's percentage [i<]points[/i<].

      • Freon
      • 4 years ago

      There is an enormous amount of cajoling around the use of relative and absolute measures… It does not look good to me. I really suggest it be edited to at least read a bit more impartially.

    • lycium
    • 4 years ago

    Oh my god, has it really been 8 years? I remember reading that article “the other day”…

    Granted, the Riva TNT review etc was longer ago, but damn…

    • xeridea
    • 4 years ago

    Reasons Killer never took off:
    1. It costs way too much
    2. Routers already do QoS, which is where you want it anyway, any device on your network is affected. Things like torrents can be done easily without QoS, by limiting the bandwidth used within torrent software. I used to play Diablo 2 online while torrenting with an imperceptible difference in lag simply by setting limit, I think it was like 80%.
    3. The latency differences while good percentage-wise, are mostly academic due to 1/10 of a ms not really mattering. If you can get 0.2% lower latency at the cost of increased CPU load, you may be better saving your CPU cycles.

      • Freon
      • 4 years ago

      All of this is still true, unfortunately this article fails to really get the big picture and stares down microseconds that do not matter even over a LAN.

      Next up, we’re going to have articles on shortening the copper wiring in your house to save nanoseconds of latency.

      5 meter wire: 25 nanoseconds
      *pages of nonsense*
      4 meter wire: 20 nanoseconds

      A 20% IMPROVEMENT!

      • Hinton
      • 4 years ago

      History lesson:

      Killer was launched as a standalone niche product.

      Then it took off.

      Ie. the opposite of what you claimed.

    • UnfriendlyFire
    • 4 years ago

    And then any latency decreases will be blown away when the router derps and prioritizes someone’s Netflix over VOIP and gaming traffic. My family had a router that would do that, and thus cause ping to shoot from 30 ms to +2000 ms.

    Or when the ISP throttles your connection to a point where someone opening a Youtube page will also cause packet bottlenecks for your game.

    Or when the server is based in another country, and is unable to keep up with the gaming traffic.

    • rika13
    • 4 years ago

    TL;DR: It doesn’t Nagle and does QoS.

      • cygnus1
      • 4 years ago

      yeah, basically.

    • tipoo
    • 4 years ago

    They should have kept the giant K cowboy belt heatsink

    http://www.pcper.com/images/reviews/338/card_front.jpg

    Lol, those were the days. And that reminds me - do dedicated PhysX cards do anything anymore?

      • Forge
      • 4 years ago

      Nah, Nvidia ended support for the dedicated PhysX cards once the GPUs overtook their performance, which IIRC was around the G82 era.

        • tipoo
        • 4 years ago

        Remember Cellfactor Revolution to demo PhysX? So impressive at the time

        https://www.youtube.com/watch?v=-DSrAgbbXBs

    • Airmantharp
    • 4 years ago

    One thing I found out on torrents (back when I used such things) was that you had to limit your total upstream and downstream bandwidth to leave room for other stuff, remembering that there’s about 10%-15% of upload bandwidth in terms of connection overhead needed to support your downloads.

    You choke off your *upload* pipe, and nothing works :D.

      • xeridea
      • 4 years ago

      I remember this also. I could play online games fine on 3Mb connection by limiting torrent speed to about 80% of my connection. Ping tests to Google showed it was barely affected by torrent.

      • meerkt
      • 4 years ago

      I think uTorrent’s UTP is supposed to automatically manage bandwidth? But yeah, it’s hardly a big effort to set a global upload limit. Or just exit the client when you do something that needs low latency.

        • Airmantharp
        • 4 years ago

        Probably; it’s been half a decade at least since I’ve looked at them.

      • Bauxite
      • 4 years ago

      Depends on your internet connection, very true for DSL and usually true for cable but for those of us lucky to have fiber or similar for the last mile you really can squeeze the pipe quite close to max.

      • Freon
      • 4 years ago

      Yes, it seems most decent bit torrent client software and even Steam offer bandwidth controls. The problem has long been solved.

    • Shambles
    • 4 years ago

    No thanks. Only Intel NICs in my house.

    • hechacker1
    • 4 years ago

    Thanks for testing this. Basically the secret sauce is QoS and lower packet queues. However, we’re talking about microseconds here; once it hits your router, modem, and the internet all bets are off.

    It’s good to know there’s really no downside though compared to Intel, except maybe rock solid driver support on Intel’s side.

    I wonder, have you tried disabling interrupt moderation, and reducing the TX and RX queues on the Intel driver? Even Realtek usually offers those options in the default Windows driver.

    And then, I wonder if there really would be a difference in ping times for TF2 if you had a router with proper QoS. CeroWRT comes to mind with its fq_codel scheduler which already prioritizes smaller packet flows compared to torrent traffic.

    What this Killer software is doing is essentially moving the bandwidth throttling to the software stack, rather than the router. Which is probably good for a lot of people because their routers generally suck at QoS.

      • loophole
      • 4 years ago

      Good question about driver tuning. The short answer was that we took the default out-of-the-box driver behavior. Let’s say it’s an exercise left for the reader 😛

    • Convert
    • 4 years ago

    Great review, well written. I also immediately started thinking their next logical step is routers. They could make a Killer router and even have the software communicate back to pass along the prioritization information and heuristics. That would be pretty awesome if you could have QOS software on your PC that controlled the router behavior.

    I noticed one tiny spelling error on the first page: "Using hardare to bypass the operating system's network stack reverted to the realm of high-frequency traders."

      • Airmantharp
      • 4 years ago

      Maybe not a ‘Killer’ Router, but providing a software add-in for router makers that could work with Killer-enabled PCs, such that the QoS priorities on the client are mirrored for the router itself, at a minimum.

        • Vaughn
        • 4 years ago

        I’ll just do QOS from my R7000 running Asus merlin firmware.

        And stick to the NIC on my X58 board for now which is a Marvell controller.

          • cygnus1
          • 4 years ago

          Had no idea the Nighthawk could run Merlin’s Asus firmware, that’s pretty nifty. Despite the rave reviews of their most recent gear I never bought any of the newer Netgear stuff because of past terrible experience with Netgear. My Asus routers have been great, and even better with the Merlin firmware. I’ll have to reconsider the Nighthawk knowing it can run my firmware of choice.

            • Vaughn
            • 4 years ago

            I was on a 6 year old DGL 4500 router.

            And was looking to upgrade when I came across the news that I could run the Merlin firmware on it I picked up an R7000 immediately.

            http://www.linksysinfo.org/index.php?threads/asuswrt-merlin-on-netgear-r7000.71108/

            Been running it for almost 2 months now and its been fantastic I used the stock firmware for maybe 5 mins and will never go back.

    • Chrispy_
    • 4 years ago

    So, a Killer NIC’s software stack is the real special sauce and they claim it can reduce latency over a basic NIC (let’s say a Realtek, for argument’s sake). That’s cool and all, but here’s what a game sees when you click the fire button in an online game:

    Game engine internal network clock 100Hz (5ms latency average)
    Realtek NIC latency to router (<1ms)
    Router processing latency (1ms)
    latency to remote server (25ms)
    Server internal network clock 100Hz (5ms latency average)
    Server player prediction clock 20Hz (20ms latency average)
    return latency to your router (25ms)
    Router processing latency (1ms)
    Realtek NIC latency (<1ms)

    Yay! Packet received \o/

    So, uh... <64ms of total latency there, of which the cheapo Realtek NIC is responsible for <2ms. Let's just say the Killer NIC fancy software stack reduces that to <63ms of latency: What have we gained, in real terms?

    In my experience we've gained an awkward user-configurable driver stack that is more complicated to your average gamer than the Windows Update "it just works" generic Realtek stack, and we've gained a Killer NIC-sized dent in our wallet.

      • Convert
      • 4 years ago

      That alone is pointless and very easy to get sucked in to believing it’s important when you see graphs like the ones in this article. Pretty hard not to give some weight to those % differences. Even though it’s completely silly.

      However, as the review covers the usefulness is more about preventing other applications from stomping on what you prefer to be doing. I see the latency as pure gimmick that only the people who require special keyboards and mice that sense G forces that you can’t even produce would care about…

        • Chrispy_
        • 4 years ago

        Yeah. The graphs and advertising focus on significant percentage improvements but the units are in microseconds. When your round trip is measured in miliseconds, your ISP latency is quite literally tens of thousands of times more relevant to your gaming than the NIC latency. 50ms ping reduced by the Killer NIC by 300 microseconds is *still* 50ms of ping.

        However, the software looks good. I still wouldn't torrent whilst gaming, but I guess for casual users who are the sole person using their connection, it's useful. Realistically, there are two or more internet users in a household and person A is just screwed for gaming if person B is hogging all the dataz, regardless of fancy NIC client software.

          • morphine
          • 4 years ago

          Personally I think that if it’s included in good mobos (which it apparently is), then no harm done. You know that at least you get a quality NIC with “single-user” QoS, which isn’t a bad deal at all.

            • Chrispy_
            • 4 years ago

            “No harm done” is probably the perfect attitude.

            Behind all the ridiculous marketing shenanigans and attempts to oversell, the underlying NIC seems competent enough and the software is both potentially useful to some people and also optional for those that don't want it.

            • morphine
            • 4 years ago

            Well, the angle of “you’re getting a NIC that appears to be on par with Intel’s” is also a good thing. You know that you’re getting something better than an el-cheapo Realtek thingie.

            Not hating on Realtek, either. The hatorade that’s been poured over their audio solutions is way undeserved, but I’ve personally encountered a few issues here and their with their Ethernet PHYs/drivers.

          • brucek2
          • 4 years ago

          Exactly. The “throttle torrents to keep room for interactive uses” function is needed in the router or just before it, not on an individual PC. I’d say “or perhaps in the brain of the users sharing a house” but it seems we’re increasingly entering an age where device makers are feeling free to queue up gigabytes of downloads without so much as a yes/no — see Win 10 upgrades and Destiny pushed without any sign of user interest on PS4.

          • mctylr
          • 4 years ago

          Um, nit-picking: It's milliseconds (ms) not microseconds (µs). A millisecond is one thousandth (1e-3) of a second, a microsecond is one millionth (1e-6) of a second.

          Nevermind, I misread the timing in the "review" as milliseconds, not microseconds, so Chrispy_ is correct, the savings, other than from the Quality of Service (QoS) software, are negligible.

        • cygnus1
        • 4 years ago

        "the usefulness is more about preventing other applications from stomping on what you prefer to be doing."

        The usefulness is more about preventing other applications from stomping on what you prefer to be doing as long as it's all happening on one Windows PC...

        The utility of their technology is very limited by the very nature of networks. Networks are shared systems, there is usually more than 1 host sitting behind most internet connections.

          • Convert
          • 4 years ago

          I’m fully aware of the limitation. Hence the usefulness of it being able to tie back in to a router, that way you could prioritize users as well as traffic.

            • cygnus1
            • 4 years ago

            Yep, that would be nice. Honestly, what they should do, is partner with router manufactures to come up with proper and more importantly uniform support for 802.1p that is respected properly by the routers. But, all the hardware is already out there. Since they are focusing on software, they could easily build a DD-WRT or Merlin style firmware for a couple different lines of routers. And then build an activation system into so as to monetize it like some versions of DD-WRT have.

            • Andrew Lauritzen
            • 4 years ago

            Sure but you absolutely don’t need – or even want – client side stuff if you have a router-based solution anyways. The decisions and heuristics you want to run need to be at the gateway, not the clients who don’t have the right information to add anything meaningful to the algorithm.

            So if you have a cheap router and only one significant (bandwidth-wise) client… sure. But I don’t think this is a particularly common configuration these days to be honest.

            It’s good to see what appears to be competent solution (time will tell on stability which is really paramount here), but the software gimmicks are just that – pointless complexity that is optimizing a point in the stack that is fairly irrelevant to latency today.

            • Convert
            • 4 years ago

            Obviously. But I’m not sure if we are there yet. I think this would be at least a good step to get to that point. Router side QoS is very basic/generic last time I dealt with it on consumer devices. It’s not like I can say “Prioritize any traffic from game.exe from computer 2”. In that case they do have something meaningful to add by having client side software. I don’t have a killer NIC but judging by the review it seems like you have very granular control over the QoS down to the .exe and have some idea of the traffic it’s passing. You could very easily identify applications and set priority levels, which if you could pass that along to a router would be very helpful.

            I’m not at all talking about leveraging it for the sub 1ms improvements. Also this is not only very niche now but it would still be niche no matter how they improve things even with router integration. Even in households with multiple internet users you’re typically only battling against YouTube streaming and web browsing, hardly detrimental to gaming.

            Quite honestly I don’t have high hopes for it in routers. Consumer router manufacturers can’t even seem to make a router that doesn’t require monthly reboots or have oddities with the wireless or completely goes out to lunch with a heavy torrent load. It’s 2015 and they seemingly can’t build a device to preform it’s original function flawlessly.

      • Lans
      • 4 years ago

      I agree but on the other hand, the local numbers do look promising (Netperf 128 bytes round trip latency) and it should be followed up with real world numbers using real games. Still I would take this with a huge spoon of salt since I would think / hope game developers do some optimization (Netperf 32 bytes numbers) to find “the sweet spot”.

        • Convert
        • 4 years ago

        Local LAN party with all Killer gear?

        • Krogoth
        • 4 years ago

        You realize that bottleneck for LANs is the switch(es) and/or router?

      • Freon
      • 4 years ago

      Well, at least someone gets it!

      One of my “must have nots” in a Skylake board was this Killer NIC junkware. They do not deserve a place in the market or any of my money, my friends money, my family’s money…

        • Waco
        • 4 years ago

        Dammit. Miss clicked and -1’d you. I meant to +1 you since I did exactly the same thing shopping.

          • Chrispy_
          • 4 years ago

          If anything, this reviews shows that it’s a decent enough NIC – so I wouldn’t actively avoid a board just because it uses a Killer NIC. Nobody’s forcing you to install the software stack so if you use it as a normal NIC with Windows Update WHQL drivers, it’ll be no worse than a Broadcom, Atheros, Realtek NIC that comes with so many boards anyway.

            • Waco
            • 4 years ago

            Agreed, but until recently (past year or so I guess), you couldn’t grab *just* the driver easily at all. That was enough to make me never want to deal with it.

            • Freon
            • 4 years ago

            You pay extra for it. I.e. the Gigabyte Gaming 7 TR reviewed recently, the UD5 is almost identical sans the Killer NIC and Sound Blaster audio. $30 difference. It’s hard to find a perfect apples to apples comparison, but it looks like you’re going to spend an extra $10-15 for it.

            • Chrispy_
            • 4 years ago

            Sure, but you’d pay extra for an Intel NIC in most cases too. If you want more than the absolute cheapest NIC on the market, there’s always going to be some cost involved – you get what you pay for (unless you buy Apple)

            • Freon
            • 4 years ago

            You get nothing by paying extra for a board with a Killer NIC over the built-in Intel I219 which essentially free with your 100-series chipset.

            Did you not realize Intel’s chipsets have had Intel LAN built in for years now?

            I mean, ok, there may be some minor level of external analog circuitry and obvious the physical port, but really…

            • D@ Br@b($)!
            • 4 years ago

            The gaming 7 has also:
            3866OC vs 3466OC
            Dp/HDMI vs Dsub/DVI/HDMI
            Intel, Killer vs Intel
            x16/x8x8/x8x8x4 vs x16/x16x4
            +1 M.2 socket 3
            +1 Asmedia SATA 2×6 Gb/s conn.
            + 2 SATA3 ports
            + 1 PCIex16 slot
            + 1 lan port
            + 1 USB type C
            Did I forget something….?

            • Freon
            • 4 years ago

            Your post is about 90% incorrect. Take a harder look.

            The few true differences are inconsequential, like the DRAM speed support being quoted or minor differences in what output ports they chose from the IGP.

      • strangerguy
      • 4 years ago

      There’s no improvement because you didn’t use $1000 audiophile-I mean latency-nophile Ethernet cables. *sarcasm*

      • Bensam123
      • 4 years ago

      Sorta interested in where you get the internal game engine clocks from… Also movement prediction (which doesn’t happen on the server and isn’t a buffer anyway).

      https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking

      • ZGradt
      • 4 years ago

      Yeah. The latency of the local network is nothing compared to the connection to your ISP, which your NIC can’t help at all.
      It sounds like this could hep if you’re using your gaming computer to download stuff in the background, but I don’t do that.
      Maybe if I had a Killer router to prioritize my packets over the family’s Netflix packets…

        • Freon
        • 4 years ago

        Even on a LAN, a few dozen microseconds is inconsequential to anything but some bizarre distributed computing needs. At that point you may not want to use GigE anyway, and you likely won’t be using consumer boards, especially “gaming” boards.

      • just brew it!
      • 4 years ago

      Seems to me the real potential win here is the traffic prioritization. OTOH, people that care that much about networking performance are probably going to take pains to not run any other bandwidth-hogging applications while they’re gaming anyway.

      Still, with consumer broadband performance improving slowly but steadily, we will soon be reaching the point where latencies in the local machine and LAN will start to become a significant component of overall latency.

        • Chrispy_
        • 4 years ago

        "Still, with consumer broadband performance improving slowly but steadily, we will soon be reaching the point where latencies in the local machine and LAN will start to become a significant component of overall latency."

        Well, not unless the server you're connecting to is in the same city. Ignoring switching time, the speed of light alone will add 1ms per 100 miles to your round trip time.

        So, uh, lets assume you're lucky and there's a game server within 500 miles (which, given there's usually only one or two per continent is pretty optimistic) you have an unavoidable minimum latency of 5ms, aka 5000μs because it takes the signal that long to travel at the speed of light to your server and back.

        So, a killer NIC saves you 50μs but your WAN latency is still two orders of magnitude higher, and that's using a theoretically perfect link that operates at the speed of light. In reality that's utterly impossible! If ISP's became "perfect like this" they go from being four orders of magnitude more important than your NIC latency, to just two orders of magnitude.

        Still seems like a lost cause to me, and I doubt we'll be sending data faster than the speed of light any time soon!

          • just brew it!
          • 4 years ago

          Ahh well, there’s still the LAN party crowd to cater to, eh? 😉

            • Chrispy_
            • 4 years ago

            Totally – Those 50μs are the difference between victory or humiliation!

            It’s a good job I drink beer at LAN parties; it’s scientifically proven to reduce my typical 150ms reaction times by turning me into an Omnipotent gaming deity. Rumours that too much beer in fact makes me sluggish and prone to pass out on my keyboard are libelous slander and outright lies!

          • BobbinThreadbare
          • 4 years ago

          I’m waiting for my quantum entanglement modem. Coming any day now I’m sure.

        • Krogoth
        • 4 years ago

        Completely incorrect.

        Internet latency at normal broadband connection speeds has little to do with a lack of bandwidth. It has everything to do with the routing/switching equipment between you and your destination. The laws of physics apply here as well.

        Suffice to say that you should always try to connect to servers that are geographically close to your location.

        Local connection latency has been a non-issue for over 30 years.

          • just brew it!
          • 4 years ago

          Latency goes way up when buffers start to fill up and your traffic needs to wait in line. But as has already been pointed out elsewhere, this needs to be handled via QoS in the router; doing it locally on the NIC doesn’t do you much good unless the source of the competing traffic is on the same system. (And if something else on the system IS generating lots of traffic, that’s a case of “If it hurts when you do that, then don’t do that!”)
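          To illustrate how quickly a full buffer turns into added latency, here's a toy Python sketch; the buffer sizes and the 10 Mbit/s uplink are made-up example numbers, not anything measured in the article:

```python
# Toy queueing-delay arithmetic: with a saturated uplink, every new packet
# waits behind whatever is already sitting in the transmit buffer.

def queueing_delay_ms(buffer_bytes: int, uplink_mbps: float) -> float:
    """Worst-case added latency from a full FIFO buffer on the uplink."""
    uplink_bytes_per_s = uplink_mbps * 1_000_000 / 8
    return buffer_bytes / uplink_bytes_per_s * 1000

# e.g. a torrent client keeping the router's transmit buffer full on a
# 10 Mbit/s uplink:
for buffer_kb in (64, 256, 1024):
    delay = queueing_delay_ms(buffer_kb * 1024, uplink_mbps=10)
    print(f"{buffer_kb:>5} KB buffer @ 10 Mbit/s -> +{delay:.0f} ms of queueing delay")
```

          The delay lives in whichever device owns the full buffer (usually the router), which is why the prioritization has to happen there unless the competing traffic comes from the same PC.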

    • WaltC
    • 4 years ago

    Killer NICs have been a part of MSI’s AMD chipset-based motherboards for several months now. I have one with my 970 Gaming motherboard. It’s just another NIC…;) It’s fine, but so was my Realtek. At least they keep the Killer software separate from the driver, because you really don’t need it; it looks like fluff to me. Yawn. Six of one, half a dozen of the other.

    • _ppi
    • 4 years ago

    Thanks for the test. Given that I’m now in the market for a Skylake system and a lot of motherboards have Killer NICs, this was very useful information (also regarding the stability and bad rep of Killer NICs).

    Nevertheless, could you please add one more test? How does the Killer NIC compare to Intel’s in that 40ms ping scenario in Team Fortress 2 (with no torrent or other traffic, which is the typical scenario)?

    Obviously, such a test would have a lot of variance, but it should tell us whether there is any meaningful difference in the entire network stack from your PC to a server somewhere in a different city. Thank you.

    Edit: That being said, how about routers that help with the home network situation you described at the end?

      • loophole
      • 4 years ago

      With just Team Fortress 2 running, the ping times between the Killer and the Intel were roughly comparable (as you say though there’s a fair bit of variance with tests like that). (With ping times in the 40ms range you don’t get to see the differences that the more targeted local LAN round trip latency tests showed.)

      • meerkt
      • 4 years ago

      Also would be useful: a comparison with a Realtek chip.

    • ikjadoon
    • 4 years ago

    OK… but in actual gaming, where I’m not trying to torrent an ISO at the same time, how does the Killer NIC help?

      • meerkt
      • 4 years ago

      Improve your latency by up to 0.2ms in synthetic tests!

      • anotherengineer
      • 4 years ago

      Easy, just think you have Special ‘K’ on the board, the placebo effect will make it better 😉

      • Meadows
      • 4 years ago

      Oh, it’ll help lots under Windows 10!

    • DancinJack
    • 4 years ago

    I just have yet to find a NIC that performs better, overall, than Intel’s solution(s). I SEE the results here, and they tell me I shouldn’t be worried about a Killer NIC, but I will still side with Intel because of past experience.

    I’m glad a lot of motherboard OEMs have included Intel NICs more recently. Five years ago you’d be lucky to find them. We should probably give a hand to ASUS for that. I think they were the first to really use them on the reg.

      • Visigoth
      • 4 years ago

      Agreed. I hate that pesky Realtek stuff!

      Now if somebody would please invent a better audio chip so we can get rid of those miserable Realtek audio chips…Intel integrated DSP/C-Media chip/NVIDIA Soundstorm, please?

        • Krogoth
        • 4 years ago

        Realtek NICs work fine.

        Intel NICs are overrated. They just have server-tier functions that never see any real use outside the enterprise world.

          • Airmantharp
          • 4 years ago

          I think it’s a drivers thing… but yeah, I haven’t had issues with GigE NICs in almost a decade, regardless of vendor.

            • JustAnEngineer
            • 4 years ago

            I haven’t seen disastrously bad NIC driver problems since NVidia got out of the chipset business.

            • Airmantharp
            • 4 years ago

            To think, the last one I can think of was an Nvidia board (Gigabyte), but it was a Marvell chipset attached to nForce 3, which wasn’t stable for about the first year. Then there was a driver update, and no further problems.

            • bhtooefr
            • 4 years ago

            I217-LM and I218-LM had a particularly nasty one back in 2013 – in sleep mode, with Wake-on-LAN enabled, they’d do an IPv6 multicast flood.

            IIRC, one machine wasn’t enough to bring down a network, but I want to say that it only took three machines doing it to DDoS the switch they were connected to.

          • mtruchado
          • 4 years ago

          I saw some Realtek chipset misbehaviour. I have one mobo where I had to install an Intel NIC to fix a problem where sometimes, with the system up and running, the NIC just does not work; not even the lights are blinking. Whether this is a hardware failure or even a BIOS issue, I don’t know, though.

        • mtruchado
        • 4 years ago

        Fortunately there are mobos using a C-Media chipset; in my experience they are the best you can buy for an integrated solution. Forget about VIA solutions as well; they suck like Realtek does.

      • Meadows
      • 4 years ago

      Even a Realtek can do better than you think if you apply some of this secret sauce through, for example, cFosSpeed. The main point in this review is not the NIC and its optimisations, but rather the software layer that intelligently assigns priorities to the different types of network traffic without the user ever having to touch any QoS settings or ports at all. Long gone are the days when you couldn’t play online while downloading something massive, for example.
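      For a rough sense of what “assigning priorities to different types of traffic” means in practice, here’s a toy Python sketch; the port-to-priority table and the queue design are invented for illustration and aren’t how cFosSpeed or the Killer suite actually classify traffic:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count
from typing import Optional

# Invented example mapping: lower number = sent first.
PRIORITY_BY_PORT = {
    27015: 0,   # e.g. a Source-engine game server port: highest priority
    53:    1,   # DNS lookups
    443:   2,   # ordinary web traffic
    6881:  3,   # e.g. a BitTorrent port: lowest priority
}

@dataclass(order=True)
class QueuedPacket:
    priority: int
    seq: int                             # keeps FIFO order within a class
    payload: bytes = field(compare=False)

class ToyShaper:
    """Always drains higher-priority traffic classes before lower ones."""
    def __init__(self) -> None:
        self._heap: list[QueuedPacket] = []
        self._seq = count()

    def enqueue(self, dst_port: int, payload: bytes) -> None:
        prio = PRIORITY_BY_PORT.get(dst_port, 2)   # unknown traffic: middling
        heapq.heappush(self._heap, QueuedPacket(prio, next(self._seq), payload))

    def dequeue(self) -> Optional[bytes]:
        return heapq.heappop(self._heap).payload if self._heap else None

shaper = ToyShaper()
shaper.enqueue(6881, b"torrent chunk")
shaper.enqueue(27015, b"game state update")
print(shaper.dequeue())   # b'game state update' goes out first
```

      Real shapers classify by application and protocol rather than a static port table, but the scheduling idea is the same: latency-sensitive traffic jumps the queue while bulk transfers wait.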

      • LostCat
      • 4 years ago

      I have an Intel Gigabit CT desktop adapter that performed pretty horseshit in gaming without cFosSpeed installed.

    • Bauxite
    • 4 years ago

    Should test an Intel server NIC; some of those have pretty advanced things going on in their drivers, and the i350 is the latest GigE version. Anything is better than Realtek, at least.

    Other than a crappy ISP, your internet router usually has a bigger role to play than your actual LAN link; a lot of them fail hard under high traffic, especially torrents (even many “prosumer” models are trash).

    But hey, you can always roll your own infiniband network if you [i<]really[/i<] care about latency inside your house 😉

    • anotherengineer
    • 4 years ago

    Really good review and info.

    One little nagging question though: how does it compare to the regular run-of-the-mill Realtek GigE mobo NIC?

      • Topinio
      • 4 years ago

      Well, we can see from this nice article that a Killer is similar to an Intel Ethernet port but with added QoS. So unless Realtek now offers that too (and I doubt it), I’d infer Intel > Killer > Realtek for stability; and if one wants QoS software and doesn’t dual-boot with Linux, there’s an argument for Killer > Intel > Realtek.

        • Airmantharp
        • 4 years ago

        Basically this.

        Though generally speaking, the NIC used on a motherboard/in a laptop is pretty low on my list of priorities, so long as it is GigE and/or 802.11ac.

        • meerkt
        • 4 years ago

        There’s generic QoS software for Windows:
        [url<]https://techreport.com/discussion/29144/revisiting-the-killer-nic-eight-years-on?post=945728#945728[/url<]

        And I think it’s also IN Windows since 7, but it’s probably as cryptic and inaccessible as its firewall.

        • anotherengineer
        • 4 years ago

        Maybe, but the only way to know for sure would be to test a few mobos with two or three different Realtek NICs to see.

        You never know; maybe there really isn’t a reason to pay a premium for Intel or Killer GigE mobo NICs over Realtek ones.

        Having said that though, I do have an Intel CT desktop adapter plugged into my mobo 😉
        [url<]http://www.intel.com/content/www/us/en/network-adapters/gigabit-network-adapters/gigabit-ct-desktop-adapter-brief.html[/url<]

        Hmmm, it’s actually quite a few years old already. I wonder if Intel will have a newer replacement soon?

    • christos_thski
    • 4 years ago

    Maybe they could use that heuristic technology to release a router with better QoS functionality? Perhaps I’m missing something about networking, because I see that as a much better use for these technologies. The advanced bandwidth-management functionality would be nice in a router, too.

      • jokke
      • 4 years ago

      Under Linux and *BSD you have all those features and more for any network card.

        • Forge
        • 4 years ago

        You tell ’em, Steve-Dave!

      • LostCat
      • 4 years ago

      Isn’t that what StreamBoost is?

      [url<]http://www.smallnetbuilder.com/lanwan/lanwan-features/32297-does-qualcomms-streamboost-really-work[/url<]

      “Qualcomm’s StreamBoost is automatic network bandwidth management / traffic shaping technology. It is largely based on work by BigFoot Networks…”
