Receive Side Scaling - latency hazard?

The network is the forum.

Moderators: Steel, notfred

Receive Side Scaling - latency hazard?

Posted on Wed Aug 11, 2010 9:01 am

Lately I've been trying to shift from being bandwidth-conscious to being latency-conscious in a never-ending race to maintain my "my daddy is smarter than Einstein, stronger than Hercules, and frags you with a snap of his finger" position in Quake Live, and I'd have some specific questions. Lengthy online searching taught me some things, but led me nowhere with regards to my actual questions.

First off, Receive Side Scaling. It's probably a godsend for browsing and filesharing applications as far as I can figure, but does it negatively impact the rest of the system or the latency of simple gaming connections?
Secondly, MTUs. One sees 1500 as being the ever-compatible default everywhere one looks, but do lower values help with latency just like, say, disabling Nagle's algorithm does?

Also, any other tips?
Thanks
Meadows
Grand Gerbil Poohbah
Silver subscriber
 
 
Posts: 3151
Joined: Mon Oct 08, 2007 1:10 pm
Location: Location: Location

Re: Receive Side Scaling - latency hazard?

Posted on Wed Aug 11, 2010 3:30 pm

Last call out for assistance; does anyone even know this stuff?
Meadows
Grand Gerbil Poohbah
Silver subscriber
 
 
Posts: 3151
Joined: Mon Oct 08, 2007 1:10 pm
Location: Location: Location

Re: Receive Side Scaling - latency hazard?

Posted on Wed Aug 11, 2010 6:04 pm

Receive Side Scaling helps in situations where enough data is flowing through a NIC that a single CPU core is unable to deal with the task of getting the data off the NIC. You have to be pushing some pretty serious amounts of data in order for this to happen, typically near the limit of a 1Gb NIC or well past 1Gb on a 10Gb NIC. RSS will split this task up across multiple CPU cores. Usually in the case of TCP traffic a particular flow is kept on a particular CPU core to avoid possible out of order issues in the stack. I think that in the case of UDP it tracks things via source/destination IP.
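To make the per-flow pinning concrete, here's a toy sketch of the idea (real NICs use a Toeplitz hash keyed by the adapter, not SHA-256, and the queue count matches the hardware; names and numbers here are made up for illustration):

```python
import hashlib

NUM_QUEUES = 4  # hypothetical number of RSS receive queues / CPU cores

def rss_queue(src_ip, src_port, dst_ip, dst_port):
    """Toy stand-in for the NIC's RSS hash: hash the flow's 4-tuple and
    map it onto one of the receive queues. Because the hash is
    deterministic, every packet of a given flow lands on the same queue
    (and thus the same core), preserving in-order processing per flow."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % NUM_QUEUES

# Same flow -> same queue, every time.
q1 = rss_queue("10.0.0.2", 51000, "192.0.2.1", 27960)
q2 = rss_queue("10.0.0.2", 51000, "192.0.2.1", 27960)
print(q1, q1 == q2)
```

Different flows get spread across queues, but no single flow ever straddles two cores.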

It's tough to say what effect RSS would have on gaming latency, but I'm confident that the effect is essentially undetectable in terms of gameplay. If RSS were enabled and the queue used by the game's network traffic changed from server connection to server connection, it could mean that latency varied from session to session in a very, very small way. When I say very, very small, I'm talking a value that is dwarfed by the WAN latency of the server you're connected to over the Internet, i.e. probably measured in single-digit microseconds if not hundreds of nanoseconds.

From a CPU affinity standpoint it's possible that the game you're playing as well as the NIC receive process could get moved around among available cores as you're playing, which might have an exceedingly small effect on latency as well. Given the way most games monopolize a CPU core however (and the relatively low amount of network traffic involved in most online games) it's more likely that the NIC receive process would land on CPU 0 along with other system processes while the game ran on CPU 1.

Regardless, I would expect that RSS is going to have a negligible effect on your gaming experience one way or another. Any advantage you gain by having a less-busy core grab data off the NIC will probably be offset by the fact that the actual game engine core that needs the data will have to grab it from a common cache pool or main memory.

Messing with MTUs is usually a bad idea. In the case of TCP, modern operating systems will try to do Path MTU Discovery, which involves sending at max MTU and relying on an ICMP message from an intermediate device to tell you that your MTU is too big and needs to be reduced. These discoveries are done per connection with TCP because you build a virtual connection when you establish the TCP session.

Practically speaking Path MTU Discovery is still hit and miss because things along the way like firewalls will block the ICMP messages from reaching you. This means you never get the message that your MTU is too big, and your packets get silently dropped. Bad things. UDP is out of luck in this regard because it's connectionless and the system won't keep track of the MTU for a particular destination even if it's sending packet after packet to that destination.
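For what it's worth, on Linux you can see the knob behind this per socket. A minimal sketch, assuming a Linux build of Python (the `IP_MTU_DISCOVER` option doesn't exist elsewhere, hence the guard):

```python
import socket

# Ask the kernel to set the Don't Fragment bit and do Path MTU Discovery
# on a UDP socket. If a hop's MTU is too small and its ICMP "fragmentation
# needed" replies are filtered, sends simply vanish -- the failure mode
# described above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pmtud = None
if hasattr(socket, "IP_MTU_DISCOVER"):  # Linux-only option
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                    socket.IP_PMTUDISC_DO)  # DF on, never fragment locally
    pmtud = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER)
print("PMTUD mode:", pmtud)
sock.close()
```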

Of course Path MTU Discovery usually comes into play when you raise your MTU above 1500, i.e. jumbo frames. You could theoretically lower your MTU, but you're compromising other services like HTTP; by lowering your MTU from 1500 to 1000, you make it so the remote server has to frame three packets to send you 3000 bytes of data instead of only two.
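The arithmetic behind that example (ignoring per-packet header overhead for simplicity, as above; in reality each packet also carries roughly 40 bytes of IPv4+TCP headers, so the effective payload per packet is a bit less than the MTU):

```python
import math

def packets_needed(payload_bytes, mtu):
    # Packets required to carry a payload, ignoring header overhead.
    return math.ceil(payload_bytes / mtu)

print(packets_needed(3000, 1500))  # -> 2 packets at the default MTU
print(packets_needed(3000, 1000))  # -> 3 packets at a lowered MTU
```

More packets means more header bytes on the wire and more per-packet processing for the same data.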

Moreover, it's not going to do you any good anyway, because while MTU is taken into account during session negotiation in TCP (via Maximum Segment Size), games AFAIK almost universally use UDP, and it has no facility for declaring a max MTU to a destination. This means that if you set your MTU low enough, the server will send you packets that are too big for you to handle, and those packets will be dropped. So effectively you're handicapping your TCP traffic for non-gaming stuff without helping UDP traffic at all.
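You can actually see the MSS negotiation happen on a TCP socket. A sketch assuming Linux or a BSD-family OS, where the effective value is exposed via the `TCP_MAXSEG` socket option (hence the guard); UDP sockets have no equivalent:

```python
import socket

# Connect over loopback and read the negotiated Maximum Segment Size.
# The kernel derives it from the interface/path MTU during the handshake.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

mss = 0
if hasattr(socket, "TCP_MAXSEG"):
    mss = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print("effective MSS on loopback:", mss)  # large, since loopback MTU is huge

for s in (cli, conn, srv):
    s.close()
```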

Fortunately, none of this matters because any game whose networking code is worth its salt is going to send you lots of small packets rather than fewer large ones anyway, which is why they use UDP in the first place; assuming the game is written well, messing with MTU is trying to improve something that probably can't be improved. Similarly, your client should send the server data using the same rules, so setting your MTU lower won't help you in that respect either.

As for Nagle's algorithm, you're correct that it improves network efficiency by queueing small amounts of data for a small amount of time (up to 200 msec IIRC) so it can hopefully "fill out" a packet and make the payload larger vs. the headers. However, since nagling is a TCP thing and games almost always use UDP, disabling or enabling nagling (which is usually done on a per-socket basis depending on the application's needs rather than a system-wide basis) won't have any effect on most games.
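The per-socket part is the key: an application opts out of nagling on its own TCP socket with `TCP_NODELAY`; there is no standard system-wide switch. A minimal sketch:

```python
import socket

# Disable Nagle's algorithm on one TCP socket. Only this socket is
# affected; other connections on the system keep their own settings.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("Nagle disabled:", nodelay != 0)
sock.close()
```

Since a UDP socket never buffers data this way to begin with, the option is meaningless for UDP-based games.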
Dr. Evil
TR Staff
Gold subscriber
 
 
Posts: 75
Joined: Wed Dec 26, 2001 7:00 pm
Location: Lee's Summit, MO

Re: Receive Side Scaling - latency hazard?

Posted on Wed Aug 11, 2010 6:29 pm

Dr. Evil wrote: However, since nagling is a TCP thing and games almost always use UDP, disabling or enabling nagling (which is usually done on a per-socket basis depending on the application's needs rather than a system-wide basis) won't have any effect on most games.

I would like to add here that disabling nagling has the potential to bring massive responsiveness improvements in games like WoW, for example.

Sticking to Quake Live, it appears I can leave RSS in place and probably don't need to touch the MTU. Thanks for the reply.
Meadows
Grand Gerbil Poohbah
Silver subscriber
 
 
Posts: 3151
Joined: Mon Oct 08, 2007 1:10 pm
Location: Location: Location

Re: Receive Side Scaling - latency hazard?

Posted on Wed Aug 11, 2010 7:31 pm

I'm not sure what you can really do on the client side. I remember dicking around with this stuff with CS and Half-Life to little avail. Your situation may be different, but most internet connections are oversubscribed and the infrastructure between you and the server is just plain slow. You are also at the mercy of the quality of the network code in the application.

Enter your lament and/or suitable dirge music here.

You can get some idea of what's around you by running a traceroute between you and the chosen server, but that is subject to ICMP filtering as stated above. I also like the Nitro test servers, which can show interesting statistics and settings.
Calm seas never made a skilled mariner.
drsauced
Graphmaster Gerbil
 
Posts: 1463
Joined: Mon Apr 21, 2003 1:38 pm
Location: Here!
