
Receive Side Scaling - latency hazard?

Posted: Wed Aug 11, 2010 9:01 am
by Meadows
Lately I've been trying to shift from being bandwidth-conscious to being latency-conscious in a never-ending race to maintain my "my daddy is smarter than Einstein, stronger than Hercules, and frags you with a snap of his finger" position in Quake Live, and I have some specific questions. Lengthy online searching taught me some things, but got me nowhere with regard to my actual questions.

First off, Receive Side Scaling. It's probably a godsend for browsing and filesharing applications as far as I can figure, but does it negatively impact the rest of the system or the latency of simple gaming connections?
Secondly, MTUs. One sees 1500 as the ever-compatible default everywhere one looks, but do lower values help with latency the way that, say, disabling Nagle's algorithm does?

Also, any other tips?
Thanks

Re: Receive Side Scaling - latency hazard?

Posted: Wed Aug 11, 2010 3:30 pm
by Meadows
Last call for assistance; does anyone even know this stuff?

Re: Receive Side Scaling - latency hazard?

Posted: Wed Aug 11, 2010 6:04 pm
by Dr. Evil
Receive Side Scaling helps in situations where enough data is flowing through a NIC that a single CPU core is unable to deal with the task of getting the data off the NIC. You have to be pushing some pretty serious amounts of data in order for this to happen, typically near the limit of a 1Gb NIC or well past 1Gb on a 10Gb NIC. RSS will split this task up across multiple CPU cores. Usually in the case of TCP traffic a particular flow is kept on a particular CPU core to avoid possible out of order issues in the stack. I think that in the case of UDP it tracks things via source/destination IP.
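
To make the flow-to-queue idea concrete, here's a simplified Python sketch. It's only an illustration of the concept: real NICs compute a keyed Toeplitz hash of the flow tuple and run it through an indirection table, so the CRC32 stand-in and the queue count below are just assumptions for the example.

Code:
# Simplified sketch of RSS-style queue selection (illustration only).
# Real hardware uses a keyed Toeplitz hash plus an indirection table;
# zlib.crc32 stands in for that here.
import zlib

NUM_QUEUES = 4  # hypothetical: one receive queue per CPU core

def rss_queue(src_ip, dst_ip, src_port, dst_port):
    """Map a flow's 4-tuple to a receive queue (and hence a CPU core)."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(flow) % NUM_QUEUES

# Every packet of a given flow hashes to the same queue, so one core sees
# that flow in order, while different flows spread across the other cores.
print(rss_queue("203.0.113.7", "192.168.1.10", 27960, 51515))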

It's tough to say what effect RSS would have on gaming latency, but I'm confident that the effect is essentially undetectable in terms of gameplay. If RSS was enabled and the queue used by the game's network traffic changed from server connection to server connection it could mean that latency varied from session to session in a very, very small way. When I say very very small I'm talking a value that is dwarfed by the WAN latency of the server you're connected to over the Internet, i.e. probably measured in single-digit microseconds if not hundreds of nanoseconds.

From a CPU affinity standpoint it's possible that the game you're playing as well as the NIC receive process could get moved around among available cores as you're playing, which might have an exceedingly small effect on latency as well. Given the way most games monopolize a CPU core however (and the relatively low amount of network traffic involved in most online games) it's more likely that the NIC receive process would land on CPU 0 along with other system processes while the game ran on CPU 1.

Regardless, I would expect that RSS is going to have a negligible effect on your gaming experience one way or another. Any advantage you gain by having a less-busy core grab data off the NIC will probably be offset by the fact that the actual game engine core that needs the data will have to grab it from a common cache pool or main memory.

Messing with MTUs is usually a bad idea. In the case of TCP, modern operating systems will try to do Path MTU Discovery, which involves sending full-size packets with the Don't Fragment bit set and relying on an ICMP "fragmentation needed" message from an intermediate device to tell you that your MTU is too big for that path and needs to be reduced. This discovery is done per connection with TCP because you build a virtual connection when you establish the TCP session.

Practically speaking Path MTU Discovery is still hit and miss because things along the way like firewalls will block the ICMP messages from reaching you. This means you never get the message that your MTU is too big, and your packets get silently dropped. Bad things. UDP is out of luck in this regard because it's connectionless and the system won't keep track of the MTU for a particular destination even if it's sending packet after packet to that destination.
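
If you're curious what your stack currently believes the path MTU to some host is, here's a rough, Linux-only sketch. The IP_MTU / IP_MTU_DISCOVER socket options are Linux-specific, the numeric fallbacks are the usual <linux/in.h> values, and the hostname is just a placeholder.

Code:
# Rough Linux-only sketch: ask the kernel for the path MTU it currently
# holds for a destination. Falls back to the usual <linux/in.h> values if
# this Python build doesn't export the constants.
import socket

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU          = getattr(socket, "IP_MTU", 14)

def cached_path_mtu(host, port=443):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set the DF bit
    s.connect((host, port))   # UDP connect sends nothing; it just pins the route
    mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    s.close()
    return mtu

# Until real traffic (or an ICMP "fragmentation needed") updates the route,
# this simply reports the outgoing interface's MTU, typically 1500.
print(cached_path_mtu("example.com"))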

Of course Path MTU Discovery usually comes into play when you raise your MTU above 1500, i.e. jumbo frames. You could theoretically lower your MTU, but you're compromising other services like HTTP; by lowering your MTU from 1500 to 1000, you make it so the remote server has to frame three packets to send you 3000 bytes of data instead of only two.
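
Back-of-envelope, that packet-count penalty looks like this (same simplification as above, i.e. treating the whole MTU as payload; in reality roughly 40 bytes of each packet are IP/TCP headers, which only makes the low-MTU case worse):

Code:
# Back-of-envelope: packets needed to deliver a payload at a given MTU.
# Treats the whole MTU as payload, matching the simplification above.
from math import ceil

def packets_for(payload_bytes, mtu):
    return ceil(payload_bytes / mtu)

print(packets_for(3000, 1500))  # 2 packets at the default MTU
print(packets_for(3000, 1000))  # 3 packets after lowering the MTU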

Moreover, it's not going to do you any good anyway, because while MTU is taken into account during session negotiation in TCP (via the Maximum Segment Size), games AFAIK almost universally use UDP, which has no facility for declaring a maximum MTU to a destination. This means that if you set your MTU low enough, the server will send you packets that are too big for you to handle, and those packets will be dropped. So effectively you're handicapping your TCP traffic for non-gaming stuff without helping UDP traffic at all.

Fortunately, none of this matters because any game whose networking code is worth its salt is going to send you lots of small packets rather than fewer large ones anyway, which is why they use UDP in the first place; assuming the game is written well, messing with MTU is trying to improve something that probably can't be improved. Similarly, your client should send the server data using the same rules, so setting your MTU lower won't help you in that respect either.

As for Nagle's algorithm, you're correct that it improves network efficiency by queueing small amounts of data for a small amount of time (up to 200 msec IIRC) so it can hopefully "fill out" a packet and make the payload larger vs. the headers. However, since nagling is a TCP thing and games almost always use UDP, disabling or enabling nagling (which is usually done on a per-socket basis depending on the application's needs rather than a system-wide basis) won't have any effect on most games.
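
For completeness, turning Nagle off is something the application does on its own TCP socket via TCP_NODELAY; here's a minimal Python sketch (the server and request are placeholders, purely for illustration):

Code:
# Minimal sketch: disabling Nagle's algorithm on one TCP socket.
# This is a per-socket, per-application choice, not a system-wide setting,
# and it has no effect on UDP sockets at all.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small writes immediately
sock.connect(("example.com", 80))  # placeholder host
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(200))
sock.close()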

Re: Receive Side Scaling - latency hazard?

Posted: Wed Aug 11, 2010 6:29 pm
by Meadows
Dr. Evil wrote:
However, since nagling is a TCP thing and games almost always use UDP, disabling or enabling nagling (which is usually done on a per-socket basis depending on the application's needs rather than a system-wide basis) won't have any effect on most games.

I would like to add here that disabling nagling has the potential to bring massive responsiveness improvements in games like WoW for example.

Sticking to Quake Live, it appears I can leave RSS in place and probably don't need to touch the MTU. Thanks for the reply.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Aug 11, 2010 7:31 pm
by drsauced
I'm not sure what you can really do on the client side. I remember dicking around with this stuff with CS and Half-Life to little avail. Your situation may be different, but most internet connections are oversubscribed and the infrastructure between you and the server is just plain slow. You are also at the mercy of the quality of the network code in the application.

Enter your lament and/or suitable dirge music here.

You can get some idea of what's around you by running a traceroute between you and the chosen server, but that is subject to ICMP filtering as stated above. I also like the Nitro test servers, which can show interesting statistics and settings.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 5:34 am
by Bensam123
I realize this is a necro, but this is very much relevant to my interests, specifically regarding MTU. I've gone down the exact same road as Meadows over the years, and MTU is still something I get conflicting answers about. I realize MTU mainly affects TCP traffic and that in a perfect ecosystem it won't influence your gaming experience at all. However, we don't live in a perfect world, and our computers send more over our network cards than just gaming packets. Not just your computer, but all the traffic on your network.

So, for instance, if you stream to a service like Twitch, that stream is constantly sending out TCP packets alongside your gaming traffic and everything else on your network. Would reducing the MTU on your router cause those TCP packets to become fragmented, and thereby allow certain kinds of traffic to get squeezed in between packets that would otherwise have to wait, especially while utilizing QoS? Streaming to a service already causes a latency hit simply by using bandwidth.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 6:44 am
by Chrispy_
I still think MTU is irrelevant and best left at the defaults your ISP sets.

Typical MTUs on modern connections are around a kilobyte or two, whilst bandwidth is usually multiple megabytes a second.

That means that making wild changes to your MTU has a near-zero effect on your latency. Let's spitball a 30MBit/s connection, with 20ms of latency in-game, providing 4MiB/s of bandwidth and a 1536 byte MTU. Let's cut your MTU to 512 bytes. That's a pretty drastic change, but let's see what happens to your in-game latency:

At an MTU of 1536 bytes, the maximum packet delay for the next packet caused by transmission time of the current packet is 1536 bytes divided by 4,194,304 bytes per second, multiplied by 1000ms (in a second). That's 0.36ms of latency caused by your MTU, equivalent to 1 frame at 3000 frames per second. It would change your in-game latency from 20ms to 20ms, AT WORST. Remember, the average delay is half the maximum delay, too.

At the new, radical MTU of 512 bytes, the maximum packet delay for the next packet caused by transmission time of the current packet is 512 bytes divided by 4,194,304 bytes per second, multiplied by 1000ms (in a second). That's 0.12ms of latency caused by your MTU, equivalent to 1 frame at 8000 frames per second. It would change your in game latency from 20ms to 20ms.
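
Here's the same arithmetic as a tiny script, in case anyone wants to plug in their own link speed (the 4 MiB/s figure is just the spitballed connection above):

Code:
# Worst-case extra delay the next packet sees while one maximum-size packet
# ahead of it finishes transmitting on the link.
def serialization_delay_ms(mtu_bytes, link_bytes_per_sec):
    return mtu_bytes / link_bytes_per_sec * 1000.0

LINK = 4 * 1024 * 1024  # the spitballed ~30 Mbit/s connection, in bytes/sec
for mtu in (1536, 512):
    worst = serialization_delay_ms(mtu, LINK)
    print(f"MTU {mtu:4d}: worst case {worst:.2f} ms, average about {worst / 2:.2f} ms")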

If you game at 8000 frames per second, and the game you're playing has network code that also runs at that rate, you ABSOLUTELY SHOULD change your MTU. Meanwhile, in 2018 the typical game servers run at a tick rate of 63Hz, 64Hz or even 21Hz - and the twitchiest, fastest game to date runs at 128Hz. I don't think we're in danger of getting servers running at more than 3000Hz any time soon ;)

TL;DR - even if your internet connection is a couple of orders of magnitude slower than a mediocre connection in 2018, you won't gain even a single frame of latency by messing with your MTU.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 7:21 am
by Bensam123
Hmmm, then why does using bandwidth increase your latency? For instance, streaming, which uses about 8 Mbps out of my 23, will increase my ping by roughly 2 ms according to a long-running pingplotter. Variance also increases quite a bit: not just the number of spikes, but the number of 'small' spikes as well, and the average ping increase isn't really what gamers worry about. The stream runs off a different machine connected to the same router. When my bandwidth cap was much lower, the impact on my ping was much higher as well. The router is more than adequate for this sort of traffic.

Downloads, even well below my cap (say 100 Mbps out of an actual 460), also increase ping times.

For instance:

No Stream: [pingplotter graph]

Streaming: [pingplotter graph]

I agree, changing the MTU doesn't really make sense; I'm just trying to iron out possibilities.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 7:23 am
by Waco
Disabling interrupt moderation on your NIC likely has a bigger effect than nearly anything else you can do client-side. You'll burn more CPU cycles, but at least incoming packets will interrupt the CPU immediately instead of waiting to see if any more show up.
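
On Windows that's typically the "Interrupt Moderation" entry in the adapter's advanced driver properties; on Linux it's the interrupt coalescing knobs that ethtool exposes. A rough sketch of the Linux side (the interface name is an assumption, not every driver supports every knob, and both commands need root):

Code:
# Rough Linux-only sketch: inspect and minimize NIC interrupt coalescing
# with ethtool. The interface name is an assumption, many drivers support
# only a subset of these knobs, and both commands need root.
import subprocess

IFACE = "eth0"  # hypothetical interface name; substitute your NIC

# Show the current coalescing settings.
subprocess.run(["ethtool", "-c", IFACE], check=False)

# Ask the driver to raise an interrupt after at most 1 frame / 0 microseconds,
# trading extra CPU wakeups for lower per-packet latency.
subprocess.run(["ethtool", "-C", IFACE, "rx-usecs", "0", "rx-frames", "1"],
               check=False)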

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 7:55 am
by Chrispy_
Downloads of any size affect your latency because there's a queue of non-gaming traffic in the buffers that the gaming packet has to wait for.

What you need is router firmware that has application-specific QoS. Sadly, I can't recommend anything consumer that handles it that granularly but DD-WRT is a good start if your router is compatible.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 8:53 am
by just brew it!
Bensam123 wrote:
Hmmm, then why does using bandwidth increase your latency? For instance, streaming, which uses about 8 Mbps out of my 23, will increase my ping by roughly 2 ms according to a long-running pingplotter. Variance also increases quite a bit: not just the number of spikes, but the number of 'small' spikes as well, and the average ping increase isn't really what gamers worry about. The stream runs off a different machine connected to the same router. When my bandwidth cap was much lower, the impact on my ping was much higher as well. The router is more than adequate for this sort of traffic.

Queueing delays, most likely at your router/modem. Your packets are waiting in line behind other traffic. If the other traffic is "bursty", this will cause a large variance in ping times.

If your router supports QoS you may be able to mitigate this by prioritizing the gaming traffic.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 9:47 am
by LostCat
Chrispy_ wrote:
Downloads of any size affect your latency because there's a queue of non-gaming traffic in the buffers that the gaming packet has to wait for.

What you need is router firmware that has application-specific QoS. Sadly, I can't recommend anything consumer that handles it that granularly but DD-WRT is a good start if your router is compatible.

Alternately, cFosSpeed does it very well on the OS side for Windows users.

I haven't needed it in a while but it was good stuff.

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 10:22 am
by Glorious
LostCat wrote:
Alternately, cFosSpeed does it very well on the OS side for Windows users.

I haven't needed it in a while but it was good stuff.


That will do literally nothing in this case, because he's downloading/streaming on a different computer.

(and it doesn't do much of anything normally anyway)

Re: Receive Side Scaling - latency hazard?

Posted: Wed Sep 12, 2018 8:58 pm
by Bensam123
Chrispy_ wrote:
Downloads of any size affect your latency because there's a queue of non-gaming traffic in the buffers that the gaming packet has to wait for.

What you need is router firmware that has application-specific QoS. Sadly, I can't recommend anything consumer that handles it that granularly but DD-WRT is a good start if your router is compatible.


Yup, I already do this. The AC68U has pretty good prioritization when you use something like Merlin; I'm currently using an 86U and have used DD-WRT and Tomato in the past. But based on my usage it doesn't seem to do much beyond normal QoS bandwidth management; it doesn't appear to prioritize ICMP packets even though they're classified as gaming traffic. Asus's QoS looks like it's mostly bandwidth management and doesn't help much with prioritizing actual gaming traffic, even though it definitely recognizes that traffic in the QoS stats and presents itself as if it does packet shaping as well.

My thought with MTU was that by chopping down the size of the packets on the router's end, it could re-prioritize traffic more frequently as it arrives, since the individual packets are smaller.

I'm curious what you mean by download traffic being more 'bursty'; it all comes down to packets, and I'm nowhere close to my bandwidth cap. The stream is CBR, so it's almost always a constant flow of bits at about the same bandwidth; it doesn't change much. If the packet shaping is doing its job, ICMP and game packets should slip out first or very close to it. As Chrispy mentioned earlier, the time between packets is infinitesimally small, so I should see almost no variance from streaming (or downloading) unless I'm getting close to my cap, which is where QoS bandwidth management starts to matter a lot.

Re: Receive Side Scaling - latency hazard?

Posted: Thu Sep 13, 2018 7:42 am
by roncat
I had always thought the ISP sets the MTU, and that the optimum latency/speed comes from matching it. Any smaller and you just incur more packet overhead (which is why your ping may get worse by a few percent).

Re: Receive Side Scaling - latency hazard?

Posted: Thu Sep 13, 2018 8:01 am
by Chrispy_
Bensam123 wrote:
I'm curious what you mean by download traffic being more 'bursty'

I think you're misquoting someone else.

IMO it's not bursty, it's the fact that your router has RX/TX buffers and flow control, so even if you're nowhere near bandwidth cap, there's added latency from the router processing the buffers. If you're only gaming, the buffers will be empty but with downloads/uploads of any size, gaming packets have to wait their turn in the queue of buffered packets.

Re: Receive Side Scaling - latency hazard?

Posted: Thu Sep 13, 2018 8:20 am
by notfred
Trying to plot latency along the path is pretty meaningless. The routers on the Internet forward traffic in hardware; anything that requires a reply gets punted to the CPU and is no longer representative of what is going on for forwarded traffic. Endpoint latency is really the one thing that matters.

As others have said (and done the math for!), messing with your MTU is meaningless these days; it's the queues in your router that count. It made a difference in the dialup Internet days, but with today's line rates it doesn't. Messing with the MTU is also a good way to get yourself into trouble talking to other devices on your network, so just don't do it.

Re: Receive Side Scaling - latency hazard?

Posted: Fri Sep 14, 2018 2:03 am
by Bensam123
roncat wrote:
I had always thought the ISP sets the MTU, and that the optimum latency/speed comes from matching it. Any smaller and you just incur more packet overhead (which is why your ping may get worse by a few percent).


Yes; what I was getting at was being able to sneak a packet in between two others if you split them into, say, 750 bytes instead of 1500, among other things.

Chrispy_ wrote:
Bensam123 wrote:
I'm curious what you mean by download traffic being more 'bursty'

I think you're misquoting someone else.

IMO it's not bursty, it's the fact that your router has RX/TX buffers and flow control, so even if you're nowhere near bandwidth cap, there's added latency from the router processing the buffers. If you're only gaming, the buffers will be empty but with downloads/uploads of any size, gaming packets have to wait their turn in the queue of buffered packets.


Not misquoting, just responding to someone else as well. That's where I'm not exactly sure why packet shaping wouldn't reorder that 'waiting' line. Bandwidth management obviously prioritizes which packets get more headroom to leave, but traffic shaping should change the order in which they leave, which the 86U is supposed to do.

Re: Receive Side Scaling - latency hazard?

Posted: Fri Sep 14, 2018 7:27 am
by Chrispy_
Well, it's working, I think.

With QoS packet-shaping enabled you are downloading simultaneously and only seeing a rise from 25.7 to 27.5ms of game latency. That's pretty trivial, 1.8ms isn't enough to lose you a server 'tick' in either direction. If you weren't using QoS I'd expect to see your latency double or worse.

Even if the router is packet-shaping and prioritising which incoming packets get routed to your PC first, there's still a buffer in the modem that needs to be traversed by incoming packets before they hit the router and its QoS. If you're gaming and downloading, the gaming packets will be tiny and use almost no bandwidth, while the downloads will be larger and more numerous. I'm not aware of a modem that can operate without a receive buffer, and the modem has no control over how consistent the flow rate of the download packets it receives will be. Even if you're using only a third of your bandwidth for downloads, that won't really prevent buffer and bandwidth saturation.

Let's use:
G as representative of a gaming packet arriving in the RX buffer,
D as representative of a data download packet arriving in the RX buffer
- as representative of no packet arriving in the RX buffer at all.

Just gaming, you'll see this:
G-------G-------G-------G-------G-------G-------G-------G-------G-------G-------G-------G-------G-------

Ideally, gaming and downloading, you'd see this:
GDD---GDD---GDD---GDD---GDD---GDD---GDD---GDD---GDD---GDD---GDD---GDD---GDD

Unfortunately, just downloading at 1/3rd bandwidth looks more like this:
DDDD-------------DD----------DDDDDD----DDDDDDD---------------------------D-------DD--DDD-----
Sadly, nothing at your end has any control over how evenly the 1/3rd bandwidth usage arrives at your end. It's entirely down to the ISP's router and all the preceding hops.

Which means downloading and gaming at the same time looks like this:
GDDDDG--G-------G----DDG------G----DDDDDDG---GDDDDDDDG-------G---D-----G--DD--GD
and the 'G's that arrive right behind a run of 'D's are the delayed packets that contribute to a slightly higher latency.
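
If you want to put rough numbers on that picture, here's a toy calculation with made-up sizes: a tiny game packet stuck behind a burst of full-size download packets has to wait for the whole burst to drain at the line rate.

Code:
# Toy illustration of the RX-buffer picture above, with made-up numbers:
# a game packet that arrives behind a burst of download packets waits for
# the burst to finish draining at the line rate.
LINE_RATE = 4 * 1024 * 1024   # bytes/sec the buffer drains at (arbitrary)
DL_PKT = 1500                 # bytes per full-size download packet

def queueing_delay_ms(packets_ahead):
    """Extra wait for a game packet stuck behind N download packets."""
    return packets_ahead * DL_PKT / LINE_RATE * 1000.0

for burst in (0, 2, 8, 32):
    print(f"{burst:2d} download packets ahead -> +{queueing_delay_ms(burst):.2f} ms")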

Re: Receive Side Scaling - latency hazard?

Posted: Fri Sep 14, 2018 12:16 pm
by TheRazorsEdge
Bensam123 wrote:
would reducing the MTU on your router cause those TCP packets to become fragmented, and thereby allow certain kinds of traffic to get squeezed in between packets that would otherwise have to wait, especially while utilizing QoS?


It sounds like you want fragmentation to happen. This is almost always a terrible idea. (And the "almost" is there as a CYA... I can't think of any time you'd actually want fragmentation.)

Every packet has overhead. Routers and switches must determine which interface to send it through. Devices must create/verify checksums on each packet. Firewalls/IDS/antivirus must inspect them.

Fragmented packets must be held and reassembled by the IP stack of the recipient anyway. You don't want to risk this happening on your high-performance device.
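
To put a rough number on that overhead, here's a sketch of how one datagram splits when it hits a link with a smaller MTU (plain 20-byte IP header assumed, non-final fragments carrying payload in multiples of 8 bytes; the sizes are just an example):

Code:
# Rough sketch: how one IP datagram fragments at a smaller-MTU hop, and how
# many extra header bytes that costs, on top of the reassembly work at the
# receiver. Assumes a plain 20-byte IP header; non-final fragments carry
# payload in multiples of 8 bytes, per the IP fragmentation rules.
IP_HDR = 20

def fragments(total_len, mtu):
    payload = total_len - IP_HDR
    per_frag = (mtu - IP_HDR) // 8 * 8    # payload per non-final fragment
    sizes = []
    while payload > per_frag:
        sizes.append(per_frag + IP_HDR)
        payload -= per_frag
    sizes.append(payload + IP_HDR)        # final fragment
    return sizes

frags = fragments(1500, 1000)             # a full-size packet meeting a 1000-byte MTU
print(frags, "->", sum(frags) - 1500, "extra bytes on the wire, plus reassembly")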