Receive Side Scaling helps in situations where enough data is flowing through a NIC that a single CPU core can't keep up with the task of getting the data off the NIC. You have to be pushing some pretty serious amounts of data for this to happen, typically near the limit of a 1Gb NIC or well past 1Gb on a 10Gb NIC. RSS splits this task up across multiple CPU cores. Usually in the case of TCP traffic a particular flow is kept on a particular CPU core to avoid possible out-of-order issues in the stack. I believe that in the case of UDP it hashes on the source/destination IP pair.
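To make the flow-steering idea concrete, here's a deliberately simplified sketch. Real NICs use a Toeplitz hash over these header fields plus an indirection table; the plain Python `hash()` and the queue count here are just stand-ins for illustration:

```python
NUM_RX_QUEUES = 4  # hypothetical number of NIC receive queues / CPU cores

def pick_rx_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash the flow tuple so every packet of one flow lands on one queue."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % NUM_RX_QUEUES

# All packets of this TCP flow map to the same queue, so one core
# processes them in order:
print(pick_rx_queue("192.0.2.10", "203.0.113.5", 50000, 27015))
```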
It's tough to say what effect RSS would have on gaming latency, but I'm confident that the effect is essentially undetectable in terms of gameplay. If RSS were enabled and the queue used by the game's network traffic changed from one server connection to the next, it could mean that latency varied from session to session in a very, very small way. When I say very, very small I'm talking about a value that is dwarfed by the WAN latency of the server you're connected to over the Internet, i.e. probably measured in single-digit microseconds if not hundreds of nanoseconds.
From a CPU affinity standpoint it's possible that the game you're playing, as well as the NIC receive process, could get moved around among the available cores as you play, which might have an exceedingly small effect on latency as well. Given the way most games monopolize a CPU core, however (and the relatively low amount of network traffic involved in most online games), it's more likely that the NIC receive process would land on CPU 0 along with other system processes while the game ran on CPU 1.
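If you want to experiment with pinning things yourself, here's a minimal sketch on Linux (note that `os.sched_setaffinity` is Linux-only, and the core numbers are just examples):

```python
import os

# Pin the current process (pid 0 = ourselves) to CPU 1, leaving CPU 0
# free for interrupt handling and other system processes.
os.sched_setaffinity(0, {1})

print("Now restricted to CPUs:", os.sched_getaffinity(0))
```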
Regardless, I would expect RSS to have a negligible effect on your gaming experience one way or another. Any advantage you gain by having a less-busy core grab data off the NIC will probably be offset by the fact that the core actually running the game engine then has to pull that data over from a shared cache or main memory.
Messing with MTUs is usually a bad idea. In the case of TCP, modern operating systems will try to do Path MTU Discovery, which involves sending full-size packets with the Don't Fragment bit set and relying on an ICMP "Fragmentation Needed" message from an intermediate device to tell you that your MTU is too big for some hop and needs to be reduced. With TCP these discoveries are effectively done per connection, because the stack keeps state for the session you establish.
Practically speaking Path MTU Discovery is still hit and miss because things along the way like firewalls will block the ICMP messages from reaching you. This means you never get the message that your MTU is too big, and your packets get silently dropped. Bad things. UDP is out of luck in this regard because it's connectionless and the system won't keep track of the MTU for a particular destination even if it's sending packet after packet to that destination.
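On Linux you can actually watch PMTUD from a socket. A sketch, assuming a reachable host (example.com is a placeholder); the numeric fallbacks are the constant values from &lt;linux/in.h&gt; in case your Python build doesn't export them:

```python
import socket

# Linux-only socket options; fall back to the <linux/in.h> values.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))

# Set the Don't Fragment bit on outgoing packets so the kernel
# performs Path MTU Discovery for this connection.
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)

# Ask the kernel what path MTU it currently believes applies.
print("Path MTU:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))
s.close()
```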
Of course Path MTU Discovery usually comes into play when you raise your MTU above 1500, i.e. jumbo frames. You could theoretically lower your MTU, but you'd be compromising other services like HTTP: by lowering your MTU from 1500 to 1000, you make it so the remote server has to frame three packets to send you 3000 bytes of data instead of only two (ignoring header overhead).
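The back-of-the-envelope math, ignoring header overhead as above:

```python
import math

def frames_needed(payload_bytes: int, mtu: int) -> int:
    """Packets required to carry a payload, ignoring header overhead."""
    return math.ceil(payload_bytes / mtu)

print(frames_needed(3000, 1500))  # 2 packets at the default MTU
print(frames_needed(3000, 1000))  # 3 packets at the lowered MTU
```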
Moreover, it's not going to do you any good anyway, because while MTU is taken into account during session negotiation in TCP (via the Maximum Segment Size option), games AFAIK almost universally use UDP, and UDP has no facility for declaring a max MTU to a destination. This means that if you set your MTU low enough, the server will send you packets that are too big for you to handle, and those packets will be dropped. So effectively you're handicapping your TCP traffic for non-gaming stuff without helping UDP traffic at all.
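You can see the TCP side of this for yourself. A sketch that reads the negotiated MSS off a connected socket (TCP_MAXSEG is widely available on Unix-like systems; example.com is a placeholder):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))

# The MSS the stack settled on during the handshake; typically
# 1460 on a 1500-byte MTU path (1500 minus 20 IP + 20 TCP bytes).
print("MSS:", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG))
s.close()
```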
Fortunately, none of this matters because any game whose networking code is worth its salt is going to send you lots of small packets rather than fewer large ones anyway, which is why they use UDP in the first place; assuming the game is written well, messing with MTU is trying to improve something that probably can't be improved. Similarly, your client should send the server data using the same rules, so setting your MTU lower won't help you in that respect either.
As for Nagle's algorithm, you're correct that it improves network efficiency by queueing small amounts of data for a short time, holding them back until previously sent data has been ACKed, so it can hopefully "fill out" a packet and make the payload larger relative to the headers (the oft-quoted ~200 msec worst case comes from Nagle interacting with delayed ACKs on the far end). However, since nagling is a TCP thing and games almost always use UDP, disabling or enabling nagling (which is usually done on a per-socket basis depending on the application's needs rather than system-wide) won't have any effect on most games.
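For the record, this one-liner is all that disabling Nagle amounts to; applications that care set it per socket:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm on this socket only: small writes go out
# immediately instead of being held back to coalesce into fuller packets.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```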