Balancing the load
Because graphics is inherently parallelizable, adding a second video card holds the possibility of nearly doubling graphics performance, so long as other system components like the CPU or main memory aren't getting in the way. The most common scenarios where SLI is likely to help are the same scenarios where a really fast graphics card would have an advantage over a middle-of-the-road one: at very high resolutions with lots of antialiasing and texture filtering, especially in games or apps that use richer shading effects.
SLI appears to divvy up the work between the GPUs in one of two ways, depending on the application. The first method is the one I'd expected, where the screen is split into two parts horizontally, and one card renders the top part of the scene while the other renders the bottom. The exact proportion of the screen allocated to each card varies dynamically according to graphics load. NVIDIA's drivers offer the option of turning on an overlay in order to see SLI load balancing at work. Here's a look at this load-balancing method in Far Cry:
The water at the bottom of the screen is a lot more work to render than the sky, so the split between the two cards is adjusted to balance out the workload.
In action, this load-balancing indicator is fun to watch. The driver is clearly deciding how to split up the next frame based on what happened in some number of previous frames, and the delay is quite perceptible. The algorithm may be relatively efficient, but it does seem rather relaxed.
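The feedback loop described above can be sketched in a few lines. This is purely an illustrative guess at the kind of logic involved, not NVIDIA's actual driver code; the function names, the step size, and the timing model are all assumptions.

```python
# Hypothetical sketch of split-frame load balancing: nudge the
# horizontal split line toward the slower GPU's half, based on how
# long each GPU took on the previous frame. All names and constants
# here are illustrative, not NVIDIA's driver internals.

def rebalance(split, top_ms, bottom_ms, step=0.02):
    """Return a new split point (fraction of the screen, measured
    from the top, given to the GPU rendering the top half)."""
    if top_ms > bottom_ms:
        split -= step          # top GPU was slower: shrink its share
    elif bottom_ms > top_ms:
        split += step          # bottom GPU was slower: grow top's share
    return min(max(split, 0.1), 0.9)  # clamp so both GPUs stay busy

# Example like the Far Cry shot: cheap sky on top, expensive water
# below. Assumed per-pixel costs; the split drifts downward over a
# few frames, giving the top GPU more of the screen.
split = 0.5
for _ in range(10):
    top_ms = split * 4.0           # sky is cheap
    bottom_ms = (1 - split) * 16.0  # water is expensive
    split = rebalance(split, top_ms, bottom_ms)
print(split)  # drifts from 0.5 toward a larger top share
```

Because the adjustment uses only the previous frame's timings, the split always lags a frame or two behind the scene, which matches the visible delay in the overlay.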
The other method of load balancing for SLI appears to be an every-other-frame arrangement whereby each GPU renders a whole frame, alternating between the two.
NVIDIA's drivers use this second method in Doom 3 and 3DMark05. The green line across the middle of the screen never moves, but the green bars on the left move up and down, apparently to indicate utilization for each GPU.
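This alternate-frame arrangement amounts to a simple round-robin assignment of whole frames to GPUs. A minimal sketch, with illustrative names only:

```python
# Hypothetical sketch of alternate-frame rendering (AFR): each GPU
# renders entire frames, alternating by frame number. Names are
# illustrative, not NVIDIA's driver code.

def gpu_for_frame(frame_number, gpu_count=2):
    """Assign frames to GPUs round-robin."""
    return frame_number % gpu_count

# Six consecutive frames ping-pong between the two GPUs.
assignments = [gpu_for_frame(n) for n in range(6)]
print(assignments)  # [0, 1, 0, 1, 0, 1]
```

Unlike the split-screen method, there is no split point to tune here, which fits the static green line in the overlay; only the per-GPU utilization bars move.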
Assuming the graphics load is distributed reasonably well between the two GPUs, by whatever method, a dual-card SLI setup ought to be quite a bit faster than a single-card setup, though not 100% faster. Managing two GPUs and doing the load balancing incurs a little overhead, but the benefits should significantly outweigh the drawbacks. Let's see how it performs in practice.