NVIDIA announces Hybrid SLI, new chipsets

Nvidia took advantage of the mass of press descending on Las Vegas for CES to hold a pre-show briefing unveiling Hybrid SLI. This Windows Vista-only technology seeks to exploit systems running Nvidia integrated graphics chipsets and discrete GPUs, allowing the two to work more closely together to improve performance and lower power consumption. Integrated graphics processors (IGPs) have traditionally been confined to budget chipsets, but that won’t be the case any longer. Starting this quarter, all new Nvidia chipsets for AMD processors will include an embedded graphics processor. All new chipsets for Intel processors will start getting integrated GPUs in the second quarter of this year.


To begin with, Hybrid SLI is made up of two components: HybridPower and GeForce Boost. HybridPower is easily the more interesting of the two, allowing a system to shut down its discrete graphics cards entirely when their pixel-pushing horsepower isn’t needed, deferring to the chipset’s integrated graphics processor. Commands controlling the process are passed over an SMBus that’s part of the PCI Express spec, so a graphics card that supports HybridPower is required. Unfortunately, none of Nvidia’s existing graphics products support HybridPower, but the company’s next-gen high-end GPUs will.
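Nvidia hasn’t published the driver logic behind HybridPower, but the behavior it describes boils down to a simple policy. Here’s a minimal Python sketch of that decision, assuming a hypothetical driver that knows whether a 3D workload is running and whether the machine is on battery; all names and inputs here are illustrative, not Nvidia’s actual API:

from enum import Enum

class GpuMode(Enum):
    IGP_ONLY = "discrete card powered off; IGP renders and drives the display"
    DISCRETE = "discrete card renders; IGP still drives the display"

def choose_mode(running_3d_apps: int, on_battery: bool) -> GpuMode:
    """Pick a rendering mode the way a HybridPower driver plausibly might."""
    if running_3d_apps > 0 and not on_battery:
        return GpuMode.DISCRETE   # wake the card via the SMBus command channel
    return GpuMode.IGP_ONLY       # cut power to the card entirely

print(choose_mode(running_3d_apps=1, on_battery=False).value)
print(choose_mode(running_3d_apps=0, on_battery=True).value)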

Video outputs on a high-end motherboard

Systems that use HybridPower will have their displays hooked up to the motherboard’s graphics outputs. In low-power mode, discrete graphics cards will be turned off, and the chipset GPU will handle all the rendering. Switch to high-performance mode, and discrete graphics cards will come to life and assume rendering duties. However, since the system’s displays will be hooked up to the motherboard, the frame buffer contents for the discrete graphics cards must be copied over to the IGP’s frame buffer. According to Nvidia, second-gen PCI Express provides plenty of bandwidth for this frame buffer dump. Latency is apparently a non-issue, as well.
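Nvidia’s bandwidth claim is easy to sanity-check. The arithmetic below uses the standard figure of roughly 8 GB/s per direction for a PCIe 2.0 x16 link (a spec number, not one Nvidia supplied) to estimate what fraction of the link the frame buffer copy would consume at common resolutions:

PCIE2_X16_GBS = 8.0    # approx. usable GB/s per direction, PCIe 2.0 x16
BYTES_PER_PIXEL = 4    # 32-bit color

def copy_rate_gbs(width: int, height: int, fps: int) -> float:
    """GB/s needed to ship finished frames from the discrete card to the IGP."""
    return width * height * BYTES_PER_PIXEL * fps / 1e9

for (w, h) in [(1680, 1050), (2560, 1600)]:
    rate = copy_rate_gbs(w, h, fps=60)
    print(f"{w}x{h} @ 60 Hz: {rate:.2f} GB/s "
          f"(~{100 * rate / PCIE2_X16_GBS:.0f}% of the link)")

Even a 30″ display at 60Hz works out to about an eighth of the link, which squares with the “plenty of bandwidth” claim.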


Hybrid SLI should be a boon for gaming notebooks, allowing them to turn off power-hungry GPUs to save battery life. Nvidia is also targeting the feature at enthusiast PCs, where it will first become available with the company’s new nForce 780a chipset. The 780a’s embedded GPU is capable of feeding a single digital output, be it DVI or HDMI, in addition to an analog VGA output. Also packed into the 780a are a HyperTransport 3 interconnect ready to take advantage of AMD’s latest Phenom processors, a full assortment of GigE, USB, and SATA ports, and 32 PCI Express 2.0 lanes via an nForce 200 chip.

The nForce 780a block diagram. Source: Nvidia

Nvidia didn’t say much about the nForce 200 when the chip was first introduced, but it has now revealed a couple of additional details about the chip’s capabilities, details previously held back due to pending patents that have since been granted. The nForce 200 was built with SLI scaling in mind, and it includes a Posted Write Shortcut that allows data from one graphics card to be passed directly to other cards without having to loop back through the CPU. There is also a broadcast function that replicates a set of commands from the CPU across multiple graphics cards connected to the nForce 200. Both of these features are designed to reduce traffic on the 16-lane PCI Express 2.0 link that connects the nForce 200 with the 780a SPP.
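The actual mechanisms live in silicon and Nvidia hasn’t documented them beyond the description above, but a toy model makes the traffic savings easy to see. The sketch below simply counts transactions crossing the upstream link; the function names and counts are ours, not Nvidia’s:

def upstream_crossings(inter_gpu_transfers: int, shortcut: bool) -> int:
    """Card-to-card data loops up through the CPU and back down (two
    crossings per transfer) unless the Posted Write Shortcut routes it
    peer-to-peer beneath the nForce 200."""
    return 0 if shortcut else 2 * inter_gpu_transfers

def command_copies(cards: int, broadcast: bool) -> int:
    """With broadcast, the CPU issues one command stream and the nForce 200
    replicates it to every card; without it, each card needs its own copy."""
    return 1 if broadcast else cards

print(upstream_crossings(100, shortcut=False))  # 200 crossings of the link
print(upstream_crossings(100, shortcut=True))   # 0 -- handled below the chip
print(command_copies(3, broadcast=True))        # 1 stream feeds three cards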


Alright, enough with the nForce 200 tangent. Back to Hybrid SLI, and its slightly less exciting GeForce Boost component. In a sense, GeForce Boost describes what one might expect Hybrid SLI to be all about: harnessing the combined power of a discrete graphics card and chipset-level integrated graphics to improve performance. GeForce Boost is really designed for budget systems where discrete and integrated graphics solutions offer similar horsepower—trying to boost a mid-range or even high-end GPU’s performance with a pokey IGP can actually decrease performance, Nvidia says.
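Nvidia didn’t detail how work actually gets divided between the two GPUs (a point the comments below pick up on), but one plausible reading is an alternate-frame scheme weighted by each GPU’s speed. The greedy scheduler below is purely speculative, sketched to show why well-matched GPUs share work while a pokey IGP contributes almost nothing:

def assign_frames(n_frames: int, discrete_fps: float, igp_fps: float) -> list:
    """Greedily hand each frame to whichever GPU would finish it first."""
    busy = {"discrete": 0.0, "igp": 0.0}            # seconds of queued work
    cost = {"discrete": 1.0 / discrete_fps, "igp": 1.0 / igp_fps}
    schedule = []
    for _ in range(n_frames):
        gpu = min(busy, key=lambda g: busy[g] + cost[g])
        busy[gpu] += cost[gpu]
        schedule.append(gpu)
    return schedule

# Evenly matched: the IGP picks up 4 of 10 frames.
print(assign_frames(10, discrete_fps=30, igp_fps=25).count("igp"))
# Mismatched: a fast card paired with a pokey IGP leaves the IGP idle.
print(assign_frames(10, discrete_fps=120, igp_fps=10).count("igp"))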

Gigabyte’s GeForce 8200 motherboard

The new GeForce 8200 integrated graphics chipset is Nvidia’s target platform for GeForce Boost, and as one might expect, it’s a budget solution aimed at microATX motherboards. That’s not to say Nvidia has skimped on features, though. The 8200 is fully DirectX 10-compliant, and it features a PureVideo HD video engine purportedly capable of offloading 100% of the MPEG-2, VC-1, and H.264 decode process for Blu-ray and HD DVD movies. On the chipset side of things, the 8200 also features a HyperTransport 3.0 interconnect and second-generation PCI Express.

Comments closed
    • alucard_x
    • 13 years ago

    so you’re screwed if you want dual dvi / hdmi out.. meh 😐

    • vdreadz
    • 13 years ago

    AMD / ATI seems as though they might have the upper hand.

    • YeuEmMaiMai
    • 13 years ago

    too bad ATi already introduced this stuff a few weeks ago……….

    • Lazier_Said
    • 13 years ago

    Only one digital out? Not interested.

    Add that to: Vista? Not interested.

    • kilkennycat
    • 13 years ago

Windows Vista-only? That’s a sales disaster in the making.

      • UberGerbil
      • 13 years ago

      Uh, only the “hybrid” part appears to be Vista-only. For XP, it’s one or the other (reboot and throw the switch in the BIOS to change). That’s pretty much the situation today, so I don’t see how the loss of functionality nobody currently has is going to be a “sales disaster”

    • just brew it!
    • 13 years ago

    This almost sounds like someone said, “Hey let’s see if we can come up with something semi-useful to do with all that bandwidth we have with PCIe 2.”

    Unless they give the IGP its own dedicated framebuffer, this may chew up a fair bit of memory bandwidth (and HT bandwidth too, on AMD systems). If I’m understanding things correctly, when using the discrete graphics card all of the pixels for every frame need to be copied twice — once to the IGP’s framebuffer (which is presumably in system memory), and then back to the IGP when the IGP scans the frame out to the monitor.

    Unless I’m misunderstanding something about the tech, it seems rather inefficient to me.

      • UberGerbil
      • 13 years ago

      Agreed (unless they’ve snuck, say, 4MB of framebuffer cache into the IGP); but for mobile GPUs that need all the help they can get, it might still be worth it. (Though I think somebody was more interested in finding a use for the dormant shaders in the IGP than finding something to soak up PCIe 2.0 bandwidth). Anyway, as I’ve been saying since this was first announced, this is more interesting as a power saving option in notebooks than as a performance boost.
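Putting rough numbers on the double-copy concern in this thread (assuming, as the thread speculates, a framebuffer carved out of system memory and no on-chip cache), each displayed frame costs one write pass for the copy in, plus one read pass for scanout:

BYTES_PER_PIXEL = 4

def extra_traffic_gbs(width: int, height: int, fps: int, passes: int = 2) -> float:
    """GB/s of system-memory traffic from the copy-in and scan-out passes."""
    return width * height * BYTES_PER_PIXEL * fps * passes / 1e9

# Roughly 2 GB/s at 2560x1600 @ 60 Hz -- real, but modest next to the
# ~12.8 GB/s peak of the dual-channel DDR2-800 these boards typically run.
print(f"{extra_traffic_gbs(2560, 1600, 60):.1f} GB/s")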

    • Kurlon
    • 13 years ago

    The big win to me is the possibility of Express Card ‘add on 3d accelerators’ for laptops. Got a spare Express Card 54 slot? Slap in an 8800 GT module, power with the external wall wart, and have blazing 3D on your lappy’s built in display. When you’re traveling, retain your normal battery life by relying solely on the IGP.

    • cegras
    • 13 years ago

    The AMD hybrid crossfire can pretty much manage to play Crysis on medium at 1024×768 resolution. The article is on [H].

    • donkeycrock
    • 13 years ago

    it would be neat if they could make the IGP be the physics processor.

      • mesyn191
      • 13 years ago

      Games would have to support it…

      • zimpdagreene
      • 13 years ago

I agree also. If they can make it look real in movies, then it’s time to make it look real in games with physics.

    • albundy
    • 13 years ago

    “GeForce Boost is really designed for budget systems where discrete and integrated graphics solutions offer similar horsepower—trying to boost a mid-range or even high-end GPU’s performance with a pokey IGP can actually decrease performance, Nvidia says.”

    So are you boosting Crysis from 2fps to 4fps? LOL! Also, is NV bundling the board with a discrete card? I doubt anyone would opt for a discrete card if the IGP can already handle the basics. I guess the benchmarks will prove the board worthiness, but making AMD performance users pay extra for such a feature is nonsense.

    • Sargent Duck
    • 13 years ago

Wouldn’t it just be easier to have a discrete graphics card scale down, a la Cool’n’Quiet? For example, instead of having an 8800 GT turn completely off when in 2D mode, why not just have it scale down to a 100MHz core, 200MHz memory, and only 12 SPs? Or something like that. The power difference would be negligible, and I think it would be a lot easier to do in the drivers.

      • dragmor
      • 13 years ago

No, Nvidia and ATI both have cards that run different clocks in 2D and 3D modes. AMD’s 38x0 series does this particularly well, but that still leaves the boards using 20W-30W when doing nothing.

      • UberGerbil
      • 13 years ago

      It’s definitely easier to turn the thing completely off than mess around with voltage islands and clocks. And you’ll save a lot more power — you can turn off the PCIe links, and you won’t need to power a fan on the card.

    • mikehodges2
    • 13 years ago

    At last!! Now someone make apple put high end GPUs in their laptops..that’s the only reason I don’t have one 🙂

    • Xaser04
    • 13 years ago

This sounds strangely similar to something AMD/ATI are also developing / have developed.

      • eloj
      • 13 years ago

      Sounds just like the 3dfx voodoo 1/2.

    • LSDX
    • 13 years ago

If the drivers are good enough to mix onboard graphics with a discrete graphics card, it should be possible to mix different discrete graphics cards too, right?

      • Meadows
      • 13 years ago

      Good point.

      • MixedPower
      • 13 years ago

      I don’t think they actually mix; it sounds like the discrete card will bypass the IGP when more graphics power is needed.

        • StashTheVampede
        • 13 years ago

The “hybrid” method *is* using the onboard IGP to display a frame. With SLI, each card rendered every other frame. With the hybrid tech, the onboard solution could handle every third, fourth, etc. frame, depending on its own individual power.


            • UberGerbil
            • 13 years ago

            And we don’t know what technique nVidia is using for this implementation. It’s possible it’s “none of the above” and they’re just treating the IGP’s shaders as part of the pool (albeit an especially slow part).

    • alex666
    • 13 years ago

A lot of people would like this, not just notebook users. If properly implemented, this would be an awesome feature. So often we don’t need the full power of our video cards, so why run them and waste electricity?

    A good step in the green direction.

      • Irascible
      • 13 years ago

      A good step in a quiet direction as well.

      I don’t mind dust busters when I’m gaming. The rest of the time I want silent operation.

      • Meadows
      • 13 years ago

      Screw green, it’s a good step in mainstream performance direction.
