Nvidia brings Optimus switchable graphics to notebooks

Switchable graphics may not be one of the most exciting new technologies to hit the notebook space in the last few years, but for PC enthusiasts, I think it’s easily one of the most important. Enthusiasts are a notoriously demanding lot, you see. We crave performance, and most of us are gamers who simply can’t get by with a weak-sauce integrated graphics processor, especially if it’s an Intel Graphics Media Accelerator. At the same time, we want our notebooks to be thin and light and offer exceptional battery life—requirements that generally favor integrated graphics solutions, and GMAs in particular. Following tech writing’s tradition of lazy automotive analogies, that’s sort of like asking for the power of at least a turbo-charged V6 inside a car that offers Prius-like fuel economy.

Interestingly, the solution to this dilemma resides within the Prius itself—specifically, the hybrid nature of its drivetrain, which combats noxious emissions with a 60kW electric motor complemented by a 1.8-liter, four-cylinder gasoline engine. When cruising around town at relatively low speeds, the Prius gets by on its clean-running electric motor. Bury the gas pedal, and the secondary engine springs into action, armed with enough horsepower to bring Toyota’s eco-wagon up to highway speeds.

Emissions aren’t an issue for notebooks, but the hybrid approach can be applied to conserve battery life. Intel’s Graphics Media Accelerators are plenty capable of handling basic desktop tasks, web surfing, and even video playback, all while sipping battery power sparingly. They’re the perfect engines for puttering around town. A considerably more potent discrete GPU must roar to life when users demand 3D performance, though. There are plenty of discrete GPUs from which to choose, starting with modest solutions more akin to that turbo-charged V6 and reaching all the way up to obscenely powerful feats of engineering like the thousand-horsepower W16 that rumbles inside the Bugatti Veyron. These discrete GPUs may draw considerably more power than an Intel IGP, but if they’re only called upon when needed, battery life will only suffer when there’s good reason.

The first stab at hybrid notebook graphics came in the form of Sony’s Vaio SZ-110B, whose primary graphics processor could be controlled using a hardware switch just above the keyboard. A reboot was required to complete the switch from the system’s integrated GMA 950 to the discrete GeForce Go 7400, so graphics horsepower wasn’t exactly available on demand. Still, the Vaio provided the market with its first taste of switchable graphics—and with proof that good battery life and competent graphics performance could coexist in a thin-and-light notebook.

Fortunately, switchable graphics’ second coming proved far more common and easier to use. Rather than relying on physical switches, recent implementations are capable of changing the primary graphics adapter via software. Rebooting isn’t required when switching from discrete to integrated or vice versa, although you will have to endure a few seconds of screen flickering as display duties are migrated from one adapter to the other. You’ll also have to close any so-called blocking applications that are tying up the graphics adapter with DirectX calls.

This contemporary switchable setup comes much closer to delivering power on demand, but only after a pause and with a few strings attached. One of those strings: the user must know that reserve power is lying in wait. Amazingly, Nvidia claims, a lot of folks who buy switchable notebooks don’t have a clue. Among those who do, few can be bothered to switch manually. Nvidia tells us it conducted a survey of 10,000 owners of notebooks sporting switchable graphics and found that only 1% actively switched back and forth between graphics adapters. Nvidia points out that those surveyed were predominantly mainstream users rather than PC enthusiasts, so it seems likely that few had any exposure to switchable graphics outside of a likely misleading sales pitch from a pimply teenager at their local Best Buy. Nevertheless, the fact remains that current switchable graphics implementations aren’t truly seamless. Ideally, a hybrid graphics subsystem should deliver power on demand automatically and without pause or restriction, which is what Nvidia claims it’s achieved with its next generation of switchable graphics, dubbed Optimus.

Switchable graphics today
To understand what makes Optimus unique, we have to dig a little deeper into how current switchable graphics implementations work. On the hardware front, such systems are equipped with an integrated graphics processor (or IGP) in the chipset or CPU, along with a discrete GPU. The GPU hooks into the system via PCI Express, and it also must be connected to the display outputs, which are shared with the IGP. Sharing is facilitated by high-performance hardware multiplexers, otherwise known as muxes, that feature inputs for each graphics adapter, a single output, and a control line that tells the multiplexer which input to pass through to the display.

According to Nvidia, a minimum of two muxes are required to connect all the necessary lines for each display output. With the average switchable graphics notebook featuring three video outs—the LVDS LCD interface, an HDMI output, and an old-school VGA port—that’s at least six multiplexers, plus all the extra traces required to connect the auxiliary GPU.
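That tally is simple enough to sketch; the per-output constant comes from Nvidia's two-muxes-per-output figure above:

```python
# Back-of-the-envelope tally of the multiplexers a traditional
# switchable-graphics design needs for its display outputs.
MUXES_PER_OUTPUT = 2  # per Nvidia, at least two muxes per display output

display_outputs = ["LVDS (internal LCD)", "HDMI", "VGA"]
total_muxes = MUXES_PER_OUTPUT * len(display_outputs)
print(f"{total_muxes} muxes for {len(display_outputs)} outputs")  # 6 muxes for 3 outputs
```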

Switchable graphics block diagram. Source: Nvidia

While mux control lines can be activated by software, the act of switching graphics adapters on the fly requires a considerable amount of driver cooperation, especially since this approach was conceived at a time when Microsoft’s reigning operating system, Windows Vista, only played nicely with one graphics adapter at a time. To get switchable graphics working in this environment, Nvidia had to create an “uber” display driver featuring an interposer sitting between the operating system and the Nvidia and Intel display drivers. Vista communicates with this uber driver via standard APIs, but a custom API jointly developed by Nvidia and Intel is used to interface the GMA driver with Nvidia’s interposer. The level of coordination required for this setup was challenging, Nvidia asserts, and the need to ensure compatibility apparently slowed driver updates.

Introducing the Optimus routing layer
Thankfully, Windows 7 is much more accommodating than Vista. Microsoft’s latest OS supports multiple graphics adapters, allowing independent Nvidia and Intel display drivers to coexist peacefully on the same system. These drivers don’t talk to each other, so there’s no need for multi-vendor cooperation, interposers, or custom APIs. In fact, with Windows 7, switching graphics doesn’t even require the user to touch a switch—one is built into what Nvidia calls its Optimus routing layer.

This routing layer includes a kernel-level library that maintains associations between certain classes and objects and a corresponding graphics adapter. Graphics workloads that are deemed GPU-worthy are routed to the discrete graphics processor, while simpler tasks are sent to the IGP. The routing layer can process workloads from multiple client applications, juggling the graphics load between the integrated and discrete GPUs.

Optimus in flowchart form. Source: Nvidia

Nvidia says the Optimus routing layer keys in on three classes of workloads: general-purpose computing apps, video playback, and games. When an application explicitly tries to leverage GPU computing horsepower using a CUDA or OpenCL call, the Optimus routing layer directs the workload to the discrete GPU. The DirectX Video Acceleration (DXVA) calls used by many video playback applications and even the latest Flash beta are also detected automatically and directed accordingly. For games, the routing layer is capable of keying in on DirectX and OpenGL graphics calls.

Not all games that use those two APIs require the power of a discrete GPU, though. Even Solitaire makes DirectX calls, and it’s more than happy running on an antiquated GMA. Rather than assuming that all DirectX or OpenGL calls need to be offloaded to a discrete GPU, Optimus references a profile library to determine whether a game requires extra grunt.
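As a rough illustration, the decision logic Nvidia describes might be modeled like this; the function name, call-type labels, and profile entries are invented for illustration, not Nvidia's actual code:

```python
# Toy model of the Optimus routing decision: compute and video calls always
# go to the discrete GPU, while DirectX/OpenGL apps consult a profile library.
ALWAYS_GPU = {"cuda", "opencl", "dxva"}

# Hypothetical profile library entries: which 3D apps merit the discrete GPU
game_profiles = {
    "callofduty4.exe": "discrete",
    "solitaire.exe": "integrated",   # makes DirectX calls, but happy on a GMA
}

def route(call_type: str, app: str) -> str:
    """Return the adapter a workload should run on."""
    if call_type in ALWAYS_GPU:
        return "discrete"
    if call_type in ("directx", "opengl"):
        # Fall back to the IGP when no profile demands extra grunt
        return game_profiles.get(app, "integrated")
    return "integrated"   # ordinary desktop work stays on the IGP

print(route("dxva", "wmplayer.exe"))        # discrete
print(route("directx", "solitaire.exe"))    # integrated
print(route("directx", "callofduty4.exe"))  # discrete
```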

Nvidia already uses profiles to govern the behavior of its SLI multi-GPU teaming scheme, but Optimus is a little more advanced. The profiles are stored in encrypted XML files outside the graphics driver, and users have the option of letting Nvidia push out updates to those profiles automatically. Nvidia has apparently been working on the back end for this profile updating system for quite a while now, and the system will likely serve up other kinds of profiles in the future. (SLI, anyone?)
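Nvidia hasn't published the schema, but once decrypted, a profile presumably boils down to something like this invented example; the element and attribute names here are purely illustrative:

```python
# Hypothetical sketch of what a decrypted Optimus profile file might contain.
# Nvidia's real files are encrypted XML with an unpublished schema.
import xml.etree.ElementTree as ET

profile_xml = """
<profiles>
  <application executable="callofduty4.exe" adapter="discrete"/>
  <application executable="solitaire.exe" adapter="integrated"/>
</profiles>
"""

root = ET.fromstring(profile_xml)
prefs = {app.get("executable"): app.get("adapter") for app in root}
print(prefs["callofduty4.exe"])  # discrete
```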

The tinfoil hat crowd will be pleased to note that users are free to disable Optimus’ automatic profile updates. Those who would rather manage graphics preferences themselves can create and modify profiles via the driver control panel, through which any application can be configured to run on the IGP or the discrete GPU. Users can even right-click on an application and manually select which graphics processor it’ll use.

Allowing users to manage their own Optimus profiles is definitely the right thing to do. That said, one element of the profile management scheme could use a little extra work. At present, there’s no way to override the Optimus routing layer’s desire to process DXVA calls on a system’s discrete GPU. One can configure the profile for a video playback application or web browser to use only the integrated graphics processor, but as soon as HD or Flash video playback begins, the GPU persistently takes over. Nvidia is looking at letting users override Optimus’ preferences for video playback acceleration, and I hope that capability is exposed to users soon. GeForce GPUs may do an excellent job of accelerating video playback, but some Intel IGPs share the same functionality and probably consume less power. At least with the old switchable graphics approach, users could turn off the discrete GPU and be confident that it was going to stay off.

Making it work in hardware
While it may not be possible to disable an Optimus system’s discrete GPU completely, the scheme’s routing layer otherwise offers automatic power on demand. Intelligently activating the discrete GPU is a markedly different approach from requiring users to manage their systems’ switchable graphics state, one that should allow mainstream users to reap the benefits of a switchable GPU without even knowing that they have one.

To make Optimus switching truly seamless, additional modifications were necessary on the hardware front. About the only thing that hasn’t changed here is the no-power state that the discrete GPU slips into when it’s not in use. As with earlier switchable graphics setups, an Optimus GPU can be shut down completely. Awakening the GPU from this state takes less than half a second, and that’s the only waiting a user will have to do—the few seconds of screen flickering that accompanied each graphics switch in previous switchable implementations have been banished entirely.

Optimus is able to avoid flickering because even when the discrete GPU is handling a graphics workload, the IGP is still being used as the display controller. In Optimus configs, the discrete GPU is only linked to the system over PCI Express; its display controller isn’t connected to anything. To get frames generated on the discrete GPU onto a display, the contents of the GPU’s frame buffer, which reside in dedicated video memory, are copied to the IGP frame buffer that sits in system RAM. This “mem2mem” transfer would typically be done by the 3D engine, according to Nvidia. Unfortunately, the engine would have to stall during the transfer to maintain coherency—a hiccup that could cost two to three seconds of latency.
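Conceptually, the copy engine behaves like a producer-consumer pipeline: the 3D engine keeps rendering while a separate unit drains finished frames toward system RAM. This toy sketch models that hand-off; the names and queue depth are invented for illustration:

```python
# Toy model of the asynchronous copy-engine idea: rendering and copying
# proceed in parallel, so the renderer never stalls to maintain coherency.
import queue
import threading

finished_frames = queue.Queue(maxsize=2)   # double-buffered hand-off
igp_framebuffer = []                        # stands in for the buffer in system RAM

def copy_engine():
    """Dedicated logic: drains frames without interrupting the render loop."""
    while True:
        frame = finished_frames.get()
        if frame is None:                   # sentinel: shut the engine down
            break
        igp_framebuffer.append(frame)       # the IGP scans this buffer out

worker = threading.Thread(target=copy_engine)
worker.start()

for n in range(3):                          # the 3D engine keeps rendering...
    finished_frames.put(f"frame-{n}")       # ...while copies happen in parallel

finished_frames.put(None)
worker.join()
print(igp_framebuffer)  # ['frame-0', 'frame-1', 'frame-2']
```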

Nvidia obviously doesn’t want to contend with that sort of delay, so current GeForce 200- and 300-series notebook GPUs, plus future Fermi-based and “netbook” graphics chips, all feature a built-in copy engine. This engine uses dedicated logic to feed frames from the discrete GPU’s frame buffer to system memory, and it can function without interrupting the graphics core. Nvidia claims the copious upstream bandwidth available with PCI Express is more than sufficient to allow the copy engine to perform those transfers with just two tenths of a frame of latency—whatever that means. I’m sure some hardcore gamers will insist that they can feel lag, but the effects were imperceptible to me.
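Nvidia didn't quantify that bandwidth claim, but a rough calculation (my assumptions, not Nvidia's figures) suggests the copy traffic really is modest next to what PCI Express offers:

```python
# Back-of-the-envelope estimate of the copy engine's bandwidth needs,
# assuming a 1366x768 frame at 32 bits per pixel, shipped 60 times a second.
width, height, bytes_per_pixel = 1366, 768, 4
fps = 60

frame_bytes = width * height * bytes_per_pixel   # ~4.2 MB per frame
copy_rate = frame_bytes * fps / 1e9              # GB/s of upstream traffic

# A PCIe 2.0 x16 link offers roughly 8 GB/s in each direction
pcie_bw = 8.0
print(f"{copy_rate:.2f} GB/s needed of {pcie_bw} GB/s available")
```

Even at the UL50Vf's native resolution, the copies consume only a few percent of the link.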

Optimus block diagram. Source: Nvidia

With an Optimus system’s IGP serving as the display controller, there’s no need for notebook makers to use multiplexers or to run two sets of display traces. In theory, Optimus motherboards should be cheaper to produce than boards for past hybrid schemes, because manufacturers won’t have to pay for the muxes or find board real estate for them. According to Asus, reducing the number of onboard components and eliminating the heat the muxes would normally produce will make it easier to squeeze Optimus into thin-and-light systems.

Optimus in the flesh: Asus’ UL50Vf
Speaking of Asus, the first implementation of Nvidia’s new Optimus tech to arrive in our labs is the company’s UL50Vf. A member of Asus’ UnLimited notebook line, the UL50Vf’s overall design is quite similar to the UL30A and UL80Vt we reviewed last year. Don’t let the recycled exterior fool you, though: the UL50Vf is a full-fledged Optimus implementation, albeit one that uses system and graphics hardware that’s been around for a while.

Based on Intel’s Consumer Ultra Low Voltage (CULV) platform, the UL50Vf sports a Core 2 Duo SU7300 processor running at 1.3GHz alongside an Intel GS45 Express chipset with GMA 4500MHD integrated graphics. The integrated GPU is backed by an Nvidia GeForce G210M with 512MB of dedicated video RAM. Before you get too excited, note that the G210M is the slowest Optimus-capable mobile GPU that Nvidia currently makes. The UL50Vf is quite an affordable system, though. The configuration we tested is slated to sell for $849 when it becomes available in March.

Despite its relatively low price tag, the UL50Vf very much looks and feels like a high-end laptop. The system’s top panel is beautifully finished with brushed black aluminum that won’t attract smudges like the glossy plastics used elsewhere in the chassis. The UL50Vf is reasonably thin and light, weighing in at a little over five pounds and measuring about an inch thick. That includes an optical drive, by the way, plus 4GB of RAM and a Seagate Momentus 5400.6 320GB hard drive.

While arguably a thin-and-light notebook, the UL50Vf actually has quite a large footprint due to its use of a 15.6″ widescreen display. The 16:9 panel features a 1366×768 resolution and acceptable picture quality, but it’s only an average screen overall, something that’s becoming more and more common among laptops in this price range.

At least Asus has made good use of the extra space provided by the UL50Vf’s larger chassis. The system features a full-size keyboard complete with an inverted-T directional pad and a full numpad. Asus had to shave a couple of millimeters off the width of the numpad keys, but that seems to have been the only concession required.

I quite like the feel of the keyboard overall. There’s enough tactile feedback for speedy touch typing, and while some flex is visible, the keyboard doesn’t feel mushy or vague. However, I’m not as crazy about the dimpled finish on the touchpad. This surface treatment may make it easy to tell when your fingers are on the touchpad, but I find that the indentations actually impede smooth tracking, at least with my fingertips.

Keeping the UL50Vf running is an eight-cell battery rated for 84Wh. The high-capacity cell makes perfect sense for an Optimus system, and it’s hardly a one-off for this particular model. Much of Asus’ UnLimited line is powered by batteries with similar Watt-hour ratings.

You may have to wait until March to get your hands on a UL50Vf, but Asus has a handful of other Optimus designs due out this month. Larger 16″ N61 and 17″ N71 systems based on Intel’s latest Arrandale mobile CPUs are coming on February 11 and 18, respectively. Closer to the end of the month, Asus’ Optimus lineup will expand to include the 13.3″ U30Jc, which will feature an Arrandale CPU alongside a GeForce G310M GPU. Additional Optimus designs employing Clarksdale and CULV Core 2 processors are also set to be released in March.

Asus won’t be the only firm launching Optimus systems, either. Nvidia says it expects more than 50 Optimus-equipped notebooks to be available by this summer.

Optimus in the real world
The big question now, of course, is whether Optimus works. I’ve been playing with a UL50Vf for about a week, and thus far, I’d have to say yes. Nvidia equipped this system with a handy little desktop widget that displays whether the discrete GPU is currently in use. According to that applet, and the general performance I’ve experienced, the Optimus routing layer does a good job of intelligently activating the system’s GeForce GPU when it detects a CUDA app, HD or Flash video playback, or a game. I haven’t been able to detect the fraction-of-a-frame latency associated with Optimus’ frame-buffer copy, nor have I noticed having to wait for the discrete GPU to be awakened from its dormant state.

Nowhere is the graphics power of a discrete GPU needed more than in games, so that’s where I began my testing. Darwinia was the first one I fired up, and as luck would have it, Optimus didn’t recognize the game’s 3D engine. Fair enough. Darwinia may be critically acclaimed, but it’s hardly a mainstream or popular title. Plus, the game actually runs pretty well on the GMA 4500MHD as long as you turn down the pixel shader effects.

Since I prefer to crank the eye candy wherever possible, I opted to create my own Optimus profile for Darwinia. That took all of a few seconds, after which I was in the game under GeForce power. Even with all the in-game detail levels cranked at the notebook’s 1366×768 native resolution, the G210M ran Darwinia at a solid 60 frames per second according to FRAPS. The GMA can manage fluid gameplay at this resolution, too, but only with pixel shader effects turned down and then only at 30 FPS.

The GMA 4500MHD can’t handle Call of Duty 4 at 1366×768, even with the lowest in-game detail levels. Heck, the GMA HD inside next-gen Arrandale CPUs struggles, too. However, with Optimus automatically enabling the system’s discrete GPU, FRAPS reported frame rates between 20 and 40 FPS with all the eye candy turned on at the native display resolution. Enabling anisotropic filtering didn’t slow performance much, either.

Borderlands is quite a bit more demanding than the first Modern Warfare and a complete waste of time if you’re stuck with Intel integrated graphics. The UL50Vf’s lowly GeForce G210M had a difficult time maintaining smooth frame rates, too. At 1366×768, we had to run the game at medium detail levels with only a few effects turned on. Applying 16X aniso didn’t slow performance much, but FRAPS still spent most of its time displaying frame rates in the low twenties. Dropping the resolution to 640×480 allowed us to enable all of Borderlands’ pseudo-cel-shaded visual goodness and get frame rates up to the low thirties.

One of the better driving games to hit the PC in recent years, Need for Speed: Shift is yet another console port that’s a little too demanding for low-end GPUs, let alone integrated graphics processors. Running at our system’s native resolution, with the lowest in-game detail levels, the G210M could only muster about 20 FPS. Scaling the display resolution down to 640×480 didn’t improve frame rates much, either. The game really isn’t playable even at that visibly blocky resolution, and worse, it looks positively ugly with all the details turned down.

Keep in mind, of course, that the GeForce G210M is the weakest GPU to support Optimus. Hopefully, notebook makers will use Optimus on systems with considerably more potent graphics processors, as well.

Video playback
Video playback tests have become a staple of our notebook reviews, so we fired up a collection of local and streaming videos to see how Optimus would fare. I’ve compiled the results of our tests in a handy little chart below that details the approximate CPU utilization gleaned from Task Manager, a subjective assessment of playback quality, and whether the discrete GPU was active during playback. Windows Media Player was used for local video playback, while Firefox and the Flash 10 beta were used with streaming content.

Clip                          CPU utilization   Result    Discrete GPU
Star Trek QuickTime 480p      0-2%              Perfect   On
Star Trek QuickTime 720p      2-8%              Perfect   On
Hot Fuzz QuickTime 1080p      4-9%              Perfect   On
DivX PAL SD                   3-12%             Perfect   Off
720p YouTube HD windowed      9-22%             Perfect   On
YouTube SD windowed           9-19%             Perfect   On

Optimus activated the discrete GPU for all but our SD video playback test. Each clip played back perfectly smoothly, and CPU utilization never got much higher than 20%, even with YouTube HD streaming a Star Trek trailer at 720p.

Those who keep multiple Flash video tabs open at once should keep in mind that even when a YouTube video has finished playing, Optimus keeps the GPU enabled. Only when the tab is closed or you browse to a page that doesn’t include Flash video is power cut to the discrete GPU. The same applies to HD video playback, at least with the latest version of Windows Media Player built into Win7. As long as WMP has an HD video open, the discrete GPU remains active. Pausing or even stopping the video has no effect. Drag and drop a standard-definition clip into the app, however, and the GPU is shut down immediately.

I asked Nvidia whether Optimus could be made to shut down the discrete GPU if video was stopped or paused, and the company said that would be possible with cooperation from software vendors. Nvidia might also want to look more closely at how CUDA and other general-purpose computing applications interact with Optimus. The Badaboom video transcoding application that Nvidia has used as a CUDA poster child activated the discrete GPU in our Optimus test system before we even had a chance to load a video to transcode. We left Badaboom sitting there with nothing loaded and no transcoding taking place for more than half an hour, and Nvidia’s Optimus widget still showed the discrete GPU as enabled.

Battery life
Switchable graphics solutions are designed to extend run times by only using a discrete GPU when needed, so battery life tests are certainly in order. Unfortunately, without a directly comparable notebook that doesn’t include Optimus, we can’t properly isolate Nvidia’s new switchable graphics scheme and provide you with a direct comparison.

We can, however, look at the sort of battery life you can expect from the UL50Vf when the discrete GPU isn’t active. Neither of our battery life tests require GeForce horsepower, so what you’re looking at below are run times with Optimus keeping the G210M turned off.

Each system’s battery was run down completely and recharged before each of our battery life tests. We used a 50% brightness setting for the Timeline and UL50Vf, which is easily readable in normal indoor lighting and is the setting we’d be most likely to use ourselves. That setting is roughly equivalent to the 40% brightness level on the K42F, UL80Vt, and Studio 14z, which is what we used for those configurations.

For our web surfing test, we opened a Firefox window with two tabs: one for TR and another for Shacknews. These tabs were set to reload automatically every 30 seconds over Wi-Fi, and we left Bluetooth enabled as well. Our second battery life test involves movie playback. Here, we looped a standard-definition video of the sort one might download off BitTorrent, using Windows Media Player for playback. We disabled Wi-Fi and Bluetooth for this test.

Although the UL50Vf can’t quite top the battery life offered by the underclocked battery-saving configuration of the UL80Vt, the Optimus system offers longer run times than the rest of the field. Nearly eight hours of real-world Wi-Fi web surfing is quite impressive for a budget thin-and-light notebook. Close to six hours of standard-definition video playback capacity is good enough for a few feature-length movies, too.

External operating temperatures
When Optimus shuts down a discrete GPU to preserve battery life, system temperatures are also reduced. To get a sense of just how cool an Optimus-based system can run, we probed the UL50Vf’s external operating temperatures with an IR thermometer placed 1″ from the surface of the system. Tests were conducted after the system had run our web surfing battery life test for a couple of hours.

The UL50Vf has slightly lower operating temperatures than the UL80Vt, which features comparable hardware but an old-school switchable graphics implementation.

Optimus is best thought of as a seamless and smarter switchable graphics solution. The switchable concept was sound long before Optimus came along, but even with recent implementations, users had to contend with brief delays, seconds of screen flickering, and open applications preventing the system from making a switch. And apparently, many users have no clue how switchable graphics works or can’t be bothered to fuss with it.

Optimus nicely solves these problems. The routing layer avoids blocking applications and intelligently determines when to activate the discrete GPU and when to let it sit dormant. With the discrete GPU using the IGP as a display processor, there’s no flickering or delay when the GPU does perk up.

Using profiles to govern Optimus’ reaction to games makes a lot of sense, especially if Nvidia’s new profile distribution scheme works as advertised. Making those profiles easy for users to modify is even better, particularly for enthusiasts who are prone to fiddle with such things. I do hope Nvidia will allow profile preferences to override Optimus’ propensity to offload video playback onto the GPU, though—at least for long enough for us to do some battery life tests to determine whether a discrete GPU or IGP is preferable for HD and Flash video playback. Optimus could also use a little extra intelligence to determine when CUDA and OpenGL applications are actively using the GPU, rather than sitting idle.

Even with those minor issues, Optimus is by far the best switchable graphics solution I’ve seen. Nvidia may be relegating the GPU to duty as a coprocessor, but within the confines of a notebook, that’s probably exactly what it should be. In fact, Nvidia has even conceded that it’s possible to save some die area by stripping the display controller completely out of its mobile GPUs. With Intel and AMD moving toward integrating GPUs into every CPU, that sort of approach may be the only way to get GeForce graphics into mainstream notebooks.

Optimus shouldn’t just be confined to low-rent GPUs and affordable systems, though. I see no reason why more expensive mobile gaming systems shouldn’t use this switchable graphics tech to combine high-performance mobile GPUs capable of running the latest games with integrated graphics that can surf the web all day long on battery power. Now that’s my kind of Prius.

Responses to “Nvidia brings Optimus switchable graphics to notebooks”

  1. Too bad you don’t have a desktop you can fit in your bags. That would be the best of both worlds.

  2. I know power gating isn’t perfect, but my concern is that this Nvidia switcheroo thing is going to only come with Nvidia cards and will drive up prices.

    I think I’d rather just take an ATI card that can get most of the way there. Nvidia has not been terribly compelling with laptop graphics and doesn’t appear to be changing that any time soon. It’s always either low end cards or high end ones in $3,000 desktop replacements.

    This isn’t the day and age of 2 hour max battery anymore, so I don’t think it’s quite as much of a concern.

    I’m mostly just wondering why single cards don’t switch between dual GPUs?

  3. Power gating helps, but it isn’t good enough. It will only squeeze ~60% of the leakage. That’s better than nothing but for a GPU with, say, a leakage of 5W, it’s still too much of a battery drain if your goal is 10h of battery life.

  4. I always wondered why they didn’t just stick a separate, very weak, GPU onto laptop cards. That way they could literally just turn one or the other off, but not have to deal with switching two totally different graphics setups.

    Having to run over the PCIe bus and through the card would undoubtedly still use slightly more power than a straight up IGP, but I figure that would have worked even in Windows XP.

    Anyone have a technical explanation for why they don’t go that route?

    This all seems like too little, too late. By the time they can get it rolling, GPUs will likely have power gating and be able to switch off any unneeded blocks, basically acting like an integrated GPU or full GPU, and anything in between.

  5. “Do NV’s GPUs lack capable power management?”

    No, but those big chips and all that RAM are always going to use several watts, no matter how much they can scale down, which annihilates battery life.

    I’m sure some of those cards idle using more power than an entire netbook or CULV laptop can run off of.

  6. Because audio is a very small amount of information, while video processing is not.

    Even full USB 3.0 bandwidth is the equivalent of only one PCIe 2.0 lane. Video cards have to plug into 16-lane slots for a reason. USB 2.0 probably couldn’t even handle weak integrated graphics.

    May as well just stick with the external video card thinger that connects directly into the PCIe bus.

  7. Instead of a docking station, how about a USB video card? They already make external sound cards like the Sound Blaster Extigy and Yamaha Cavit, not to mention tons of el-cheapo USB sound cards. Why not a USB video card?

    Although, yes, it would be much more complex than a USB sound card, since the external video card would need to re-route its output back to the laptop’s LCD screen. But imagine if such a thing could be made feasible: instant video upgrades for laptops!

  8. If you want the benefit of having a lightweight system for ‘on the go use’, then you’re not going to size the power supply brick for the indefinite number of add-on uses that SuperDockingStation can accommodate. So the docking station is going to need its own power supply, with a fairly hefty capacity if you want maximum GPU combination/upgrade options in the future. That, and a PCIe-connected USB host for your printer, scanner, and other non-portable peripheral devices. That, and a PCIe-connected SATA controller for the storage peripherals that will live in the dock rather than the portable unit.

    Again, it’s all possible. But where’s the economics in it? Have you priced a CULV-based ultraffordable and a modest game-worthy desktop system lately? Net hardware cost for buying one of each is about a thousand bucks if you’re not too demanding, and the mobile unit is adequate for the uses you described plus a little extra, while the desktop gives you your performance and upgrade options. I can’t see your hybrid vision costing any less than that in the limited production volume that would be likely to sell, even though it gives you less hardware.

  9. Just because they’re currently used there doesn’t mean they always need to be used there.

    Aye, and what you’re talking about is pretty much the same thing as a discrete external graphics card.

    I’m talking about something similar to a computer case to house the components and using the docking station to hook up to a board with the slots on them. Basically it’s like having a PC only it’s not one without the laptop in a docking station.

    Not making all the components separate, all-external devices.

    So if you had a computer case with a motherboard in it without a processor or memory onboard, then instead of routing to ports on the board, everything would route to a dock where it would interface with the laptop. Basically, it would be a transparent addition to the system. It would be no more expensive than buying those individual components today. The motherboard might even be cheaper, because it is only a dummy board made for rerouting lanes to the slots and/or other various tasks.

    No black magic.

  10. Yeah. Back when PCIe was introduced, Intel and some of the OEMs like Dell were talking about external PCIe and showing prototypes of various sorts of “devolved” PCs, where some of the subsystems (video, storage) were separate “slices” that you would assemble like consumer A/V components with xPCIe hooking them together — no “opening the case” required (and separate power for each). But that’s an expensive way to put a PC together, especially when you generally want all the slices anyway (and most of the interesting uses require them all to be turned on, despite the vision of “only powering up the GPU and optical drive to play a DVD”). So that kind of stalled external PCIe.

    Meanwhile, as you say, docking stations are primarily a business niche, and businesses generally don’t care about discrete graphics.

  11. I don’t think it’s hard, I just don’t think it’s economical. You’re asking for a docking station that contains 70% of a desktop computer and a mobile unit that contains 70% of a mobile computer, in two separate chassis and with the associated manufacturing costs. This will then necessarily sell for nearly twice the price of a standalone system once the low-volume factor and boutique marketing costs are accounted for.

    Gamers would probably be the only viable market for this contraption — got any sales figures for how many even bother with OCZ’s DIY laptop line? Business users may buy docking stations and enjoy larger monitors and full size keyboards when at their desk, but on the road they still need to be able to run all of their software, not just YouTube.

  12. I am…

    It’s almost… too simple. I thought pretty much everyone here would appreciate new options and upgradeability. Setting a computer in a docking station and rebooting isn’t hard. Almost easier and more tactile than a switch.

    No, I’ve never heard of a manufacturer doing this before. I’ve heard of the proprietary graphic card type things, but never a shell type setup where a user could upgrade the components.

    Case with a power supply and daughter-board with a docking station on top or something of the like?

  13. I’m aware of the proprietary external graphics adapters people also pay an arm and a leg for.

    You can’t hook a sound card or a gigabit NIC into that, can you? A USB hub? A tuner card? A storage card?

    No, you can’t, and I can assure you that it costs an arm and a leg.

  14. So GPU vertex processing sucks so much that they need to offload it… If that’s true, it’s worse than I thought. Do you have any reference for NVIDIA’s or AMD/ATI’s offloading, and on which units? I know about Intel doing that, but this is the first time I’ve heard about the others.

  15. NVidia offloads vertex processing to the CPU whenever it’s an option, in any way, shape, or form, so all your whining is still to no avail. The CPU sucks.

  16. I wonder if nVidia and Intel have a certain fruity OEM in mind? For all the whining from certain people around here about how the IGP in Arrandale would force the RDF into overdrive to compensate for a step backwards in graphics performance, this would seem to have “future MacBook” written all over it (especially since Apple might have more opportunity to tweak the driver stack for better efficiency).

  17. I think I’m at least as interested in the temperature readings when the discrete GPU is going flat out as I am in the idle temps. The former is more likely to be an actual comfort risk, after all. Small omission, though.

  18. If you’re going to be doing that much switching and swapping and rebooting, you might as well buy a desktop.

  19. Well, if no one else is going to say it, I will : “Autobots, Transform and Roll OUT!” I think Optimus was my father figure growing up… 🙂

  20. I was referring to “scaling resolution down didn’t improve frame rates” and his comment, not to playability in general or specifically on notebooks.

  21. The Voodoo2 3D accelerator with the flat ribbon printed on a circuit board? Well, at least some of the tech that came along when they bought 3dfx was useful!

  22. Right, because discrete GPUs won’t advance at all in that timeframe. (Well, with NV’s renaming habit that might be true 😉.) But it’s still a silly argument. The niche for discrete graphics ‘need’ might get smaller, heck, it’s pretty small as it is, but it will always be there.

  23. When you put it that way the name makes perfect sense: Optimus components were typically rebadges of regular brand-name products; NV components are typically rebadges of other NV products. It’s a naming parallel made in heaven.

  24. You are actually proposing a simpler, less elegant solution. They could simply do what you are asking by changing the power profile from “battery” to “plugged in.” This type of switchable graphics already exists and has been attempted before (Alienware), but it never caught on.

  25. That’s what I was saying…

    I mean using the docking port as just a PCI-E bus… There are numerous possibilities just from there. It could go to a normal southbridge chip on a daughterboard with all the normal goodies built in.

    I’m not entirely sure the majority of people buying laptops right now are doing so to upgrade their graphics. A majority of users aren’t even pressed for something faster, and if they are, they don’t really notice and assume it’s the way it’s supposed to be.

  26. Personally, I think they should build a dock that allows you to plug in a high-end video card and/or audio card that routes directly into the laptop’s motherboard and out to a monitor.

    You dock, you take advantage of your badass video card, but otherwise use the rest of the laptop (i.e., memory, CPU, motherboard, audio, etc.) while using it like a dock (plugged-in keyboard, monitor, mouse, etc.).

    You undock, and you go back to an Optimus setup with the Intel IGP and a more powerful nVidia GPU.

    By routing the PCIE of the laptop out to a dock, you enable users to upgrade their desk-based performance quickly and easily while also having the option of portability.

    Throw in a few external USB3 SSDs running at the same rate as SATA, and you’ve changed the way people use their computers.

    Unfortunately, laptop vendors aren’t going to want to give up the primary reason many gamers upgrade their laptops (for the new GPU).

  27. Does anyone know if the new Alienware 11″ will be able to take advantage of this technology?

    I’ll be more interested in this when I see an Arrandale CULV with an nVidia Fermi-based tech.

    I look forward to high-end laptops being able to do these things, because it should really improve laptop lifespan. The only downside will probably be that we may not be able to use any desktop driver that comes out just by replacing an inf file (thanks, Laptopvideo2go!)…

  28. What they were saying about people not knowing they could switch was about people who own a laptop with switchable graphics but didn’t know it could do that. Basically, they got sold something they didn’t understand, and I have a bridge for them as well.

  29. Nice effort, BUT… notably temporary; I don’t see this surviving long. Llano Fusion, incorporating new energy-saving/switching technologies, will put a major hurt on it next year, and the Bulldozer/Northern Islands Fusion solutions two years from now, along with whatever fusion solution Intel brings to the table to compete, will dry up the need for discrete mobile GPUs, leaving Nvidia no piece of the mobile market. Unless they license some of their GPU IP to Intel.

  30. I like the general philosophy, but I think they’re executing it completely wrong.

    I think notebooks should be as light and as power-efficient as possible when they’re on the move; when you get home and sit down, that is a completely different matter.

    They should play up the docking-station card hardcore, taking it to the next level and offering the ability to plug in discrete graphics cards and/or sound cards, really whatever you want. Just drop it in and reboot (or don’t even reboot), allowing users to upgrade their notebooks.

    This is the only reason I don’t have a laptop as my main computer: you can’t dynamically upgrade them or change them. Generally, even the biggest and baddest ones are gimped, heavy, noisy, bulky, run hot, and have a lot of issues.

    I mean, PCI-E is ridiculously easy to route and scale. It’s amazingly flexible, yet never taken advantage of. I don’t know the distance PCI-E lanes can go, but how hard would it be to just route the extra lanes to a port that would mate with a docking station and a daughter-board with just PCI-E/PCI slots on it? I believe PCI-E can even have hubs or something like that, so those lanes could be split off from a tiny interface with globs of bandwidth.

    Is this really that hard?

    The only things that couldn’t easily be changed are the memory and the processor, but those are quite small and more than adequate in general. Intel is doing a good job with power states and ramping them up and down; taking that a step further shouldn’t be that hard.

    An internal hard drive could even be excluded, leaving just a ‘shell’ on a USB thumb drive in the computer to boot off of. All they have to do is go to extremes and make things specialized for the task. Think about all the space that could be saved if laptops just go for the bare minimum on the move. People rarely play games when they’re on the run, besides maybe Flash games, and that’s all a laptop really has to handle: Word, email, and Flash videos on the run. At home, people want more than that, though.

    They should stop trying to be the jack of all trades and master of none.

    If a notebook maker overcame this hurdle, it would usher in a completely new era of notebooks that cater to both the casual user and the gamer.

  31. I find the switchable movement a curious thing. So you have the option of using an Intel GMA or some semi-gimp NV GPU. Or do notebooks with “high-end” GPUs like the GTX 280M have switchable graphics too? Do NV’s GPUs lack capable power management?

    It’s not surprising that many people would not even know about the switchable feature because I imagine that these people don’t benefit from more GPU speed anyway. At this point it’s only gamers that can see the difference between an IGP and other options because only games really show a tangible improvement. Even HD video is frequently covered by the IGP now.

  32. It’s quite good, IMO, but it’s typical of a manufacturer to release a not-quite-polished product just to be first to market. I’m guessing they’ll come out with an update to enable switching the GPU off or something.

    Btw, I would love it if TechReport could also do a review of ATI’s implementation of XGP, the one shown at CES, where the GPU sits outside of the notebook and can be connected anytime. Just wanna see how their implementation varies, is all.

    IMHO, if Intel/AMD weren’t going to include a GPU inside their CPUs, this probably would have happened later. Or they could also implement discrete GPUs with speed steps, like the ATI 5800 series; that should reduce cost, and I think it would be more reliable. The power saved should also be similar.

    P.S.: I’m surprised that Intel also welcomes this, I mean, after all those issues between them. The enemy of my enemy is my friend?

  33. Yep, I was about to mention that. ATI calls it PowerPlay or something. My 5850 has three steps; OC’ing it will make it unusable, I think. And it’s quite snappy, too.

  34. The new 5xxx Radeons seem to do very well on power at idle. For instance the power scaling on the 5850 from full tilt back to idle is downright excellent (>10:1 IIRC?).

    If this trend holds I’m not sure it would be worth the added cost and complexity on the desktop where performance/price is far more important than every last watt of power savings.

  35. Nice evolutionary step for switchable GPU tech, but why did they have to grab their branding from a 15-year-old RadioShack electronics label?

  36. While I’d love to see something like this on desktops (my Radeon 4870 chews up 60W at idle, enough to power a whole modern system!), I imagine it’s a lot more complex because of the mind-boggling number of hardware variations compared to notebooks, where vendors can work with Nvidia (as ASUS did) to ensure compatibility.

    I wonder how this will work with new Core i5/i3 laptops, as the new intel PM55 chipsets apparently don’t support switchable graphics. Boo!

  37. You might want to read the specs for the 210M and 310M on NV’s site before getting overly excited about the latter 😉

  38. Yeah, I’m interested in the 13″ one with the 310m in it. That might be a decent little combo if it’s not too expensive.

  39. Heck, why not put this in desktops. I don’t particularly want my big fat PCIe card chewing up many tens of watts while I read TR.

  40. Finally. This is something laptops have obviously needed for a long time. Current implementations (this laptop included) make the mistake of coupling an integrated GPU completely incapable of playing games with a weak GPU basically still incapable of playing games. Furthermore the weak, dedicated GPUs don’t take that much power anyway. I’ll be more interested when this is bundled with a mid-range or high end graphics solution that will actually benefit from the integrated GPU.