
A closer look at some GeForce GTX 680 features

Scott Wasson

When we first reviewed the GeForce GTX 680, we were in our usual rush to push out an article at the time of the product’s unveiling, after a solid week of poking and prodding Nvidia’s new GPU. We were also attempting some of the more complex analysis of real-time graphics performance we’d ever done, and these things take time to produce.

Something had to give.

Our choice was to take a mulligan on writing up several of the new hardware and software features built into the GTX 680 and its GK104 graphics processor, with the intention of fixing the oversight later. We figured omitting some stuff and patching it later was pretty fitting for a GPU introduction, anyhow.

Well, we’ve been working on writing up those features, and given additional time to describe them, we indulged our penchant for ridiculous detail. The word count shot up like Apple’s stock price, and the result is this article, because there’s no way we’re not totally milking this thing for more readership. Read on for more detail than you probably deserve to have inflicted on you concerning the GeForce GTX 680’s new dynamic clocking scheme, antialiasing capabilities, adaptive vsync, and more.

Nvidia silicon gets forced induction
As evidenced by the various “turbo” schemes in desktop CPUs, dynamic voltage and frequency tech is all the rage these days. The theory is straightforward enough. Not all games and other graphics workloads make use of the GPU in the same way, and even relatively “intensive” games may not cause all of the transistors to flip and thus heat up the GPU quite like the most extreme cases. As a result, there’s often some headroom left in a graphics card’s designated thermal envelope, or TDP (thermal design power), which is generally engineered to withstand a worst-case peak workload. Dynamic clocking schemes attempt to track this headroom and to take advantage of it by raising clock speeds opportunistically.

Although the theory is fairly simple, the various implementations of dynamic clocking vary widely in their specifics, which can make them hard to sort. Intel’s Turbo Boost is probably the gold standard at present; it uses a network of thermal sensors spread across the die in conjunction with a programmable, on-chip microcontroller that governs Turbo policy. Since it’s a hardware solution with direct inputs from the die, Turbo Boost reacts very quickly to changes in thermal conditions, and its behavior may differ somewhat from chip to chip, since the thermal properties of the chips themselves can vary.

Although distinct from one another in certain ways, both AMD’s Turbo Core (in its CPUs) and PowerTune (in its GPUs) combine on-chip activity counters with pre-production chip testing to establish a profile for each model of CPU or GPU. In operation, then, power draw for the chip is estimated based on the activity counters, and clocks are adjusted in response to the expected thermal situation. AMD argues the predictable, deterministic behavior of its dynamic voltage and frequency scaling (DVFS) schemes is an admirable trait. The price of that consistency is that it can’t squeeze every last drop of performance out of each individual slab of silicon.

Nvidia’s take on this sort of capability is called GPU Boost, and it combines some traits of each of the competing schemes. Fundamentally, its logic is more like the two Turbos than it is like AMD’s PowerTune. With PowerTune, AMD runs its GPUs at a relatively high base frequency, but clock speeds are sometimes throttled back during atypically high GPU utilization. By contrast, GPU Boost starts with a more conservative base clock speed and ranges into higher frequencies when possible.

The inputs for Boost’s decision-making algorithm include power draw, GPU and memory utilization, and GPU temperatures. Most of this information is collected from the GPU itself, but I believe the power use information comes from external circuitry on the GTX 680 board. In fact, Nvidia’s Tom Petersen told us board makers will be required to include this circuitry in order to get the GPU maker’s stamp of approval. The various inputs for Boost are then processed in software, in a portion of the GPU driver, not in an on-chip microcontroller. This combination of software control and external power circuitry is likely responsible for Boost’s relatively high clock-change latency. Stepping up or down in frequency takes about 100 milliseconds, according to Petersen. A tenth of a second is a very long time in the life of a gigahertz-class chip, and Petersen was frank in admitting that this first generation of GPU Boost isn’t everything Nvidia hopes it will become in the future.

Graphics cards with Boost will be sold with a couple of clock speed numbers on the side. The base clock is the lower of the two—1006MHz on the GeForce GTX 680—and represents the lowest operating speed in thermally intensive workloads. Curiously enough, the “boost clock”—which is 1058MHz on the GTX 680—isn’t the maximum speed possible. Instead, it’s “sort of a promise,” says Petersen; it’s the clock speed at which the GPU should run during typical operation. GPU Boost performance will vary slightly from card to card, based on factors like chip quality, ambient temperatures, and the effectiveness of the cooling solution. GeForce GTX 680 owners should expect to see their cards running at the Boost clock frequency fairly regularly, regardless of these factors. Beyond that, GPU Boost will make its best effort to reach even higher clock speeds when feasible, stepping up and down in increments of 13MHz.
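To make that stepping behavior concrete, here is a toy model of a Boost-style control loop in Python. It is our own illustration rather than Nvidia's algorithm, and it tracks only two of Boost's inputs; the 1006MHz base clock, 1058MHz boost clock, 13MHz bins, and roughly 100 ms evaluation interval come from the article, while the power and temperature ceilings are hypothetical placeholders.

```python
# Toy model of a GPU-Boost-style DVFS loop (not Nvidia's actual algorithm).
# The 1006MHz base, 1058MHz boost clock, 13MHz steps, and ~100 ms evaluation
# interval come from the article; the power and temperature limits below are
# hypothetical placeholders.

BASE_CLOCK_MHZ  = 1006
BOOST_CLOCK_MHZ = 1058
STEP_MHZ        = 13
POWER_LIMIT_W   = 195          # assumed board power target
TEMP_LIMIT_C    = 98           # assumed thermal ceiling

def next_clock(current_mhz, board_power_w, gpu_temp_c):
    """Pick the next clock bin based on headroom, one 13MHz step at a time."""
    if board_power_w > POWER_LIMIT_W or gpu_temp_c > TEMP_LIMIT_C:
        # Out of headroom: step down, but never below the advertised base clock.
        return max(BASE_CLOCK_MHZ, current_mhz - STEP_MHZ)
    # Headroom available: step up toward (and past) the boost clock.
    return current_mhz + STEP_MHZ

# One simulated second of operation, evaluated every ~100 ms.
clock = BASE_CLOCK_MHZ
for power_w, temp_c in [(170, 80), (175, 82), (180, 84), (185, 86), (200, 90),
                        (190, 88), (180, 85), (170, 83), (165, 81), (160, 80)]:
    clock = next_clock(clock, power_w, temp_c)
    print(f"power={power_w}W temp={temp_c}C -> {clock}MHz")
```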


The GPU ramps up to 1100MHz in our Arkham City test run

Petersen demoed several interesting scenarios to illustrate Boost behavior. In a very power-intensive scene, 3DMark11’s first graphics test, the GTX 680 was forced to remain at its base clock throughout. When playing Battlefield 3, meanwhile, the chip spent most of its time at about 1.1GHz—above both the base and boost levels. In a third application, the classic DX9 graphics demo “rthdribl,” the GTX throttled back to under 1GHz, simply because additional GPU performance wasn’t needed. One spot where Nvidia intends to make use of this throttling capability is in-game menu screens—and we’re happy to see it. Some menu screens can cause power use and fan speeds to shoot skyward as frame rates reach quadruple digits.

Nvidia has taken pains to ensure GPU Boost is compatible with user-driven tweaking and overclocking. A new version of its NVAPI allows third-party software, like EVGA’s slick Precision software, control over key Boost parameters. With Precision, the user may raise the GPU’s maximum power limit by as much as 32% above the default, in order to enable operation at higher clock speeds. Interestingly enough, Petersen said Nvidia doesn’t consider cranking up this slider overclocking, since its GPUs are qualified to work properly at every voltage-and-frequency point along the curve. (Of course, you could exceed the bounds of the PCIe power connector specification by cranking this slider, so it’s not exactly 100% kosher.) True overclocking happens by grabbing hold of a separate slider, the GPU clock offset, which raises the chip’s frequency at a given voltage level. An offset of +200MHz, for instance, raised our GTX 680’s clock speed while running Skyrim from 1110MHz (its usual Boost speed) to 1306MHz. EVGA’s tool allows GPU clock offsets as high as +549MHz and memory clock offsets up to +1000MHz, so users are given quite a bit of room for experimentation.


EVGA’s Precision with the three main sliders maxed out.
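As a quick bit of arithmetic on the two sliders described above, the sketch below runs the numbers quoted in the article; the 195W default board power is our own assumption for illustration.

```python
# Rough arithmetic on the two Precision sliders discussed above. Illustrative
# only: the 195W default board power is an assumption; the +32% power target
# and the +200MHz offset example are the figures quoted in the article.

assumed_default_power_w = 195
max_power_target_pct    = 132                  # slider maxed: +32% over default
raised_limit_w = assumed_default_power_w * max_power_target_pct / 100
print(f"power limit: {assumed_default_power_w}W -> {raised_limit_w:.0f}W")

# The clock offset shifts the whole voltage/frequency curve upward. In the
# article's Skyrim example, a +200MHz offset moved the observed clock from
# 1110MHz to 1306MHz under load.
offset_mhz = 200
observed_before, observed_after = 1110, 1306
print(f"offset +{offset_mhz}MHz -> observed {observed_before}MHz to "
      f"{observed_after}MHz (+{observed_after - observed_before}MHz under load)")
```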

Although GPU Boost is only in its first incarnation, Nvidia has some big ideas about how to take advantage of these dynamic clocking capabilities. For instance, Petersen openly telegraphed the firm’s plans for future versions of Boost to include control over memory speeds, as well as GPU clocks.

More immediately, one feature exposed by EVGA’s Precision utility is frame-rate targeting. Very simply, the user is able to specify his desired frame rate with a slider, and if the game’s performance exceeds that limit, the GPU steps back down the voltage-and-frequency curve in order to conserve power.
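Conceptually, frame-rate targeting is a small feedback loop. The sketch below is our own reading of the feature as described, not EVGA's or Nvidia's code; it reuses the GTX 680 clock figures from earlier, and the 5% margin is an arbitrary bit of hysteresis.

```python
# Minimal sketch of a frame-rate-target loop as described above (our own
# illustration, not Nvidia's driver logic). Clocks reuse the GTX 680 figures
# from earlier; the 5% margin is an arbitrary bit of hysteresis.

TARGET_FPS = 60
STEP_MHZ   = 13
BASE_MHZ, MAX_MHZ = 1006, 1110

def adjust_clock(clock_mhz, recent_frame_times_ms):
    avg_ms = sum(recent_frame_times_ms) / len(recent_frame_times_ms)
    fps = 1000.0 / avg_ms
    if fps > TARGET_FPS * 1.05:            # comfortably past the target: save power
        return max(BASE_MHZ, clock_mhz - STEP_MHZ)
    if fps < TARGET_FPS:                   # falling short: climb back up the curve
        return min(MAX_MHZ, clock_mhz + STEP_MHZ)
    return clock_mhz

print(adjust_clock(1110, [9.0, 9.5, 9.2]))     # ~108 fps -> step down to 1097
print(adjust_clock(1006, [22.0, 21.5, 23.0]))  # ~45 fps -> step up to 1019
```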

We were initially skeptical about the usefulness of this feature for one big reason: the very long latency of 100 ms for clock speed adjustments. If the GPU has dialed back its speed because the workload is light and then something changes in the game—say, an explosion that adds a bunch of smoke and particle effects to the mix—ramping the clock back up could take quite a while, causing a perceptible hitch in the action. We think that potential is there, and as a result, we doubt this feature will appeal to twitch gamers and the like.

However, in our initial playtesting of this feature, we’ve not noticed any problems. We need to spend more time with it, but Kepler’s frame rate targeting may prove to be useful, even in this generation, so long as its clock speed leeway isn’t too wide. At some point in the future, when the GPU’s DVFS logic is moved into hardware and frequency change delays are measured in much smaller numbers, we expect features like this one to become standard procedure, especially for mobile systems.

Vsync starts taking night courses
You’re probably familiar with vsync, or vertical refresh synchronization, if you’ve spent much time gaming on a PC. With vsync enabled, the GPU waits for the display to finish updating the screen—something most displays do 60 times per second—before flipping to a new frame buffer. Without vsync, the GPU may flip to a new frame while the display is being updated. Doing so can result in an artifact called tearing, where the upper part of the display shows one image and the lower portion shows another, creating an obvious visual discontinuity.

Tearing is bad and often annoying. In cases where the graphics processor is rendering frames much faster than the display’s refresh rate, you may even see multiple tears onscreen at once, which is pretty awful. (id Software’s Rage infamously had a bad case of this problem when it was first released, compounded by the fact that forcing on vsync via the graphics driver control panel didn’t always work.) Vsync is generally a nice feature to have, and I’ll sometimes go out of my way to turn it on in order to avoid tearing.

However, vsync can create problems when GPU performance becomes strained, because frame flips must coincide with the display’s refresh rate. A 60Hz display updates itself every 16.7 milliseconds. If a new frame isn’t completed and available when an update begins, then the prior frame will be shown again, and another 16.7 ms will pass before a new frame can be displayed. Thus, the effective frame rate of a system with a 60Hz display and vsync enabled will be 60Hz, 30Hz, 20Hz, and so on—in other words, frame output is quantized. When game performance drops and this quantization effect kicks in, your eyes may begin to notice slowdowns and choppiness. For this reason, lots of folks (especially competitive gamers) tend to play without vsync enabled, choosing to tolerate tearing as the lesser of two evils.
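The arithmetic behind that quantization is easy to check. Here is a minimal sketch, assuming a double-buffered swap chain in which the GPU stalls until its frame flips at a vblank:

```python
# Worked example of the vsync quantization described above: with a 60Hz display,
# a flip can only happen on a ~16.7 ms boundary, so the effective frame rate
# snaps to 60, 30, 20, 15... fps as GPU render time grows.

import math

REFRESH_MS = 1000.0 / 60.0   # ~16.7 ms per refresh

def effective_fps_with_vsync(render_time_ms):
    refreshes_waited = math.ceil(render_time_ms / REFRESH_MS)
    return 1000.0 / (refreshes_waited * REFRESH_MS)

for t in (10, 17, 25, 34, 55):
    print(f"render time {t:2d} ms -> {effective_fps_with_vsync(t):.0f} fps with vsync")
# 10 ms -> 60 fps, 17 ms -> 30 fps, 25 ms -> 30 fps, 34 ms -> 20 fps, 55 ms -> 15 fps
```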

Nvidia’s new adaptive vsync feature attempts to thread the needle between these two tradeoffs with a simple policy. If frame updates are coming in at the display’s refresh rate or better, vsync will be enforced; if not, vsync will be disabled. This nifty little compromise will allow some tearing, but only in cases where the smoothness of the on-screen animation would otherwise be threatened.


An illustration of adaptive vsync in action. Source: Nvidia.
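As the illustration above suggests, the policy itself boils down to a per-frame decision. A minimal sketch of that reading (not Nvidia's driver code):

```python
# Sketch of the adaptive vsync policy as the article describes it (a plausible
# reading, not Nvidia's driver code): enforce vsync only while the game is
# keeping up with the display's refresh rate.

REFRESH_MS = 1000.0 / 60.0

def present(frame_render_ms):
    if frame_render_ms <= REFRESH_MS:
        return "wait for vblank, then flip (no tearing)"
    return "flip immediately (may tear, but no drop to 30 fps)"

print(present(12.0))   # running faster than 60 fps -> vsync enforced
print(present(21.0))   # running below 60 fps -> vsync disabled for this frame
```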

If the idea behind this feature sounds familiar, perhaps that’s because we’ve heard of it before. Last October, in an update to Rage, id Software added a “Smart vsync” option that sounds very familiar:

Some graphics drivers now support a so called “swap-tear” extension.
You can try using this extension by setting VSync to SMART.
If your graphics driver supports this extension and you set VSync
to SMART then RAGE will synchronize to the vertical retrace of
your monitor when your computer is able to maintain 60 frames per
second and the screen may tear if your frame rate drops below 60 frames
per second. In other words the SMART VSync option trades a sudden drop
to 30 frames per second with occasional screen tearing. Occasional
screen tearing is usually considered less distracting than a more
severe drop in frame rate

So yeah, nothing new under the sun.

In the past, games have avoided the vsync quantization penalty via buffering, or building up a queue of one or two frames, so a new one is generally available when it comes time for a display refresh. Triple buffering has traditionally been the accepted standard for ensuring smoothness. When we asked Nvidia about the merits of triple buffering versus adaptive vsync, we got a response from one of their software engineers, who offered a nice, detailed explanation of the contrasts, including the differences between triple buffering in OpenGL and DirectX:

There are two definitions for triple buffering. One applies to OGL and the other to DX. Adaptive v-sync provides benefits in terms of power savings and smoothness relative to both.

  • Triple buffering solutions require more frame-buffer memory than double buffering, which can be a problem at high resolutions.
  • Triple buffering is an application choice (no driver override in DX) and is not frequently supported.
  • OGL triple buffering: The GPU renders frames as fast as it can (equivalent to v-sync off) and the most recently completed frame is displayed at the next v-sync. This means you get tear-free rendering, but entire frames are effectively dropped (never displayed) so smoothness is severely compromised and the effective time interval between successive displayed frames can vary by a factor of two. Measuring fps in this case will return the v-sync off frame rate which is meaningless when some frames are not displayed (can you be sure they were actually rendered?). To summarize – this implementation combines high power consumption and uneven motion sampling for a poor user experience.
  • DX triple buffering is the same as double buffering but with three back buffers which allows the GPU to render two frames before stalling for display to complete scanout of the oldest frame. The resulting behavior is the same as adaptive vsync (or regular double-buffered v-sync=on) for frame rates above 60Hz, so power and smoothness are ok. It’s a different story when the frame rate drops below 60 though. Below 60Hz this solution will run faster than 30Hz (i.e. better than regular double buffered v-sync=on) because successive frames will display after either 1 or 2 v-blank intervals. This results in better average frame rates, but the samples are uneven and smoothness is compromised.
  • Adaptive vsync is smooth below 60Hz (even samples) and uses less power above 60Hz.
  • Triple buffering adds 50% more latency to the rendering pipeline. This is particularly problematic below 60fps. Adaptive vsync adds no latency.
Clearly, things get complicated in a hurry depending on how the feature is implemented, but generally speaking, triple buffering has several disadvantages compared to adaptive vsync. One, the animation may not be as smooth. Two, there’s substantially more lag between user inputs and screen updates. Three, it uses more video memory. And four, triple-buffering can’t be enabled via the graphics driver control panel for DirectX games. Nvidia also contends that smart vsync is more power-efficient than the OpenGL version of triple buffering.
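To make the contrast concrete, here is a small simulation of three presentation schemes for a GPU that needs a constant 25 ms per frame on a 60Hz display. It is our own simplification of the behavior described above, not measured data, and it ignores render-time variation entirely.

```python
# Toy comparison of presentation schemes for a GPU that needs a constant 25 ms
# per frame on a 60Hz display (16.7 ms refresh). A deliberate simplification of
# the behavior described above, not measured data.

import math

REFRESH_MS = 1000.0 / 60.0
RENDER_MS  = 25.0
FRAMES     = 8

def double_buffered_vsync():
    """Classic vsync: the GPU stalls until its finished frame flips at a vblank."""
    flips, t = [], 0.0
    for _ in range(FRAMES):
        done = t + RENDER_MS
        flip = REFRESH_MS * math.ceil(done / REFRESH_MS)  # wait for the next vblank
        flips.append(flip)
        t = flip                      # can't start the next frame until the flip
    return flips

def adaptive_vsync():
    """Below the refresh rate, flip immediately: possible tearing, no stall."""
    flips, t = [], 0.0
    for _ in range(FRAMES):
        t += RENDER_MS
        flips.append(t)
    return flips

def dx_triple_buffering():
    """Three back buffers: the GPU keeps rendering, finished frames wait for a
    vblank, and only one frame can be shown per refresh."""
    flips, t, last_flip = [], 0.0, -REFRESH_MS
    for _ in range(FRAMES):
        done = t + RENDER_MS
        flip = max(REFRESH_MS * math.ceil(done / REFRESH_MS), last_flip + REFRESH_MS)
        flips.append(flip)
        last_flip = flip
        t = done    # with 25 ms frames the queue never fills, so no stall here
    return flips

for name, scheme in [("double buffered + vsync", double_buffered_vsync),
                     ("adaptive vsync", adaptive_vsync),
                     ("DX triple buffering", dx_triple_buffering)]:
    flips = scheme()
    gaps = [round(b - a, 1) for a, b in zip(flips, flips[1:])]
    print(f"{name:24s} intervals between displayed frames (ms): {gaps}")
```

With these inputs, classic double-buffered vsync settles at an even 30 fps, DX-style triple buffering averages 40 fps but alternates between 16.7 ms and 33.3 ms gaps, and adaptive vsync delivers an even 25 ms cadence at the cost of possible tearing.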

Overall, we find these arguments compelling. However, we should note that we’re very much focusing on the most difficult cases, which are far from typical. We’ve somehow managed to enjoy fast action games for many years, both with and without vsync and triple buffering, before adaptive vsync came along.

Whether or not you’ll notice a problem with traditional vsync depends a lot on the GPU’s current performance and how the camera is moving through the virtual world. Petersen showed a demo at the Kepler press day involving the camera flying over a virtual village in Unigine Heaven where vsync-related stuttering was very obvious. When we were hunting for slowdowns related to vsync quantization while strafing around in a Serious Sam 3 firefight, however, we couldn’t perceive any problems, even though frame rates were bouncing around between 30 and 60 FPS on our 60Hz display.

With that said, we only noticed very occasional tearing in Sam 3 with adaptive vsync enabled, and performance was generally quite good. We’ve also played quite a bit of Borderlands on the GTX 680 with both FXAA and adaptive vsync, and the experience has consistently been excellent. If you own a video card capable of adaptive vsync, I’d recommend enabling it for everyday gaming. I’m convinced it is the best current compromise.

We should note, though, that this simple optimization isn’t really a solution to some of the more vexing problems with ensuring consistent frame delivery and fluid animation in games. Adaptive vsync doesn’t really address the issues with uneven frame dispatch that lead to micro-stuttering in multi-GPU solutions, as we explored here, nor is it as smart about display synchronization as the HyperFormance feature in Lucid’s upcoming Virtu MVP software. We’ll have more to say about Kepler and micro-stuttering in the coming weeks, so stay tuned.

Because adaptive vsync is a software feature, it doesn’t have to be confined to Kepler hardware. In fact, Nvidia plans to release a driver update shortly that will grant this capability to some older GeForce products, as well.

Some antialiasing improvements
Nvidia has revealed a couple of new wrinkles in its antialiasing capabilities to go along with Kepler’s introduction. The first of those relates to FXAA, the post-process antialiasing technique that has been adopted by a host of new games in recent months.

FXAA is nifty because it costs very little in terms of performance and works well with current game engines; many of those engines use techniques like deferred shading and HDR lighting that don’t mix well with traditional, hardware-based multisampling. At the same time, FXAA is limited because of how it works—by looking at the final, rendered scene, identifying jaggies using an edge-detection algorithm, and smoothing them out. Such post-process filters have no access to information from the sub-pixel level and thus do a poor job of resolving finely detailed geometry. They can also cause the silhouettes of objects to shift and crawl a bit from one frame to the next, because they’re simply guessing about the objects’ underlying shapes. (You’ll see this problem in the complex outlines of a gun in a first-person shooter.) Still, this brand of antialiasing is a darn sight better than nothing, which is what PC gamers have had in way too many games in recent years.
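For readers curious what a post-process filter actually does, the sketch below applies an FXAA-flavored pass to a plain luminance image: find pixels whose local contrast suggests an edge, then blend them toward their neighbors. It is a drastic simplification of our own, not Nvidia's shader, but it shows why such a filter can only guess at geometry it cannot see.

```python
# FXAA-flavored post-process pass, drastically simplified (not Nvidia's shader).
# Operates on a 2D luminance image: where local contrast exceeds a threshold,
# blend the pixel toward the average of its 4 neighbors to soften the edge.

import numpy as np

EDGE_THRESHOLD = 0.25   # minimum local contrast to treat a pixel as an edge
BLEND          = 0.5    # how far to push edge pixels toward the neighbor average

def fxaa_like(luma):
    out = luma.copy()
    h, w = luma.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n, s = luma[y - 1, x], luma[y + 1, x]
            e, wst = luma[y, x + 1], luma[y, x - 1]
            c = luma[y, x]
            contrast = max(n, s, e, wst, c) - min(n, s, e, wst, c)
            if contrast >= EDGE_THRESHOLD:              # looks like a jaggy edge
                out[y, x] = (1 - BLEND) * c + BLEND * (n + s + e + wst) / 4
    return out

# A hard diagonal edge between black and white: the filter softens the stairsteps.
img = np.triu(np.ones((8, 8)))
print(np.round(fxaa_like(img), 2))
```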

During the Kepler press event, Nvidia claimed “FXAA was invented at Nvidia,” which is technically true, but is kind of like claiming the producers of Keeping up with the Kardashians invented pimping. They didn’t invent it; they just perfected it. I believe Alexander Reshetov at Intel Labs was the first to describe (PDF) this sort of post-process reconstruction filter. AMD later implemented MLAA in its Catalyst drivers when it introduced the Radeon HD 6800 series. Nvidia did the world the great service of developing and releasing its own variant of MLAA, which looks good and is very fast, in a form that’s easy for developers to integrate into their games. FXAA has since become very widely adopted.


FXAA softens all edges effectively in Skyrim.

The big news with Kepler is that Nvidia is finally integrating an FXAA option into its control panel, so end users can force on FXAA in games that wouldn’t otherwise support it. The disadvantage of using the control panel to invoke this feature is that FXAA’s edge smoothing will be applied to all objects on the screen, including menus and HUDs. Games that natively support FXAA avoid applying the filter to those elements, because it can cause some blurring and distortion. The upside, obviously, is that games with poor antialiasing support can have FXAA added.

I tried forcing on FXAA in one of my old favorites, Borderlands, which has always desperately needed AA help. To my surprise, the results were uniformly excellent. Not only were object edges smooth, but the artifacts I’ve noticed with AMD’s MLAA weren’t present. The text in menus didn’t have strangely curved lines and over-rounded corners, and black power lines against a bright sky weren’t surrounded by strange, dark halos. FXAA is superior to MLAA in both of these respects.

Post-process filters like this one may be a bit of a cheat, but the same can be said for much of real-time graphics. That’s fine so long as the illusion works, and FXAA can work astonishingly well.

Building on the success of FXAA, Nvidia has another antialiasing method in the works, dubbed TXAA, that it hopes will address the same problems with fewer drawbacks. TXAA isn’t just a post-process filter; instead, it combines low levels of multisampling with a couple of other tricks.

The first of those tricks is a custom resolve filter that borrows subpixel samples from neighboring pixels. We’ve seen custom filters of this sort before, such as the tent filters AMD introduced way back in its Radeon HD 2000-series cards. Routines that reach beyond the pixel boundary can provide more accurate edge smoothing and do an excellent job of resolving fine geometric detail, just as AMD’s CFAA tent filters did. However, they have the downside of slightly blurring the entire screen. I kind of liked AMD’s narrow tent filter back in the day, but most folks reacted strongly against the smearing of text and UI elements. Sadly, AMD wound up removing the tent filter option from its drivers for newer Radeons, though it’s still exposed in the latest Catalysts when using a Radeon HD 5870. TXAA is distinct from AMD’s old method because it’s not a tent filter; rather than reducing the weight of samples linearly as you move away from the pixel center, TXAA may potentially use a bicubic weighting function. That should, in theory, reduce the amount of blurring, depending on how the weighting is applied.
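The difference between those weighting schemes is easier to see with numbers. Below is a quick comparison of a linear tent falloff and a cubic falloff over a resolve kernel that reaches past the pixel boundary; the 1.5-pixel radius and the particular cubic are our own assumptions, since Nvidia had not finalized the filter.

```python
# Comparing a linear "tent" falloff with a smoother cubic falloff for a resolve
# filter that reaches past the pixel boundary. The 1.5-pixel radius and the
# smoothstep-style cubic are assumptions for illustration; Nvidia had not
# disclosed TXAA's actual weighting at the time of writing.

RADIUS = 1.5   # how far past the pixel center the kernel reaches, in pixels

def tent_weight(d):
    return max(0.0, 1.0 - d / RADIUS)

def cubic_weight(d):
    t = min(d / RADIUS, 1.0)
    return 1.0 - (3 * t * t - 2 * t * t * t)   # smoothstep-like falloff

print(" distance   tent   cubic")
for d in (0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5):
    print(f"   {d:4.2f}    {tent_weight(d):5.3f}  {cubic_weight(d):5.3f}")
# The cubic keeps more weight near the pixel center and sheds the distant
# samples faster, which is why it should blur less than a tent of equal reach.
```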

TXAA has two basic quality levels, one based on 2X multisampling and the other based on 4X MSAA, both with custom resolve. The key to making TXAA effective will be ensuring that the resolve filter is compatible with deferred shading and HDR lighting, so it will be able to touch every edge in a scene, not just a subset, as too often happens with multisampling these days. Because this resolve step will happen in software, it will not take advantage of the MSAA resolve hardware in the GPU’s ROP units. Shader FLOPS, not ROP throughput, will determine TXAA performance.

The other trick TXAA employs will be familiar to (former?) owners of Radeon X800 cards. The “T” in TXAA stands for “temporal,” as in the temporal variation of sample patterns. The idea is to increase the effective sample size for antialiasing by switching between different subpixel sample patterns from frame to frame. Back in the day, we found ATI’s implementation of this feature to create a “fizzy” effect around object edges, but that was long enough ago that the world was literally using a different display technology (CRTs). LCDs may tolerate this effect more graciously. Also, we expect Nvidia may use subtler variance in its sample patterns in order to avoid artifacts. How this method will combine with a kernel filter that reaches beyond the pixel boundary is a very interesting question.
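The temporal half of the trick can be sketched in a few lines: alternate the subpixel sample offsets between even and odd frames, so that two consecutive frames together behave like a larger sample pattern. The offsets below are arbitrary placeholders, not Nvidia's sample positions.

```python
# Temporal sample-pattern variation in miniature. Even and odd frames use
# different subpixel offsets, so two consecutive frames together cover more of
# the pixel than either does alone. Offsets are arbitrary placeholders, not
# Nvidia's actual sample positions.

PATTERN_A = [(-0.25, -0.25), (0.25, 0.25)]   # 2 samples, frame N
PATTERN_B = [(-0.25,  0.25), (0.25, -0.25)]  # 2 samples, frame N+1

def samples_for(frame_index):
    return PATTERN_A if frame_index % 2 == 0 else PATTERN_B

for frame in range(4):
    print(f"frame {frame}: sample offsets {samples_for(frame)}")

# Averaged over two frames the effective pattern has 4 distinct positions; the
# catch is that anything that moves between frames can shimmer, producing the
# "fizzy" edges the article remembers from ATI's temporal AA.
print("combined coverage:", sorted(set(PATTERN_A + PATTERN_B)))
```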


TXAA edge smoothing comparison. Source: Nvidia.

Nvidia claims the first TXAA mode will exceed the quality of 8X multisampling with the same performance overhead as 2X MSAA. We suspect that, with the right tuning, the combination of techniques Nvidia intends to use in TXAA could work quite nicely—that the whole may seem greater than the sum of the parts. Then again, TXAA may fail to overcome its drawbacks, like inherent blurring, and wind up being unpopular. Will TXAA end up being regarded as Quincunx 2.0, or will it be something better? It’s hard to say.

The trouble is, TXAA looks to be a moving target. For one thing, all of the information we’ve relayed above came with the caveat that TXAA will be highly tweakable by the game developer. Different implementations may or may not use temporal jittering, for instance. For another, TXAA is still very much a work in progress, and the folks at Nvidia who we interviewed admitted some of the specifics about its operation have yet to be decided, including the exact weighting method of the resolve filter.

Nvidia did have one live demo of TXAA running at its Kepler press event, but we don’t have a copy of the program, so we can’t really evaluate it. TXAA will initially be distributed like FXAA, as something game developers can choose to incorporate into their games. The feature may eventually find its way into a control panel option in Nvidia’s graphics drivers, but I wouldn’t hold my breath while waiting for it. We probably won’t get to see TXAA in action until games that use it arrive. The first title may be the upcoming MMO The Secret World, which is slated to ship in June. After that, Borderlands 2 should hit this September with TXAA support. Nvidia tells us both Crytek and Epic are working with TXAA, as well.

Since it is entirely a software algorithm, TXAA could in theory work on Fermi-based GPUs, as well as Kepler. However, Nvidia’s story on whether it will enable TXAA on Fermi-derived products seems to be shifting. We got mixed signals on this front during the Kepler press event, and the latest information we have is that TXAA will not be coming to Fermi cards. We think this sort of artificial limitation of the technology would be a shame, though, so here’s hoping Nvidia has a change of heart. One thing we know for sure: unlike FXAA, TXAA won’t work with Radeons.

Bindless textures
One of Kepler’s new capabilities is something Nvidia calls “bindless textures,” an unfamiliar name, perhaps, for a familiar feature. Ever since John Carmack started talking about “megatexturing,” the world of real-time graphics has been on notice that the ability to stream in and manage large amounts of texture data will be an important capability in the future. AMD added hardware support for streaming textures in its Southern Islands series of GPUs, and now Nvidia has added it in Kepler. Architect John Danskin said this facility will lead to a “dramatic increase” in the number of textures available at runtime, and he showed off a sample scene that might benefit from the feature, a ray-traced rendition of a room whose ceiling is covered entirely with murals.
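The gist of the change can be illustrated abstractly: instead of a small, fixed set of texture binding slots that must be rebound between draw calls, shaders address any resident texture through a handle. The sketch below is our own conceptual illustration in plain Python, not Nvidia's hardware interface or the OpenGL extension mentioned below.

```python
# Conceptual contrast between classic "binding slots" and bindless texture
# access. Our own illustration in plain Python, not Nvidia's or OpenGL's API;
# the slot count and handle scheme are placeholders.

class SlotBasedTextures:
    """Classic model: a small, fixed number of binding points per draw call."""
    MAX_UNITS = 16                       # hypothetical hardware limit

    def __init__(self):
        self.units = [None] * self.MAX_UNITS

    def bind(self, unit, texture):
        self.units[unit] = texture       # shaders can only sample what's bound

class BindlessTextures:
    """Bindless model: any resident texture is reachable through a handle."""
    def __init__(self):
        self.resident = {}
        self._next_handle = 1

    def make_resident(self, texture):
        handle = self._next_handle
        self._next_handle += 1
        self.resident[handle] = texture
        return handle                    # a shader can sample via this handle

pool = BindlessTextures()
handles = [pool.make_resident(f"ceiling_mural_{i}") for i in range(10_000)]
print(f"{len(handles)} textures addressable in one pass, no rebinding between draws")
```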

Unfortunately, neither AMD’s nor Nvidia’s brand of streaming textures is supported in DirectX 11.1, and we’re not aware of any roadmaps from Microsoft for the next generation of the API. For the time being, Kepler’s bindless texture capability will be exposed to programmers via an OpenGL extension.

Hardware video encoding
For a while now, video encoding has been hailed as an example of a compute-intensive task that can be accelerated via GPU computing. However, even highly parallel GPU computing can’t match the efficiency of dedicated hardware—and these days, nearly everything from smartphone SoCs to Sandy Bridge processors comes with an H.264 video encoding block built in. Now, you can add GK104 to that list.

Nvidia says the GK104’s NVENC encoder can compress 1080p video into the H.264 format at four to eight times the speed of real-time playback. The encoding hardware supports the same H.264 profile levels as the Blu-ray standard, including the MVC extension for stereoscopic 3D video. Nvidia cites power efficiency as NVENC’s primary advantage over GPU-compute-based encoders.

AMD claims to have added similarly capable video encoding hardware to its 7000-series Radeons, but we have yet to see a program that can take advantage of it. The length of the delay, given the fact that the 7970 has been out for roughly four months now, makes us worry there could be some sort of hardware bug holding things up. Fortunately, Nvidia was able to supply a beta version of CyberLink’s MediaEspresso with NVENC support right out of the chute.

Although we were able to try out NVENC and test its performance, I’d caution you that the results we’re about to share are somewhat provisional. Many of these hardware encoders are essentially black boxes. They sometimes offer some knobs and dials to tweak, but making them all behave the same as one another, or the same as a pure-software routine, is nearly impossible—especially in MediaEspresso, which offers limited control over the encoding parameters. As a result, the quality levels involved here will vary. Heck, we couldn’t even get Intel’s QuickSync to produce an output file of a similar size.

We started with a 30-minute sitcom recorded in 720p MPEG2 format on a home theater PC and compressed it into H.264/AAC format at 1280×720. Our target bitrate for NVENC and the MediaEspresso software encoder was 10 Mbps. We weren’t able to raise the bitrate for Intel’s QuickSync beyond 6 Mbps, so it was clearly doing a different class of work. We’ve still reported the results below; make of them what you will.
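A back-of-the-envelope check explains why those output files were never going to be comparable; the sketch ignores audio and container overhead, so treat the figures as rough estimates.

```python
# Back-of-the-envelope check on the encode settings above: a 30-minute source
# at a 10 Mbps target versus QuickSync's 6 Mbps ceiling. Audio and container
# overhead are ignored, so these are rough estimates only.

duration_s = 30 * 60

for label, mbps in [("NVENC / software target", 10), ("QuickSync ceiling", 6)]:
    size_bytes = mbps * 1_000_000 / 8 * duration_s
    print(f"{label}: ~{size_bytes / 1_000_000_000:.2f} GB")
# ~2.25 GB at 10 Mbps versus ~1.35 GB at 6 Mbps -- why the files aren't comparable.
```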

While encoding with NVENC, our GTX 680-equipped test system’s power consumption hovered around 126W. When using the software encoder running on the Core i7-3820, the same system’s power draw was about 159W, so the GK104’s hardware encoder has obvious benefits for power efficiency.

For what it’s worth, all three of the output files contained video that looked very nice. We couldn’t easily detect any major difference between them with a quick check using the naked eye; none had any obvious visual artifacts. A more rigorous quality and performance comparison might turn up substantial differences, but we think most end users will likely find NVENC and QuickSync acceptable for their own video conversions. We may have to revisit this ground in a more detailed fashion in the future.

Better display support
For several generations, AMD has had a clear advantage over Nvidia in the number and variety of displays and output standards its GPUs have supported. The Kepler generation looks to close that gap with a host of notable additions, including (at last!) the ability to drive more than two displays simultaneously.

The default output config on the GeForce GTX 680 consists of a pair of DVI ports (both dual-link), an HDMI port, and a full-sized DisplayPort connector. The HDMI output is compliant with version 1.4a of the HDMI standard, and the DisplayPort output conforms to version 1.2 of that spec. The GTX 680 can drive 4K-class monitors at resolutions up to 3840×2160 at 60Hz.

Impressively, connecting monitors to all four of the card’s default outputs simultaneously shouldn’t be a problem. The GTX 680 can drive four monitors at once, three of which can be grouped into a single large surface and used as part of a Surround Gaming setup. That’s right: SLI is no longer required for triple-display gaming. Nvidia recommends using the fourth monitor as an “accessory display” for email, chatting, and so on. I recommend just buying three displays and using alt-tab, but whatever.

The GeForce camp is looking to catch up to Eyefinity on the software front, as well, with a host of new driver features in the pipeline. Most notably, it will be possible to configure three displays as a single large surface for the Windows desktop while keeping the taskbar confined to one monitor. Also, users will be able to define custom resolutions. Last but perhaps niftiest, Surround Gaming is getting a new twist. When playing a game with bezel compensation enabled, the user will be able to punch a hotkey in order to get a peek “behind” the bezels, into space that’s normally obscured. If it works as advertised, we’d like that feature very much for games that aren’t terribly smart about menu placement and such.

In fact, writing about these things makes us curious about how Eyefinity and Surround Gaming stack up with the latest generation of games. Hmmmmmm…

So when can I buy one?
Since the GTX 680’s introduction, we’ve noted that its availability at online retailers in North America has been spotty at best. We asked Nvidia for some insight into the situation, and they offered a couple of nuggets.

First, although we don’t have exact numbers, we’re told that online retailers are receiving shipments of GTX 680 boards every day. The key to finding one, it seems, is signing up to receive notifications at Newegg or the like whenever cards come back in stock. Although they sell out quickly, there is purportedly a decent number of cards trickling into the market each day.

Second, an upcoming change to how GTX 680 cards are manufactured may provide some relief. Right now, as is often the case with new graphics card launches, Nvidia is having all GTX 680 cards manufactured for it on contract. The firm is then distributing those cards to partners like EVGA and MSI for resale. Very soon, in “early April,” it will be enabling its board partners to begin manufacturing their own renditions of the GTX 680. We’ll likely see cards with custom board designs and non-reference coolers hitting the market as a result of this change. Once this conversion happens, the flow of new cards should increase, helping to meet some of the pent-up demand. Nvidia admits that fully satisfying the demand for the GTX 680 may still prove to be a challenge, but it expects the availability picture to improve at least somewhat with the change in manufacturing arrangements.

Dude, you should totally follow me on Twitter.
