
The GTX 780 Ti invades Hawaii
At $699.99, the GeForce GTX 780 Ti is priced like the best graphics card in the world. Nvidia has endeavored to soften the blow with various enticements. I continue to be, er, a fan of the swanky aluminum-and-magnesium cooler that the 780 Ti shares with other high-end GeForces. Also, for the holiday season, the card comes with a pretty nice bundle of games (Splinter Cell: Blacklist, Batman: Arkham Origins, and Assassin's Creed IV: Black Flag) and a coupon for a $100 discount on Nvidia's Shield handheld game console, which can stream games from a GeForce GTX-equipped PC. None of these things make a $700 graphics card feel like a good deal, but they help bridge the gap with the R9 290X, which sells for $150 less.

Interestingly, the GeForce GTX Titan will soldier on at $1K, even though it's slower than the 780 Ti. The Titan doesn't have much appeal left for gamers, but its full complement of double-precision floating-point units should keep it attractive to folks developing GPU-computing applications in CUDA. Like the GTX 780, the 780 Ti executes double-precision math at only 1/24th the single-precision rate, rather than the one-third rate the Titan offers.
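To put those ratios in perspective, here's a quick back-of-the-envelope calculation of theoretical peak throughput at each card's base clock. The shader counts and clocks below are the published specs (2880 shaders at 876MHz for the 780 Ti, 2688 at 837MHz for the Titan); real CUDA applications won't hit these peaks.

```python
# Theoretical peak throughput from published specs. Each shader can retire
# one fused multiply-add (2 FLOPs) per clock in single precision.
def peak_sp_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

gtx_780_ti_sp = peak_sp_tflops(2880, 876)  # ~5.0 TFLOPS
titan_sp      = peak_sp_tflops(2688, 837)  # ~4.5 TFLOPS

# Double precision: 1/24 rate on the 780 Ti, 1/3 rate on the Titan
print(f"GTX 780 Ti DP: {gtx_780_ti_sp / 24:.2f} TFLOPS")  # ~0.21
print(f"GTX Titan  DP: {titan_sp / 3:.2f} TFLOPS")        # ~1.50
```

Despite its lower single-precision peak, the Titan's unrestricted DP units give it roughly seven times the 780 Ti's double-precision throughput, which is why it keeps its niche at $1K.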

The GTX 780 Ti faces some formidable competition from the Radeon R9 290X. When discussing its new product with us, Nvidia took some time to explain how the 780 Ti differs from the competition. Some of what the company offered in this context was FUD about the variable performance of the 290X cards in the market. Nvidia apparently tested about 10 different 290X cards itself and saw large clock speed variations from one GPU to the next. This is an issue the press is beginning to tackle, and one AMD says it's looking into, as well. We expect a driver update from AMD soon to address this problem.

However that story plays out in the coming days, Nvidia made a couple of relevant points in explaining how it avoided these issues in the GTX 780 Ti and other products. First, Nvidia's dynamic voltage and frequency scaling algorithm, GPU Boost 2.0, works similarly to AMD's PowerTune, targeting specific limits for temperature and power draw and pushing the GPU as hard as possible within the scope of those parameters. But GPU Boost 2.0 contains one variable that the PowerTune routine in AMD's newest graphics card lacks: a clearly stated base or minimum clock frequency that acts as a guarantee of performance. For the 780 Ti, the base clock is 876MHz. Unless something goes entirely wrong, the GPU shouldn't run any slower than that during normal use.
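Nvidia hasn't published GPU Boost 2.0's internals, so the following is only a toy sketch of the behavior described above, not the actual algorithm: step the clock up while temperature and power stay under their targets, step it down otherwise, and never drop below the 876MHz base. Aside from the base clock, the 928MHz boost clock, and the 250W power limit, which are published specs, the constants and the 13MHz step (Kepler's boost granularity) are assumptions for illustration.

```python
# Toy model of a GPU Boost 2.0-style control loop. Purely illustrative;
# the function name, thresholds, and logic are ours, not Nvidia's.
BASE_CLOCK_MHZ = 876    # guaranteed floor during normal operation
BOOST_CLOCK_MHZ = 928   # the 780 Ti's published typical boost clock
TEMP_TARGET_C = 83      # assumed default temperature target
POWER_LIMIT_W = 250     # published board power limit

def next_clock(clock_mhz, temp_c, power_w, step=13):
    """Push the clock up while under the temperature and power targets;
    back off when over them, but never drop below the base clock."""
    if temp_c < TEMP_TARGET_C and power_w < POWER_LIMIT_W:
        return min(clock_mhz + step, BOOST_CLOCK_MHZ)
    return max(clock_mhz - step, BASE_CLOCK_MHZ)  # the floor PowerTune lacks

clock = next_clock(BASE_CLOCK_MHZ, temp_c=75, power_w=230)  # headroom: step up
```

The last line of next_clock is the point of difference: PowerTune backs the clock off under similar conditions, but as implemented on the R9 290X, it advertises no equivalent guaranteed minimum.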

Nvidia was meticulous about explaining how GPU Boost works when it introduced the feature alongside the first Kepler-based card, the GeForce GTX 680. Clearly, the firm wanted to avoid negative user reactions to variable clock speeds. The absence of a baseline performance guarantee in AMD's competing PowerTune algorithm isn't necessarily a major drawback, but it could become one if the user experience varies too widely. Building a known baseline clock into the card's spec and operation is a good way to remedy that ill.

There's also been quite a bit of discussion about why the R9 290X's PowerTune limit is a relatively toasty 95°C and why its cooler generates so much noise. Much of that discussion has focused on the GPU's power draw, the amount of resulting heat to be removed, and whether AMD's stock cooler is up to the job, but Nvidia offers a slightly different take.

The key variable, the firm contends, is thermal density: the amount of heat that must be removed per unit of the chip's surface area. A slide from Nvidia's product presentation illustrates the difference in thermal density between the GK110 chip on the GTX 780 Ti and the Hawaii chip on the R9 290X. Hawaii's thermal density is substantially higher. The GK110's relatively large surface area and lower power limit allow the GTX 780 Ti to run quieter and at lower temperatures with a similar-sized cooler.
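As a rough sanity check on that claim, we can divide power by die area. The die sizes are published (561 mm² for GK110, 438 mm² for Hawaii), but the power figures below are approximations based on commonly cited board power, since neither number isolates what the GPU itself dissipates.

```python
# Rough thermal-density comparison: watts per square millimeter of die.
# Board power figures are approximate and include more than the GPU.
die_area_mm2 = {"GK110 (GTX 780 Ti)": 561, "Hawaii (R9 290X)": 438}
board_power_w = {"GK110 (GTX 780 Ti)": 250, "Hawaii (R9 290X)": 290}

for chip, area in die_area_mm2.items():
    print(f"{chip}: ~{board_power_w[chip] / area:.2f} W/mm^2")
# GK110: ~0.45 W/mm^2; Hawaii: ~0.66 W/mm^2 under these assumptions
```

Even if the true power numbers differ, the direction is clear: Hawaii packs similar or greater power into a die more than 20% smaller, so each square millimeter of silicon, and of cooler contact area, has more heat to shed.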

That's the theory, at least. We'll put it to the test shortly.