Nvidia's GeForce GTX 550 Ti is finally settling onto store shelves, having made peace with the fact that it's slower and considerably less exciting than some of its pricier elders. Already, the latest GeForce faces another challenge—or rather, a challenger that hails from the Radeon camp and wants to throw down with the Ti on its home turf.
AMD's Radeon HD 6790 is being introduced today, and it bears the exact same $149 price tag as the new kid from Nvidia. According to AMD, the 6790 should "comfortably" outpace its GeForce counterpart by as much as 30%, letting you enjoy games at a 1080p resolution. We found the GTX 550 Ti wasn't always up to the task of cranking out smooth frame rates at 1680x1050 with antialiasing enabled, so that sounds like an attractive premise—if the Radeon delivers, of course.
Besides being a competitor to the GTX 550 Ti, the 6790 also provides a middle ground between AMD's Radeon HD 5770, which sells for as little as $120 at Newegg these days, and the quicker Radeon HD 6850, which starts closer to the $165 mark. Before today's release, Radeon fans had to deal with the truly unbearable dilemma of having to choose between one of those two cards. No longer! That's called progress, folks.
Now, how did AMD conjure up an answer to Nvidia's new $149 GPU so quickly? Well, it kind of didn't. The Radeon HD 6790 is based on the Barts graphics processor already known for its starring roles in the Radeon HD 6850 and 6870. AMD has simply disabled a few bits and pieces to keep the 6790 from nipping at the heels of real 6800-series offerings. That said, you'd never know it from looking at the Radeon HD 6790 engineering sample AMD sent us for review. The card has the same imposing length, cooler, display output arrangement, and dual six-pin power connectors as the Radeon HD 6870:
To keep things from getting confusing here in our labs, we thought it appropriate to slap a sticker on the 6790. Ahh, much better.
Of course, retail Radeon HD 6790 variants won't have quite the same garage-sale look. AMD tells us board designs "will vary greatly from what you're seeing on . . . sample boards, including PCB, power connectors, cooler design etc." Certain retail 6790 cards won't require dual PCIe power connectors, which is probably a good thing—folks shopping for a $149 graphics card are probably lucky if their power supply has one PCIe power plug.
Here's Sapphire's take on the Radeon HD 6790, just for the sake of illustration:
Note the shorter circuit board and the snazzier-looking cooler. If the side were visible, you'd see a pair of DVI ports, one HDMI port, and a DisplayPort output. Sapphire's card will apparently have dual power connectors, but AMD tells us a PowerColor offering with only one PCIe plug will hit stores some time after today.
Clearly, the new Radeon has a lot in common with the 6800 series. Why didn't AMD simply call it the Radeon HD 6830? That kind of nomenclature wouldn't exactly upset tradition, after all, and it'd undoubtedly be more fitting from an architectural point of view.
The answer is simple: despite featuring a larger and more capable GPU, the Radeon HD 6790's specifications are strikingly similar to those of the Radeon HD 5770, which AMD now sells in pre-built PCs as the Radeon HD 6770.
| GPU | ROP pixels/clock | Filtered texels/clock | Stream processors | Memory interface width (bits) | Estimated transistors (millions) | Approx. die size (mm²) | Fab process |
|---|---|---|---|---|---|---|---|
| Juniper (Radeon HD 5770) | 16 | 40 | 800 | 128 | 1040 | 166 | 40 nm |
| Barts (Radeon HD 6790) | 16 | 40 | 800 | 256 | 1700 | 255 | 40 nm |
| Barts (Radeon HD 6850) | 32 | 48 | 960 | 256 | 1700 | 255 | 40 nm |
| Barts (Radeon HD 6870) | 32 | 56 | 1120 | 256 | 1700 | 255 | 40 nm |
From a bird's eye view, the 6790's only notable holdover from the 6800 series is the 256-bit memory interface, which the 5770's Juniper GPU is physically incapable of matching. AMD couldn't put Juniper on stilts, so the quick-and-easy alternative was to pay Barts a visit and shatter its tibias with a baseball bat. The fractures incapacitated four of Barts' SIMD arrays, leaving it with 800 ALUs and the ability to filter only 40 texels per clock. This latest example of GPU hobbling also cut Barts' render back-ends in half, limiting it to pushing only 16 pixels per clock. As a result, the 6790 and the 5770 have identical ALU counts, and they can push the same number of pixels and filter the same number of texels per clock. Make sense?
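The arithmetic behind the cut-down is straightforward. Here's a rough sketch, assuming Barts' usual layout of 14 SIMD engines with 80 VLIW5 ALUs and four texture units apiece (those per-SIMD figures come from AMD's published Barts specs, not from this review):

```python
# Rough sketch of the Barts cut-down. Per-SIMD unit counts are assumed
# from AMD's published Barts architecture specs: 14 SIMD engines, each
# with 80 ALUs (16 VLIW5 units x 5 lanes) and 4 texture units.
ALUS_PER_SIMD = 80
TMUS_PER_SIMD = 4

def unit_counts(active_simds):
    """Return (ALUs, texels filtered per clock) for a Barts chip
    with the given number of active SIMD engines."""
    return active_simds * ALUS_PER_SIMD, active_simds * TMUS_PER_SIMD

print(unit_counts(14))  # full Barts (Radeon HD 6870): (1120, 56)
print(unit_counts(12))  # Radeon HD 6850: (960, 48)
print(unit_counts(10))  # Radeon HD 6790: (800, 40) -- same as Juniper/5770
```

Knock out four of the fourteen SIMDs and you land exactly on Juniper's 800 ALUs and 40 texels per clock, which is why the spec sheets line up so neatly.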
| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering, int8 (Gtexels/s) | Peak bilinear filtering, FP16 (Gtexels/s) | Peak shader arithmetic (GFLOPS) | Peak rasterization rate (Mtris/s) | Memory bandwidth (GB/s) |
|---|---|---|---|---|---|---|
| GeForce GTS 450 | 12.5 | 25.1 | 25.1 | 601 | 783 | 57.7 |
| GeForce GTS 450 AMP! | 14.0 | 28.0 | 28.0 | 672 | 875 | 64.0 |
| GeForce GTX 550 Ti | 21.6 | 28.8 | 28.8 | 691 | 900 | 98.5 |
| GeForce GTX 550 Ti Cyclone | 22.8 | 30.4 | 30.4 | 730 | 950 | 103 |
| GeForce GTX 550 Ti AMP! | 24.0 | 32.0 | 32.0 | 768 | 1000 | 106 |
| GeForce GTX 460 768MB | 16.2 | 37.8 | 37.8 | 907 | 1350 | 86.4 |
| GeForce GTX 460 1GB | 21.6 | 37.8 | 37.8 | 907 | 1350 | 115 |
| GeForce GTX 560 Ti | 26.3 | 52.6 | 52.6 | 1263 | 1644 | 128 |
| Radeon HD 5770 | 13.6 | 34.0 | 17.0 | 1360 | 850 | 76.8 |
| Radeon HD 5770 SOC | 14.4 | 36.0 | 18.0 | 1440 | 900 | 76.8 |
| Radeon HD 6790 | 13.4 | 33.6 | 16.8 | 1344 | 840 | 134.4 |
| Radeon HD 6850 | 24.8 | 37.2 | 18.6 | 1488 | 775 | 128 |
| Radeon HD 6870 | 28.8 | 50.4 | 25.2 | 2016 | 900 | 134 |
| Radeon HD 6950 | 25.6 | 70.4 | 35.2 | 2253 | 1600 | 160 |
Studying maximum theoretical performance numbers sheds further light on the subject. The Radeon HD 6790 has slightly weaker number-crunching capabilities than the 5770 but considerably more memory bandwidth—even more than the Radeon HD 6870, in fact.
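For the curious, those theoretical peaks fall out of simple multiplication. Here's a quick sketch; the unit counts and core clocks come from the tables above, while the GDDR5 data rates (4.2 GT/s for the 6790, 4.8 GT/s for the 5770) are our assumptions, chosen to be consistent with the bandwidth figures shown:

```python
# Back-of-the-envelope theoretical peaks. Unit counts and core clocks are
# taken from the spec tables; the GDDR5 transfer rates are assumptions
# consistent with the bandwidth numbers quoted above.
def peaks(alus, rops, core_mhz, bus_bits, mem_gtps):
    ghz = core_mhz / 1000
    return {
        "gflops": alus * 2 * ghz,        # 2 FLOPS per ALU per clock (fused multiply-add)
        "gpixels_per_s": rops * ghz,     # one pixel per ROP per clock
        "gbytes_per_s": bus_bits / 8 * mem_gtps,  # bus width in bytes x data rate
    }

hd6790 = peaks(alus=800, rops=16, core_mhz=840, bus_bits=256, mem_gtps=4.2)
hd5770 = peaks(alus=800, rops=16, core_mhz=850, bus_bits=128, mem_gtps=4.8)
print(hd6790)  # ~1344 GFLOPS, ~13.4 Gpixels/s, ~134.4 GB/s
print(hd5770)  # ~1360 GFLOPS, ~13.6 Gpixels/s, ~76.8 GB/s
```

The 5770's slightly higher core clock explains its small shader-arithmetic edge, while the 6790's doubled bus width is what nearly doubles its memory bandwidth.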
One traditional downside of hobbling an upmarket GPU to compete at the low end is die area. High-end GPUs tend to be larger, making them costlier to produce. Those higher costs in turn reduce margins, giving the resulting products less wiggle room when the time comes to slide down the pricing scale. The Radeon HD 6790 isn't in too bad a position, though. While its Barts GPU is indeed quite a bit larger than Juniper, at 255 mm² vs. 166 mm², it's not that much portlier than the GeForce GTX 550 Ti's GF116 chip, which I measured at about 225 mm². Nvidia might have a cost-efficiency edge, but I doubt it's a terribly large one. It's also worth noting that Nvidia needs a fully capable GF116 to make a GeForce GTX 550 Ti, but AMD can slip Barts chips that don't make the cut for the 6870 and 6850 into the 6790.
Now, let's see whether the Radeon HD 6790 has the right mix of performance and power efficiency to give the GeForce a run for its money. Time for benchmarks!