Nvidia does indeed have a die shrink of its GT200 graphics processor in the works, and the first products based on it are imminent. Nvidia plans to announce the GeForce GTX 295 graphics card at the Consumer Electronics Show on January 8, just as AMD is raising the curtain on its new Phenom II CPUs.
The GTX 295 will be among the first graphics cards based on the 55-nanometer version of the GT200 GPU. The die-shrunk version of the GT200 ought to be smaller and run cooler than its 65nm predecessor, and it may enable higher clock speeds, as well. AMD's Radeon HD 4000-series GPUs are already manufactured using this same process; both firms use TSMC as their primary chip fabrication outlet.
The smaller, cooler chip will allow Nvidia to sandwich a pair of 55nm GT200 graphics processors together on a single video card, much like the GeForce 9800 GX2 or the Radeon HD 4870 X2. In fact, the GeForce GTX 295 will be just such a product, blessed with a pair of GPUs whose basic specs fall somewhere between today's GeForce GTX 260 and GTX 280 GPUs. The card will be a dual-PCB design encased in a plastic shroud, similar to the 9800 GX2.
Each of the GTX 295's two GPUs will run at a core clock of 576MHz, with shaders clocked at 1242MHz. Like today's GeForce GTX 260, each GPU will have seven active ROP partitions and a 448-bit path to 896MB of memory. That memory will be of the GDDR3 variety, running at 1000MHz (or 2000 MT/s). Unlike the GTX 260, though, the GTX 295's individual GPUs will have all ten of their thread processor clusters intact, for a total of 240 stream processors per chip.
In the aggregate, the two chips' capabilities add up to some pretty staggering theoretical peak specifications, as you can see in the table below. The GTX 295 trails the Radeon HD 4870 X2 in theoretical peak shader arithmetic, but its texture filtering capacity is considerably higher than the top Radeon's.
| | Peak pixel fill rate (Gpixels/s) | Peak bilinear texel filtering (Gtexels/s) | Peak bilinear FP16 filtering (Gtexels/s) | Peak memory bandwidth (GB/s) | Peak shader arithmetic (GFLOPS, single-issue) | Peak shader arithmetic (GFLOPS, dual-issue) |
|---|---|---|---|---|---|---|
| GeForce GTX 280 | 19.3 | 48.2 | 24.1 | 141.7 | 622 | 933 |
| GeForce GTX 295 | 32.3 | 92.2 | 46.1 | 224.0 | 1192 | 1788 |
| Radeon HD 4870 | 12.0 | 30.0 | 15.0 | 115.2 | 1200 | - |
| Radeon HD 4870 X2 | 24.0 | 60.0 | 30.0 | 230.4 | 2400 | - |
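The per-card numbers above are just the per-GPU theoretical peaks doubled. As a rough sanity check, here's a quick sketch of how one GTX 295 GPU's peaks fall out of the clocks and unit counts given earlier; the 28 ROPs (7 partitions of 4) and 80 texture units (10 clusters of 8) are the usual GT200 figures, inferred rather than quoted from Nvidia:

```python
# Back-of-the-envelope theoretical peaks for ONE of the GTX 295's two GPUs.
# Standard peak-rate arithmetic, not official Nvidia figures.

core_mhz = 576           # core clock
shader_mhz = 1242        # shader clock
mem_mhz = 1000           # GDDR3 clock (2000 MT/s effective)
rops = 28                # 7 active ROP partitions x 4 pixels each (assumed)
texture_units = 80       # 10 thread processor clusters x 8 TMUs (assumed)
stream_processors = 240
bus_bits = 448

pixel_fill = rops * core_mhz / 1000                     # Gpixels/s
texel_rate = texture_units * core_mhz / 1000            # Gtexels/s, bilinear
bandwidth = bus_bits / 8 * 2 * mem_mhz / 1000           # GB/s (DDR: x2 per clock)
gflops_mad = stream_processors * 2 * shader_mhz / 1000  # multiply-add = 2 flops

print(f"{pixel_fill:.1f} Gpixels/s, {texel_rate:.1f} Gtexels/s (bilinear)")
print(f"{bandwidth:.1f} GB/s, {gflops_mad:.0f} GFLOPS (MAD)")
```

Double each result for the card as a whole and you land on the table's figures; the dual-issue shader number simply credits the GPU with an extra MUL per clock (3 flops instead of 2).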
Nvidia says it expects the GeForce GTX 295 to be the fastest "single" video card around when it arrives, and judging by what we know about current GPU architectures, I'd say it's likely to make good on that claim, assuming AMD doesn't have any surprises in store for us. (AMD has told us in the past that GDDR5 memory is primed for higher clock speeds, which is why I mention the possibility. Then again, AMD could counter simply by reducing prices on the 4870 X2. The RV770 GPU should still be quite a bit smaller and cheaper to produce than the 55nm variant of the GT200, and the 4870 X2 card design should cost less to manufacture, as well, since it has a single circuit board, 256-bit memory interfaces, and a less elaborate cooling apparatus.)
The GTX 295's peak board power rating will be 289W, and that power will be delivered via a pair of PCIe aux power connectors, one 6-pin and one 8-pin.
The GTX 295 will be the first SLI-on-a-stick product to come out of the starting gate with reasonably good support for multiple monitors, thanks to Nvidia's new Release 180 drivers. The GTX 295 will also support dual-card configurations, for quad SLI, or one of its GPUs can be reassigned to handle PhysX calculations rather than graphics, which could prove useful if and when PhysX-accelerated games become more widely available.
Hit the gallery below for some beauty shots, supplied by Nvidia, of a GeForce GTX 295 reference card.