
What does caching do for graphics?
We've already spent ample time on this architecture's computing capabilities, so I won't revisit that ground here. One question we've had since hearing about the GF100's relatively robust cache architecture is what benefits caching might have for graphics—if any.

Most GPUs have a number of special-purpose pools of local storage. The GF100 is similar in that it has an instruction cache and a dedicated 12KB texture cache in each SM. However, each SM also has 64KB of L1 data storage that's a little bit different: it can be split either 48/16KB or 16/48KB between a local data store (essentially a software-managed cache) and a true L1 cache. For graphics, the GF100 uses the 48KB shared memory/16KB L1 cache configuration, so most of the local storage will be directly managed by Nvidia's graphics drivers, as it was in the GT200. The small L1 cache in each SM does have a benefit for graphics, though. According to Alben, if an especially long shader fills all of the available register space, registers can spill into this cache. That should avoid some worst-case scenarios that could greatly hamper performance.
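
The compute side exposes this configurability directly: CUDA lets each kernel request whichever split suits it. Here's a minimal sketch, with a throwaway kernel of my own invention standing in for real work:

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel that uses no shared memory; spilled registers and
// local-memory traffic land in L1, so the larger 48KB L1 split suits it.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));

    // Request the 16KB shared/48KB L1 configuration for this kernel;
    // cudaFuncCachePreferShared would request the 48/16KB split that
    // the graphics drivers use.
    cudaFuncSetCacheConfig(scale, cudaFuncCachePreferL1);

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```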

More impressive is the GF100's 768KB L2 cache, which is coherent across the chip and services every memory read and write. This cache's benefits for computing applications with irregular data access patterns are clear, but how does it help graphics? In several ways, Nvidia claims. The unified L2 replaces the GT200's 256KB, read-only L2 texture cache and its write-only ROP cache with a single read/write path that naturally maintains proper program order. Since it's larger, the L2 provides more texture coverage than the GT200's texture cache, a straightforward benefit. And because it can store any sort of data, and because it may be the only local store large enough for the job, the L2 will also hold the large amounts of geometry data generated during tessellation.
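
One way compute code exercises that unified read/write path is through global atomics, the read-modify-write operations this design should service through the L2. A hypothetical sketch of such a workload, a simple byte histogram:

```cuda
#include <cuda_runtime.h>

// Each atomicAdd reads, modifies, and writes a counter in global memory.
// On the GF100, that traffic flows through the single coherent L2 rather
// than the GT200's separate read-only texture and write-only ROP caches.
__global__ void histogram(const unsigned char *in, int n, unsigned int *bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[in[i]], 1u);
}

int main()
{
    const int n = 1 << 20;
    unsigned char *in;
    unsigned int *bins;
    cudaMalloc((void **)&in, n);
    cudaMalloc((void **)&bins, 256 * sizeof(unsigned int));
    cudaMemset(in, 0, n);
    cudaMemset(bins, 0, 256 * sizeof(unsigned int));

    histogram<<<(n + 255) / 256, 256>>>(in, n, bins);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(bins);
    return 0;
}
```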

So there we have some answers. If it works well, caching should help enable the GF100's unprecedented levels of geometry throughput and contribute to the architecture's overall efficiency.

One more shot at likely speeds and feeds
Speaking of efficiency, that will indeed be the big question about the Fermi architecture and especially about the GF100. How efficient is the architecture in its first implementation?


Almost to scale? A GF100 die shot. Source: Nvidia.

The chip isn't in the wild yet, so no one has measured its exact die size. Nvidia, as a matter of policy, doesn't disclose die sizes for its GPUs (they are, I believe, the last straggler on this point in the PC market). But we know the transistor count is about three billion, which is, well, hefty. How so large a chip will fare on TSMC's thus-far troubled 40-nm fabrication process remains to be seen, but the signs are mixed at best.

Although we don't yet have final product specs, Nvidia's Drew Henry set expectations for the GF100's power consumption by admitting the chip will draw more power under load than the GT200. That fact by itself isn't necessarily a bad thing—Intel's excellent Lynnfield processors consume more power at peak than their Core 2 Quad predecessors, but their total power consumption picture is quite good. Still, any chip this late and this large is going to raise questions, especially with a very capable, much smaller competitor already in the market.

With the new information we have about the GF100's graphics bits and pieces, we can revise our projections for its theoretical peak capabilities. Sad to say, our earlier projections were too bullish on several fronts, so most of our revisions are in a downward direction.

We don't have final clock speeds yet, but we do have a few hints. As I pointed out when we were discussing texturing, Nvidia's suggestion that the GF100's theoretical texture filtering capacity will be lower than the GT200's gives us an upper bound on clock speeds. The crossover point where the GF100 would match the GeForce GTX 280 in texturing capacity is a 1505MHz core clock, with the texturing hardware running at half that frequency. We can probably assume the GF100's clocks will be a little lower than that.
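
For the record, here's the arithmetic behind that crossover, using the GTX 280's 80 texture units at its 602MHz core clock against the GF100's 64 units running at half the shader clock $f$:

$$80 \times 602\,\text{MHz} = 48.2\ \text{Gtexels/s} = 64 \times \frac{f}{2} \quad\Rightarrow\quad f = \frac{2 \times 80 \times 602}{64}\,\text{MHz} = 1505\,\text{MHz}$$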

We have another hint in Nvidia's claim that running the texturing hardware at half the speed of the shaders, rather than on a separate core clock, imparts a 12-14% frequency boost. In this case, I'm going to be optimistic, follow a hunch, and assume the basis of comparison is the GT200b chip in the GeForce GTX 285. A boost in that range over the GTX 285's 648MHz core clock works out to 726-739MHz, so call it roughly 725MHz for the half-speed clock and 1450MHz for the shaders. The GF100's various graphics units running at those speeds would yield the following peak theoretical rates.

                            GT200            GF100            RV870
Transistor count            1.4B             3.0B             2.15B
Process node                55 nm @ TSMC     40 nm @ TSMC     40 nm @ TSMC
Core clock                  648 MHz          725 MHz          850 MHz
Hot clock                   1476 MHz         1450 MHz         --
Memory clock                2600 MHz         4200 MHz         4800 MHz
ALUs                        240              512              1600
SP FMA rate                 0.708 Tflops     1.49 Tflops      2.72 Tflops
DP FMA rate                 88.5 Gflops      186 Gflops*      544 Gflops
ROPs                        32               48               32
Memory bus width            512-bit          384-bit          256-bit
Memory bandwidth            166.4 GB/s       201.6 GB/s       153.6 GB/s
ROP rate                    21.4 Gpixels/s   34.8 Gpixels/s   27.2 Gpixels/s
INT8 bilinear texel rate
(half rate for FP16)        51.8 Gtexels/s   46.4 Gtexels/s   68.0 Gtexels/s
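
For those checking the math, the GF100 column follows from the assumed 725/1450MHz clocks, give or take rounding:

$$\begin{aligned}
\text{SP FMA rate} &= 512 \times 2\ \text{flops} \times 1450\,\text{MHz} \approx 1.49\ \text{Tflops}\\
\text{ROP rate} &= 48 \times 725\,\text{MHz} = 34.8\ \text{Gpixels/s}\\
\text{Texel rate} &= 64 \times 725\,\text{MHz} = 46.4\ \text{Gtexels/s}\\
\text{Memory bandwidth} &= (384\,\text{bits}/8) \times 4200\,\text{MT/s} = 201.6\ \text{GB/s}
\end{aligned}$$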

I should pause to explain the asterisk next to the unexpectedly low estimate for the GF100's double-precision performance. By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.
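
To put numbers on that cap: at half the single-precision rate, the chip's 512 ALUs could retire 256 DP FMAs per clock, but the GeForce limit of 64 per clock produces the table's figure:

$$64 \times 2\ \text{flops} \times 1450\,\text{MHz} = 185.6\ \text{Gflops} \approx 186\ \text{Gflops},$$

versus roughly 742 Gflops uncapped.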

Otherwise, on paper, the GF100 projects to be superior to the Radeon HD 5870 only in ROP rate and memory bandwidth. (Notably, though, these estimates omit triangle throughput, where the GF100 should have a clear edge.) That fact isn't necessarily a calamity. The GeForce GTX 280, for example, had just over half the peak shader arithmetic rate of the Radeon HD 4870 in theory, yet its delivered performance was generally superior. Much hinges on how efficiently the GF100 can perform its duties. What we can say with certainty is that the GF100 will have to achieve a new high-water mark in architectural efficiency to outperform the 5870 by a decent margin—something it really needs to do, given that it's a much larger piece of silicon.

Obviously, the GF100 is a major architectural transition for Nvidia, which helps explain its rather difficult birth. The advances it promises in both GPU computing and geometry processing capabilities are pretty radical and could be well worth the pain Nvidia is now enduring, when all is said and done. The company has tackled problems in this generation of technology that its competition will have to address eventually.

In attempting to handicap the GF100's prospects, though, I'm struggling to find a successful analog to such a late and relatively large chip. GPUs like the NV30 and R600 come to mind, along with CPUs like Prescott and Barcelona. All were major architectural revamps, and all of them conspicuously ran hot and underperformed once they reached the market. The only positive examples I can summon are perhaps the R520—the Radeon X1800 XT wasn't so bad once it arrived, though it wasn't a paragon of efficiency—and AMD's K8 processors, which were long delayed but eventually rewrote the rulebook for x86 CPUs. I suppose we'll find out soon enough where in this spectrum the GF100 will reside. TR
