What does caching do for graphics?
We've already spent ample time on this architecture's computing capabilities, so I won't retread that ground here. One question that we've had since hearing about the GF100's relatively robust cache architecture is what benefits caching might have for graphics, if any.
Most GPUs have a number of special-purpose pools of local storage. The GF100 is similar in that it has an instruction cache and a dedicated 12KB texture cache in each SM. However, each SM also has 64KB of L1 data storage that's a little bit different: it can be split either 48/16KB or 16/48KB between a local data store (essentially a software-managed cache) and a true L1 cache. For graphics, the GF100 uses the 48KB shared memory/16KB L1 cache configuration, so most of the local storage will be directly managed by Nvidia's graphics drivers, as it was in the GT200. The small L1 cache in each SM does have a benefit for graphics, though. According to Alben, if an especially long shader fills all of the available register space, registers can spill into this cache. That should avoid some worst-case scenarios that could greatly hamper performance.
More impressive is the GF100's 768KB L2 cache, which is coherent across the chip and services all requests to read and write memory. This cache's benefits for computing applications with irregular data access patterns are clear, but how does it help graphics? In several ways, Nvidia claims. Because this cache can store any sort of data, it has multiple uses: it has replaced the 256KB, read-only L2 texture cache and the write-only ROP cache in the GT200 with a single, unified read/write path that naturally maintains proper program order. Since it's larger, the L2 provides more texture coverage than the GT200's L2 texture cache, a straightforward benefit. Because it can store any sort of data, and because it may be the only local data store large enough to handle it, the L2 cache will hold the large amounts of geometry data generated during tessellation, too.
So there we have some answers. If it works well, caching should help enable the GF100's unprecedented levels of geometry throughput and contribute to the architecture's overall efficiency.
One more shot at likely speeds and feeds
Speaking of efficiency, that will indeed be the big question about the Fermi architecture and especially about the GF100. How efficient is the architecture in its first implementation?
The chip isn't in the wild yet, so no one has measured its exact die size. Nvidia, as a matter of policy, doesn't disclose die sizes for its GPUs (they are, I believe, the last straggler on this point in the PC market). But we know the transistor count is about three billion, which is, well, hefty. How so large a chip will fare on TSMC's thus far troubled 40-nm fabrication process remains to be seen, but the signs are mixed at best.
Although we don't yet have final product specs, Nvidia's Drew Henry set expectations for the GF100's power consumption by admitting the chip will draw more power under load than the GT200. That fact by itself isn't necessarily a bad thing; Intel's excellent Lynnfield processors consume more power at peak than their Core 2 Quad predecessors, but their total power consumption picture is quite good. Still, any chip this late and this large is going to raise questions, especially with a very capable, much smaller competitor already in the market.
With the new information we have about the GF100's graphics bits and pieces, we can revise our projections for its theoretical peak capabilities. Sad to say, our earlier projections were too bullish on several fronts, so most of our revisions are in a downward direction.
We don't have final clock speeds yet, but we do have a few hints. As I pointed out when we were discussing texturing, Nvidia's suggestion that the GF100's theoretical texture filtering capacity will be lower than the GT200's gives us an upper bound on clock speeds. The crossover point where the GF100 would match the GeForce GTX 280 in texturing capacity is a 1505MHz core clock, with the texturing hardware running at half that frequency. We can probably assume the GF100's clocks will be a little lower than that.
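That 1505MHz figure is easy to check. A quick sketch of the arithmetic, assuming the commonly cited unit counts (80 texture units at a 602MHz core clock for the GTX 280, and 64 texture units on the GF100 running at half its hot clock):

```python
# GeForce GTX 280: 80 texture units at the 602MHz core clock
gtx280_texel_rate = 80 * 602 / 1000  # 48.16 Gtexels/s

# GF100: 64 texture units at half the hot clock.
# Solve 64 * (hot_clock / 2) = gtx280_texel_rate for hot_clock.
crossover_hot_clock = gtx280_texel_rate * 1000 * 2 / 64  # 1505.0 MHz
```

Any hot clock below 1505MHz leaves the GF100 with less theoretical filtering capacity than the GTX 280, consistent with Nvidia's hint.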
We have another nice hint in Nvidia's claim that running the texturing hardware at half the speed of the shaders, rather than on a separate core clock, should impart a 12-14% frequency boost. In this case, I'm going to be optimistic, follow a hunch, and assume the basis of comparison is the GT200b chip in the GeForce GTX 285. A clock speed boost in that range would get us somewhere near 725MHz for the half-speed clock and 1450MHz for the shaders. The GF100's various graphics units running at those speeds would yield the following peak theoretical rates.
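Here's how that hunch cashes out numerically, assuming the GTX 285's 648MHz core clock as the baseline:

```python
gtx285_core_mhz = 648  # GT200b core clock, the assumed basis of comparison

# Apply Nvidia's claimed 12-14% boost to get a plausible clock range
low_estimate = gtx285_core_mhz * 1.12   # 725.76 MHz
high_estimate = gtx285_core_mhz * 1.14  # 738.72 MHz

# Rounding near the low end of that range gives the clocks used below
half_speed_clock = 725        # MHz, texturing/ROP domain
hot_clock = 2 * half_speed_clock  # 1450 MHz, shader domain
```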
| | GeForce GTX 285 | GF100 (projected) | Radeon HD 5870 |
| --- | --- | --- | --- |
| Process node | 55 nm @ TSMC | 40 nm @ TSMC | 40 nm @ TSMC |
| Core clock | 648 MHz | 725 MHz | 850 MHz |
| Hot clock | 1476 MHz | 1450 MHz | -- |
| Memory clock | 2600 MHz | 4200 MHz | 4800 MHz |
| SP FMA rate | 0.708 Tflops | 1.49 Tflops | 2.72 Tflops |
| DP FMA rate | 88.5 Gflops | 186 Gflops* | 544 Gflops |
| Memory bus width | 512 bit | 384 bit | 256 bit |
| Memory bandwidth | 166.4 GB/s | 201.6 GB/s | 153.6 GB/s |
| ROP rate | 21.4 Gpixels/s | 34.8 Gpixels/s | 27.2 Gpixels/s |
| INT8 bilinear texel rate (half rate for FP16) | 51.8 Gtexels/s | 46.4 Gtexels/s | 68.0 Gtexels/s |
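The GF100 column follows from straightforward unit-count arithmetic. A sketch, assuming the commonly cited GF100 configuration (512 ALUs, 64 texture units, 48 ROPs, a 384-bit memory bus) and the clock speeds projected above:

```python
SP_COUNT  = 512    # stream processors
HOT_MHZ   = 1450   # shader ("hot") clock
CORE_MHZ  = 725    # half-speed clock for texturing and ROPs
ROPS      = 48
TEX_UNITS = 64
BUS_BITS  = 384
MEM_MTPS  = 4200   # memory transfer rate, MT/s

sp_fma_tflops  = SP_COUNT * 2 * HOT_MHZ / 1e6  # FMA counts as 2 flops -> ~1.48
dp_fma_gflops  = 64 * 2 * HOT_MHZ / 1e3        # GeForce cap: 64 DP FMAs/clock -> 185.6
bandwidth_gbps = BUS_BITS / 8 * MEM_MTPS / 1e3 # 48 bytes/transfer -> 201.6
rop_gpix       = ROPS * CORE_MHZ / 1e3         # 34.8
texel_gtex     = TEX_UNITS * CORE_MHZ / 1e3    # 46.4
```

Note that the double-precision line uses the artificially capped GeForce rate of 64 FMA operations per clock, not the architecture's full half-of-single-precision capability.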
I should pause to explain the asterisk next to the unexpectedly low estimate for the GF100's double-precision performance. By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock, one-fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.
Otherwise, on paper, the GF100 projects to be superior to the Radeon HD 5870 only in terms of ROP rate and memory bandwidth. (Then again, it's now suddenly notable that we're not estimating triangle throughput. The GF100 will have a clear edge there.) That fact isn't necessarily a calamity. The GeForce GTX 280, for example, had just over half the peak shader arithmetic rate of the Radeon HD 4870 in theory, yet the GTX 280's delivered performance was generally superior. Much hinges on how efficiently the GF100 can perform its duties. What we can say with certainty is that the GF100 will have to achieve a new high-water mark in architectural efficiency in order to outperform the 5870 by a decent margin, something it really needs to do, given that it's a much larger piece of silicon.
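As a sanity check on that GTX 280 comparison, here's the arithmetic, assuming the commonly cited specs (240 ALUs at 1296MHz for the GTX 280, 800 ALUs at 750MHz for the HD 4870) and counting a multiply-add as two flops:

```python
# GeForce GTX 280: 240 SPs x 2 flops (MAD) x 1296MHz shader clock
gtx280_gflops = 240 * 2 * 1296 / 1e3  # 622.08 Gflops

# Radeon HD 4870: 800 ALUs x 2 flops (MAD) x 750MHz
hd4870_gflops = 800 * 2 * 750 / 1e3   # 1200.0 Gflops

ratio = gtx280_gflops / hd4870_gflops  # ~0.52: "just over half"
```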
Obviously, the GF100 is a major architectural transition for Nvidia, which helps explain its rather difficult birth. The advances it promises in both GPU computing and geometry processing capabilities are pretty radical and could be well worth the pain Nvidia is now enduring, when all is said and done. The company has tackled problems in this generation of technology that its competition will have to address eventually.
In attempting to handicap the GF100's prospects, though, I'm struggling to find a successful analog to such a late and relatively large chip. GPUs like the NV30 and R600 come to mind, along with CPUs like Prescott and Barcelona. All were major architectural revamps, and all of them conspicuously ran hot and underperformed once they reached the market. The only positive examples I can summon are perhaps the R520 (the Radeon X1800 XT wasn't so bad once it arrived, though it wasn't a paragon of efficiency) and AMD's K8 processors, which were long delayed but eventually rewrote the rulebook for x86 CPUs. I suppose we'll find out soon enough where in this spectrum the GF100 will reside.