On why AMD wanted to buy ATI

The news about AMD moving to acquire ATI has prompted lots of discussion about the possible motivations at AMD HQ for this move. One line of thinking among the geekier types has focused on how AMD might meld ATI’s GPU technology with its own CPU technology. As I considered these possibilities, I found myself rummaging through my archives to find an article I’d written about the coming collision between GPUs and CPUs—back in early 2004. The article was originally published in the April 2004 issue of PC Enthusiast Magazine, and since that publication isn’t around anymore, I’ve decided to reprint it below. I think it sheds some light on one possible reason why AMD would want to own a graphics company like ATI.

Recall Moore’s Law, which essentially states that the number of transistors possible on a chip will double every 18 months. The corollaries of this law—chip clock speeds will go up, performance will rise, die sizes will shrink, etc.—are often discussed as if they were the law itself, but Moore’s Law is primarily about transistor counts.
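To put the doubling rule in concrete terms, here is a quick back-of-the-envelope sketch (the function and the starting figure are illustrative, not from the article):

```python
def projected_transistors(initial, months):
    """Project a transistor count forward, doubling every 18 months
    per Moore's Law as stated above."""
    return initial * 2 ** (months / 18)

# One doubling period: a 30-million-transistor chip grows to 60 million
# after 18 months, and to 120 million after 36.
count_18mo = projected_transistors(30e6, 18)
count_36mo = projected_transistors(30e6, 36)
```

The compounding is the point: over a few years, designers inherit several times the transistor budget they started with, and the question becomes what to spend it on.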

For years, the CPU guys have been taking advantage of Moore’s Law in various ways, by adding transistors and building more complex processors capable of doing more work per clock cycle, and by ratcheting up clock speeds to gain greater absolute performance. Intel has long been a profit-making machine built on the assumption that Moore’s Law would continue to prove true. For some time now, doomsayers have been predicting the death of Moore’s Law, speculating that physical limitations would conspire to slow down advancements in chip fabrication technology. These discussions are intriguing in theory but snooze inducing in practice, because so far, Moore continues to be right.

The more interesting question, perhaps, is what chip designers will do with their growing transistor budgets. Recently, a talk by Bob Colwell, one of the architects of the Intel P6 core (inside everything from the Pentium Pro to Pentium III), became available online, and it makes for very enlightening reading. Colwell expresses concern about the direction the CPU design world is moving, noting that processor architects seem to be having trouble taking advantage of growing transistor counts to get more performance.

Intel’s new Prescott chip is emblematic of this problem. Prescott is a major overhaul of the Pentium 4 design, with a much deeper pipeline, many more transistors, larger caches, and hundreds of internal tweaks and additions. Prescott is slower clock for clock than its Northwood predecessor, runs hotter, and draws more power, although Intel may yet ramp Prescott up to higher clock speeds and get its power and heat problems in check. The Prescott P4 may still become a very decent product, but its problems are far from trivial.

The trouble is that CPU designers have already exploited many of the obvious tricks for extracting additional performance out of general-purpose microprocessors by adding transistors, and we may be approaching the end of the linear performance increases in CPU power to which we’ve become accustomed. The massive complexity of today’s CPUs is becoming increasingly difficult to manage; already, CPU designers are moving toward more parallelism. Intel’s Hyper-Threading is a start, and dual-core designs may be next, with a pair of processors glued together on a single die. However, software must be multithreaded in order to take advantage of simultaneous multithreading and multiprocessing. Even then, the performance gains are of a different order. I like the creamy smooth computing experience that comes from multiple processors, but generally, a pair of 1GHz Athlons can’t compare to a 2GHz Athlon 64.

I have some faith in the ability of the smart folks at companies like AMD, IBM, and Intel to overcome parts of these CPU design challenges, but the fact is, it’s a tough road ahead.

Contrast that dilemma with what’s happening in graphics, an area where GPU performance and capabilities have been skyrocketing over the past ten years. GPUs have followed Moore’s predicted progression in a much more straightforward way than CPUs have. The original Radeon had about 30M transistors, the Radeon 8500 about 60M, and the Radeon 9700 roughly 110M. GPU clock speeds are up, too, but not by all that much, because GPUs don’t need clock speed increases in order to deliver better performance. The graphics guys share some concerns with the CPU guys related to power dissipation and heat density, but they have the incredible benefit of working on a computing problem that is inherently and thoroughly parallelizable.

At the most basic level, upping GPU performance is simple: add more pipelines, get more power. The current market leader, the Radeon 9800XT, has eight pipes, and some rumors point to NVIDIA’s upcoming NV40 having sixteen. GPUs can continue down this path for a good long time as Moore’s Law allows.
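The reason this scaling works so cleanly is that each pixel’s final value depends only on its own inputs, so pixels can be divided among any number of pipelines with no coordination between them. A minimal sketch of the idea (the shading function is a made-up stand-in, not real GPU code):

```python
def shade_pixel(x, y):
    # Hypothetical per-pixel computation -- a simple gradient here.
    # The key property: it reads nothing but its own coordinates.
    return (x * 7 + y * 13) % 256

def render(width, height, num_pipes):
    """Divide the pixels among num_pipes independent 'pipelines'."""
    framebuffer = [0] * (width * height)
    for pipe in range(num_pipes):
        # Each pipe handles every num_pipes-th pixel, independently.
        for i in range(pipe, width * height, num_pipes):
            framebuffer[i] = shade_pixel(i % width, i // width)
    return framebuffer
```

Run with one pipe or sixteen, the image is identical; only the wall-clock time changes. That is the luxury the graphics guys enjoy and the CPU guys don’t.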

However, GPUs are relatively young inventions, and they are still growing up. Graphics chips used to be very much fixed-function devices—collections of custom logic designed to perform certain tasks—so that if an engineer wanted a feature like bump mapping, he’d add bump mapping logic. Nowadays, GPUs have pixel shader units, and software types write shader programs to produce bump-mapping effects.

GPUs still contain a fair amount of function-specific logic for the sake of speed and efficiency, but the trend over time should be toward more generality, relative simplicity, and more parallel processing power. In the generation of GPUs after the one just on the horizon—the post-DirectX 9 era—pixel and vertex shaders will merge, or at least begin to merge, sharing a common instruction set and possibly on-chip resources as well. I’ve heard whispers that Microsoft’s Longhorn OS, the next big spin of Windows, will grant GPUs a real, unified storage hierarchy like CPUs have, complete with virtual memory. In short, GPUs are becoming highly parallel general-purpose processors, putting them on a collision course with CPUs. Already, researchers are looking into using DirectX 9-class GPUs as high-powered engines for scientific computing applications.

Similarly, as linear performance gains become more difficult to achieve, CPU guys will be irresistibly tempted to use their ballooning transistor budgets to incorporate more GPU-like capabilities into their processors. CPUs already have single-instruction multiple-data (SIMD) extensions like SSE2 and AltiVec, and Prescott’s new SSE3 extensions include basic vertex shader instructions. CPUs are just a little pokey at graphics-style math because they don’t have the internal parallelism GPUs do . . . yet.
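The SIMD idea behind those extensions can be sketched in a few lines. A conventional FPU loop issues one addition per instruction; an SSE packed add operates on a 128-bit register holding four 32-bit floats, processing all four lanes at once. A conceptual model in Python (not actual intrinsics):

```python
def add_scalar(a, b):
    # One addition per "instruction": how a plain FPU loop proceeds.
    return [x + y for x, y in zip(a, b)]

def add_packed4(a, b):
    # Four lanes per "instruction": the behavior of an SSE packed
    # single-precision add on a 128-bit register.
    out = []
    for i in range(0, len(a), 4):
        lane_a, lane_b = a[i:i + 4], b[i:i + 4]            # load two registers
        out.extend(x + y for x, y in zip(lane_a, lane_b))  # one packed op
    return out
```

Four lanes is a modest degree of parallelism next to a GPU’s banks of pixel pipelines, which is exactly the gap the article expects CPU designers to start closing.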

When Intel and AMD run smack into ATI and NVIDIA, I predict big things . . . maybe a loud crash and twisted metal, or perhaps just mergers and acquisitions.

Comments closed
    • maxxcool
    • 13 years ago

I think ATI getting snapped up by AMD fits.

One thing to think about is AMD’s future designs, where the CPU itself becomes modular, allowing things to be “snapped” in and out of the core with little re-spin or too much time on the table.

Their idea of an assembly-line CPU would allow ATI to design and collaborate on putting GPU core elements into a CPU, and I’m sure that AMD’s design implemented onto a GPU is just as juicy.

All this talk of PCI-E add-in cards and accelerators makes sense. You can’t keep shoving crap onto a die. 64 execution units on a GPU? 384-bit buses? Mammoth die sizes?

It is scaling to its own closure, and it’s very close, I think. I ranted sometime in 2005 about this and received A LOT of flak, but here we are, with SGI and Sun talking about math co-processors.

Frankly, burn me for a heretic, but it’s about damn time. The FPUs on CPUs are fixed and in a lot of respects slow; this is where ATI’s FireStream comes in. When SGI/Sun/Cray and whoever else announce their math cards, I will bet a boatload of nickels the logo will read xxxxxx by ATI.

I applaud the use of drop-in cards. I think that FPU and graphics power should reside PURELY on a daughter card, not on the CPU. The CPU should go back to managing the bus, memory, and mundane integer tasks.

I also think that south bridges need to be redesigned. At this point there is no need for ATA or SATA to talk to the CPU for anything.

To hell with bus mastering. We need to get the USB/CDROM/SATA/LAN devices on their *OWN* processor bus that shares main memory, or a shared cache. At no point should the CPU be responsible for any of the processing load of these devices. I liked FireWire; it was great… poorly marketed, but great.

    Meh…. Rant….

    • Ardrid
    • 13 years ago

    Who knew Damage was a psychic? 😉

      • Toasty
      • 13 years ago

      I coulda told you that 5 years ago… 😉

    • Pete
    • 13 years ago

    Props for prescience, and for an article well-written.

      • blastdoor
      • 13 years ago

Indeed, as Stephen Colbert might say, “YOU CALLED IT!!!!” (cue the balloons)
