2016 will be remembered for a lot of things. For graphics cards, last year marked the long-awaited transition to next-generation process technologies. It was also the year that the graphics card arguably came into its own as a distinct platform for compute applications. Hardly an Nvidia presentation went by last year without Jen-Hsun Huang touting the power of the graphics processor for self-driving cars, image recognition, machine translation, and more. The company's various Pascal GPUs set new bars for gaming performance, too, but it's clear that gaming is just one job that future graphics cards will do.
AMD is, of course, just as aware of the potential of the graphics chip for high-performance computing. Even before ATI's merger with AMD and the debut of graphics cards with unified stream processor architectures, the company explored ways to tap the potential of its hardware to perform more general computing tasks. In the more than ten years since, graphics chips have been pressed into compute duty more and more.
AMD's next-generation graphics architecture, Vega, is built for fluency with all the new tasks that graphics cards are being asked to do these days. We already got a taste of Vega's versatility with the Radeon Instinct MI25 compute accelerator, and we can now explain some of the changes in Vega that make it a better all-around player for graphics and compute work alike.
Memory, memory everywhere
In his presentation at the AMD Tech Summit in Sonoma last month, Radeon Technologies Group chief Raja Koduri lamented the fact that data sets for pro graphics applications are growing to petabytes in size, and high-performance computing data sets to exabytes of information. Despite those increases, graphics memory pools are still limited to just dozens of gigabytes of RAM. To help crunch these increasingly enormous data sets, Vega's memory controller—now called the High Bandwidth Cache Controller—is designed to help the GPU access data sets outside of the traditional pool of RAM that resides on the graphics card.
The "high-bandwidth cache" is what AMD will soon be calling the pool of memory that we would have called RAM or VRAM on older graphics cards, and on at least some Vega GPUs, the HBC will consist of a chunk of HBM2 memory. HBM2 offers twice the bandwidth per stack (256 GB/s) of first-generation HBM, and up to eight times the capacity per stack. AMD says HBM stacks will continue to get bigger, offer higher performance, and scale in a power-efficient fashion, too, so it'll remain an appealing memory technology for future products.
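That 256 GB/s figure is easy to sanity-check. As a rough sketch (the pin count and data rate below come from the JEDEC HBM2 spec, not from AMD's presentation), each HBM2 stack drives a 1024-bit interface at up to 2 Gbit/s per pin:

```python
# Back-of-the-envelope check of the per-stack HBM2 bandwidth figure.
# Assumed parameters (JEDEC HBM2, not stated in the article):
PINS_PER_STACK = 1024      # interface width in bits, per stack
GBIT_PER_PIN = 2           # peak data rate per pin, in Gbit/s

# Divide by 8 to convert from gigabits to gigabytes per second.
bandwidth_gb_s = PINS_PER_STACK * GBIT_PER_PIN / 8
print(bandwidth_gb_s)      # 256.0 GB/s per stack
```

The math lines up with the doubled-bandwidth claim: first-generation HBM ran the same 1024-bit interface at half the per-pin rate, for 128 GB/s per stack.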
HBM2 is only one potential step in a hierarchy of new caches where data to feed a Vega GPU could reside, however. The high-bandwidth cache controller has the ability to address a pool of memory up to 512TB in size, and that pool could potentially encompass other memory locations like NAND flash (as seen on the Radeon Pro SSG), system memory, and even network-attached storage. To show the HBCC in action, AMD demonstrated a Vega GPU displaying a photorealistic representation of a luxurious bedroom produced from hundreds of gigabytes of data using its ProRender backend.
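For a sense of scale, that 512TB ceiling corresponds to a 49-bit virtual address space, a rough sketch of the arithmetic (the 49-bit interpretation is an inference from the 512TB figure, not something AMD's slide spelled out):

```python
# 2**49 bytes is exactly 512 TiB, so a 512TB-addressable controller
# implies roughly 49 bits of virtual address space.
ADDRESS_BITS = 49                        # inferred from the 512TB figure
addressable_bytes = 2 ** ADDRESS_BITS

tib = addressable_bytes / 2**40          # convert bytes to TiB
print(tib)                               # 512.0
```

For comparison, that is orders of magnitude beyond the dozens of gigabytes of on-card RAM the article describes, which is the whole point of letting the HBCC page data in from flash, system memory, or the network.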