
Intel: gunning for graphics
The rumors have been flying for the past few months about Intel potentially entering the high-end graphics market, but this IDF put to rest any reasonable doubt. We didn't see any formal announcement of a product or a declaration of intent to compete with the likes of Nvidia and ATI, but the signs were everywhere, starting with this one in the IDF showcase.

This was hanging next to a demo of real-time ray tracing running on an eight-core Clovertown rig, tended by a couple of Russian software developers who work for Intel. Like so:

The system was churning out about six frames per second as I watched. The resolution was low and image quality was iffy in places, like flat floor surfaces. Still, the fact that this box could produce ray-traced images in real time using the CPU was impressive in its own way.

This whole endeavor is, of course, presently just a research project at Intel, yet one couldn't help but notice the declarations that real-time ray tracing is "CPU-friendly," computationally efficient, and "The future for games." Not only is Intel eyeing graphics closely, but it also seems to be considering the possibility that today's traditional graphics rendering method might be supplanted by ray-tracing algorithms performed on a CPU.

I've already written about Intel's 80-core teraflop chip, another of the company's research projects. The really intriguing information about Intel's intentions for graphics, though, came in the whitepaper the company released about its tera-scale computing research. The 80-core chip shown at IDF didn't have x86-compatible cores onboard, but Intel is apparently investigating the possibility of using scaled-down x86 cores in future tera-scale products. The paper also mentions the possibility of using special-purpose functional units to accomplish specific tasks. Here's the example given:

Notice the presence of a large number of "streamlined IA core" tiles in addition to three graphics-specific tiles (possibly for texturing) and an HD video unit. Now get this: the whitepaper on SSE4 claims that the dot product instruction in SSE4 is good for "3-D content creation, gaming, and support for languages like CG and HLSL." That's true enough, but the use of this example is telling. Those languages, of course, are high-level shading languages for graphics. Imagine, if you will, a future in which graphics shader programs are compiled to a subset of the x86 ISA, including SSE4, for execution on an array of simplified x86 cores on an Intel microprocessor. Looks to me like Intel has already imagined that future.
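To see why a single dot-product instruction matters for shading, consider the Lambertian diffuse term, one of the most common operations a Cg or HLSL pixel shader performs. In scalar C it looks like this (a generic sketch of the math, with names of my own choosing, not anything from Intel's whitepaper); SSE4's dot product instruction collapses exactly this multiply-add chain into one operation across a four-element vector:

```c
typedef struct { float x, y, z; } Vec3f;

static float dot(Vec3f a, Vec3f b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Lambertian diffuse term: the clamped dot product of the surface
 * normal and the direction toward the light, both assumed normalized. */
float lambert(Vec3f normal, Vec3f to_light)
{
    float d = dot(normal, to_light);
    return d > 0.0f ? d : 0.0f;   /* surfaces facing away get no light */
}
```

A shader compiler targeting simplified x86 cores could map the `dot` call here straight onto the SSE4 instruction, which is presumably the point of the whitepaper's example.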

There's much more to Intel's tera-scale research than just graphics, of course—much more. But CPUs and GPUs really do seem to be on a collision course, and both types of chips are accelerating toward this destiny. Coming soon: our coverage of ATI's Stream computing event in San Francisco the day after IDF. 
