Intel confirms that Xe will have ray-tracing hardware acceleration


It would have been all too easy to overlook a blog post about professional rendering and visual effects over at Intel's IT Peer Network site. Most of the post talks about Intel's rendering framework and how it gets used in the VFX industry. However, a kernel of information about the company's upcoming Xe graphics chips was buried in the post. Given how starved we are for details on the all-new lineup, it's pretty tantalizing. Here, I'll spoil it for you: Xe is going to have hardware-accelerated ray-tracing.


Rendered on Intel hardware. CPUs, though.

That's probably not a huge shocker to anyone who follows the industry. AMD's next graphics hardware will apparently have some form of ray-tracing support, and I'm sure I don't have to explain what the "RT" in "GeForce RTX" stands for. With that said, Intel's post says specifically that "the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support" for Intel's rendering framework [emphasis mine]. The presence of a feature in the datacenter-oriented parts says little about its availability to regular Joes like this writer.

Furthermore, Intel's rendering framework is largely oriented toward offline rendering, not the real-time work that RTX targets. While Intel talks at length about its open-source rendering libraries, nowhere in the post does it mention Microsoft's DXR, or Windows at all. Still, it's reasonable to think that Xe consumer parts could launch with DXR hardware acceleration, given the blog post's emphasis on the importance of ray-tracing for both graphics and other purposes.

I've largely glossed over the software side of Intel's announcement, but for those curious, the company says it just released an OSPRay plugin for Pixar's USD Hydra that enables "interactive, up-to-real-time, photo-realistic global-illuminated previews" in supported applications. That plugin uses the company's also-recently-released Open Image Denoise library, which is itself neural-network-based just like Nvidia's implementation. It makes use of the CPU's SSE, AVX, and even AVX-512 instructions (when available) to accelerate the denoising process. When Xe comes out, it will be interesting to see if it makes use of Open Image Denoise as-is, or whether that process will move to the GPU as in Nvidia's method.
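For anyone curious what driving Open Image Denoise actually looks like, here's a rough sketch based on the library's published C API. The filter runs entirely on the CPU, which is where those SSE/AVX/AVX-512 code paths come in; the `denoise` wrapper function and the buffer layout are illustrative placeholders rather than anything Intel ships.

```c
// Rough sketch: denoising a noisy HDR color buffer with Open Image Denoise's
// generic "RT" filter. Buffer pointers and dimensions are placeholders.
#include <OpenImageDenoise/oidn.h>
#include <stdbool.h>
#include <stdio.h>

void denoise(float* color, float* output, int width, int height)
{
    // Create and commit a device; the library picks SSE/AVX/AVX-512 code
    // paths automatically based on what the host CPU supports.
    OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
    oidnCommitDevice(device);

    // "RT" is the general-purpose ray-tracing denoising filter.
    OIDNFilter filter = oidnNewFilter(device, "RT");
    oidnSetSharedFilterImage(filter, "color",  color,  OIDN_FORMAT_FLOAT3,
                             width, height, 0, 0, 0);
    oidnSetSharedFilterImage(filter, "output", output, OIDN_FORMAT_FLOAT3,
                             width, height, 0, 0, 0);
    oidnSetFilter1b(filter, "hdr", true);  // input is high dynamic range
    oidnCommitFilter(filter);

    // Run the neural-network denoiser over the image.
    oidnExecuteFilter(filter);

    const char* errorMessage;
    if (oidnGetDeviceError(device, &errorMessage) != OIDN_ERROR_NONE)
        printf("OIDN error: %s\n", errorMessage);

    oidnReleaseFilter(filter);
    oidnReleaseDevice(device);
}
```

Note that everything above happens in system memory on the host processor. A GPU-side approach like Nvidia's keeps the noisy buffers on the graphics card instead, which is part of why it will be interesting to see where this step ends up once Xe arrives.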

Unfortunately, we've still got a while to wait before we can even guess at expected performance, price, or power figures for Xe; Intel isn't expected to launch anything until next year. Here's hoping we hear some more details as we creep closer to launch.
