AMD's heterogeneous queuing aims to make CPU, GPU more equal partners

Slowly but surely, details about AMD's Heterogeneous Systems Architecture are trickling out. In April, we learned about hUMA, which will allow the Kaveri APU's CPU and GPU components to access each other's memory. Today, we can tell you about hQ. Also known as heterogeneous queuing, hQ defines how work is distributed to the processor's CPU and GPU.

The current queuing model allows applications to generate work for the CPU. The CPU can generate work for itself, too, and it does so efficiently. However, passing tasks to the GPU requires going through the OS, adding latency. The GPU is also a second-class citizen in this relationship; it can't generate work for itself or for the CPU.

Heterogeneous queuing aims to make the CPU and GPU equal partners. It allows both components to generate tasks for themselves and for each other. Work is packaged using a standard packet format that will be supported by all HSA-compatible hardware, so there's no need for software to use vendor-specific code. Applications can put packets directly into the task queues accessed by the hardware. Each application can have multiple task queues, and a virtualization layer allows HSA hardware to see all the queues.

AMD's current hQ implementation uses hardware-based scheduling to manage how the CPU and GPU access those queues. However, that approach may not be required by the final HSA specification. Although hQ is definitely part of the spec, AMD says the OS could get involved in switching the CPU and GPU between the various task queues.

At least initially, Windows will be the only operating system with hQ support. AMD is working with Linux providers, and it's also looking into supporting other operating systems.
