Baidu's DeepBench can now measure inference performance

A lot of companies are pouring a lot of money into accelerating and improving deep learning tasks, but how do you tell which hardware offerings deliver the best value for the purpose? Naturally, with benchmarks. China's Baidu, a company that can fairly be described as China's answer to Google, operates a research division called Baidu Research. Last year, that division launched the open-source DeepBench tool to wide acclaim. Now the company has released a major update that adds benchmarks for deep learning inference alongside the existing training benchmarks.

For those new to the concept, "inference" is the practical work of using a neural network: applying a model established through deep learning training to make judgments about new data. Previous versions of DeepBench focused on training performance, since training is usually the more computationally intensive part of developing a new AI system. The new update adds new training and inference kernels and specifies minimum precision requirements for both types of task, both of which can often be executed satisfactorily with reduced-precision arithmetic.
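To make the distinction concrete, here is a minimal, illustrative sketch, not DeepBench code, of inference as a single forward pass through a tiny pretrained network run at reduced precision using NumPy's float16 type. All layer sizes, weights, and names here are invented for the example:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

# Hypothetical "trained" weights for a tiny two-layer network.
# In a real system these would come from a completed training run.
rng = np.random.default_rng(0)
w1 = rng.standard_normal((8, 16)).astype(np.float16)
w2 = rng.standard_normal((16, 4)).astype(np.float16)

def infer(batch):
    """One forward pass: the whole of 'inference' for this model."""
    hidden = relu(batch @ w1)   # GEMM + activation, all in float16
    return hidden @ w2          # scores for 4 hypothetical classes

# New, unseen data arrives in float16 to cut memory and bandwidth.
new_data = rng.standard_normal((32, 8)).astype(np.float16)
scores = infer(new_data)
print(scores.shape)  # (32, 4)
```

The half-precision storage here stands in for the kind of reduced-precision execution DeepBench's inference kernels are meant to measure; dedicated inference hardware often supports such formats natively.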

The ability to benchmark inference performance is equally important, though, especially as dedicated inference hardware becomes more popular. A thorough understanding of inference performance could lead to better-adapted algorithms, further-optimized hardware, and ultimately an improved experience for end users. If you want to get started, all of the source is available on Baidu Research's GitHub.
