So you remember Google's Tensor Processing Unit? If not, all you really need to know is that the chip is a custom ASIC designed by Google to accelerate the inference phase of machine-learning workloads. Google initially claimed that the TPU could improve performance-per-watt on those tasks by a factor of ten compared to traditional CPUs and GPUs. Now, the company has released some performance data in the form of a study analyzing how the TPU has fared since its quiet introduction in 2015.
The short version is that predicting a 10x uplift in performance-per-watt was Google's way of being modest. The actual increase for that metric was 30 to 80 times that of conventional hardware, depending on the scenario. When it comes to raw speed, Google says its TPU is 15 to 30 times faster than standard hardware. The software that runs on the TPU is based on Google's TensorFlow machine-learning framework, and some of these performance gains came from optimizing it. The study's authors say there are further optimization gains on tap, too.
Apparently, Google saw the need for a chip like the TPU as far back as six years ago. The company uses machine-learning algorithms in many of its products, including Image Search, Photos, Cloud Vision, and Translate, and machine learning is by nature computationally intensive. By way of example, the Google engineers said that if people used voice search for just three minutes a day, running the associated speech-recognition tasks without the TPU would have required the company to double its number of datacenters.