Facebook speeds up machine learning with Nvidia Tesla power


— 4:53 PM on December 11, 2015

Neural networks, once considered too resource-hungry to be a practical approach to machine learning, have become a mainstay in the field over the last decade or so. They still require an enormous amount of computing power, though, and plain old CPUs aren't up to the task, particularly in deep learning applications. That's where GPU compute comes in.
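For a rough sense of why, consider that a neural network's forward pass is dominated by large matrix multiplies. The minimal NumPy sketch below (with made-up layer sizes) shows the kind of dense, parallel arithmetic involved; it's precisely this workload that a GPU's thousands of cores chew through far faster than a CPU can.

    import numpy as np

    # A minimal, illustrative forward pass for one fully connected layer.
    # The sizes here are invented; real networks stack many such layers.
    batch, n_in, n_out = 256, 4096, 4096
    x = np.random.randn(batch, n_in).astype(np.float32)   # input activations
    w = np.random.randn(n_in, n_out).astype(np.float32)   # layer weights
    b = np.zeros(n_out, dtype=np.float32)                 # biases

    # ReLU(x.W + b): the matrix multiply alone is roughly 8.6 billion
    # floating-point operations, and training repeats it millions of times.
    h = np.maximum(x.dot(w) + b, 0.0)
    print(h.shape)  # (256, 4096)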

Facebook is the latest player in this field to take advantage of Nvidia's Tesla compute cards—more specifically, the Tesla M40. The social media company has recently announced "Big Sur," a system designed specifically for training neural networks. Big Sur is built on an Open Rack V2 platform, and each box can take up to eight compute cards. Facebook claims its design is more versatile than the off-the-shelf solutions it was using before, and twice as fast as those systems, too. The company plans to submit Big Sur's design materials to the Open Compute Project, letting everyone who wants to build one in on the fun.

Facebook isn't the only company doing Serious Business with GPU accelerators. IBM recently added support for the Tesla K80 to its Watson platform. Microsoft's Computational Network Toolkit is CUDA-enabled, and its image recognition system also got a GPU-powered boost. Google's TensorFlow likewise works with CUDA. It's not every day one can say a new era is dawning, but that certainly seems to be the case of late for machine learning and artificial intelligence applications.
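For a concrete taste of what that CUDA support looks like in practice, here's a minimal sketch using TensorFlow's device-placement API as it shipped in late 2015. It assumes a CUDA-capable GPU is installed and visible to TensorFlow as /gpu:0.

    import tensorflow as tf

    # Pin a small matrix multiply to the first CUDA GPU.
    with tf.device('/gpu:0'):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
        c = tf.matmul(a, b)

    # log_device_placement=True makes the runtime report which device ran each op.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))

The same mechanism extends to multi-GPU boxes like Big Sur: ops pinned to /gpu:1 through /gpu:7 land on the other cards.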
