Nvidia occasionally boasts that its Tesla graphics processors deliver supercomputer-like performance in desktops and workstations. But what happens if you stick them in an actual supercomputer? Does it become self-aware?
The people at the Tokyo Institute of Technology tried it, augmenting their Tsubame supercomputer with 170 Tesla S1070 1U systems (each containing four Tesla GPUs and 960 stream processors). Tokyo Tech's Satoshi Matsuoka says the Tesla GPUs "delivered speed-ups that we had never seen before, sometimes even orders of magnitude." No word on self-awareness yet, though.
According to the Top500.org list, the supercomputer's peak floating-point performance jumped from 109,728 to 161,816 gigaFLOPS (roughly 110 to 162 teraFLOPS) between the June and November lists, and maximum measured performance rose from 67.7 to 77.5 teraFLOPS in Linpack. The Tsubame's Sun Fire x4600/x6250 server cluster also gets help from ClearSpeed CSX600 accelerators, though.
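For the curious, the hardware and Top500 figures above pencil out like this. A quick back-of-the-envelope sketch (the Rpeak values are the June and November list entries, which Top500 reports in gigaFLOPS):

```python
# Illustrative arithmetic only, based on the figures quoted in the article.
S1070_SYSTEMS = 170
GPUS_PER_SYSTEM = 4                # each Tesla S1070 holds four GPUs
STREAM_PROCESSORS_PER_SYSTEM = 960 # 240 stream processors per GPU

total_gpus = S1070_SYSTEMS * GPUS_PER_SYSTEM
total_sps = S1070_SYSTEMS * STREAM_PROCESSORS_PER_SYSTEM

# Peak (Rpeak) figures from the Top500 list, in gigaFLOPS:
rpeak_june_gflops = 109_728
rpeak_nov_gflops = 161_816
delta_tflops = (rpeak_nov_gflops - rpeak_june_gflops) / 1000

print(f"GPUs added: {total_gpus}")              # 680
print(f"Stream processors: {total_sps}")        # 163,200
print(f"Peak gain: ~{delta_tflops:.1f} TFLOPS") # ~52.1
```

So the GPU upgrade accounts for roughly 52 of Tsubame's 162 peak teraFLOPS, about a third of the machine's theoretical throughput.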
Nvidia writes in its press release that adding Tesla GPUs placed the Tsubame "again . . . amongst the top ranks in the world's Top 500 Supercomputers." That's true, but the cluster was ranked 24th in June, and it's now 29th. To be fair, it likely would have slipped further without the Tesla GPUs.