The chips on our video cards have up to three times as many transistors as our CPUs, and most of the time they aren't being used. Unless we're running a 3D game or some rendering program, those 80+ million transistors sit idle, yet the FAN STILL GOES!!!
What if nVidia and ATI (and whoever else feels like it) added a few hundred transistors that let the computer pump data through the card for analysis? It could obviously be software-based so users could disable it when they wanted nothing processed, and, like Prime95, SETI@home, FAH, United Devices, and all the others, it would run at a low [or idle, depending on the OS] priority. Intel, AMD, SiS, VIA, ATI, and nVidia could easily come up with a standard that would enable this.
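To make the idea a bit more concrete, here's a rough sketch of how a client for such a standard might behave: a background worker drops itself to the lowest scheduling priority and feeds chunks of data to the card through some vendor-neutral driver call, with a software switch to turn the whole thing off. The function names (gpu_submit_chunk, gpu_read_result) are made up for illustration; no such standard exists yet, so they're stubbed out here just so the example compiles.

```c
/* Hypothetical sketch: a low-priority worker feeding analysis chunks
 * to the video card through an imaginary vendor-neutral API.
 * gpu_compute_enabled, gpu_submit_chunk(), and gpu_read_result() are
 * invented names standing in for whatever the chip makers might
 * eventually standardize on. */
#include <stdio.h>
#include <unistd.h>     /* nice() -- lower our own scheduling priority */

static int gpu_compute_enabled = 1;   /* software toggle the user could flip off */

/* Stand-ins for the proposed driver calls (simulated here). */
static int    gpu_submit_chunk(const double *data, int n) { (void)data; return n; }
static double gpu_read_result(void)                       { return 0.0; }

int main(void)
{
    double chunk[256] = {0};

    /* Run at the lowest priority, like Prime95 or SETI@home,
     * so games and normal programs never notice us. */
    nice(19);

    while (gpu_compute_enabled) {
        /* ...fill chunk[] with work units fetched from the project server... */
        gpu_submit_chunk(chunk, 256);           /* card crunches it while it would otherwise idle */
        printf("result: %f\n", gpu_read_result());
        break;  /* one pass for the sketch; a real client would loop forever */
    }
    return 0;
}
```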
I'm sure it would not be too hard to implement. Think about games today. Take Counterstrike, for example. It supports ATI's TRUFORM technology, but if the card doesn't have the capability, the game works just as well without it. The Radeon 8500 introduced SMOOTHVISION technology, but Counterstrike doesn't support it, so it simply isn't used. No repercussions there, either. GPU-based distributed computing should be just as simple to add to future chips, and the software shouldn't be too hard to include in operating systems or in the driver set shipped with the card.
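That TRUFORM-style "use it if it's there" behavior is just a capability query with a graceful fallback. Here's a tiny sketch of the same pattern applied to number crunching; gpu_has_compute() and gpu_crunch() are invented names for the query and offload calls such a standard might define, stubbed out so the example stands on its own.

```c
/* Hypothetical sketch of capability detection with graceful fallback,
 * the same way games probe for TRUFORM.  gpu_has_compute() and
 * gpu_crunch() are invented names for illustration only. */
#include <stdio.h>

static int    gpu_has_compute(void)                      { return 0; }  /* pretend driver query: this card can't */
static double gpu_crunch(const double *d, int n)         { (void)d; (void)n; return 0.0; }

static double cpu_crunch(const double *d, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += d[i] * d[i];
    return sum;
}

int main(void)
{
    double data[64] = {1.0, 2.0, 3.0};

    /* If the capability is missing, nothing breaks --
     * the work just runs on the CPU like it always has. */
    double result = gpu_has_compute() ? gpu_crunch(data, 64)
                                      : cpu_crunch(data, 64);
    printf("%f\n", result);
    return 0;
}
```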
What do you people think about this idea? I'd like to know what the general opinion is. Feedback from chip-making people would be great.