Flying Fox wrote:IIRC the FP units in modern-day CPUs are doing double duty running SSE instructions too? So there is still a use for that. Simply put, the unit has been integrated, not taking up that much die space, and the vendors just leave it there. How much do you think Intel/AMD/SunOracle/etc can save if they take out the FPU from their cores? $5 from the chip price?
If they play their cards right, it could be worth another core, although that depends on how many cores the chip has in the first place.
Flying Fox wrote:Shining Arcanine wrote:A server is a machine in a standard ATX or blade case that is dedicated to handling multiple users.Great, all those 1U-4U rack mounted computers are not servers anymore.
Perhaps I should have been more clear on that, as you are right that the term is too abstract to discuss specific things about it.
Pardon my lack of IT experience to make the distinction, but 1U-4U rack mounted computers are all blades to me.
Buub wrote:Shining Arcanine wrote:Outside of legacy scientific computing software where you can wait months or even years for computations to finish, I am not sure why anyone would need a hardware floating point unit in their CPU. Processors are fast enough that the things that hardware floating point units made computable per unit time 10 years ago are computable per unit time with compiler-generated integer instructions today. Aside from legacy scientific computing software, there is no killer application that takes advantage of hardware floating point units in CPUs, because even if the CPU is as optimal as possible, it is still too slow. Having these calculations be done on GPUs is the way forward and it is not just me who thinks this.
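For what it's worth, the usual technique behind "compiler-generated integer instructions" standing in for floating point is fixed-point arithmetic: scale every real number by a power of two and do ordinary integer math. A minimal sketch of 16.16 fixed-point (all names here are illustrative, not from any particular compiler):

```python
# 16.16 fixed-point: reals are stored as integers scaled by 2**16,
# so plain integer instructions stand in for floating-point ones.

SCALE = 1 << 16  # 16 fractional bits

def to_fixed(x: float) -> int:
    """Convert a real number to 16.16 fixed-point."""
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values; the raw product carries 32
    fractional bits, so shift back down to 16."""
    return (a * b) >> 16

def to_float(a: int) -> float:
    """Convert back to a real number (for display only)."""
    return a / SCALE

# 1.5 * 2.25 == 3.375, computed entirely with integer operations
print(to_float(fixed_mul(to_fixed(1.5), to_fixed(2.25))))  # 3.375
```

The catch, and part of why the FPU survived, is range: 16.16 only covers about ±32768 with fixed precision, while a hardware double tracks 15-plus significant digits across a huge dynamic range for free.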
Dude! Why weren't you around when Intel and AMD were spending all that time adding floating point hardware to their CPUs years ago? You could have saved them so much time and money! Obviously, they were misled as to this particular need.
Stream processing had not been invented at the time. CPUs with hardware floating point units needed to exist before people would see the need for GPUs and contribute to their eventual evolution into stream processors. It is like how vacuum tubes needed to exist before people would see the need for transistors and contribute to the creation of the integrated circuit.
Another way of putting it: by that logic, someone could have told IBM about the utility of using electricity for calculations and saved it the time and money spent building the Automatic Sequence Controlled Calculator.
Buub wrote:... or maybe you need to get out more, and there are a helluva lot more applications that benefit greatly from hardware FP than you claim. Ever had to recompute a massive spreadsheet that took more than an hour with hardware FP? It would take days with software FP. And then there is stuff like simple gaming, and its close cousin simulation. AMD suffered big time in the K6 days because their hardware FP wasn't as good as Intel's; something they fixed with the Athlon. Not to mention all the scientific computing you just mentioned, which may or may not fit a CUDA-like model. Your view of the computing world appears to be exceedingly small.
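Buub's "days instead of hours" point can be made concrete: without an FPU, every floating-point operation decomposes into a sequence of integer operations on the sign, exponent, and mantissa bit fields. A rough sketch of a software double multiply, simplified to ignore NaNs, infinities, denormals, and rounding modes (truncation only):

```python
# A software double multiply using only integer operations on the
# IEEE 754 bit pattern -- the work an FPU does in a single instruction.
# Simplified sketch: no NaN/inf/denormal handling, round-toward-zero.
import struct

def soft_mul(x: float, y: float) -> float:
    def unpack(f):
        bits = struct.unpack('<Q', struct.pack('<d', f))[0]
        sign = bits >> 63
        exp = (bits >> 52) & 0x7FF
        mant = (bits & ((1 << 52) - 1)) | (1 << 52)  # implicit leading 1
        return sign, exp, mant

    sx, ex, mx = unpack(x)
    sy, ey, my = unpack(y)
    sign = sx ^ sy
    m = mx * my                 # 106-bit integer product of mantissas
    exp = ex + ey - 1023        # add exponents, remove one bias
    if m >= (1 << 105):         # renormalize if the product overflowed
        m >>= 1
        exp += 1
    frac = (m >> 52) & ((1 << 52) - 1)  # truncate low bits, drop leading 1
    bits = (sign << 63) | (exp << 52) | frac
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

print(soft_mul(1.5, 2.25))  # 3.375
```

Count the steps: two unpacks, a wide multiply, a normalize, a repack, and real soft-float libraries add rounding and special-case handling on top. Dozens of integer instructions per multiply is roughly where the hour-versus-days gap comes from.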
AMD suffered a great deal during that time for a multitude of reasons, not the least of which were poor chipsets made by third party manufacturers. Poor floating point performance was not the sole cause of their financial troubles.
By the way, scientific computing is designed to fit the hardware provided to it. If that were not the case, we would have 10THz CPUs being fabricated for scientific computing right now.
Buub wrote:Your approach reminds me of grid computing. You can push stuff into a grid and take advantage of massively parallel computational power. Something that might otherwise take days can be done in minutes, making very complex problems rather simple. That is, if it fits the grid paradigm. Of course, you have to re-architect the solution to this completely non-traditional paradigm. And random data access is very different -- you can't just query a SQL database. Grid data is distributed in chunked files around the grid for fast parallel access, but is extremely inefficient to access in a random access pattern. You could put an actual SQL database on the grid, but it would likely melt down as many thousands of processes try to access the data at the same time, since it's not designed for these sorts of access patterns.
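The trade-off Buub describes, chunked files that reduce in parallel versus random access that melts down, can be sketched in miniature. A toy illustration with thread workers standing in for grid nodes (all names hypothetical): a reduction over chunks parallelizes cleanly because each worker touches only its own chunk and only tiny partial results travel back, whereas a random point lookup would pay the dispatch overhead once per element.

```python
# Toy stand-in for a grid: data is pre-split into chunks, each worker
# reduces its own chunk locally, and only the small partial results
# travel back for a cheap final reduction. Thread workers stand in
# for grid nodes here; all names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)              # work that stays local to one "node"

def grid_sum(data, n_chunks=4):
    size = (len(data) + n_chunks - 1) // n_chunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        partials = list(pool.map(chunk_sum, chunks))
    return sum(partials)           # only n_chunks values cross the "network"

print(grid_sum(list(range(1000))))  # 499500
```

Invert the access pattern, thousands of independent single-element queries instead of one sweep per chunk, and the per-dispatch overhead dominates, which is the SQL-database-on-a-grid problem in a nutshell.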
That is a fair summary of the problems that have occurred repeatedly across the entire history of computing. They are nothing new and they will never go away. Your computer has these issues internally right now, only it is not as obvious unless you are either writing an operating system or designing hardware for its replacement.
Buub wrote:The point is, as others have quite eloquently pointed out, not everything fits the GPU model, and even if it did, GPUs are not consistently available, consistently featureful, or even consistently of the same API. Maybe some day when GPU units are built into every processor, a la AMD Fusion, the streaming processors can be more closely integrated with the CPU. But that's a long way off. What we have now is clumsily integrated and must be explicitly accommodated.
The same is true for CPUs and human computers. Not everything fits the CPU model as it is currently done, and even if it did, CPUs are not consistently available, consistently featureful, or even consistently of the same API. Maybe some day CPUs will be ubiquitous, but that is a long way off. As they are now, CPUs are clumsily used and must be explicitly accommodated.
Buub wrote:Sorry, but you're completely off base in your analysis. Yes, for certain problems GPU-based solutions are awesome, just as for certain problems grid-based parallelism is awesome. But the problem must fit the solution space in this particular case, rather than the other way around.
Science has always required that you find a way to force problems into a solution space, rather than the other way around. That is nothing new.