Your entire premise was to replace the FPU in servers with GPGPUs, which simply cannot work at this point in time. And attempting to blame an admin for appropriately doing their job is completely off-base.
That is your premise, not mine. The only premises I have are the following:
- Floating Point Units are not needed in CPUs for the things that the majority of systems containing them are expected to do.
- Most floating point calculations benefit enough from being done on GPUs (i.e. they are stream calculations) that they should be processed there (i.e. on stream processors).
- The majority of servers do not perform a significant enough quantity of floating point calculations to need floating point hardware.
Plenty of calculations are done in web browsers to make web applications work. The fact that high-level web applications use floating point calculations does not imply that floating point arithmetic is significant at the hardware level.
One minute you're claiming that web servers and databases do not utilize floating point instructions; now you're limiting it to high level web applications? Do you really think that nobody notices when you try to change your argument in mid-stream?
If you want to know what I really think, then I will tell you. You are more interested in attempting to contradict me than in saying something that is correct. There is a fundamental difference between what a web browser does and what a web server does, and unless you understand that difference, you definitely should not try to correct those who do.
Streaming video is a file transfer with some special flow control added. It has no inherent dependency on floating point arithmetic. Versions that do transcoding do have such requirements, but transcoding videos before streaming them from your servers is too computationally expensive unless you are Google or Netflix, and if you were Google or Netflix, you would have multiple teams of engineers who could write software to do those calculations on whatever processor architecture your datacenter uses. Since I assume that you are neither, the only thing you can do is send the bits as they sit on your disk. Otherwise, serving any significant number of users would bottleneck very quickly.
Research the difference between file serving and video streaming.
Sending a file from one computer to another is file streaming. Sending a video from one computer to another is video streaming. There is not much more to it than that.
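To make that concrete, here is a minimal sketch of what a naive streaming loop amounts to (the function name and parameters are mine, not from any particular server): read bytes, write bytes, optionally sleep between chunks as crude flow control. Nothing in the data path decodes or transforms the video, and no floating point math touches the bytes being sent.

```python
import time

CHUNK = 64 * 1024  # bytes sent per iteration

def stream_file(src, dst, pace_s=0.0):
    """Copy src to dst in fixed-size chunks.

    This is all a naive video "stream" amounts to: bytes are read
    from disk and written to the connection unmodified. The optional
    sleep between chunks is the "special flow control"; the video
    data itself is never decoded or computed on.
    """
    total = 0
    while True:
        chunk = src.read(CHUNK)
        if not chunk:
            return total
        dst.write(chunk)
        total += len(chunk)
        if pace_s:
            time.sleep(pace_s)
```

In a real server, src would be an open file and dst a socket; file-like objects are used here only to keep the sketch self-contained.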
VoIP is only intensive at the server-side and less than 1% of computers do it on a regular basis.
Well, since we're talking about server-grade CPUs the implication is that we're talking about servers. As for the "1%" that I am assuming was pulled from your butt, the fact of the matter is that 100% of servers used by VoIP providers "do it" on a regular basis.
If you want to talk about "server-grade CPUs", talk to morphine, or better yet, start your own thread. Morphine named the thread when he split it from a thread talking about floating point performance in AMD's Bulldozer CPU. While Bulldozer will certainly be used in servers, it is certainly not "server-grade", so nothing in this thread is about "server-grade CPUs".
By the way, when I talk about computers, I refer to anything that has a CPU. That includes cellphones, routers, desktops, laptops, set-top boxes, etcetera. Whether you believe it or not, less than 1% of those do VoIP. Also, I doubt that 100% of VoIP providers' servers do VoIP. Their web servers are likely dedicated to hosting their websites. There are also other internal servers that would be dedicated to managing their organizations' logins and other things that would not be used for any VoIP stuff either.
The meaning of the term "significance" is a key concept in this thread, and I am not certain that everyone understands it. Finding a small pocket of instances of some event does not mean that the event occurs on a significant basis. The entire argument that floating point units in CPUs are unimportant rests on this notion of significance.
So... according to you, the entire telecom industry, the entertainment industry, and any businesses in general that have a need to store and transmit data over a secure connection, amount to a "small pocket of instances" that use FP? Gosh, I can't imagine why you aren't taken more seriously...
I doubt any encryption algorithm using floating point calculations is in use. In order for one to work, it would need to avoid rounding issues by restricting operations to the bits contained in the mantissa, which would mean emulating integer arithmetic with floating point numbers. If you think that businesses use floating point calculations to do secure communications, please name the algorithm that they use and explain how it uses floating point arithmetic. Using floating point arithmetic for encryption would cause a performance penalty, so I would be surprised if you could find anyone doing that.
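The mantissa point can be demonstrated directly. IEEE 754 double precision has a 53-bit significand, so integers round-trip exactly through a double only up to 2^53, and a crypto-style multiply of two roughly 30-bit operands already overflows that limit and silently rounds. A small Python sketch (the helper name is mine):

```python
def survives_double(n):
    """True if integer n round-trips exactly through a 64-bit float,
    i.e. fits in the 53-bit significand of IEEE 754 binary64."""
    return int(float(n)) == n

# The exact boundary of the significand:
print(survives_double(2**53))      # True
print(survives_double(2**53 + 1))  # False: rounds to 2**53

# Why floating point is unusable for, say, modular multiplies in crypto:
a, b = 2**30 + 1, 2**30 + 3        # both exact as doubles
print(float(a) * float(b) == a * b)  # False: the product needs ~61 bits
```

Any encryption scheme built on doubles would have to confine itself to operands small enough that every intermediate result stays under 2^53, at which point it is just emulating integer arithmetic, slowly.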
I think you are reading more into my statements than is there to read. The set of situations covered by saying something is quite possibly <x> is a superset of those covered by saying it is <x>. Being uncertain would imply saying the former, but saying the former does not imply being uncertain. To assume that it does is to affirm the consequent, which is a fallacy.
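Written out as an inference pattern (with my own choice of letters), the objection is:

```latex
% P = "the speaker is uncertain", Q = "the speaker said 'quite possibly X'".
% The valid direction:
P \rightarrow Q
% The fallacious inference being made:
\frac{P \rightarrow Q \qquad Q}{\therefore\, P}
\quad \text{(affirming the consequent -- invalid)}
```

From "uncertainty implies hedged phrasing" and "the phrasing was hedged," one cannot validly conclude "the speaker was uncertain."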
I think that you are reading more into your statements than there is to read, by a wide margin, at least in regards to any sort of valid argument.
Have you ever had a genuine disagreement with someone where you did not hurl insults at them?
Disclaimer: I over-analyze everything, so try not to be offended if I over-analyze something you wrote.