Split from: Question on current-gen Intel vs. AMD processors

Discussion of all forms of processors, from AMD to Intel to VIA.

Moderators: Flying Fox, morphine

Re: Question on current-gen Intel vs. AMD processors

Posted on Tue Mar 18, 2014 4:15 pm

just brew it! wrote:
Wicked Mystic wrote:In fact, the newest Intel processors are very strong at x87-intensive calculations. Can you explain why that is?

They're only "strong" relative to older implementations. x87 still performs poorly when pitted against anything more modern like SSE. The stack-based x87 architecture imposes some severe constraints on how much optimization the compiler can do, and is incompatible with vectorization.


Relative to integer performance, for example. Compare Intel's 2014 top model against Intel's 2003 top model: over those years the x87-to-integer performance ratio has roughly doubled. Simplified: if Intel's 2003 x87-to-integer ratio was 1:1, it is now about 2:1. On AMD the same ratio is somewhere around 0.6:1, perhaps even lower.

While AMD has crippled x87 performance, Intel has raised it considerably.

Glorious wrote:
Wicked Mystic wrote:SSE? SSE2? MMX? 3DNow!?


Again, AVX in 2011, AVX2 in 2013, but yet FPUs in CPUs are "pretty much useless" and only good for "running legacy software."

Let me quote you:

Wicked Mystic wrote:That's just a good thing, because floating-point units on processors are pretty much useless. Running legacy software is an exception, of course.


The Haswell New Instructions (which include AVX2) weren't available until summer last year. Thus any software that uses them is, by definition, new and *not* legacy.

What you are saying is insane.


What I was saying is that modern software is mostly legacy: 32-bit, DirectX 9, no use of the GPU. Modern software should use the GPU effectively.

I expect FPUs to get much weaker once GPU compute has enough support. Right now there is not enough support, and that is why the FPU still has a use: legacy software.

Glorious wrote:Is this supposed to be where you're going to "gotcha" me with how Intel CPUs run lots and lots of "legacy" software? :o

Yeah, OK. I'm going to stop you there.

I'm not really sure what you think the point of a computer is, but for me, and I think everyone else, the point is to get stuff done. If "legacy" software works fine, that's, well, fine.


I just wanted to know how you see processor history.
Wicked Mystic
Gerbil
 
Posts: 51
Joined: Thu Oct 03, 2013 6:36 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Tue Mar 18, 2014 5:07 pm

I'm still not sure exactly what your point is, but offhand I see two major problems with trying to supplant a floating-point unit with a GPU. First, right now the GPU is a specialized co-processor, and like most other parts of the system it isn't very aware of what's going on inside the CPU itself. It's been a very long time since offloading all floating-point calculations onto a separate co-processor was a practical idea.

Second, GPUs have completely different instruction sets and threading models, and x86 is sort of notoriously hard to kill. Chances are the switch, if any, will go in the opposite direction.
Redocbew
Gerbil
Gold subscriber
 
 
Posts: 16
Joined: Sat Mar 15, 2014 11:44 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Tue Mar 18, 2014 6:12 pm

Redocbew wrote:I'm still not sure exactly what your point is, but offhand I see two major problems with trying to supplant a floating-point unit with a GPU. First, right now the GPU is a specialized co-processor, and like most other parts of the system it isn't very aware of what's going on inside the CPU itself. It's been a very long time since offloading all floating-point calculations onto a separate co-processor was a practical idea.

To expand on that a bit further: a GPU will almost always be less efficient than an on-die FPU for general compute workloads. GPU compute is designed to handle repetitive calculations on large amounts of data that can be streamed from/to RAM. This makes it spectacularly good for some types of problems, but ill-suited for others. For operations involving smaller amounts of data, or where the operations are less regular, the overhead of coordinating the GPU compute engine with the rest of the application will more than cancel out any gains in raw performance.

You *need* to have an FPU in the CPU, one that is tightly coupled to the CPU's internal registers and control flow, whether you've got a GPU or not.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37947
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Question on current-gen Intel vs. AMD processors

Posted on Tue Mar 18, 2014 6:15 pm

just brew it! wrote:You *need* to have an FPU in the CPU, one that is tightly coupled to the CPU's internal registers and control flow.

Quiet, old-timer! You're still thinking in terms of legacy software!
There is a fixed amount of intelligence on the planet, and the population keeps growing :(
morphine
Grand Admiral Gerbil
Silver subscriber
 
 
Posts: 10063
Joined: Fri Dec 27, 2002 8:51 pm
Location: Portugal (that's next to Spain)

Re: Question on current-gen Intel vs. AMD processors

Posted on Tue Mar 18, 2014 6:51 pm

He's still trolling and you're still biting.



Damn, he's good.
<insert large, flashing, epileptic-fit-inducing signature (based on the latest internet-meme) here>
Chrispy_
Minister of Gerbil Affairs
Gold subscriber
 
 
Posts: 2065
Joined: Fri Apr 09, 2004 3:49 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Tue Mar 18, 2014 7:36 pm

just brew it! wrote:For operations involving smaller amounts of data, or where the operations are less regular, the overhead of coordinating the GPU compute engine with the rest of the application will more than cancel out any gains in raw performance.

You *need* to have an FPU in the CPU, one that is tightly coupled to the CPU's internal registers and control flow, whether you've got a GPU or not.


Isn't this the true goal of AMD's APUs with HSA?
FightingScallion
Gerbil
Silver subscriber
 
 
Posts: 42
Joined: Mon Mar 10, 2014 9:59 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 6:26 am

FightingScallion wrote:
just brew it! wrote:For operations involving smaller amounts of data, or where the operations are less regular, the overhead of coordinating the GPU compute engine with the rest of the application will more than cancel out any gains in raw performance.

You *need* to have an FPU in the CPU, one that is tightly coupled to the CPU's internal registers and control flow, whether you've got a GPU or not.

Isn't this the true goal of AMD's APUs with HSA?

Not exactly.

The goal of HSA is to elevate the GPU to "first class citizen" status relative to the traditional cores, with seamless access to virtual memory and cache coherency. This will make it easier to code stuff to run on the GPU, but the GPU still won't *replace* the "normal" cores. The GPU -- consisting of a large number of relatively simple cores -- will still be best suited to applications where you have large amounts of data that you need to crunch through in parallel (and ill-suited to pretty much everything else).
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37947
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 6:32 am

just brew it! wrote:
FightingScallion wrote:
just brew it! wrote:For operations involving smaller amounts of data, or where the operations are less regular, the overhead of coordinating the GPU compute engine with the rest of the application will more than cancel out any gains in raw performance.

You *need* to have an FPU in the CPU, one that is tightly coupled to the CPU's internal registers and control flow, whether you've got a GPU or not.

Isn't this the true goal of AMD's APUs with HSA?

Not exactly.

The goal of HSA is to elevate the GPU to "first class citizen" status relative to the traditional cores, with seamless access to virtual memory and cache coherency. This will make it easier to code stuff to run on the GPU, but the GPU still won't *replace* the "normal" cores. The GPU -- consisting of a large number of relatively simple cores -- will still be best suited to applications where you have large amounts of data that you need to crunch through in parallel (and ill-suited to pretty much everything else).



Using a GPU for floating-point operations is like using a tractor-trailer for transporting goods: it's great for large loads that aren't particularly latency-sensitive and can fully utilize the GPU (i.e., fill the trailer efficiently). It's horrible for mixed loads, conditional calculations, or loads that aren't highly parallelized. It's also not too great for relatively short bursts of computation where latency is almost as important as throughput.
4770K @ 4.7 GHz; 32GB DDR3-2133; GTX-770; 512GB 840 Pro (2x); Fractal Define XL-R2; NZXT Kraken-X60
--Many thanks to the TR Forum for advice in getting it built.
chuckula
Gerbil Elite
Gold subscriber
 
 
Posts: 569
Joined: Wed Jan 23, 2008 9:18 pm
Location: Probably where I don't belong.

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 7:20 am

Wicked Mystic wrote:Yep. On the architecture side, Intel is far from superior. Intel's manufacturing process, however, is clearly superior.

That's just a good thing, because floating-point units on processors are pretty much useless. Running legacy software is an exception, of course.


Hmm... I'm an AMD fan, but I won't go so far as to say that Intel's architecture is far from superior, because Ivy Bridge and Haswell ARE superior. I gotta admire those Intel engineers for creating such an insanely sophisticated piece of technology. Then again, they do have half the world's money (exaggeration) at their disposal for R&D. Then again, money can only get you so far. Look at ATIC. They're practically pumping money from the ground, but GF isn't exactly on the leading edge these days despite all the money they're pumping into it.

As for FPUs in today's CPUs being pretty much useless.... er, no. Fire up a modern day FPS without an FPU and I can assure you it's gonna throw you an error message in about 5 clock cycles.
The three pillars of my digital life: AMD FX-8350, Google Nexus 7 (Qualcomm Snapdragon S4 Pro), Intel Core i5-2450M
ronch
Gerbil Elite
 
Posts: 665
Joined: Mon Apr 06, 2009 7:55 am
Location: C:\Program Files\

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 7:25 am

Wicked Mystic wrote:Relative to integer performance, for example. Compare Intel's 2014 top model against Intel's 2003 top model: over those years the x87-to-integer performance ratio has roughly doubled. Simplified: if Intel's 2003 x87-to-integer ratio was 1:1, it is now about 2:1. On AMD the same ratio is somewhere around 0.6:1, perhaps even lower.

While AMD has crippled x87 performance, Intel has raised it considerably.


Yes, Intel makes a superior product. I'm not sure why this is a problem.

At any rate, do you have any cites?

Wicked Mystic wrote:What I was saying is that modern software is mostly legacy: 32-bit, DirectX 9, no use of the GPU. Modern software should use the GPU effectively.


Why?

First off, by throwing DirectX 9 in there you CLEARLY aren't talking about "software"; no, you're CLEARLY talking about GAMES.

Second off, most software has no use for a GPU.

Third off: lowest common denominator, dude. There are millions of computers still using 32-bit OSes, including ~20% of current gamers! There are also millions of computers with no GPU, or with atrociously slow GPUs. Making "modern" commercial software that excludes them is just stupid.

Wicked Mystic wrote:I expect FPUs to get much weaker once GPU compute has enough support. Right now there is not enough support, and that is why the FPU still has a use: legacy software.


I repeat: AVX2 is TOTALLY *NEW* and it includes FP instructions.

Intel's newest state-of-the-art chip is beefing up CPU floating-point performance with things like 3-operand FMA. It is not for legacy software, because it has only been available in the wild for less than a year, and thus anything that uses it is definitely *not* legacy.

Wicked Mystic wrote:I just wanted to know how you see processor history.


Well, I deal with stuff like occasionally migrating extant DEC MACRO code for VAX/VMS, code that for all I know originated in MACRO-11 and ran on a PDP, to something that's, you know, newer. Likewise with strange DEC FORTRAN 77 extensions that seem to have originated on a PDP, again. Emulation is great, but running on software that 1) has been supported at some point in the last 15 years and 2) won't send anyone under 55 screaming, is nice. :wink:

I'm significantly younger than that, but I'm weird and even the old-timers don't want to know about it anymore.

So perhaps my definition of "legacy" is entirely different than yours. :wink:

It's all perspective. I'm much less angry about those situations, for instance, than when I see document.all on one of our webpages. That's when I mutter about how it's been deprecated for ages and not even supported on the newest version of IE.

..And I'd be MUCH LESS ANGRY about both of those than if you demand I rewrite our LP solver to use a GPU, which our systems don't even have! :roll:
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7884
Joined: Tue Aug 27, 2002 6:35 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 8:23 am

Redocbew wrote:I'm still not sure exactly what your point is, but offhand I see two major problems with trying to supplant a floating-point unit with a GPU. First, right now the GPU is a specialized co-processor, and like most other parts of the system it isn't very aware of what's going on inside the CPU itself. It's been a very long time since offloading all floating-point calculations onto a separate co-processor was a practical idea.

Second, GPUs have completely different instruction sets and threading models, and x86 is sort of notoriously hard to kill. Chances are the switch, if any, will go in the opposite direction.


My point is that there is no need for floating-point units as strong as years ago. Not everything can be offloaded, but much can. Then there is no need for a strong FPU on the processor.

just brew it! wrote:To expand on that a bit further: a GPU will almost always be less efficient than an on-die FPU for general compute workloads. GPU compute is designed to handle repetitive calculations on large amounts of data that can be streamed from/to RAM. This makes it spectacularly good for some types of problems, but ill-suited for others. For operations involving smaller amounts of data, or where the operations are less regular, the overhead of coordinating the GPU compute engine with the rest of the application will more than cancel out any gains in raw performance.

You *need* to have an FPU in the CPU, one that is tightly coupled to the CPU's internal registers and control flow, whether you've got a GPU or not.


An FPU needs to be there, at least for compatibility reasons. Another question is how strong the FPU needs to be. AMD put "half" an FPU per core. What if a "quarter" FPU per core (two modules, one FPU) is enough? Or one FPU per four modules? It's going in that direction.

Chrispy_ wrote:He's still trolling and you're still biting.

Damn, he's good.


Are you too stupid to say anything else?

just brew it! wrote:Not exactly.

The goal of HSA is to elevate the GPU to "first class citizen" status relative to the traditional cores, with seamless access to virtual memory and cache coherency. This will make it easier to code stuff to run on the GPU, but the GPU still won't *replace* the "normal" cores. The GPU -- consisting of a large number of relatively simple cores -- will still be best suited to applications where you have large amounts of data that you need to crunch through in parallel (and ill-suited to pretty much everything else).


Not completely replace it, but rely less on the FPU. Offload as much as possible to the GPU = no need for a powerful FPU.

ronch wrote:Hmm... I'm an AMD fan, but I won't go so far as to say that Intel's architecture is far from superior, because Ivy Bridge and Haswell ARE superior. I gotta admire those Intel engineers for creating such an insanely sophisticated piece of technology. Then again, they do have half the world's money (exaggeration) at their disposal for R&D. Then again, money can only get you so far. Look at ATIC. They're practically pumping money from the ground, but GF isn't exactly on the leading edge these days despite all the money they're pumping into it.

As for FPUs in today's CPUs being pretty much useless.... er, no. Fire up a modern day FPS without an FPU and I can assure you it's gonna throw you an error message in about 5 clock cycles.


Architecturally, Ivy Bridge and Haswell are far from superior compared to Bulldozer. The manufacturing technology is superior, and that makes the difference. AMD's Steamroller vs. Intel's Core i7? I think Steamroller would win that.

FPU usage has gone down, as said many times already. You must have an FPU, but putting in a powerful FPU is just wasting resources. Except when running legacy software, of course.

Glorious wrote:Why?

First off, by throwing DirectX 9 in there you CLEARLY aren't talking about "software"; no, you're CLEARLY talking about GAMES.

Second off, most software has no use for a GPU.

Third off: lowest common denominator, dude. There are millions of computers still using 32-bit OSes, including ~20% of current gamers! There are also millions of computers with no GPU, or with atrociously slow GPUs. Making "modern" commercial software that excludes them is just stupid.


DirectX 9 is 11-year-old technology.

And most software has no heavy use for x87.

Stupid or not, supporting 32-bit software is supporting legacy software. Can you deny that?

Glorious wrote:Well, I deal with stuff like occasionally migrating extant DEC MACRO code for VAX/VMS, code that for all I know originated in MACRO-11 and ran on a PDP, to something that's, you know, newer. Likewise with strange DEC FORTRAN 77 extensions that seem to have originated on a PDP, again. Emulation is great, but running on software that 1) has been supported at some point in the last 15 years and 2) won't send anyone under 55 screaming, is nice. :wink:


And there were 3D-programming professionals who didn't know what Voodoo was.

Glorious wrote:So perhaps my definition of "legacy" is entirely different than yours. :wink:

It's all perspective. I'm much less angry about those situations, for instance, than when I see document.all on one of our webpages. That's when I mutter about how it's been deprecated for ages and not even supported on the newest version of IE.

..And I'd be MUCH LESS ANGRY about both of those than if you demand I rewrite our LP solver to use a GPU, which our systems don't even have! :roll:


Maybe my definition is different. No, I don't demand that you rewrite software; that wasn't the point. Do you think your systems are modern? Or are they legacy? Sounds like legacy to me. I don't care if it's impossible to make them modern, I don't care if it costs too much, I don't care if they actually work. Those things do not make them modern.
Wicked Mystic
Gerbil
 
Posts: 51
Joined: Thu Oct 03, 2013 6:36 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 9:13 am

Wicked Mystic wrote:My point is that there is no need for floating-point units as strong as years ago. Not everything can be offloaded, but much can. Then there is no need for a strong FPU on the processor.


Are you under the complete misapprehension that we need to conserve transistors on modern CPU cores? :o

On the contrary dude, those designers are desperate to find ways to use transistors any way other than just cookie-cutting more cores onto an IC. "Saving" transistors by weakening/eliminating FPUs frees that space up to do.... what?

Seriously, what is the point? Where else could they more profitably use them? It'd be pointless self-sabotage for little-to-no conceivable gain.

Wicked Mystic wrote:An FPU needs to be there, at least for compatibility reasons. Another question is how strong the FPU needs to be. AMD put "half" an FPU per core. What if a "quarter" FPU per core (two modules, one FPU) is enough? Or one FPU per four modules? It's going in that direction.


Yeah, the no-FPU-at-all direction, because AMD isn't exactly leading the industry with their design success here. They are bleeding market share, badly.

Wicked Mystic wrote:Not completely replace it, but rely less on the FPU. Offload as much as possible to the GPU = no need for a powerful FPU.


A decision made by desperation, not deliberate design.

Wicked Mystic wrote:FPU usage has gone down, as said many times already. You must have an FPU, but putting in a powerful FPU is just wasting resources. Except when running legacy software, of course.


Cite?

And, again, what resources are being wasted? Intel is already putting entire semi-decent GPUs on-die, they have more transistors available than they know what to do with.

Wicked Mystic wrote:DirectX 9 is 11-year-old technology.


Why do I care if I am not making a game? And if I am, why would I eschew it for "modernity" if keeping it made portability with the Xbox 360 easier?

Wicked Mystic wrote:And most software has no heavy use for x87.


Most software has no real use for any sort of FP at all, and a lot of people misuse it when what they really want is fixed point.

But, for those who do need it, yeah, THEY NEED IT.

And, by the way, as people have pointed out, you keep interchangeably referring to on-die FPUs and x87 as if they were the same thing. You can claim you understand the difference all you like, but you are continually arguing as if you don't.

Wicked Mystic wrote:Stupid or not, supporting 32-bit software is supporting legacy software. Can you deny that?


BUT WHO CARES?

Programming isn't cheap, and even with GOOD and thus EXPENSIVE programmers, refactoring/porting or just migrating code can STILL introduce all sorts of additional problems. And if you don't actually have the source...

They aren't going to fix what ain't broke to make some silly clown on the internet feel superior about his "modernized software."

And there is NEW software written for the NEW FP instructions in Haswell that isn't legacy, DirectXanything or 32-bit.

Wicked Mystic wrote:And there were 3D-programming professionals who didn't know what Voodoo was.


Non sequitur.

Wicked Mystic wrote:Maybe my definition is different. No, I don't demand that you rewrite software; that wasn't the point. Do you think your systems are modern? Or are they legacy? Sounds like legacy to me. I don't care if it's impossible to make them modern, I don't care if it costs too much, I don't care if they actually work. Those things do not make them modern.


You don't know what either word means.
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7884
Joined: Tue Aug 27, 2002 6:35 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 9:42 am

Glorious wrote:Are you under the complete misapprehension that we need to conserve transistors on modern CPU cores? :o

On the contrary dude, those designers are desperate to find ways to use transistors any way other than just cookie-cutting more cores onto an IC. "Saving" transistors by weakening/eliminating FPUs frees that space up to do.... what?

Seriously, what is the point? Where else could they more profitably use them? It'd be pointless self-sabotage for little-to-no conceivable gain.


Then again, Intel is still stuck at quad cores around $200 on LGA1150, and under $100 only dual cores are available.

More integer cores, perhaps? Or a better GPU?

Glorious wrote:Yeah, the no-FPU-at-all direction, because AMD isn't exactly leading the industry with their design success here. They are bleeding market share, badly.


Intel has no problem keeping market share; they have more money and bribing resources.

Glorious wrote:A decision made by desperation, not deliberate design.


I don't think AMD engineers are that desperate.

Glorious wrote:Cite?

And, again, what resources are being wasted? Intel is already putting entire semi-decent GPUs on-die, they have more transistors available than they know what to do with.


That FPU usage has gone down? Despite very low FPU resources compared to Intel processors, AMD does quite well in gaming. Go back 20 years: back then a strong FPU was needed for games. For newer stuff, compare the Pentium 4 and the Athlon. The Pentium 4 did quite well in gaming despite very, very low FPU resources.

And still no more than quad cores for LGA1150.

Glorious wrote:Why do I care if I am not making a game? And if I am, why would I eschew it for "modernity" if keeping it made portability with the Xbox 360 easier?


So it's not legacy?

Glorious wrote:Most software has no real use for any sort of FP at all, and a lot of people misuse it when what they really want is fixed point.

But, for those who do need it, yeah, THEY NEED IT.

And, by the way, as people have pointed out, you keep interchangeably referring to on-die FPUs and x87 as if they were the same thing. You can claim you understand the difference all you like, but you are continually arguing as if you don't.


A lot of people want fixed point, so more fixed point and less FPU makes sense. Those who really need floating point probably use GPUs or similar techniques (supercomputers).

Where have I said that an on-die FPU and x87 are the same thing? Just because someone claims I have doesn't mean I really have.

Glorious wrote:BUT WHO CARES?

Programming isn't cheap, and even with GOOD and thus EXPENSIVE programmers, refactoring/porting or just migrating code can STILL introduce all sorts of additional problems. And if you don't actually have the source...

They aren't going to fix what ain't broke to make some silly clown on the internet feel superior about his "modernized software."


So is it legacy or not?

You have good points. However, they do not change anything about this same old question: legacy or not? Are you going to answer that, or will you just keep writing about everything else? I already said I don't care if it's expensive to make, if it's not worth it, or if anybody cares. If it's legacy, it's legacy.

Glorious wrote:And there is NEW software written for the NEW FP instructions in Haswell that isn't legacy, DirectXanything or 32-bit.


Not much btw.

Glorious wrote:You don't know what either word means.


Perhaps.
Wicked Mystic
Gerbil
 
Posts: 51
Joined: Thu Oct 03, 2013 6:36 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 9:50 am

Just a few highlights from Wicked Mystic's latest spree:

1. Intel's x87 performance is better than AMD's, therefore Intel is hopelessly behind AMD in every possible way!
--> LMAO. Let's apply your logic to AMD: AMD's on-board graphics are better than Intel's (at least not counting Iris Pro), so therefore AMD is hopelessly behind Intel in every possible way!

2. Poor little AMD would be better than Intel if it weren't for the fact that Intel "stole" all the advanced fabs and won't share them!
--> Yeah, go back and read the original Bulldozer (or even Piledriver) review for AMD's 32nm flagship chip against Intel's 32nm overclocked notebook chip called Sandy Bridge. Even when both Intel & AMD used the same process node, the vastly smaller and lower power Sandy Bridge cleaned Bulldozer's clock using fewer transistors, smaller caches, lower clock speeds, and using less power to do it. That's not "cheating" with manufacturing, that's superior design. P.S. --> You can insult Intel's graphics all day, but the IGP in Sandy Bridge still outperforms the IGP in Bulldozer!

3. Nobody uses AVX in software development... BUT HSA!
--> Yeah, you could try to make the first argument, but then you shoot yourself down with the second argument and expose yourself as the shill that you really are.

Ah man.. I'm almost going to be sorry when they finally drop the hammer.... almost.
4770K @ 4.7 GHz; 32GB DDR3-2133; GTX-770; 512GB 840 Pro (2x); Fractal Define XL-R2; NZXT Kraken-X60
--Many thanks to the TR Forum for advice in getting it built.
chuckula
Gerbil Elite
Gold subscriber
 
 
Posts: 569
Joined: Wed Jan 23, 2008 9:18 pm
Location: Probably where I don't belong.

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 10:02 am

chuckula wrote:Just a few highlights from Wicked Mystic's latest spree:

1. Intel's x87 performance is better than AMD's, therefore Intel is hopelessly behind AMD in every possible way!
--> LMAO. Let's apply your logic to AMD: AMD's on-board graphics are better than Intel's (at least not counting Iris Pro), so therefore AMD is hopelessly behind Intel in every possible way!


In that case, Intel's Pentium 4 was a very bad processor from the beginning? Stupid engineers at Intel, and therefore they cannot make a good processor?

Also, is it worth using x87 instead of SSEx or AVX?

chuckula wrote:2. Poor little AMD would be better than Intel if it weren't for the fact that Intel "stole" all the advanced fabs and won't share them!
--> Yeah, go back and read the original Bulldozer (or even Piledriver) review for AMD's 32nm flagship chip against Intel's 32nm overclocked notebook chip called Sandy Bridge. Even when both Intel & AMD used the same process node, the vastly smaller and lower power Sandy Bridge cleaned Bulldozer's clock using fewer transistors, smaller caches, lower clock speeds, and using less power to do it. That's not "cheating" with manufacturing, that's superior design. P.S. --> You can insult Intel's graphics all day, but the IGP in Sandy Bridge still outperforms the IGP in Bulldozer!


Comparing architectures while Intel has superior manufacturing technology makes no sense.

With an equally good process node, Steamroller would have much higher clocks than it does now.

chuckula wrote:3. Nobody uses AVX in software development... BUT HSA!
--> Yeah, you could try to make the first argument, but then you shoot yourself down with the second argument and expose yourself as the shill that you really are.


Where did I say that nobody uses AVX?

chuckula wrote:Ah man.. I'm almost going to be sorry when they finally drop the hammer.... almost.


Yeah, really. I seem to be way too intelligent for this forum. And you look to be stupid enough.
Wicked Mystic
Gerbil
 
Posts: 51
Joined: Thu Oct 03, 2013 6:36 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 10:07 am

Easy on the personal attacks there. That was uncalled for.

And unless this thread sees some evolution, it's going to lockville or getting a split.
There is a fixed amount of intelligence on the planet, and the population keeps growing :(
morphine
Grand Admiral Gerbil
Silver subscriber
 
 
Posts: 10063
Joined: Fri Dec 27, 2002 8:51 pm
Location: Portugal (that's next to Spain)

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 10:12 am

I remember why this thread started... the OP, Zaeem, was asking questions related to picking components for a build. The point about Intel's current processors being better than AMD processors with more cores came up, and he (or she) wanted more information, hence the separate thread.

But seeing as Zaeem has not replied to anyone in this thread...
Damage wrote:Don't try to game the requirements by posting everywhere, guys, or I'll nuke you from space.

-Probably the best Damage quote ever.
superjawes
Graphmaster Gerbil
Gold subscriber
 
 
Posts: 1141
Joined: Thu May 28, 2009 9:49 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 12:45 pm

Wicked Mystic wrote:Then again, Intel is still stuck at quad cores around $200 on LGA1150, and under $100 only dual cores are available.

More integer cores, perhaps? Or a better GPU?


Or just good old market segmentation?

Wicked Mystic wrote:Intel has no problems keeping market share, they have more money and bribing resources.


And, much more importantly, a vastly superior product.

I mean, I used to exclusively buy AMD kit, in fact I still have an Opteron 175 in use. But, then, see, there was this thing called the "Core 2" :P

Haven't looked back ever since. The price differential just isn't enough to offset the performance disparity.

Wicked Mystic wrote:I don't think AMD engineers are that desperate.


Is this an argument? The point is that they had to focus on more cores and the APU concept, because they could no longer compete on a per-core basis.

Wicked Mystic wrote:That FPU usage has gone down?


You've actually measured the overall use of FPU across the entire x86 codebase? :o

Wicked Mystic wrote:Despite very low FPU resources vs Intel processors, AMD does quite well in gaming. Go back 20 years. At that time a strong FPU was needed for games.


Dude, in 1994, twenty years ago, all rendering in PC games was done on the CPU. Other than like, maybe, blitting.

Wicked Mystic wrote:For newer stuff, compare Pentium 4 and Athlon. Pentium 4 did quite well in gaming despite very very low FPU resources.


http://techreport.com/review/2347/intel ... rocessor/9

Pentium 4 won 1 out of 3. :roll:

Wicked Mystic wrote:So it's not legacy?


If it is being newly sold and supported right now, no, clearly it is not.

Wicked Mystic wrote:A lot of people want fixed point, so more fixed point and less FPU makes sense. Those who really need it probably use GPUs or a similar technique (supercomputers).


You have no idea what you are talking about. *I* really need it, and I don't use GPUs or a supercomputer.

Wicked Mystic wrote:Where have I said that the on-die FPU and x87 are the same thing? Just because someone claims I have doesn't mean I really have.


You keep switching between them. Maybe you haven't explicitly contradicted yourself but it's awfully confusing, and it makes it difficult for us to understand what your ultimate point is.

Wicked Mystic wrote:So it's legacy or not?

You have good points. However, they do not change anything about this same old question: legacy or not? Are you going to answer that, or do you just keep writing about everything else? I already said I don't care if it's expensive to make, if it's not worth it, or if anybody cares. If it's legacy, it is.


It is not legacy.

Everything you are complaining about is code that runs on currently sold and supported computers. It runs fine, and even tends to run better with newer versions of the CPU lineage. x87 might be discouraged by Intel, but it hasn't even been officially deprecated. They don't just sell their chips for 64-bit Windows. :)

On the other hand... The first two examples I gave, what I called legacy, were both code that only runs natively on computers that haven't been supported or sold for over 15 years (substantially longer, in fact). They were deprecated back when I was a child.

The second example I gave, document.all, is an extension that's been deprecated for ages, isn't supported at all in Chrome, or even in IE11 (and it was MS-introduced!)

Those are true "legacy code" situations. Because they don't work anymore or because you can't find a machine to run them that you aren't emulating.
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7884
Joined: Tue Aug 27, 2002 6:35 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 1:22 pm

Oh man, re-reading that TR review made me revisit this:

Wicked Mystic wrote:For newer stuff, compare Pentium 4 and Athlon. Pentium 4 did quite well in gaming despite very very low FPU resources.


very, very low you say? :o

http://techreport.com/review/2347/intel ... rocessor/6

The P4 wasn't fantastic, but it certainly kept up with the contemporary Athlon. The disparity that does exist is perfectly in-line with how the P4 was generally weaker than the Athlon.

In other words....

... you are completely making this up. :wink:
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7884
Joined: Tue Aug 27, 2002 6:35 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Wed Mar 19, 2014 1:52 pm

Glorious wrote:Oh man, re-reading that TR review made me revisit this:

Wicked Mystic wrote:For newer stuff, compare Pentium 4 and Athlon. Pentium 4 did quite well in gaming despite very very low FPU resources.


very, very low you say? :o

http://techreport.com/review/2347/intel ... rocessor/6

The P4 wasn't fantastic, but it certainly kept up with the contemporary Athlon. The disparity that does exist is perfectly in-line with how the P4 was generally weaker than the Athlon.

In other words....

... you are completely making this up. :wink:


More like he's done a little bit of reading, made up his mind that he likes AMD and old Intel CPUs, fails to comprehend that "FPU" doesn't solely apply to the elderly x87 unit kept around for backwards compatibility, thinks SIMD instructions are a fundamentally separate bit of business (presumably because they can work with integer data) and rationalizes everything else based on that. It's tedious as hell.
You win [adhesive medical strips!]
Concupiscence
Gerbil
Silver subscriber
 
 
Posts: 88
Joined: Tue Sep 25, 2012 7:58 am
Location: Dallas, TX

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 4:49 am

Glorious wrote:Or just good old market segmentation?


Then again, the six-core part is 130W TDP.

Glorious wrote:And, much more importantly, a vastly superior product.

I mean, I used to exclusively buy AMD kit, in fact I still have an Opteron 175 in use. But, then, see, there was this thing called the "Core 2" :P

Haven't looked back ever since. The price differential just isn't enough to offset the performance disparity.


Because of a better manufacturing process, yes.

Core 2 was slow in practice because it had no IMC.

Glorious wrote:
Wicked Mystic wrote:I don't think AMD engineers are that desperate.


Is this an argument? The point is that they had to focus on more cores and the APU concept, because they could no longer compete on a per-core basis.


They designed an entirely new architecture. That is the reason. A Phenom improved on a per-core basis would have been much better.

Glorious wrote:
Wicked Mystic wrote:That FPU usage has gone down?


You've actually measured the overall use of FPU across the entire x86 codebase? :o


The importance of FPU performance has gone down, if that's better.

Glorious wrote:
Wicked Mystic wrote:Despite very low FPU resources vs Intel processors, AMD does quite well in gaming. Go back 20 years. At that time a strong FPU was needed for games.


Dude, in 1994, twenty years ago, all rendering in PC games was done on the CPU. Other than like, maybe, blitting.


Exactly, so there's no more need to do all rendering on the CPU, so fewer CPU resources are needed.

Glorious wrote:
Wicked Mystic wrote:For newer stuff, compare Pentium 4 and Athlon. Pentium 4 did quite well in gaming despite very very low FPU resources.


http://techreport.com/review/2347/intel ... rocessor/9

Pentium 4 won 1 out of 3. :roll:


That is quite well. That's not catastrophic.

Glorious wrote:
Wicked Mystic wrote:So it's not legacy?


If it is being newly sold and supported right now, no, clearly it is not.


DirectX 9 came out in 2003. At that time we had single-core processors.

Glorious wrote:
Wicked Mystic wrote:A lot of people want fixed point, so more fixed point and less FPU makes sense. Those who really need it probably use GPUs or a similar technique (supercomputers).


You have no idea what you are talking about. *I* really need it, and I don't use GPUs or a supercomputer.


But most people do not. Including gamers.

Glorious wrote:
Wicked Mystic wrote:Where have I said that the on-die FPU and x87 are the same thing? Just because someone claims I have doesn't mean I really have.


You keep switching between them. Maybe you haven't explicitly contradicted yourself but it's awfully confusing, and it makes it difficult for us to understand what your ultimate point is.


I have made my point clear many times: there is no need for a high-powered FPU except in some very special situations.

Glorious wrote:
Wicked Mystic wrote:So it's legacy or not?

You have good points. However, they do not change anything about this same old question: legacy or not? Are you going to answer that, or do you just keep writing about everything else? I already said I don't care if it's expensive to make, if it's not worth it, or if anybody cares. If it's legacy, it is.


It is not legacy.

Everything you are complaining about is code that runs on currently sold and supported computers. It runs fine, and even tends to run better with newer versions of the CPU lineage. x87 might be discouraged by Intel, but it hasn't even been officially deprecated. They don't just sell their chips for 64-bit Windows. :)

On the other hand... The first two examples I gave, what I called legacy, were both code that only runs natively on computers that haven't been supported or sold for over 15 years (substantially longer, in fact). They were deprecated back when I was a child.

The second example I gave, document.all, is an extension that's been deprecated for ages, isn't supported at all in Chrome, or even in IE11 (and it was MS-introduced!)

Those are true "legacy code" situations. Because they don't work anymore or because you can't find a machine to run them that you aren't emulating.


OK, that's your definition of it. I consider even some new software legacy, like DirectX 9. Many games still use it, and in 2003 we had single-core Pentium 4 or single-core Athlon 64 processors. Considering processor development, that is very legacy.

Glorious wrote:Oh man, re-reading that TR review made me revisit this:

Wicked Mystic wrote:For newer stuff, compare Pentium 4 and Athlon. Pentium 4 did quite well in gaming despite very very low FPU resources.


very, very low you say? :o

http://techreport.com/review/2347/intel ... rocessor/6

The P4 wasn't fantastic, but it certainly kept up with the contemporary Athlon. The disparity that does exist is perfectly in-line with how the P4 was generally weaker than the Athlon.

In other words....

... you are completely making this up. :wink:


You did notice that:

- The newer version was recompiled to use SSE2. Most software never gets this treatment.

- With the old version it was:

Pentium 4 1.7: 110.5
Athlon 1.2: 67.0

That is a HUGE difference. Even with SSE2 the Athlon is still faster. And remember, very little software is recompiled after it's published.

Concupiscence wrote:More like he's done a little bit of reading, made up his mind that he likes AMD and old Intel CPUs, fails to comprehend that "FPU" doesn't solely apply to the elderly x87 unit kept around for backwards compatibility, thinks SIMD instructions are a fundamentally separate bit of business (presumably because they can work with integer data) and rationalizes everything else based on that. It's tedious as hell.


The problem is that SSE2 needs software support. While recompiling is quite easy, very little software gets it after shipping.

Again, can you explain why Intel now has a strong x87 FPU "just for compatibility reasons"? On the Pentium 4, that "x87 for compatibility reasons" was very weak. And that was in the year 2000.
Wicked Mystic
Gerbil
 
Posts: 51
Joined: Thu Oct 03, 2013 6:36 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 6:43 am

Wicked Mystic wrote:That is a HUGE difference. Even with SSE2 the Athlon is still faster. And remember, very little software is recompiled after it's published.
...
The problem is that SSE2 needs software support. While recompiling is quite easy, very little software gets it after shipping.

SSE2 has been around for over a decade, and is now widely deployed. Most commercial software that is still being actively supported/developed has been recompiled for it by now, provided there's a significant benefit from it. Media codecs, game engines, and the like (which tend to benefit greatly) also tend to be on shorter release cycles since new features and enhancements get added regularly.

The transition away from Windows XP and 32-bit is also driving recompilation of a lot of applications.

Open Source software (which represents a growing percentage of the overall market, even on Windows) gets recompiled A LOT. I'd guesstimate that 99% of all the software running on my Ubuntu Linux box was compiled within the past 3 years.

Why do you keep harping on the "but recompilation is required" point anyway? Use of GPU compute isn't just recompilation, it's a complete REWRITE. Which do you think is the path of least resistance, for a developer looking for a boost in FP performance?
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37947
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 7:40 am

To be fair, recompiling software won't magically give you SSE2 support. The compiler might insert one or two instructions here and there, but you won't actually see any significant improvement until you change your memory layout to fit SSE2 and rewrite your math using compiler intrinsics. SSE2 is no magic bullet and autovectorization is still a pipe dream.

Also keep in mind that most software today is written in languages that don't even have access to SIMD instructions (Java, C#, Python, PHP, Ruby, need I go on?) Hell, even PhysX was using x87 instructions until recently.

So yeah, desktop software today is still written for legacy x87 instructions.
BlackStar
Gerbil
 
Posts: 92
Joined: Fri May 11, 2007 3:38 am

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 8:01 am

Wicked Mystic wrote:How many 64-bit games are around?

Your arguments seem a bit dated. I'm too tired to care though so I'll just note these points.

Battlefield 4, Warframe, Need for Speed: Rivals, Titanfall, and World of Warcraft all have 64-bit executables or are 64-bit only... I'm sure I've missed a few.

I can count the number of major dx9 only releases on one hand this year, where a few years ago I might've agreed with you on some points.

Brain. fried. g'night.
Meow.
Savyg
Gerbil Elite
Silver subscriber
 
 
Posts: 699
Joined: Thu Aug 26, 2004 6:18 am
Location: The ever growing army of the undead.

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 10:19 am

BlackStar wrote:To be fair, recompiling software won't magically give you SSE2 support. The compiler might insert one or two instructions here and there, but you won't actually see any significant improvement until you change your memory layout to fit SSE2 and rewrite your math using compiler intrinsics. SSE2 is no magic bullet and autovectorization is still a pipe dream.

Even without those additional steps it helps though. The stack-based x87 architecture makes decent optimization by the compiler very difficult. Switching to the much more sane SSE2 instruction set makes the optimizer's job a lot easier, resulting in more efficient code even without vectorization.

BlackStar wrote:Also keep in mind that most software today is written in languages that don't even have access to SIMD instructions (Java, C#, Python, PHP, Ruby, need I go on?) Hell, even PhysX was using x87 instructions until recently.

The interpreters and runtime libraries for those languages can still be compiled to use it though.
(this space intentionally left blank)
just brew it!
Administrator
Gold subscriber
 
 
Posts: 37947
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 10:38 am

Why isn't this thread locked yet? It's a perfect example of one person with an agenda using subterfuge and distraction to wind people up and keep the argument going.

There is nothing productive beyond page 2 because you're all playing the same impossible game.
<insert large, flashing, epileptic-fit-inducing signature (based on the latest internet-meme) here>
Chrispy_
Minister of Gerbil Affairs
Gold subscriber
 
 
Posts: 2065
Joined: Fri Apr 09, 2004 3:49 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 10:42 am

Wicked Mystic wrote:Then again, the six-core part is 130W TDP.


Yeah, because the -E variants are for the MARKET SEGMENT of performance/enthusiasts.

Wicked Mystic wrote:Because of a better manufacturing process, yes.


No, you are wrong: http://techreport.com/review/11473/a-qu ... ocessors/5

Wicked Mystic wrote:Core 2 was slow in practice because it had no IMC.


lolwut? It was faster than the competition, which had an IMC.

Wicked Mystic wrote:They designed an entirely new architecture. That is the reason. A Phenom improved on a per-core basis would have been much better.


So why didn't they do that, then? Why did they design a new architecture instead?

Oh, right: because they couldn't improve per-core performance. It wasn't a deliberate design choice, it was desperation.

You know, like I said.

Wicked Mystic wrote:The importance of FPU performance has gone down, if that's better.


It isn't. You are just making stuff up.

Wicked Mystic wrote:Exactly, so there's no more need to do all rendering on the CPU, so fewer CPU resources are needed.


Right... Ok. So, by that logic, we should all currently be running FPU-weakened Pentium-100s with gigantic GPUs attached. Because, after all, fewer CPU resources will be needed and it's not like we have ANYTHING ELSE to use them on. :o

Ever heard of the Jevons Paradox?

Wicked Mystic wrote:That is quite well. That's not catastrophic.


Your criteria are completely arbitrary.

Wicked Mystic wrote:DirectX 9 came out in 2003. At that time we had single-core processors.


libc came out in the 1970s. Back then we had core-memory.

So what?

Wicked Mystic wrote:But most people do not. Including gamers.


1. Sabotage FPUs
2. ...???
3. PROFIT!

Look, what is the point of getting rid of the FPU? Yes, in a majority of cases it isn't really necessary, but that doesn't mean it isn't important when it is necessary. The 80/20 rule doesn't mean that you don't need the rest of the features!

Here, read this: http://www.joelonsoftware.com/articles/ ... 00020.html

Also, why do you keep focusing specifically on "Gamers" when you are talking about the ISAs and performance of general-purpose CPUs and software in general? It just reinforces my belief that you are being a raging absolutist because you only care about one use case and can't even conceive of any differing use cases.

Wicked Mystic wrote:I have made my point clear many times: there is no need for a high-powered FPU except in some very special situations.


No, you've also railed against legacy software, specifically x87, directx9 and 32-bit. Your "point" is all over the place.

Everyone agrees that FP isn't used in most software. Everyone agrees that in most cases computers are now "fast enough."

That doesn't mean, whatsoever, that there is any justification in getting rid of FPUs or making CPUs slower. Because, when people need it, they need it. You are "solving" a problem that doesn't exist.

Wicked Mystic wrote:OK, that's your definition of it. I consider even some new software legacy, like DirectX 9. Many games still use it, and in 2003 we had single-core Pentium 4 or single-core Athlon 64 processors. Considering processor development, that is very legacy.


Your definition of it is useless and that's why NO ONE cares or agrees.

wicked mystic wrote:Pentium 4 1.7: 110.5
Athlon 1.2: 67.0

That is a HUGE difference. Even with SSE2 the Athlon is still faster. And remember, very little software is recompiled after it's published.


INACCURATE:

Pentium 4 1.7GHz (Original): 110.5
Athlon 1.2GHz DDR (Original): 90.5
Athlon 1.2GHz DDR (PII): 67.0


It is extremely difficult for me to believe that you made that mistake legitimately, seeing as how you actually made a point about recompilation.

Wicked Mystic wrote:can you explain why Intel now has a strong x87 FPU "just for compatibility reasons"? On the Pentium 4, that "x87 for compatibility reasons" was very weak. And that was in the year 2000.


No, Willamette was weak in general. Here, look:

http://techreport.com/review/2347/intel ... ocessor/13

Damage wrote:First and foremost, it's clear the Athlon 1.33GHz is still the big dawg of PC processors. It's easily the fastest x86-compatible CPU around. Intel's new entry, the 1.7GHz Pentium 4, performs about like a 1.2GHz Athlon in most situations.

Not that there's anything wrong with that.


In other words, it was just generally weaker.

Damage wrote:the Pentium 4's performance balance is pretty darn good. By that I mean it handles a variety of types of math—integer, floating point, SIMD—equally well (more or less).


Res ipsa loquitur

Damage wrote:In my original Pentium 4 review I echoed some sentiments I've heard in a number of places before and since, that the P4's FPU isn't very good. Truth is, the Pentium 4's balance between integer and floating-point performance is very, very similar to the Pentium III's. And it's not far from the Athlon's, either. Sure, the processor executes a relatively low number of instructions per clock, but the P4's floating-point units aren't especially bad in this respect, even without the help of SSE or SSE2.


Res ipsa loquitur x2
Last edited by Flying Fox on Thu Mar 20, 2014 10:55 am, edited 1 time in total.
Reason: Edit by mod - fixed a link
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7884
Joined: Tue Aug 27, 2002 6:35 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 10:49 am

You can't win this with logic or facts. No evidence or proof is going to change his mind because he's been *successfully* trolling you all for three pages now.

Look, you're arguing about mundane deficiencies of an abandoned architecture now. This topic is titled current-gen.
<insert large, flashing, epileptic-fit-inducing signature (based on the latest internet-meme) here>
Chrispy_
Minister of Gerbil Affairs
Gold subscriber
 
 
Posts: 2065
Joined: Fri Apr 09, 2004 3:49 pm

Re: Question on current-gen Intel vs. AMD processors

Posted on Thu Mar 20, 2014 12:14 pm

just brew it! wrote:
Wicked Mystic wrote:That is a HUGE difference. Even with SSE2 the Athlon is still faster. And remember, very little software is recompiled after it's published.
...
The problem is that SSE2 needs software support. While recompiling is quite easy, very little software gets it after shipping.

SSE2 has been around for over a decade, and is now widely deployed. Most commercial software that is still being actively supported/developed has been recompiled for it by now, provided there's a significant benefit from it. Media codecs, game engines, and the like (which tend to benefit greatly) also tend to be on shorter release cycles since new features and enhancements get added regularly.


We were talking about the Pentium 4. What you said did not help much back when Pentium 4 processors were useful.

just brew it! wrote:The transition away from Windows XP and 32-bit is also driving recompilation of a lot of applications.

Open Source software (which represents a growing percentage of the overall market, even on Windows) gets recompiled A LOT. I'd guesstimate that 99% of all the software running on my Ubuntu Linux box was compiled within the past 3 years.


32-bit software is still around because of 32-bit Windows Vista/7/8(/9??).

Most Windows software is not open source, especially games. Linux is a different thing.

just brew it! wrote:Why do you keep harping on the "but recompilation is required" point anyway? Use of GPU compute isn't just recompilation, it's a complete REWRITE. Which do you think is the path of least resistance, for a developer looking for a boost in FP performance?


We were talking about the Pentium 4.

BlackStar wrote:To be fair, recompiling software won't magically give you SSE2 support. The compiler might insert one or two instructions here and there, but you won't actually see any significant improvement until you change your memory layout to fit SSE2 and rewrite your math using compiler intrinsics. SSE2 is no magic bullet and autovectorization is still a pipe dream.

Also keep in mind that most software today is written in languages that don't even have access to SIMD instructions (Java, C#, Python, PHP, Ruby, need I go on?) Hell, even PhysX was using x87 instructions until recently.

So yeah, desktop software today is still written for legacy x87 instructions.


This is funny because with the Pentium 4 (that was in the year 2000) Intel sent a clear message to get rid of x87.

Savyg wrote:
Wicked Mystic wrote:How many 64-bit games are around?

Your arguments seem a bit dated. I'm too tired to care though so I'll just note these points.

Battlefield 4, Warframe, Need for Speed: Rivals, Titanfall, and World of Warcraft all have 64-bit executables or are 64-bit only... I'm sure I've missed a few.

I can count the number of major dx9 only releases on one hand this year, where a few years ago I might've agreed with you on some points.

Brain. fried. g'night.


That makes for a few titles out of thousands.

Same goes for DX9. Hopefully the PS4 and X1 will gain popularity soon.

Chrispy_ wrote:Why isn't this thread locked yet? It's a perfect example of one person with an agenda using subterfuge and distraction to wind people up and keep the argument going.

There is nothing productive beyond page 2 because you're all playing the same impossible game.


You have a lot to say on this topic. Why don't you just go away?

Glorious wrote:
Wicked Mystic wrote:Then again, the six-core part is 130W TDP.


Yeah, because the -E variants are for the MARKET SEGMENT of performance/enthusiasts.


Any cheaper six cores around?



What?

Glorious wrote:lolwut? It was faster than the competition, which had an IMC.


In practice you can notice the difference between a Phenom II and a Core 2 Quad. The Phenom II is faster.

http://www.anandtech.com/show/2715/4

After playing through the several levels on each platform, we thought the Phenom II 940 offered a better overall gaming experience in this title than the Intel Q9550 based on smoother game play. It is difficult to quantify without a video capture, but player movement and weapon control just seemed to be more precise.


You just look at benchmarks and state your opinions with zero experience.

Glorious wrote:So why didn't they do that, then? Why did they design a new architecture instead?

Oh, right: because they couldn't improve per-core performance. It wasn't a deliberate design choice, it was desperation.

You know, like I said.


Because they thought that single-threaded performance is overrated? That's why.

Glorious wrote:
Wicked Mystic wrote:The importance of FPU performance has gone down, if that's better.


It isn't. You are just making stuff up.


Whatever.

Glorious wrote:
Wicked Mystic wrote:Exactly, so there's no more need to do all rendering on the CPU, so fewer CPU resources are needed.


Right... Ok. So, by that logic, we should all currently be running FPU-weakened Pentium-100s with gigantic GPUs attached. Because, after all, fewer CPU resources will be needed and it's not like we have ANYTHING ELSE to use them on. :o

Ever heard of the Jevons Paradox?


Supercomputers are going in that direction. We, however, need backwards compatibility for legacy reasons.

Glorious wrote:
Wicked Mystic wrote:That is quite well. That's not catastrophic.


Your criteria are completely arbitrary.


Much better than on POV-ray, right?

Glorious wrote:
Wicked Mystic wrote:DirectX 9 came out in 2003. At that time we had single-core processors.


libc came out in the 1970s. Back then we had core-memory.

So what?


So DirectX 9 software is legacy.

Glorious wrote:
Wicked Mystic wrote:But most people do not. Including gamers.


1. Sabotage FPUs
2. ...???
3. PROFIT!

Look, what is the point of getting rid of the FPU? Yes, in a majority of cases it isn't really necessary, but that doesn't mean it isn't important when it is necessary. The 80/20 rule doesn't mean that you don't need the rest of the features!

Here, read this: http://www.joelonsoftware.com/articles/ ... 00020.html

Also, why do you keep focusing specifically on "Gamers" when you are talking about the ISAs and performance of general-purpose CPUs and software in general? It just reinforces my belief that you are being a raging absolutist because you only care about one use case and can't even conceive of any differing use cases.


Very few people actually need FPU power. Or are you claiming that AMD engineers were just stupid when they decided to reduce FPU power and put more into INT power?

Gamers used to need a lot of FPU power. They don't need as much now.

Glorious wrote:
Wicked Mystic wrote:I have made my point clear many times: there is no need for a high-powered FPU except in some very special situations.


No, you've also railed against legacy software, specifically x87, directx9 and 32-bit. Your "point" is all over the place.

Everyone agrees that FP isn't used in most software. Everyone agrees that in most cases computers are now "fast enough."

That doesn't mean, whatsoever, that there is any justification in getting rid of FPUs or making CPUs slower. Because, when people need it, they need it. You are "solving" a problem that doesn't exist.


Getting rid of strong FPUs vs. getting rid of FPUs altogether. That is the difference. Again, Intel thought exactly that way 14 years ago. AMD thought that way in 2009. Were both wrong?

Glorious wrote:
Wicked Mystic wrote:OK, that's your definition of it. I consider even some new software legacy, like DirectX 9. Many games still use it, and in 2003 we had single-core Pentium 4 or single-core Athlon 64 processors. Considering processor development, that is very legacy.


Your definition of it is useless and that's why NO ONE cares or agrees.


Useless or not, you seem to care.

Glorious wrote:
wicked mystic wrote:Pentium 4 1.7: 110.5
Athlon 1.2: 67.0

That is a HUGE difference. Even with SSE2 the Athlon is still faster. And remember, very little software is recompiled after it's published.


INACCURATE:

Pentium 4 1.7GHz (Original): 110.5
Athlon 1.2GHz DDR (Original): 90.5
Athlon 1.2GHz DDR (PII): 67.0


It is extremely difficult for me to believe that you made that mistake legitimately, seeing as how you actually made a point about recompilation.


As I already said, I don't care a lot about benchmarks. I didn't even read it that well.

This is much better http://ic.tweakimg.net/ext/i/987861386.gif

Notice that FPU part.

Glorious wrote:
Wicked Mystic wrote:can you explain why Intel now has a strong x87 FPU "just for compatibility reasons"? On the Pentium 4, that "x87 for compatibility reasons" was very weak. And that was in the year 2000.


No, Willamette was weak in general. Here, look:

http://techreport.com/review/2347/intel ... ocessor/13


But the x87 FPU was very weak. The Pentium 4 was designed for very high clock speeds. That didn't work out, however.

Glorious wrote:
Damage wrote:First and foremost, it's clear the Athlon 1.33GHz is still the big dawg of PC processors. It's easily the fastest x86-compatible CPU around. Intel's new entry, the 1.7GHz Pentium 4, performs about like a 1.2GHz Athlon in most situations.

Not that there's anything wrong with that.


In other words, it was just generally weaker.


Generally weaker because clock speeds were not high enough.

Exactly same can be said about AMD Bulldozer.

Glorious wrote:
Damage wrote:the Pentium 4's performance balance is pretty darn good. By that I mean it handles a variety of types of math—integer, floating point, SIMD—equally well (more or less).


Res ipsa loquitur

Damage wrote:In my original Pentium 4 review I echoed some sentiments I've heard in a number of places before and since, that the P4's FPU isn't very good. Truth is, the Pentium 4's balance between integer and floating-point performance is very, very similar to the Pentium III's. And it's not far from the Athlon's, either. Sure, the processor executes a relatively low number of instructions per clock, but the P4's floating-point units aren't especially bad in this respect, even without the help of SSE or SSE2.


Res ipsa loquitur x2

That is total BS. The Pentium 4 had some strengths (like a good L2 cache and a much better memory interface). However, x87 was very weak; it was meant to be that way. How could Intel get support for SSE2 (and kill x87) if x87 speed was good?

Those quotes show that Damage looks to be an "I look at benchmarks and form opinions" writer.

Chrispy_ wrote:You can't win this with logic or facts. No evidence or proof is going to change his mind because he's been *successfully* trolling you all for three pages now.

Look, you're arguing about mundane deficiencies of an abandoned architecture now. This topic is titled current-gen.


Go to hell. You have made your point many times and nobody cares, idiot.

At least you got what you wanted: I quit here. Have fun. Bye.
Wicked Mystic
Gerbil
 
Posts: 51
Joined: Thu Oct 03, 2013 6:36 am

Re: Split from: Question on current-gen Intel vs. AMD proces

Posted on Thu Mar 20, 2014 12:46 pm

*Click*
There is a fixed amount of intelligence on the planet, and the population keeps growing :(
morphine
Grand Admiral Gerbil
Silver subscriber
 
 
Posts: 10063
Joined: Fri Dec 27, 2002 8:51 pm
Location: Portugal (that's next to Spain)
