
PS4 architect discusses console’s custom AMD processor

Since February, we’ve known that Sony’s upcoming PlayStation 4 console will have a "supercharged PC architecture" based on a custom AMD processor with GCN-derived graphics and eight Jaguar CPU cores. The chip’s GPU and CPU elements will share 8GB of GDDR5 memory over an interface with a claimed 176GB/s of bandwidth, and the graphics component promises horsepower comparable to the Radeon HD 7850. Gamasutra recently discussed the PS4’s underlying hardware with lead architect Mark Cerny, providing new insight into the guts of the console.

Cerny says the PS4’s custom silicon incorporates not only the CPU and GPU, but also a "large number of other units." The chip has a dedicated audio unit to perform processing for voice chat and multiple audio streams. It also has a hardware block designed explicitly for zlib decompression. The main processor is backed by a secondary chip that enables an ultra-low-power mode for background downloading. In that mode, the CPU and GPU shut down, leaving only the auxiliary chip, system memory, networking, and storage active.
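The zlib block is easy to put in context: inflate is exactly the kind of work that otherwise burns CPU cycles when streaming compressed assets. Here is a minimal software sketch of what the hardware offloads, using Python's standard zlib module; the asset payload is invented for illustration.

```python
import zlib

# Invented example payload; in a game this would be a streamed asset.
asset = b"texture data " * 1000

compressed = zlib.compress(asset, level=9)
restored = zlib.decompress(compressed)   # the step the PS4 does in hardware

assert restored == asset
print(f"compressed {len(asset)} bytes down to {len(compressed)}")
```

Doing this on a dedicated unit frees the Jaguar cores entirely; the software version above is what every PC game pays for in CPU time.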

A 256-bit interface links the console’s processor to its shared memory pool. According to Cerny, Sony considered a 128-bit implementation paired with on-chip eDRAM but deemed that solution too complex for developers to exploit. Sony has also taken steps to make it easier for developers to use the graphics component for general-purpose computing tasks. Cerny identifies three custom features dedicated to that mission:
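The 176GB/s claim is consistent with a 256-bit bus: 32 bytes move per transfer, so the claimed bandwidth implies the GDDR5 data rate. A quick back-of-the-envelope check — the 5.5GT/s figure is inferred here, not quoted by Sony:

```python
bus_width_bits = 256
bytes_per_transfer = bus_width_bits // 8      # 32 bytes across the full bus

data_rate_gt_s = 5.5   # inferred GDDR5 effective rate, not an official figure
bandwidth_gb_s = bytes_per_transfer * data_rate_gt_s

print(bandwidth_gb_s)  # 176.0, matching the claimed 176GB/s
```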

  • An additional bus has been grafted to the GPU, providing a direct link to system memory that bypasses the GPU’s caches. This dedicated bus offers "almost 20GB/s" of bandwidth, according to Cerny.
  • The GPU’s L2 cache has been enhanced to better support simultaneous use by graphics and compute workloads. Compute-related cache lines are marked as "volatile" and can be written or invalidated selectively.
  • The number of "sources" for GPU compute commands has been increased dramatically. The GCN architecture supports one graphics source and two compute sources, according to Cerny, but the PS4 boosts the number of compute command sources to 64.
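To make the last point concrete, here is a toy model of command sources: one graphics queue plus 64 compute queues feeding the GPU, drained round-robin. The queue names and the scheduling policy are invented for illustration; Sony has not described its actual scheduler.

```python
from collections import deque

# One graphics source plus 64 compute sources, as described for the PS4.
graphics_queue = deque(["draw_scene", "draw_ui"])
compute_queues = [deque([f"job_{i}"]) for i in range(64)]

# Simple round-robin drain: graphics first, then one command per compute queue.
executed = []
while graphics_queue or any(compute_queues):
    if graphics_queue:
        executed.append(graphics_queue.popleft())
    for q in compute_queues:
        if q:
            executed.append(q.popleft())

print(len(executed))  # 66: 2 graphics commands + 64 compute jobs
```

The point of many sources is not throughput of this toy loop but that independent subsystems (physics, audio raycasts, AI) can each own a queue without coordinating through one choke point.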

If developers take advantage of the PS4’s apparently robust support for mixed GPU workloads, we could see more compute tasks being offloaded to the GPU in PC games. Let’s hope developers don’t rely too much on Sony’s customizations, though.

Interestingly, Cerny has little to say about the PS4’s Jaguar-based CPU. All the focus on general-purpose GPU computing suggests the console’s CPU component is relatively weak, which isn’t a big surprise. Jaguar will replace AMD’s low-power Bobcat CPU architecture, and it’s designed primarily with mobile systems like tablets in mind.

Responses to “PS4 architect discusses console’s custom AMD processor”

  1. GOOD but obviously not BETTER than a high end PC.

    “the graphics component promises comparable horsepower to the Radeon HD 7850”

    That says it all: the Radeon HD 7850 is about 40% slower than a GeForce GTX 680.

  2. Here’s what an Epic programmer [url=<]had to say[/url<] about that demo:

    [quote<]Update: Brian Karis, senior graphics programmer at Epic Games adds some more insight in the comments below, explaining some of the more obvious differences - particularly in terms of the very different lighting schemes. At the technical level, the two demos are closer than it seems:

    "The biggest changes actually came from the merging of two separate cinematics, the original Elemental and the extended Elemental we showed at PS4's launch event. Each had different sun directions and required some compromises to join them. This resulted in some major lighting differences that aren't platform related but were due to it being a joined cinematic. Another effect: in the original you could see the mountains through the door, where in the merged one we made the view through the door white, since the mountains outside were no longer the same. Same deal with the mountain fly-by. The old mountain range doesn't exist in the new one. These changes from the merge make direct comparisons somewhat inaccurate.

    "Feature wise most everything is the same: AA, resolution, meshes, textures (PS4 has tons of memory), DOF (I assure you both use the same Bokeh DOF, not sure why that one shot has different focal range), motion blur.

    "Biggest differences are SVOGI has been replaced with a more efficient GI solution, a slight scale down in the number of particles for some FX, and tessellation is broken on PS4 in the current build, which the lava used for displacement. We will fix the tessellation in the future."[/quote<]

  3. That is what everyone keeps reading the wrong way.
    They literally state “offering up to 113 percent improved CPU performance compared to the prior generation AMD Embedded G-Series APU, and up to a 125 percent advantage compared to the Intel Atom when running multiple industry-standard compute intensive benchmarks”.
    So it’s not an improvement of 13% and 25%, and it’s not 113% and 125% of the performance of those other parts; it is 113% and 125% over the performance of Brazos and Atom, and thus more than twice as fast.
    Perhaps their wording is off, but then again, why would those benchies show the exact same thing? (Granted, I’m not familiar with that site.)

    Seems to me like they once again shot themselves in the foot (or perhaps it was deliberate) by greatly understating their supposed performance improvement.

    We’ll see soon enough I suppose, but if this is true, then at least the CPU of the PS4 and Xbox Infinity might not be as bad as people think at this moment.
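The reading dispute above comes down to simple arithmetic. Taking an arbitrary baseline score of 100 for the older part, the two interpretations of AMD's wording give very different results:

```python
baseline = 100  # arbitrary score for the older part

# "113% improved" means the old score plus 113% of it:
improved_reading = baseline + baseline * 113 // 100   # 213
# "113% of the performance" would mean merely:
percent_of_reading = baseline * 113 // 100            # 113

print(improved_reading, percent_of_reading)  # 213 113
```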

  4. And when you realize that you can’t, you will blame it on lazy game developers and their lousy PC ports?

  5. That source may be in error, as AMD has posted a meager metric (on their own website) where a quad-core Jaguar only outperforms an Atom CPU from 2010 by just 25%.


  6. It seems I was not clear: apparently, Jaguar cores in single-threaded loads are equal to a Core 2 Duo. Furthermore, a quad-core Jaguar at 2.0GHz is apparently as fast as a quad-core Core 2 (Q6600) at 2.4GHz in multithreaded workloads.

    According to that source, that is. An 8-core Core 2-class CPU would be quite excellent imho, certainly when coded for with things like AVX.

  7. The PPE performance has been tested; it’s those SPEs that are harder to gauge:

    [url<][/url<] In a word, abysmal, even compared to the ancient 1.6GHz G5.

  8. If you believe a 1.6GHz single-core Atom outperforms a 3.2GHz PPE core by 50% on average, there is no helping you. An IPC here, an IPC there, proven, case closed. But go ahead and make the last post; I’m really waiting for your next assertion of superiority, it really makes my day. Say something really clever for me.

  9. 4 Jaguar cores at 1.6GHz have a TDP of 15W (G-series). Double that for an 8-core model, so 30W. How much for 2GHz? My guess is far from double again, maybe 40W? Still very little compared to the GPU, which will have a TDP of around 80W if it compares to the Radeon HD 7790 card.
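The estimate above can be written out: doubling cores roughly doubles TDP, and a naive linear scaling with clock (ignoring the voltage increase that would push it higher) lands near the 40W guess. All numbers here are the commenter's, not official figures:

```python
tdp_quad_1_6ghz = 15.0                    # W, quad Jaguar at 1.6GHz (G-series)
tdp_octo_1_6ghz = tdp_quad_1_6ghz * 2     # 30 W for eight cores

# Naive linear-with-clock estimate for 2.0GHz; real silicon would need
# more voltage, so this is effectively a lower bound.
tdp_octo_2_0ghz = tdp_octo_1_6ghz * (2.0 / 1.6)
print(tdp_octo_2_0ghz)  # ~37.5 W, in line with the ~40 W guess
```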

  10. Yeah, they mention the lower power at the bottom of the page, in that giant pile of legalese. I would imagine that an Atom from 2010 is abysmal for low-power states, given the era. I just wish it was a more modern comparison, but it is a marketing slide 😛

    We’ll see what it comes down to when Damage gets his reviews done 🙂

  11. Oh yeah … … thanks!

    That’s not [b<] precisely [/b<] what I sound like. I raised hell, and I raised a debate .. though I'll grant it may not have been a hell of a debate ..

  12. “blindly dogmatic PC fanbois spewing illogic = easy fun”

    That is precisely what you sound like. Not sure how that raises the debate or makes them look any more stupid than you make yourself look.

  13. One point worth considering is that while that D525 Atom may be “old”, Atom’s performance has barely altered in IPC terms. Intel have been ramping power consumption instead, to get it into the phone space. So it may not be too bad a comparison after all!

  14. I think it’s “clean-sheet” in the same way that the original Core Duo CPU was – they took an existing proven design and rebuilt it for power efficiency, removing features as necessary to achieve that. In this case I believe AMD used K8. The upshot is that yes, it’s definitely better in terms of IPC than Bulldozer. Each core is also a “full” core.

  15. If what you’re trying to elaborate here is that it’s nice to see an AMD CPU in a console then yes, that’s fair enough and not really worthy of the down-votes in itself.

    As a side note, I do wonder who outside of the tech industry is aware of AMD/ATi history… I think we tend to take it as a given that people understand that AMD’s graphics section was ATi, was ArtX, etc.

  16. “programmers will go crazy exploring the potentials and optimizing for the PS4 just because it’s so satisfying.”

    Hahaha wut. But seriously, the rest of your post was pretty damn good, why fling that in?!

  17. AMD has at least 1 composite score up for Jaguar, sorta…

    [url<][/url<] >> At the bottom: “AMD GX-415GA scored 209, AMD G-T56N scored 98, and Intel Atom D525 scored 93, based on an average of Sandra Engineering 2011 Dhrystone, Sandra Engineering 2011 Whetstone and EEMBC CoreMark Multi-thread benchmark results.”

    From that, a *very* rough idea: 209 / 4 = 52pts per core, rounded, for the quad Jaguar @ 1.5GHz stock speed; 93 / 2 = 46pts per core, rounded, for the dual-core Intel Atom D525 @ 1.83GHz stock speed. Bring the average speed up on the Jaguar by x1.1816 to bump the core speed to a theoretical 1.83GHz to get 61.5pts per core, x4 for ~245pts overall in rough theory. Not shabby at all... but by AMD’s own numbers it is clearly going after the Atom and not meant to compete against more complex cores. So it’s neat, it is better than an Intel CPU for the target market... but a computing x86 monster it is not.

    On the less exciting front, the D525 is an Atom processor from *2010*... and not even OpenCL capable... so it is a VERY poor comparison, and I think it would be a very close tie with a more modern, OpenCL-enabled Intel Atom benched against it. One thing to remember is that for every core you add, you lose efficiency, so upping the core count to 8, while increasing the overall theoretical output, reduces the per-core output by a small margin.

    The rest of the blurb shows AMD gimping the Atom by only giving it 1 gig of RAM... kinda lame: “AMD G-T56N system configuration used iBase MI958 motherboard with 4GB DDR3 and integrated graphics. AMD GX-415GA system configuration used AMD ‘Larne’ Reference Design Board with 4GB DDR3 and integrated graphics. Intel Atom D525 system configuration used MSI MS-A923 motherboard with platform integrated 1GB DDR3 and integrated graphics. All systems running Windows® 7 Ultimate for Sandra Engineering and Ubuntu version 11.10 for EEMBC CoreMark. EMB-37”
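The per-core arithmetic in that comment can be replayed directly from AMD's published composite scores. The 1.1816 clock multiplier is the commenter's own fudge factor (strict linear scaling from 1.5GHz to 1.83GHz would be 1.22), so the total below differs slightly from their ~245:

```python
# AMD's published composite scores, as quoted in the comment above.
gx415ga_score, jaguar_cores = 209, 4   # quad Jaguar @ 1.5GHz
d525_score, atom_cores = 93, 2         # dual-core Atom D525 @ 1.83GHz

jag_per_core = gx415ga_score / jaguar_cores   # 52.25 pts
atom_per_core = d525_score / atom_cores       # 46.5 pts

# Scale Jaguar's per-core score to a theoretical 1.83GHz using the
# commenter's 1.1816 multiplier, then back up to four cores.
jag_scaled = jag_per_core * 1.1816
print(round(atom_per_core, 1), round(jag_scaled, 1), round(jag_scaled * 4))
```

Either way the conclusion holds: per core, the 1.5GHz Jaguar edges out the 1.83GHz Atom even before clock normalization.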

  18. Props. That was nicely done, except maybe for the “void of reason, tiring as infinity” bit .. an overly gratuitous ad hominem rather jarringly out of character for a Rod Serling narrative … and I might have gone with “spiglight zone”.

  19. No, it is not valid to compare CPUs that way. I’ve already demonstrated that, while you have yet to provide a shred of evidence to back up what you’re saying. Your reasoning does not matter vs. the benchmarks.

    The fact is there is nothing to “confront” in your comparison of the PPE/Atom; you’re using the same faulty logic to compare them that you used to say “similar CPUs should be within 10% performance of each other”. And since I have no benchmarks of Atom/PPE running the same software, I haven’t made any mention of their performance in comparison to each other at all. What I have said is that generally an in-order CPU will lose big to an OoOE CPU, that is all.

    And let’s be clear here: the best explanation of Xenon’s and Jaguar’s IPC performance has been numbers from MS and AMD. There are no guessing or logic games on my part, and unless you can show me those 2 companies are lying in this case, don’t bother replying.

    Also, you’re not learning anything because you’re not understanding what I’m writing, nor have you even tried to read the RWT article I linked. If you could seriously understand that article you’d have a better understanding of how many CPUs work than most people out there.

  20. Heh, sorta; I’m just not, as you put it, anal about these things. I drive people nuts calling stuff thingies and whatzits just to watch ’em twitch ;/

  21. There is a level of speculation beyond that which is known to normal TR commenters. It is a level void of reason and as tiring as infinity. It is the middle ground between trolling and honest belief, between plussed and minused comments, and it lies at the pit of frame-time graphs and the summit of AMD’s knowledge. This is the dimension of imagination. It is an area which we call the spigzone.

  22. [quote<]You may be right, but I remember the 360 developer kits were literal G5 processors (Power Macs in fact, funny enough) because the architectures were so similar.[/quote<]

    Well, there was probably no other suitable machine. All IBM’s stuff is/was servers, usually big ones. I’d say this is rather like PS4 developers being seeded with BD-family machines.

    [quote<]It caused some issues because the fully featured G5s were actually faster than the cut down Xenon (and by extension the PPE in Cell) due to some cache capacities or bandwidths or something.[/quote<]

    Actually, the size of the performance difference would be very interesting to know, since so little is known about PPE/Xenon performance. Other than “bad”.

  23. What? Computer nerds who aren’t anal retentive about nomenclature? That’s unpossible!

  24. [quote<]Also I have not made any claims about Atom's performance vs the PPE, stop whacking on strawmen.[/quote<]

    Ah, if only we could forget about the things in life we don't find convenient! I'm talking about Atom and comparing CPUs based on features because these are valid techniques. You seem to be very fascinated with the performance impact of OoO, but you refuse to confront my comparisons of Atom to the PPE core. Do they not have extremely similar architectural features, [i<]including being in order[/i<]?

    It seems that the best explanation you can offer for the 0.2 vs 1.1 IPC claim is that the PPE core is in-order. Well, look over here: [i<]so is the Atom core[/i<]. Your position is equivalent to claiming an Atom core would crush the PPE core with something like 50% greater performance, in the general case. This is insane, unsupportable. It doesn't matter that you have MS saying 0.2 IPC over here and AMD saying 1.1 or 1.0 IPC over there; we don't know where those numbers came from, and they cannot be compared. People have benchmarked dual-core Atom and its ilk with big graphics cards, and I recall seeing very bad performance. Atom is not crushing the PPE core as seen in the PS3. This seems to me to defeat your claims, which require Atom to be faster than the PPE core.

    [quote<]You do not understand the subject you are trying to talk about.[/quote<]

    Seriously, OoO is not exactly the most advanced topic out there. I've [i<]heard[/i<] of it.

    [quote<]You made the claim that CPU's could be compared based on features and that CPU's with similar features/specs will end up performance within about 10% of each other, thus Jaguar couldn't possibly be over 5 times the IPC per clock vs. Xenon.[/quote<]

    I'll admit I pulled 10% out of thin air, but I do stand by the point in general. CPUs really do perform a lot like their specifications say they should. And I don't know why you think Bulldozer and Sandy make an interesting comparison for this discussion. C2D vs K8 was a good one, and forced me to revise the 10% statement, but not BD vs Sandy.

    [quote<]Now you're arguing about implementation details, which if your original idea was correct, shouldn't really matter all that much. This is a clear cut example of shifting goal posts.[/quote<]

    Thanks for clarifying what you meant, but I think it's all part of my argument.

    [quote<]The cache misses relates to branch prediction and not cache performance for Xenon or the PPE FYI. Xenon's L2 was perfectly capable of feeding all the CPU's.[/quote<]

    Now, let me start by saying that my position does not strictly require that the PPE core in the PS3 performs better per clock than the cores in the Xenon, although it would help me. This is because I do not agree that you can compare this 0.2 from MS to some 1.1 or 1.0 from AMD. I can be right even if the PPE core in the PS3 gets 0.2 IPC; you'd just have to find the IPC in "the same code" on the AMD parts to be lower than stated.

    That said, I think the dismal specifications of that shared L2 are very relevant. Name another CPU with such an under-provisioned cache. People were blaming BD's L3 for holding up its performance, I recall, and it looks better off than what Xenon calls L2. You want to say the comparison is valid, but in doing so you simply try to gloss over the potentially serious unknowns associated with a poor L2 implementation and lots of active threads. At the same time you say it's too big a leap to compare Atom to PPE.

    So to clarify this all for you, there are three parts to my argument. The 5x to 5.5x performance-per-clock difference is not reasonable because: (1) it is not consistent with the performance of other similar processors (Atom); (2) processors with similar features can be expected to perform similarly (let's say within 30%); and (3) if the performance gap were as bad as you claim, it would lead to nonsense conclusions, like a single-core Atom being about 50% faster than the PPE core as used in the PS3.

    If you will not confront the problem of Atom vs PPE, I do not intend to continue this discussion. Frankly, this is not an entertaining discussion, and however much you profess to know that I do not, I am learning quite little.
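For reference, the disputed "5x to 5.5x" figure is simply the ratio of the two vendors' quoted numbers (MS's ~0.2 IPC for Xenon, AMD's ~1.0–1.1 for Jaguar); the argument above is about whether those numbers are comparable at all, not about the division:

```python
xenon_ipc = 0.2             # MS's quoted figure for Xenon
jaguar_ipc = (1.0, 1.1)     # AMD's quoted range for Jaguar

ratios = [ipc / xenon_ipc for ipc in jaguar_ipc]
print(ratios)  # [5.0, 5.5] -- the "5x to 5.5x" claim
```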

  25. [quote<]Most latency can be hidden[/quote<]

    That's the crux right there – how well latency will be hidden. For most console applications, memory latency might not be a big deal, as the console is largely only doing one thing at a time. However, once you get into multiplayer gaming, with communication apps, games checking for/downloading software updates on the fly, connecting with authentication servers for DRM purposes, background OS housekeeping processes, etc., you wonder whether memory latency would be an issue.

    Unified memory is great for development flexibility. However, one would think that going with 4-8GB of DDR3 and 4GB of GDDR5, accessible to both the GPU and CPU components of the APU through some kind of link, would still have provided copious GDDR5 for graphics, where high bandwidth is most useful, plus enough low-latency memory to handle everything else, without costing more.

    As far as I understand, GDDR5 is good for GPGPU stuff when the data being loaded into memory is uniform (i.e., chunks of the data set being analyzed). If the GPU were processing a bunch of smaller heterogeneous operations – which would not be ideal for GPGPU anyway – the latency might be more of a factor.

  26. I see what you mean, things like going through the PCIe bus rather than having the CPU and GPU having one unified address pool are limitations. APUs are going in the right direction, but Windows (and specifically the lack of a real time operating system for PCs) is still a limitation.

  27. [quote<]The PS3 had some pretty neat hardware at the time too, but the Xbox 360 completely reined that in simply by being subpar.[/quote<]

    Had the Cell been paired with something more like the 360's GPU, maybe. But it ended up being used to fill in for the weak RSX. The 360 had the first unified-shader GPU shipping in volume as far as I know; it was like a 1k/2k series hybrid. Meanwhile the RSX was a cut-down 7800, with some things as far down as 7600 features. Developers had to offload GPU tasks to the Cell just to bring the RSX to par. That ended up using up most of that theoretical advantage. Also, the unified RAM, being able to split system/video memory however the developer wanted, was a huge boon.

  28. You made the claim that CPUs could be compared based on features and that CPUs with similar features/specs will end up performing within about 10% of each other, and thus that Jaguar couldn’t possibly have over 5 times the IPC per clock vs. Xenon. Sandy Bridge vs Bulldozer alone proves this idea wrong. Also, MS themselves have said you’re dead wrong WRT Xenon’s IPC, and since we already know Jaguar’s IPC, good luck trying to argue theory which is clearly in the wrong against facts, even if you ignore the other info I’ve already posted.

    Now you’re arguing about implementation details, which, if your original idea was correct, shouldn’t really matter all that much. This is a clear-cut example of shifting goal posts.

    The cache misses relate to branch prediction and not cache performance for Xenon or the PPE, FYI. Xenon’s L2 was perfectly capable of feeding all the CPUs. Branch prediction is generally considered the Achilles’ heel of any in-order CPU and is one of the main reasons why they tend to lose so badly vs. an OoOE CPU. You do not understand the subject you are trying to talk about.

    Xenon is nearly the exact same CPU as the PPE; the 2 major differences were that it had VMX128 SIMD instead of AltiVec and the larger 1MB L2, which was shared across all 3 cores. That, BTW, is a good thing and generally regarded as a huge advantage vs. Cell/PPE memory management, because it greatly simplifies memory management, which is no joke when you’re trying to manage 6 threads.

    You can reject my claims all you like but that doesn’t make you correct, especially since everything I have said is backed up by actual real world benchmarks and data from MS/AMD. Data of that sort really isn’t up for debate or a matter of opinion. In the end making comparisons is all well and good but they have to be correct to be worth anything!

    Also I have not made any claims about Atom’s performance vs the PPE, stop whacking on strawmen.

  29. So you’re spewing illogical statements back? Really? Way to fail to take the high ground.

  30. Incorrect. Cache sizing and render buffers are utterly crucial in this regard; not to mention raw GPU horsepower.

  31. “I say the brute power to overtake the PS4 isn’t there yet.”
    You have precisely fuck-all evidence for that claim.

    You keep talking about efficiency like it’s *the only metric*. It is not. Overall performance of a given system is a function of power available vs. power delivered to the task at hand. Your argument has been that the PS4 is *so good* at delivering that power that it will eclipse all existing PC hardware in performance terms for some time to come. Unfortunately for you that is complete hogwash.

    Go home, troll.

  32. Unified memory was the most requested feature by game developers. It did wonders for the Xbox 360. The flexibility allows developers to use as much or as little of it as they want for graphics without being constrained by the split or wasting some of it.

  33. [quote<]Now don't go shifting your goal posts, that isn't a way to have interesting or helpful discussion at all.[/quote<]

    Shifting goal posts? You'll have to elaborate on that.

    [quote<]It seems the performance differential between the 2 CPU's really is that large because Xenon kind've sucks a bit. L2 cache misses are mentioned as being particularly brutal. [/quote<]

    So the three-way shared 1MB half-speed L2 wasn't so hot for as many as six threads. Actually, I am quite surprised to see that number in writing from MS, but again, MS's configuration is much worse off than Sony's regarding L2 performance; the PS3 has one core with 512k of L2, one would hope at 100% of the core clock speed. And remember, we are concentrating on the Sony implementation here – an important distinction, it seems. I have to admit that I had not realized the significance of that until I went back and saw just what sort of L2 the Xbox 360 has.

    [quote<]And Xenon was a lot easier to work with due to its cache structure compared to the PPE IIRC. [/quote<]

    Well, maybe if you are involving the SPEs somehow, but otherwise I am unaware of any reason that the PS3's L2 should be especially hard to use. It's just an L2 cache, right? Perhaps you can provide some more links to illustrate your claim.

    [quote<]I think the numbers given in the rest of your post are derived from faulty reasoning and are not worth analyzing for reasons already mentioned earlier.[/quote<]

    I reject your claim. How do you survive life if you can't make comparisons? I even pointed out the ridiculous situation that a cell phone processor could be expected to outperform the PPE, or that the original Xbox would be almost as fast, or that a Wii CPU would be a fearsome competitor. Faulty reasoning, you say. Nice.

    Let's say that MS manages to get 0.2 IPC when the Xbox 360 is loaded up. This doesn't upset my position, because the 360 appears to have a really bad L2 implementation for 6 threads; the latency for even a single thread must be rather more like an L3 cache from a modern x86 implementation. But that is a very different situation from what Sony faces in the PS3. You say it is impossible to compare the Atom to a PPE, totally incomparable, yet you will compare PPE to Xenon when there is evidence of a major architectural difference. Atom at least seems to have a comparable L2 implementation to the PPE as seen in the PS3. Seriously, come here and explain why you think little old Atom should have about 3x the IPC of the PPE core. [i<]Really, I'd like to hear you explain that.[/i<]

    So, I still reject the claim that the Jaguar core will get 5x to 5.5x the performance per clock compared to the PPE core, [i<]specifically[/i<] as used in the PS3.

  34. That’s what developers wanted. There is a very high cost to moving memory between the CPU’s DDR3 pool and the GPU’s GDDR5 pool. Eliminating this narrow bottleneck dramatically opens up possibilities.
    Most latency can be hidden.

    GPGPU greatly benefits from GDDR5 vs. DDR3; it’s actually CUDA hardware’s greatest strength.
    If a Tesla card used DDR3, compute operations would be memory-limited.

  35. Hilarious :). The GTX 680 costs more than the whole PS4 as projected, and packs way more power, yet the result is near identical.
    And that’s without using HSA and the full 8GB of RAM.

    95% of gamers’ PCs won’t be able to run this demo as well as the PS4, so expect game developers to have to gimp console games for the PC market. Low-res texture packs and low-res worlds…

    Be ready to have to buy a 4GB GTX 680-class card to play PS4 games at full fidelity.

  36. You think more development will help the PS4 close that glaring gap? Then you’re going to have to concede the PC will benefit from further development as well. This is a moving target the PS4 just can’t catch. Even if the planets align for a miraculous event like a facebook era console bringing better visuals than a PC, you know how long that will last? A couple months. About as long as it takes for the next round of card releases. That’s how it was with the Xbox and the 360, but back then the consoles didn’t have to pull their punches to account for fitting bloatware and casual gaming accessories into the budget. I’m telling you this time the hardware will be obsolete before it even hits shelves.

  37. And here I am strictly a PC gamer, I don’t even own a console … boy did I get royally screwed.

  38. It’s an x86 APU, the processor is resolution agnostic, it’s the software that targets 1080p. Just like on your PC.

  39. From the Youtube description of that video:

    “What a difference a year – and new console hardware – can make. Check out the changes Epic made to Unreal Engine 4 in the transition from GTX 680 to PlayStation 4, but bear in mind that UE4 is still in development.”

    [b<] "but bear in mind that UE4 is still in development" [/b<] ... because ... the developers had just found out about the 8GB of GDDR5 RAM and the final development kits haven't shipped yet? You are aware that is a VERY preliminary look at what the PS4 is capable of, right?

  40. Hahahah… that’s where your argument completely falls down… if the PS4 is designed to target 1080p 60Hz performance, then the manufacturers will only choose hardware that is capable of that; even with all its optimizations, anything more would be inefficient and wasteful.

    A “target” that pc’s left in the dirt several years ago.

    Having said that resolution and frame rate considerations are only part of the story in terms of performance.

    Image quality is another aspect of performance. I’d much rather have 1080p 60Hz photo realism than 4k 120hz realism equivalent to what we have now.

    We all know that image quality in terms of texture detail, lighting, shadow effects, etc. has a huge effect on performance. Are you saying that the image quality the PS4 will produce at 1080p 60Hz will be far superior to what a PC can achieve? That is something that may have to be given some credence, or at least more consideration. I must admit that simply in terms of the amount of VRAM the PS4 has available, it gave me concerns as regards the texture detail it could handle when compared to discrete cards with much less memory. Of course, it may be argued that at 1080p current PCs can max out all the image-quality settings available, but we have yet to see how much these current effects may be “dialed up”, or what new image-quality-enhancing effects may be invented to specifically take advantage of the PS4 architecture, which, as the OP has highlighted, does have many design advantages over the PC.

  41. Bypass the talk and speculation – straight on to the example! [url<][/url<] Considering half of next gen games will probably use Unreal 4 it's pretty telling it looks worse on the PS4 vs current hardware. Fat load of good all that supposed "efficiency" does you.

  42. [quote<] You're cobbling together illogical analogies that have little to do with proving your assertions. It's muddy, illogical and irrational. [/quote<]

  43. The design only had 4GB of GDDR5 memory up until very near the end. Also, devs really, really wanted unified memory.

    Now, supposedly, on the new Xbox it’s 8GB of DDR3 and 32 megs of eDRAM, but both are usable by both GPU and CPU, so it has all the benefits of unified memory.

  44. Is no one wondering why Sony didn’t split up the memory between DDR3 and GDDR5? Would it not have made sense to have 4-8GB of DDR3 for most CPU-related memory requirements and 2-4GB of GDDR5 for graphics?

    GDDR5 is essentially DDR3 optimized for much higher bandwidth, but at the cost of higher latency. While high bandwidth is great for loading huge textures, would not the latency be bad for a CPU requiring memory access for multiple smaller simultaneous operations?

  45. As a budget PC gamer, my first step will be to target a computer that plays the same games as the consoles.. at low/medium settings. That likely won’t take long to find after the games come out.

    Then x years later I will buy a new computer, and by then fairly much anything I buy will run the games well enough for me at medium/high settings…

    Then x years after that, whatever I buy will play the games at max settings.

    As a PC gamer I replay games as I buy new computers.. so why would I care? Console power only serves to speed the progress of my games on my chosen platform. No matter the power of the console, I’m gonna have fun as hell over the next x years… I just wish the consoles had MORE so I could enjoy MORE POWER over those years.. that’s all.

  46. Now don’t go shifting your goal posts, that isn’t a way to have interesting or helpful discussion at all.

    You gave your reasoning, and I’ve given you 2 easily verifiable examples (i.e., K8 vs C2D, K10 vs Sandy Bridge) which show that it doesn’t pan out. There are plenty of ideas that work great on paper and then fail in the real world. Happens to everybody at one time or another.

    Bear in mind that those IPC numbers are averages. Effective IPC will vary from program to program on the same CPU; no getting around that. BTW, apparently Microsoft themselves did an optimization presentation in 2010 and also mentioned that they were only getting 0.2 IPC out of Xenon. Slide 16 from the PowerPoint in this download:
    [url<][/url<]

    So the developer was just repeating what he had already been told by MS. There is another presentation out there by MS from 2008 where they gave similar numbers too, FWIW. It seems the performance differential between the two CPUs really is that large, because Xenon kind of sucks a bit. L2 cache misses are mentioned as being particularly brutal. And Xenon was a lot easier to work with than the PPE due to its cache structure, IIRC.

    I think the numbers given in the rest of your post are derived from faulty reasoning and are not worth analyzing, for reasons already mentioned earlier.

  47. I believe people on TR are quite a bit more intelligent than the average bunch. Even if they don't know what a word means, they will either look it up or learn it from context. I do it all the time, and it's relevant to understanding the world around you.

    People are capable of learning, especially those on TR.

    I don't believe 'dumbing down' my arguments is fruitful. I've already tried this over the years, and things either devolve into insults or you can't get your point across. Sometimes you lose information in translation past a certain point, and making things easier to understand by making them less complicated in a heated argument can definitely do that.

    However, I'm more than willing to explain things, and if someone asked me "Bensam, what's a strawman?" I would answer them. I have even defined a strawman in a few of my arguments to make sure the person I'm arguing with (Chuckula) and anyone reading them understands what I mean. I often link definitions or topic matter so people have supplemental material to help them understand what I'm talking about, if they don't.

  48. Don’t worry, you won’t be alone, the majority of console advocates have the very same personality traits and gaming worlds that are equally as empty, devoid of any worthwhile experiences.

  49. You went a little overboard.

    The PS4 will run next-gen games better than 95% of existing gaming PCs (according to the Steam HW survey), but you still have totally tricked-out PCs out there with more raw power.

    Now, it is possible that game developers will use the PS4 in a way that causes issues on the PC platform, sure. But most console ports will take care of that, so a 4GB Titan doesn't keep swapping data in/out over the PCIe bus, for example (lower-res world data, lower-res textures).

    I don't think game developers would sabotage their PC ports… right…?

  50. -36 … great, my soul just died; now neither heaven nor hell is interested in me. I'm just a soulless, tortured spirit wandering an empty abyss for eternity.

  51. Why would you invoke the PS3 as a yardstick to judge the possibilities of the PS4?

    Developers uniformly panned the PS3 for its extremely difficult learning and optimization curve, while they are universally praising the PS4 for its extremely easy learning and optimization curve.

    They could hardly be more different from each other.

    “I’m afraid AI, physics, texture bling and game mechanics for most games will never be tailored to the possibilities of the PS4.”

    The exact opposite will happen, especially with next gen engines that will automate and streamline so much of the tediousness that programmers will go crazy exploring the potentials and optimizing for the PS4 just because it’s so satisfying.

  52. [quote<]Also, if your reasoning held true, then Intel wouldn't have had their Sandy Bridge CPUs performing 30% faster per clock vs. AMD's Bulldozer, but they did.[/quote<]

    Well, BD has a longer pipeline than K10 and all those, and it is a narrower design, so I am not surprised it suffered. They went for clock speed. Many accused it of being "a P4," which I think is unfair, but it does have a little of that smell about it. Also, I fully expected AMD to screw up for a round or two on the new design.

    [quote<]But we do have developer comments about Xenon's performance, which is an IPC of 0.2, and we do know the IPC of Jaguar, which is 1.1, because AMD has said so publicly as well.[/quote<]

    I take issue with this. The developer in question no doubt observed that IPC of 0.2, and AMD no doubt has some valid reason to state 1.1, but come on, these numbers cannot be compared. We need the IPC for the same task, i.e. a benchmark. I don't understand how you can find any value in these numbers.

    [quote<]Go spend some time looking at the charts in the RWT article I linked; there isn't much technical knowledge necessary to read a bar chart, and it's fast and easy to do. BTW, the C2D was generally about 20% faster per clock vs. the K8.[/quote<]

    Bar charts are too technical for me, so I'll have to go with your 20% figure. That said, it's true that there have been impressive ongoing improvements in the K8-K10 and "Core" families over the course of many design refreshes. Perhaps the most impressive improvements have been from AMD, because they haven't altered the basic feature set of their core, while Intel has. Has AMD made 30% headway on IPC without altering their basic feature set? I think that sounds a little on the high side. So let's say that a v1.0 design from the likes of AMD or IBM can have 30% more performance per clock after many years of refinement.

    Let's say Bobcat has 90% the IPC of Jaguar, Atom has 75% the IPC of Bobcat, the PPE core has (ideally) 75% the IPC of Atom, and the PPE core actually achieves only 77% of that ideal IPC (1/1.3). That is 0.9 * 0.75 * 0.75 * (1/1.3) ≈ 38.9% the IPC, PPE vs. Jaguar. Are you going to argue that these numbers are optimistic?

    Assuming an IPC of about 0.67 for a Pentium 3, the original Xbox would be only slightly slower than a 3.2GHz processor with an IPC of 0.2 (500 million "IPS" vs. 640 million, a 28% speedup for one thread). Hell, there are cell phones clocked high enough that, if they could achieve an IPC of 0.4, they would beat the performance that is attributed to the PPE. (Those phones also have quad cores and lots of RAM.) Of course, all this talk of IPC is a bit vague, but it serves to illustrate just how bad 0.2 would be.
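    A quick sanity check of the chained estimate above; every ratio involved is the commenter's guess, not a measured figure:

```python
# Sanity-check the chained IPC estimate from the comment above.
# Every ratio here is the commenter's own guess, not a measured figure.
bobcat_vs_jaguar = 0.90   # Bobcat at 90% of Jaguar's IPC
atom_vs_bobcat = 0.75     # Atom at 75% of Bobcat's IPC
ppe_ideal_vs_atom = 0.75  # an "ideal" PPE at 75% of Atom's IPC
ppe_vs_ideal = 1 / 1.3    # actual PPE at ~77% of that ideal

ppe_vs_jaguar = (bobcat_vs_jaguar * atom_vs_bobcat
                 * ppe_ideal_vs_atom * ppe_vs_ideal)
print(f"PPE vs. Jaguar IPC: {ppe_vs_jaguar:.1%}")             # ~38.9%
print(f"Implied Jaguar advantage: {1 / ppe_vs_jaguar:.1f}x")  # ~2.6x
```

    Note that this lands near the "a bit over 2x" figure given elsewhere in the thread, well short of the 5.5x implied by comparing the 1.1 and 0.2 IPC numbers directly.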

  53. It will probably turbo to 2GHz, but I don't think an eight-core Jaguar plus GPU will have the TDP headroom to run all eight cores at 2GHz; they have to keep the TDP of this system below 50W for a reasonable cooling solution in a set-top form factor.
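The TDP concern above follows from how dynamic power scales: roughly linearly with frequency and quadratically with voltage (the classic CMOS approximation, P ≈ C·V²·f). A sketch with made-up illustrative numbers, not actual Jaguar specs:

```python
# Rough dynamic-power scaling: P ≈ C * V^2 * f (classic CMOS approximation).
# All numbers here are illustrative placeholders, not real Jaguar figures.
def scaled_power(base_power_w, f_ratio, v_ratio):
    """Scale a baseline dynamic power by frequency and voltage ratios."""
    return base_power_w * f_ratio * v_ratio ** 2

base = 25.0  # hypothetical CPU-cluster power at 1.6GHz (made up)

# Raising 1.6GHz -> 2.0GHz (1.25x) often requires a voltage bump, say 10%:
print(scaled_power(base, 2.0 / 1.6, 1.10))  # ~37.8W: a ~51% power increase
```

So a 25% clock bump that needs extra voltage can cost around half again as much power, which is why a fixed system TDP caps the all-core clock.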

  54. That would be sweet, though I'm pretty sure a Kaveri + Dual Graphics solution is designed to do just that at the same price point, with the added bonus of being Dual Graphics-compatible with ALL 8xxx boards.

  55. VERY successful.

    That said, workstations provide no GAMING performance advantage over consumer grade components even at two or three times the price.

  56. -35 … WTF? That just got me kicked out of heaven. Thanks a lot, you effing BASTARDS.


  57. When one is talking PS4, one is talking next-gen games. The point is whether a current high-end gaming PC will provide a better gaming experience than a PS4 on next-gen games.

    Optimizing game code for the PS4 will be at the top of the developers priorities. Optimizing game code for high end gaming PCs will be at the bottom of their priorities.

    I say the brute power to overtake the PS4 isn’t there yet.

    Guess we’ll just have to … wait and see … won’t we?

  58. Agreed, time will tell.

    I’m not saying the PC won’t surpass the PS4 in gaming performance, only that it will take a few years.

    By the way, a leaked developer-oriented AMD PDF surfaced a while back (and was quickly pulled) that showed some 2014 roadmap system-integration elements being pulled forward into Kaveri, which may explain the delay and the stopgap Richland.

    It's likely AMD redesigned Kaveri at the same time they designed and engineered the PS4 and Xbox 720 architectures, to optimize Kaveri for gaming, take fuller advantage of console coding, extend HSA to Sea Islands, and make AMD 8xxx GPUs fully additive to Kaveri. It would provide a potent one-two punch to the competition. Kaveri on its own would be punching far above its weight in next-gen gaming and would provide AMD GPUs a significant cost/performance advantage over Nvidia on Kaveri-based systems.

    It would also open the low end computer market to a truly high quality gaming experience with next gen AAA games which would drive game sales in general and AMD discrete AIBs in particular.

    Developers have all kinds of incentive to work closely with AMD to this end. The rapid ascendancy of Gaming Evolved over TWIMTBP reflects this reality.

    I suspect Kaveri and its successors will rapidly become the chip of choice for gamers, naturally followed by AMD GPUs, and two years from now we'll see AMD own the lion's share of the PC gaming market over Intel and Nvidia.

  59. This might be The Most Obvious Post, but how many of us would love a PC based on the PS4’s SoC?

    $399 for a mITX motherboard with that thing soldered on along with the 8GB of GDDR5, but capable of booting Windows and stripped of all of the ancillary tech that makes the PS4 a PS4? $550 for the barebones kit with a case?

    I’d buy the hell out of that for the mother of all HTPCs…

  60. The Apple/IBM G5 was a whole hell of a lot faster per clock than the simple, in-order racehorse Xenon CPUs in the Xbox 360 ever were.

    They also ran incredibly hot and were, IIRC, very costly to make at the time. Basically just like Steamroller vs. Jaguar cores are now. 🙂

  61. Successful troll is successful.

    Anyway, the PS4 isn't going to defeat the hardware in a current workstation platform. That's not the point. For its price point, it will eat any low-end gaming system you can build for lunch. The PS4 and the 360's successor will be able to handle 2 megapixels without too much difficulty, unlike their predecessors. Most of the world is going to operate at 2 megapixels until next-generation HDTV reaches critical mass (not likely for another decade).

  62. You may be right, but I remember the 360 developer kits were literally G5 processors (Power Macs, in fact, funnily enough) because the architectures were so similar. It caused some issues, because the fully featured G5s were actually faster than the cut-down Xenon (and, by extension, the PPE in Cell) due to some cache capacities or bandwidths or something.

  63. Reread your original statement and take your foot out of your mouth. No one is arguing against the PS4 being faster than the IGP solutions; I should hope it is. Those systems, however, are not sold as gaming systems, and there is a ton of PC hardware out there that easily thumps the PS4 in performance. The Titan alone, for example, has 2.5 times the computing capability of the entire PS4.

  64. Up until recently my posts have been short and to the point, with little to no snark, so whatever hang-up may or may not exist is minor at best. It may even be only a perception on your part.

    Also, if your reasoning held true, then Intel wouldn't have had their Sandy Bridge CPUs performing 30% faster per clock vs. AMD's Bulldozer, but they did.

    This is because CPU design is not an exact science; there are still plenty of unknowns out there. On top of that, other companies are forced to spend less on chip development due to a lack of financial resources. And on top of THAT, there also exists a competency disparity between companies: IOW, some of them will not only have bigger design teams, they'll have better ones too.

    Looking at various features and specs of different CPUs is no way to make a comparison; your logic there is faulty. You have to look at benchmarks. You don't have any benchmarks of the PPE vs. Atom, of course. No one does. But we do have developer comments about Xenon's performance, which is an IPC of 0.2, and we do know the IPC of Jaguar, which is 1.1, because AMD has said so publicly as well. So no guessing necessary, certainly not of the quality you're doing.

    You can see the presentation slides here if you like: [url<][/url<]

    Go spend some time looking at the charts in the RWT article I linked; there isn't much technical knowledge necessary to read a bar chart, and it's fast and easy to do. BTW, the C2D was generally about 20% faster per clock vs. the K8.
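Taken at face value, the two IPC figures quoted above imply a large per-clock gap, though, as other commenters note, IPC measured on different workloads isn't strictly comparable. The 1.6GHz Jaguar clock below is the commonly rumored figure at the time, not an official spec:

```python
# Implied gap if the two IPC figures were directly comparable.
# They come from different workloads, so treat this as a rough ceiling.
jaguar_ipc = 1.1  # AMD's publicly stated average, per the comment
xenon_ipc = 0.2   # the developer-reported Xenon figure

print(f"Implied IPC ratio: {jaguar_ipc / xenon_ipc:.1f}x")  # 5.5x

# Factoring in clock speed (assumed 1.6GHz Jaguar vs. 3.2GHz Xenon):
per_core = (jaguar_ipc * 1.6e9) / (xenon_ipc * 3.2e9)
print(f"Implied per-core throughput ratio: {per_core:.2f}x")  # 2.75x
```

Once clocks are accounted for, the per-core gap shrinks to roughly the 2-3x range that other estimates in this thread arrive at.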

  65. Well, you're clearly hung up on something. Put away your technical papers, because clearly that's not helping you with the obvious.

    So, as we’ve established, there is no magic in CPU design. There are physical limitations which guide designs to behave in certain ways given the same design goals. The biggest surprise might be 10% performance give or take, plus problems with fabrication, which is where a company like Intel can really get ahead. So IBM, AMD, Intel and anyone else are going to arrive at a similar place performance wise, given similar designs, except where fabrication can make a difference. Intel generally will end up at the top of the range… the others less so… but generally the same place.

    That is my point.

    Now, if we want to understand the performance of the PPE, we should go about it by [i<]comparing[/i<] it to processors that are better understood. That's reasonable; that's how people function every day.

    The PPE core is of course not well benchmarked. However, it is known to be high-clocking, narrow, in-order, SMT-enabled, and cheap. These properties are shared with the Atom, which is less high-clocking, cheaper still, and, importantly, in-order, SMT-enabled, and narrow. From this we have an approximation of the performance we can expect.

    We know that the PPE core clocks a lot higher than Atom, although it is hard to know what the limits of each design actually are. Obviously a design that is intended to clock higher will pile on the pipeline stages and pay for it in IPC when the code gets branchy. So we can imagine that the PPE core will not equal the performance of Atom per clock, although, to its advantage, it is also living in a world with much less emphasis on low power, giving unspecified advantages to the designers.

    So when someone comes out and says that Jaguar will have an IPC 5x as high as the PPE, I call BS. I don't know exactly how they will compare, but even 3x the IPC would surprise me. Probably it is a bit over 2x the IPC, but with possibly 2x the clock speed going to the PPE core, all is not lost. I should also say that I expect Jaguar to get over 2x the IPC, and superior overall performance, while offering significant power savings. Probably it is also easier to write "good enough" code for, and offers more consistent performance across different sorts of tasks.

    By the way, I am making those guesses with an eye towards Bulldozer and the Pentium 4 family. By making [i<]comparisons[/i<] between them and their competitors.

  66. No I’m hung up on the in-order vs OoOE thing, not the ISA.

    I haven't even mentioned word one about comparing ISAs up until this post, so I'm baffled as to how you can read what I wrote and think that I have "a hang-up" about PPC?

    The rest of your post is bizarre, and seems to literally be "I think this, therefore it must be so," presented as "proof" and/or "logical."

    While it's true there is no "magic pixie dust" associated with either ISA, both of them carry some inherent advantages and disadvantages which will affect chip design and performance massively. For instance, are you aware that many x86 instructions are of variable length and have highly variable decode times, sometimes taking up to hundreds or even thousands of clock cycles to process, depending on the architecture in question?

    Go look at the SOG for the K7 vs. the K8 or K10: all OoOE x86 chips from the same manufacturer and design company, yet all have different guidelines and perform differently on the same code!! The PPE and Atom were made and designed by two very different companies, so you can expect the difference to be even bigger!!

    If the SOGs are too technical for you, then I'd highly suggest you read David Kanter's articles at RWT, which are far more accessible for the layman. Here he compares the C2D to the K8: [url<][/url<]

    If you don't want to read through the walls of text, there are plenty of charts starting on page 3 showing benchmarks that compare the two chips in many different workloads, yet achieving very different levels of performance.

  67. What do you mean by “conquered the platform?”

    I think it’ll be at least a year, maybe two before mainstream PCs can catch up to the basic capabilities of the PS4 at the same retail price – which is an important factor in any comparison. If it comes in under the $500 price tag, I doubt there’s going to be any PC that can match the console with the same budget for the same kind of flexibility and near-constant performance it’s capable of.

  68. Actually, architecturally the Xbox 360 is the superior console. The Xenos GPU had unified shaders before the PC, was closer to something you could actually buy later on (ATi HD2000 series) and there are definitely more and better console ports from the PC to the Xbox because architecturally they’re so similar.

    The PS3 on the other hand had terrible single-threaded throughput with Cell (Xbox’s Xenon was a triple-core), the Nvidia RSX was based on the Geforce 7900GTX but lacked (still lacks) many features found in GPUs back in 2007 and it isn’t as flexible with AA and memory usage as the Xbox is. The 360 might have had lower theoretical throughput, but it was the better platform in the long term.

  69. So you believe a lot of people here don’t know what a strawman argument is? If so, why do you call it that then? How does it help your argument somehow?

    If you believe your "opponent" doesn't know what a strawman is, mentioning it sounds like you're mainly trying to score points with the 'elite' observers who know what you mean, instead of trying to win the argument with your opponent. It's almost equivalent to ridiculing them.

    You'd be in a much stronger position if you argued your point in simple terms that [i<]everyone[/i<] can understand.

  70. I can.


    The PS4's APU will be sold by AMD and will be available to the general public, albeit as a more cut-down version, probably with four cores and half the graphics power. It's very likely that, in the same way the Xbox 360 was the development mule for the modern APU, the PS4 will be AMD's dev mule to get into the next phase of their roadmap, which is…

    [url<],5-4-347512-22.jpg[/url<]

    Happening this year! We don't know all the technical details, but the PS4 likely fulfills all those goals for 2013 in one fell swoop. It has unified memory, pageable parameters from CPU pointers, and will probably do that whole memory-sharing thing as well. Next year, AMD is probably going to reveal the Excavator APU family, and it's probably going to stomp all over the PS4 like nobody's business.

    So time is the metric where the PC wins. In the span of five years, the PS4's internal hardware will only be used more efficiently, not become more powerful. AMD's future equivalent APU, on the other hand, will likely be 2-3 times more powerful and consume the same if not less electricity, thanks to architectural changes and extension optimisations.

  71. [quote<]But that performance was for that CPU and not Xenon or Cell, a total apples to oranges comparison.[/quote<]

    I think you're pretty hung up on the whole PPC vs. x86 thing. I maintain that the PPE core is rather similar to the Atom core, probably the best analog available, and certainly the most similar CPU core that has been widely benchmarked. The various G3 and G4 chips were not all that dissimilar to P2/P3 performance per clock, and the G5 was pretty similar to an Opteron, as far as I can tell.

    There is no magic pixie dust associated with either PPC or x86. The physics behind the decoder is the same, the human brains at work on the designs are more or less the same, and the results are going to be broadly similar given broadly similar architectural features.

  72. 2005 Ferrari FXX Enzo: 0-60 mph in 2.8s
    2011 Smart Fortwo Electric Drive Passion: 0-60 mph in 22.4s

    Just 8x faster at 0-60.
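For what it's worth, the ratio in the comment above checks out against the quoted 0-60 times:

```python
# Ratio of the two 0-60 mph times quoted in the comment above.
ferrari_s = 2.8  # 2005 Ferrari FXX: 0-60 in 2.8s
smart_s = 22.4   # 2011 Smart Fortwo Electric Drive: 0-60 in 22.4s

print(f"{smart_s / ferrari_s:.0f}x faster to 60 mph")  # 8x
```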

  73. Okay, I pulled it straight out of my ass … but my ass is stone cold PSYCHIC!! Trust me, you do NOT want to tangle with what comes out of my ass.

  74. Any benchmarks to prove your point? Or did you just make that up? Obviously many people disagree with your theory, since there's nothing to back up your claim.

  75. Damn, what kind of cretin votes down a sweet comment like that? This site needs a better class of blindly reactive twits.

  76. -31 … I am typing this from the ‘other side’. It was a particularly clumsy and drawn out Hara-kiri, excruciatingly painful. Just wanted to let you know to put a little cheer in your day.


  77. Cheaper and uses a fraction of the power, both vital considerations. If it gets the job done why not?

  78. I'm strictly a PC gamer; I don't even own a console. I just appreciate the bang-up job Sony did with the PS4, but I'm far more excited about Kaveri and Sea Islands. I'm even more excited to see the 2015 roadmap. What magic comes AFTER Kaveri?!?! … This inquiring mind is dying to know.

    I just argue the PS4 because … blindly dogmatic PC fanbois spewing illogic = easy fun. And the PS4 totally kicks ass!

  79. The PS4's HSA APU architecture, which was relatively inexpensive to develop, is highly scalable and amenable to backward compatibility. An AMD HSA-based PS5 with built-in backward compatibility, 4-6x the power of the PS4, and capable of 4K gaming would be relatively easy and inexpensive to develop. The need to extend a console lifecycle to recoup development costs is nearly eliminated. It's likely Sony will considerably shorten the interval, introduce a PS5 in five years or so to accommodate 4K gameplay and better graphics, and sell both consoles at the same time. With the game engines and toolsets then available, it should be relatively easy and inexpensive to port a subset of a PS5 game to the PS4. Everybody wins, everyone's happy, everyone profits.

    This was doubtless a factor in Microsoft and Sony going with an AMD APU solution.

  80. It certainly is not! But it has limited utility. Only a few workloads show any benefit from using it, and some will actually be slower if you try to use it! Counterintuitive, but that is reality for you.

    What has me more excited about Jaguar is that it is a pretty modern many-core CPU that runs x64 code and has a pretty good cache structure, similar to most modern CPUs. For ports, us PC users might finally get to see games actually make use of most of our hardware with a decent degree of efficiency!

  81. But that performance was for that CPU and not Xenon or Cell, a total apples to oranges comparison.

    Different architectures perform differently even if they use similar features, but in general, in-order CPUs lose big to OoOE CPUs.

    To use a bad car analogy, it's like saying a modern Toyota V6 and an older Ford V6 must perform the same and get about the same MPG since they're both V6s.

  82. I only use it when it's relevant, and in particular there is one person who engages in it quite a bit and then tries to mask it behind a semi-legitimate trollish argument. I really don't think I've used it outside of arguments with Chuckula on TR (and right now as a dig at Chuckula).

    It's really a shame most people don't know what a strawman is so they can call it, though. They're pretty common online when you start dealing with the less savvy people who will say and do anything to win an argument.

  83. Calm down, sir. We know you are excited about the PS4 (as I'm excited about Haswell-E 😀 ), but there's no need to act superior; it will only rattle us PC gamers.

  84. I'm late, but I'm still missing an argument. It seems to me most games are cross-platform, coming out on both a console and a PC. Are there any PS3 games today that have truly conquered the platform, as Sony bragged about at release?

    I'm afraid AI, physics, texture bling, and game mechanics for most games will never be tailored to the possibilities of the PS4. Instead they'll target the lowest common denominator, which will be 8 Jaguar cores and a slow PCIe bus.

  85. Agreed. The PS4 will allow developers lower-level access to graphics than the software layers in Windows will for the PC. Additionally, developers will only need to optimize for a fixed hardware target, which should make bug hunting and optimizing a little easier. So the "7850-class" GPU in the PS4 will be able to provide better efficiency and performance than a comparably spec'd desktop 7850 (how much better? That's speculation for now).

    Having said that, the low-rent CPU in the PS4 won’t hold a candle to even midrange and low-end x86 CPUs in desktops, especially for non-parallelizable tasks. And the PC is not a static platform; so the PS4 hardware capabilities will be eclipsed at some point. Does that matter? No, people get consoles for the games, and there are many PS3 exclusives today that are not available on PC (just as there are plenty of PC games that are not available on 360, PS3, etc). And there are certain games and genres that play better on one platform than the other – FPS, MMOs, TBS and RTS on PC, driving games, 3PS and platformers on console – in general (exceptions exist, of course).

    Bottom line: new consoles and PCs will continue to coexist and complement but not replace each other. Buy based on game library, living room setup, budget and personal interests.

  86. [quote<]A high end gaming computer is a racing modified semi truck with a 12 liter engine while a PS4 is a Formula 1 car with a 3 liter engine.[/quote<]

    I think consoles would be a go-kart in that analogy rather than an F1 car. F1 is the pinnacle of speed and technology, which doesn't sound like a console to me.

    P.S. F1 cars have been using 2.4L engines for the last decade or so, and next year they are switching to 1.6L turbocharged engines.

  87. Then we’ll have to check back in ten years at the end of the PS4’s life cycle to see if they have bothered to update the software.

  88. [quote<]OoOE is a hell of a thing.[/quote<]

    I can think of a certain in-order CPU benchmarked many times here at TR. Its IPC was a lot more than 20% that of Bobcat.

  89. Sophomoric ‘gotcha’.

    The hardware is capable but the software tool chain isn’t coded for it. If it were, then it could.

  90. They won’t mutate past the inefficiencies of Windows and game code needing to accommodate a vast array of hardware configurations.

    Nor will game developers be extracting the additional potential of new PC CPU and GPU architecture and increased power at nearly the rate they will be extracting additional potential from the fixed PS4 hardware for the first several years.

    They will be intently focused on extracting performance from the PS4 while they will be viewing new and more powerful PC hardware as a distraction, doing little optimization for that hardware and mostly going with whatever brute force performance increase is available with existing code … with the possible exception of AMD’s HSA APUs and GPUs. AMD might be able to get code optimized for PS4 and Xbox 720 AMD HSA APUs included in the PC code to semi-optimize for their PC based HSA APUs and GPUs. Being intimately involved at every level of the hardware, middleware and software chain will definitely position them to help this occur.

  91. They also almost never get used to their full extent in a PC desktop environment much less in a modern PC game.

    The context in which to consider things is just how much developers were able to eke out of Cell and Xenon, which are crap compared to an eight-core 1.7-2GHz Jaguar.

  92. Kaveri will come far closer than Haswell GT3, but it will still not come close to the PS4's performance. It doesn't need to, though. It just needs to provide a high-quality 1080p gaming experience in the price range the vast majority of PCs are sold in.

  93. spigzone, a lot of people said that about the Xbox 360 and PS3, and for their time they had more impressive hardware.

    The PC mutates, and it will start to mutate quickly as soon as these consoles start affecting games.

  94. Efficiency isn't one thing and performance another. That makes no sense. Neither does using the analogy of a smartphone SoC vs. something a hundred times its die size and power usage to 'prove' that point.

    A Ferrari has 10 times the horsepower of a Smart car and a vastly more capable suspension, but it's not remotely close to going ten times faster around a race track.

    You’re cobbling together illogical analogies that have little to do with proving your assertions.

    It’s muddy, illogical and irrational.

    Can you do better please?

  95. The vast majority of PCs use integrated graphics.

    I've never seen a review of ANY Intel CPU, sans discrete GPU, able to run a modern AAA game at 1080p 60Hz at even the lowest game settings.

    Have you?

  96. Funny, because Sony has already said that the only 4K capability the PS4 has is video playback (and that is through a dedicated decoding engine).

  97. What can I say, I’ve been watching Chael Sonnen trash talk Jon Jones. It’s contagious.

  98. I ‘work hard’ when going up against an opponent fully weaponized with clarity, logic, rationality and knowledge.

    Here I’m laying in a hammock idly flicking bits of rationality into a murky pond of goggle eyed goldfish.

  99. One that has a ton of PC systems that can breeze through 4K resolutions without issue, and a bunch more that are a few years old and can push out lowly 1080p resolutions just fine on discontinued hardware, enough to run into 60Hz display limits.

  100. Again you FAIL to prove anything. Efficiency is one thing; performance is another. It really isn't that hard a concept to grasp. A PC doesn't have to be as efficient when it can brute-force its way past the PS4. Many PCs are capable of doing that a few times over already.

    iOS is a custom OS geared towards Apple's specific capabilities, and their ARM chips have everything unified on die; therefore, by your own logic, the iPad/iPhone/etc. are far more efficient and "STOMP STOMP KICK KICK" the living crap out of the PS4.

    The Smart car is the more efficient car, but a Ferrari kicks the living hell out of it in terms of performance. Just like a PC can kick the living snot out of a PS4 in the performance metric.

  101. “Oh really because pretty much any PC can fly through games at 1080P just fine so how is it exactly bottlenecked?”

    What kind of mind comes up with a statement like that?

  102. A marshmallow reply isn’t an argument, it’s a soft squishy word blob that melts into a sticky gooey mess at the first appearance of the heat of logic and rationality.

  103. TOTAL FAILURE to name a single metric the PS4 isn’t more efficient in than a PC.

    Here, I'll name a few metrics where the PS4 stomps a PC's ass in gaming.

    1. Custom OS – hugely more efficient than running Windows.
    2. CPU and GPU on the same die – substantially faster and more efficient data movement between CPU and GPU.
    3. 8GB GDDR5 feeding an HSA unified address space – these capabilities don’t even exist on a PC and here we get into truly spectacular efficiency gains over a PC, racking up 5x, 10x and even 15x more work accomplished per clock cycle, depending on the task.
    4. Games coded to take full advantage of a single hardware configuration. ALL CPU cores, ALL GPU cores, ALL the system memory. Coded to use ALL the system resources and EXACTLY the system resources. Massive efficiency/utilization gains here.

    Can you name a SINGLE metric in which a PC will be more efficient than the PS4?

  104. Yeah, no problem.

    It’s like asking if I think putting my 7870 on the same die with my 6-core Thuban, both modified for HSA compatibility, doubling the amount and speed of my memory, and running a version of Skyrim heavily modified and optimized to fully utilize my particular hardware setup while bypassing all the Windows overhead would run Skyrim at 2560×1440 at 120fps on ultra.

    Duh. Yeah. Ridiculous to think otherwise.

  105. [quote<]"THE PS4 WILL HAVE !0X THE PERFORMANCE OF THE PS3."[/quote<] Really, a whole !0 times greater? You should've used Caps Lock instead of Shift.

  106. Ya, I guess if we used his logic, even the PS4 couldn’t come close to matching an A4/A5/A6. STOMP STOMP KICK

  107. *flails his arms around randomly and goes full retard with spigzone*

    (Curiously, is this trying to imitate one of Chuckula’s trolly strawman posts?)

  108. One can only hope it turns out with PCs in the lead and consoles alike enough to PCs that things can simply be ported back to consoles… They should’ve been capable of doing that with the PS3 and X360, though… Most games utilized the UE3 engine, and that can simply be compiled for whatever system it’s being distributed to.

    I agree, PCs have always been a great staging ground, but developers have been more attracted to the console hardware baseline so they don’t need to doll up their graphics and can skimp on PC gamers, who usually have higher quality standards.

  109. I’m pretty sure modern games still use a processor to a certain degree, especially those that involve physics (which will be hopefully catching on again).

    I’m glad you highlighted buzzwords, but I don’t think they do more than buzz a bit. Watching W7 CPU usage, a lot of games evenly distribute their load across all eight of my cores. Perhaps W7 is doing a bit of the work, but it still turns out pretty well.

    Jaguar cores are cheaper… You wouldn’t happen to know of consoles cutting corners to make things cheaper would you?

  110. You know that even if we threw down concrete benchmarks of game X on both the PS4 and PC (which, btw, no one has, unless they have some serious NDA-breaking tendencies), he would just spew more of the “but but, that’s just a launch game, it’s not the true ‘realizable’ performance” or “but but, you forgot to mention the special custom-built chip that lets you download and update while still playing, you can’t explain that”, and so on and so forth.

    The only way to beat this brand of stupid is to ignore it.

  111. Even when all cores are used, an 8-core Jaguar @ 1.6GHz is probably only half as fast as an i7-3770 or FX-8350.

  112. Wow, for someone who considers Mensa beneath them, you sure like to move the target. The PS4 may be efficient, but it hardly stomps the performance of a gaming PC. Far from it.

  113. LMFAO, read your original statement. Since when are we talking about just the IGPs in Llano/i3s? You said [b<]any[/b<] off-the-shelf-components PC. Sounds like you're backing away from your position.

  114. It is true: [url<][/url<]

  115. I’ve heard that Jaguar support 256-bit AVX due to “double-pumping” the FPU. Anyone know if this is true?

  116. When GCN2 comes out it will have all the features of the PS4’s GPU and more, at probably 4x the performance. And it’s not true that PCs need the best CPU and RAM available. Kaveri and Haswell GT3 will come close to PS4 performance, with the next gen taking over.

  117. Jaguar supports AVX, SSE4, and FMA4, so it’s going to be a formidable CPU too. It’s not Haswell, but it’s at least 50% faster than Atom per core. I think it wasn’t mentioned much because it won’t be doing the heavy lifting; the GPU will provide that. This is a game console, after all.

  118. The K7 Athlons had an IPC of 1.2, IIRC. “Modern” x86 chips have been getting an IPC of 1 or more for quite a while.

    OoOE is a hell of a thing.

  119. That’s an astonishing statement, as in astonishingly absurd.

    So … that IGP Llano or Core i3 Ivy Bridge can run Far Cry 3 with maxed out settings at 1080p?

    ‘Cause I’m thinking every tech review site on the planet would laugh themselves into an hysterical sideache at such a claim.

  120. Mensa is beneath me.

    The PS4 is more gaming efficient than a Windows PC across every conceivable metric. If you take issue with that name one gaming related metric a PC is more efficient at than the PS4.

  121. Why wouldn’t I accept it? The Chief Technology Officer of EA said exactly that, and there are very few people in the world better positioned and qualified to know the relative capabilities of current gen and next gen Sony and Microsoft consoles.

    Prior to the actual specs leaking NOBODY was predicting a next gen 7850+ class GPU and 8GB of GDDR5 RAM.

    It wasn’t ‘revolutionary’, but it was sure as he// more than anyone was expecting, and using the term ‘exquisite’ to describe its component optimization is not out of order.

    Your argument is soft and squishy, like a marshmallow.

  122. Nope, you won’t, not by a long shot.

    I guarantee you will not be able to run AAA PS4 titles at full quality on your PC.

    1) Your 560 Ti doesn’t stand a chance in compute vs. a 7850-class GPU
    (especially once developers have their code tweaked for GCN)
    2) Your PCIe bus will become the #1 limiting factor, so be ready to use the ‘low-res texture pack’
    3) Your CPU will be burdened by the Windows GPU driver layer, and by the fact that it will run the SSE2 code path instead of the tweaked Jaguar AVX code.

    So OK, you have 6×4GHz vs 8×1.6GHz of CPU compute.
    But that advantage becomes less clear when you have to run under Windows (heavy overhead from the architecture’s limitations) and run SSE2-class code.

    Now if you had a 3GB 7950-class card, you might have a chance to run top PS4 titles at full quality.
    But not with a 560 Ti…

  123. Most games right now are not very demanding because they are coming off of 6 year old consoles or are targeting the majority who have something like Intel HD 2500. It’s not surprising that a recent high-end GPU is fast, considering…

  124. Can still get decent scaling until around 16 cores. Once you hit 32 cores, you need some really good algorithms to keep scaling.
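The scaling limit described in this comment follows directly from Amdahl’s law. A minimal sketch in Python, assuming a hypothetical 5% serial fraction (the exact fraction varies by workload and is not from the comment):

```python
# Amdahl's law: speedup(n) = 1 / (serial + (1 - serial) / n)
# The 5% serial fraction used below is a hypothetical figure for illustration.

def amdahl_speedup(cores, serial_fraction):
    """Ideal speedup on `cores` cores for a workload whose
    `serial_fraction` cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    # Diminishing returns: going from 16 to 32 cores adds far less
    # than going from 8 to 16 did.
    for n in (8, 16, 32, 64):
        print(f"{n:2d} cores -> {amdahl_speedup(n, 0.05):.1f}x")
```

Even a small serial fraction caps the achievable speedup, which is why scaling past 16 cores demands algorithms with very little serial work.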

  125. It’s an office PC / general purpose workstation with a beastly 3D accelerator card. It is a very flexible platform of course but it isn’t optimal and there are definitely shortcomings. Windows is part of the problem too because it’s designed to be so flexible and stable but makes compromises to do so (jack of all trades).

    Anyway for me the strengths of the PC are genres like RTS and MMORPG, along with game modding. Years ago there was more to differentiate, like sci-fi and military simulations. The latter are still around to an extent but nothing like in the ’90s.

  126. Oh really because pretty much any PC can fly through games at 1080P just fine so how is it exactly bottlenecked?

  127. If you accept that the PS4 has 10x the performance of the PS3, that (just) puts the PS4 in line with what you would expect from Moore’s Law. Expecting it to be something revolutionary and “exquisite” is just embarrassing.

  128. Depends on how much of the same hardware is in the next Xbox. If much of it is comparable with each console needing a little tweaking via dev kits to make it use the equivalent feature on each platform, then things’ll be great.

    If the features are vastly different, one platform will emerge as the primary one.

    Of course, developers may take this as an opportunity to make PCs the lead platform and then just port to the consoles, which in my book would be the absolute best possible result, and a likely one, too. It’s not hard to imagine future console games wanting to be ported to a variety of platforms like tablets and smartphones, and putting a game on PC first would make a great “generic” lead platform from which to make ports for any other system you like.

  129. The PS4 targets 1080P, not 4K. That’s not a PS4 ‘bottleneck’, it’s a design feature.

    A console is a defined hardware spec targeting a defined resolution and refresh rate.

    It’s the PC that is maximum resolution and refresh rate bottlenecked.

  130. Can they go higher than that? 4K gaming is a reality on the PC, and 4K TVs are already on sale. Are you saying that the PS4 can’t stomp on a PC? Are you saying that somehow a PC that can handle 4K would flutter and die at 1080p? What about 1080p@120Hz? Plenty of 120Hz TVs out there, which are desirable for 3D purposes.

  131. How are maximum resolution and refresh rate bottlenecks for a console targeting a 1080p 60hz TV?

  132. Do you think that Skyrim would run 2560×1440 @ ~120 fps on ultra settings on a PS4?

  133. There are plenty of bottlenecks in the PS4 hardware, such as maximum resolution, refresh rate, I/O, etc. Perfect optimization would mean every single developer never deviating from the optimal code path. Those of us who have done development know, however, that even in a system dedicated to gaming, compromises are not only common but plentiful, especially when much of the code is to be shared with completely different platforms.

  134. Funny, I never run into you at the Mensa meetings. However, it makes perfect sense. When you talk performance, it depends what you are comparing it to. It seems you are focused on power efficiency, which is a power-to-performance comparison. If you are comparing the performance of two different items, then you have to specify what you are comparing: FPS? Power efficiency? Etc. Product A may have better power efficiency, but Product B may have better performance in frames per second.

  135. Um, no. Remember, this will all be cooled poorly. The CPU isn’t powerful; it’s low-heat, because they can’t go any higher, folks. They need the thermal budget for the GPU, and even that budget isn’t so great, so don’t expect too much.

  136. The concept of fully optimized is for all components to ‘bottleneck’ at the same time, which per the article is exactly what the PS4 team strived for. Put another way, to leave no bottlenecks that then causes an underutilization of the rest of the system’s elements.

    Hence the statement “there is a point where optimization leads to issues in other areas” as a negative is illogical. When one optimizes a system one DESIRES it ‘leads to issues in ALL other areas’ simultaneously.

  137. PUTTING some of your WORDS in ALL CAPS doesn’t HELP you emphasize YOUR POINTS, it JUST MAKES you look like a complete and UTTER RETARD who doesn’t UNDERSTAND that things like “HSA” and “next gen” are just buzzwords.

  138. Comparing an ARM chip to the TITAN and then comparing two different GPU boards as an analogy to an ultra-optimized high power next gen console vs. a PC running windows makes sense how?

    For the love of Odin, is there anyone on this forum with an IQ higher than the number of boogers rubbed under their desk?

  139. No, actually it’s MORE raw horsepower than the PS4 has. You lose, thanks for playing.

  140. Totally nonsensical.

    I have a Phenom x6 with a 7870 and 4GB of 1600 ram, which is far less raw horsepower than the PS4 has, and I have no problem running Skyrim at 1080p60 with the graphics maxed.

    If anyone’s trolling here, it’s you.

    Or just manifesting profound ignorance.

  141. No, it actually makes perfect sense and it squashes your whole argument.

    But I guess maybe the MAGNITUDE of that statement is too big for you to grasp? Hohoho.

  142. The CPU doesn’t need to burn down the barn, that’s what the HSA enabled GCN 18CU GPU fed by 8GB of GDDR5 is for. That said an EIGHT core NEXT GEN EFFICIENCY OPTIMIZED CPU fed by programming to UTILIZE all eight cores is formidable in itself.

    If the CPU bottlenecked the performance it would have been replaced by Kaveri cores. The Jaguar cores are obviously sufficient to the task so any reference to or argument about their being too weak is nonsensical.

  143. I typed up a huge reply to your post here and then the browser on my phone crashed. AWESOME. The gist of it:

    Adding a multi-hundred-dollar expansion board to your PC designed expressly for playing games makes it pretty clearly a gaming PC. “supercharged office PC” is an incorrect perception on your part.

    Consoles aren’t really “PCs” at all, since you can’t pick and choose the software you run.

  144. [quote<]EFFICIENCY + OPTIMIZATION = REALIZABLE PERFORMANCE.[/quote<] Not necessarily, there is a point where optimization leads to issues in other areas (any gentooer could verify this).

  145. Some of these things sound interesting, but it really depends on what the next Xbox has in store. The PS3 had some pretty neat hardware at the time too, but the Xbox 360 completely reined that in simply by being subpar. A few of these things sound like you need to write specifically for the hardware to take advantage of them, and I don’t imagine many developers doing that.

    More so than the X360, this depends a lot on how engines like UE4 and CryEngine 3 take advantage of it, as most developers simply use premade engines and don’t design exclusively for hardware. I’m honestly not a huge fan of overly custom hardware, because you run into scenarios like this where either developers have to spend an inordinate amount of time customizing exclusively for one system or they simply don’t (usually the latter). That’s also one of the reasons why I’m a fan of PCs more so than consoles: they simply do what you tell them to, and they’re wide open for customization. It gives people hardware that does what it’s supposed to without building exclusively for it.

    It’s rather odd, developers over the last few years have exclaimed how awesome consoles are because there is a hardware baseline there that never changes (so they don’t actually need to improve art assets or try to make their game look prettier), yet most of them never go out of their way to actually try to fully utilize said hardware and simply design for them like PCs with crappy hardware. They just hop on the UE3 bandwagon or whatever. Perhaps this is why I refer to console developers as being lazy and consoles should DIAF.

    I’m sure they’ll eventually realize that since they design for the UE3 or CryEngine 3 engine anyway, there really is no reason to design for consoles in the first place, as that can all be backported… and then the reign of consoles ends and PCs are put first and foremost once again.

  146. Wouldn’t it be neat if PC gaming was less supercharged office PC and more built-from-the-ground-up gaming monster? I suppose consoles are just that, but in a mass market-priced format.

  147. [quote<]Efficiency and performance are intertwined. The better the efficiency the better the performance.[/quote<] That really depends on what you are comparing. An ARM chip's graphics, for example, are very efficient, but compared to a monster like a Titan, the Titan skins the ARM chip alive in terms of overall performance. Saying they are intertwined depends on the metric you are comparing. Card xyz consumes 10 watts and gets 100 fps; card zyx consumes 100 watts but gets 1000 fps. Both are equally efficient, but zyx stomps xyz into the ground in terms of performance when the two are compared against each other.

  148. Wait, are you actually serious? I thought you were just trolling, but I guess you’re actually that ignorant.

    For efficiency to matter, the raw performance has to be there. The raw performance of my hardware shits all over the PS4’s raw performance. Even if it were many times more efficient than a Windows machine — which it isn’t, overall — it would still “only” perform similarly.

    Sony has been trumpeting their performance numbers based on raw GFLOPS for decades. I remember back when the PS3 came out they were going on about the Cell’s raw math performance — kool-aid some Sony fans are still drinking. That stupid “8 to 10x more powerful” line is just barely-justified marketspeak, and even if it were true, even at 10x more powerful than the PS3, you’re still talking about 1080p60 levels of performance, which just isn’t that impressive in this day and age.

    I will be surprised if the first wave of PS4 games even run in native 1080p60, honestly — though I’d love to be wrong.

  149. Efficiency and performance are intertwined. The better the efficiency the better the performance. If you are only using, on average, 20% of your GPU’s potential across a limited set of operation types, poor efficiency and poor realized performance. If you are using 90% of your GPUs potential across a far larger set of operation types, excellent efficiency and excellent realized performance. Same hardware, 3x or more the performance. That’s essentially what’s happening in a PS4 vs. PC scenario. Except MORE so.

    Get twice the amount of usable computing done per clock cycle (efficiency) and you get a dramatic performance boost. It’s not complicated.

    A machine like the PS4 has a wide array of moderate to massive hardware and software efficiency gains over any Windows PC.

    8 to 10x more powerful than last generation. Said by someone in a position to KNOW.

    I don’t know, maybe the significance, the MAGNITUDE of that statement just isn’t graspable by most people.

  150. 3.2GHz×4 gets you better single-threaded performance, marginally better multithreaded performance, and less power efficiency.

  151. But what if it were as fast as a Core 2 Duo (T7300) IPC-wise? [url<][/url<] 4 cores at 2.0GHz also seem to be able to keep up with a Q6600.

  152. I’ll believe it when next-gen consoles run DX11-class games in 4K resolution at >60FPS. Except they won’t, and I already do. ‘`,、(‘∀`) ‘`,、

  153. You purposely use a misleading word “efficiently” but write both posts entirely as if you were saying “performance”. In the end, a high end pc will look+run better than PS4, albeit at a much higher cost.

    [quote<]In realizable performance the PS4 will stomp your machine into the dust.[/quote<] And this sounds like some marketing guy's infamous last words on twitter.

  154. EFFICIENCY + OPTIMIZATION = REALIZABLE PERFORMANCE. The PS4 will be an order of magnitude more EFFICIENT and OPTIMISED than any PC. Even the most powerful gaming PC running Windows is inherently riddled with bottlenecks, lags and programming inefficiencies that won’t apply to the PS4.

    A high end gaming computer is a racing modified semi truck with a 12 liter engine while a PS4 is a Formula 1 car with a 3 liter engine. It’s all about exquisite optimization and efficiency.

    Consoles were already using their hardware far more efficiently than any PC could. The PS4 is going to use ITS hardware far more efficiently than any previous console. And it has some SERIOUS hardware to begin with. Hence the EA CTO saying the next gen consoles are 8x to 10x more powerful than current gen consoles. The PS4 is almost certainly that 10x figure.


    No gaming PC today has a REALIZED PERFORMANCE remotely close to that.

    In realizable performance the PS4 will stomp your machine into the dust.

  155. I find it highly dubious the PS4 will outperform my machine. Just saying.


  156. They aren’t calling the CPU super charged. In fact, they didn’t say anything about the CPU.

  157. They aren’t using custom compilers anymore. Microsoft is using VC++; Sony used GCC on the PS3 and Vita and is using Clang on the PS4.

  158. [quote<]Remember the Xenon and Cell are from the PowerPC G5 era and even gimped versions of that, things have come a long way.[/quote<] More accurately you could say they are from the Power6 era, when IBM was apparently fascinated by high clocking, in-order designs. Power6 and the PS3 even launched at similar times (PS3 leading by maybe half a year). I don't think there is any particular relation between G5 (a Power4 derivative) and the PPE core.

  159. In other words a flat out gaming monster massively more efficient than any previous console, much less any current off the shelf component PC running Windows that will chew up and spit out even the highest end gaming computers of today.

    PCs are going to be eating the PS4’s dust for quite a while.

  160. I seriously doubt Jaguar gets 5x the IPC on the same code, except perhaps in some very specialized cases. I could see 2x the IPC being fairly normal.

  161. It’s not actually overly broad when comparing the same architecture… there’s a cost to going parallel and thus forking a single task onto two cores running at half the speed will at best be the same performance, but in most cases will be slightly slower (even for very parallel-friendly tasks). Put in reverse, any set of 2 tasks that you are going to run on your 1.6Ghz dual-core I can just run back-to-back on my single 3.2Ghz core and be done in the same amount of time. And I might be able to improve the algorithm to avoid the overhead of parallelism as well 🙂

    Now of course in this case the architectures are not the same, but it’s safe to say that desktop chips will maintain an IPC advantage, and definitely have a FLOPs advantage, so the general comparison holds.

    i.e. there’s really no way for the PS4 CPU to end up faster on some piece of code than the same code run on a 3-4Ghz quad-core desktop chip, but so what? It’s a great trade-off for the form factor, power and die size constraints I think.

  162. [quote<]No matter how you slice it, an 8-core 1.6Ghz CPU is strictly worse than a 4-core 3.2Ghz CPU. [/quote<] I think this statement is overly broad, but actually I can't think of any 3.2GHz quad-core that would lose to any 1.6GHz octo-core CPU. I am very disappointed. However, if we ignore the total number of cores on a chip, it's not hard to find examples where a single 3.2GHz core would lose to two 1.6GHz cores...

  163. While I agree on core count, the core performance is my gripe; calling it supercharged is a hell of a stretch. Having a 5-gallon gas tank in a ’Vette does not make it “awesome!”, which is similar here. Great bus, great GPU, great integration, and lower latencies. But then you have 8 Honda Civics on the track with a bunch of ‘proper’ 911s.

  164. The phrase “SUPERcharged” means to me that **all aspects** are omgz-level performance. This is not that at all. It is a well-balanced, ‘to task’ build that does two things well: play limited-resolution games and stream media and content. To that effect, this is a perfect balance.

    But if you put it up against my 4GHz Thuban with a measly 560 Ti at the same rez, I will handily beat it. (Note I did not say ‘wipe the floor’.)

  165. I’m not knocking the GPU. In fact, in terms of GPUs I think it’s AMD’s game to lose on the integrated front. Intel is just barely catching up, and Nvidia is nowhere to be seen there.

    But the phrase “SUPERcharged” means to me that all aspects are omgz-level performance. This is not that at all. It is a well-balanced, ‘to task’ build that does two things well: play limited-resolution games and stream media and content. To that effect, this is a perfect balance.

    But if you put it up against my 4GHz Thuban with a measly 560 Ti at the same rez, I will handily beat it. (Note I did not say ‘wipe the floor’.)

  166. -16, haha… seriously, you folks expect Jag cores to beat K10.5 Stars cores? Wow… /facepalm/

    When we see the single-thread benchmarks, we’ll see how the cards fall…

  167. 😉 which was my point… but at -15 i guess the fanbois are butthurt … again…

  168. Uhh Ivy Bridge can do an 8-wide SP add + an 8-wide SP mul per clock, so it has twice the throughput you are giving it (i.e. ~220Gflops, not 110). And Haswell has another 2x on top of that. Why are you just counting 128-bit instructions?

    No matter how you slice it, an 8-core 1.6Ghz CPU is strictly worse than a 4-core 3.2Ghz CPU. And IPC on Jaguar will not hit the levels of the “big cores” either.

    That’s not to say the PS4 CPU is bad. On the contrary, I think it’s a very good choice. But don’t pretend it’s going to be faster than desktop chips.

  169. Actually they probably could use the Intel compiler, which is really pretty good BTW, with the proper flags on compile.

    MS/Sony will certainly do custom compilers mind you, but not because Intel’s stuff isn’t up to snuff. Also hand optimization isn’t really done all that much anymore, even on consoles.

    A lot of the big efficiency improvements will actually come from the artists gaining an understanding of what these new consoles can and can’t do effectively and tailoring their work to fit the hardware appropriately.

  170. Well, this delivers 102 GFLOPS using 128-bit AVX.
    The i7-3770K peaks at 112 GFLOPS running the same 128-bit AVX code.

    Atom is nowhere close…

    Also, PS4 game developers won’t be using the Intel compiler,
    and unlike much other software, functions won’t be Intel-tweaked anymore.
    The compiler and hand optimization will target the Jaguar architecture directly.

    You are underestimating what this 8-core CPU can do.
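The peak figures traded in these two comments can be reproduced with simple cores × clock × FLOPs-per-cycle arithmetic. A back-of-the-envelope sketch using the posters’ own assumptions (Jaguar: a 4-wide SP add plus a 4-wide SP mul per cycle on 128-bit units; Ivy Bridge: the same at 8-wide with 256-bit AVX), not measured performance:

```python
def peak_sp_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak single-precision GFLOPS: cores x GHz x FLOPs/cycle."""
    return cores * clock_ghz * flops_per_cycle

# PS4 CPU: 8 Jaguar cores at 1.6GHz, 8 SP FLOPs/cycle (128-bit add + mul)
ps4_cpu = peak_sp_gflops(8, 1.6, 8)     # ~102 GFLOPS, matching the figure above
# i7-3770K at 3.5GHz held to the same 128-bit code paths
ivb_128 = peak_sp_gflops(4, 3.5, 8)     # ~112 GFLOPS
# Same chip using full 256-bit AVX (8-wide add + 8-wide mul per clock)
ivb_256 = peak_sp_gflops(4, 3.5, 16)    # ~224 GFLOPS, i.e. the "~220" cited earlier
```

The two claims are consistent: restricted to 128-bit code the chips are near parity, while 256-bit AVX doubles the desktop part’s theoretical peak.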

  171. ArtX, the company which designed Gamecube’s GPU, consisted of some of the SGI people who did N64. They were later acquired by ATI.

  172. Yup, comparing them to multi-hundred-dollar PC CPUs is missing the point; even these 1.6GHz enhanced Jaguar cores will run circles around what developers are coming from. Remember, the Xenon and Cell are from the PowerPC G5 era, and even gimped versions of that; things have come a long way.

  173. To be fair, what they are saying about the connections between CPU and GPU in PCs is true, even with on-die graphics in most cases (I forget who has a unified address space by now). The limit for GPGPU work thus far has been data swapping between the GPU’s local memory and the CPU’s, which often took longer than the calculation itself. The things they did to improve that should allow for much more impressive GPGPU work.
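The copy overhead described here is easy to estimate from bandwidth alone. A rough sketch, with a hypothetical 256MB buffer and the theoretical 8GB/s of a PCIe 2.0 x16 link (both round numbers chosen for illustration; real-world throughput is lower):

```python
def transfer_ms(num_bytes, bandwidth_gb_s):
    """Milliseconds to move a buffer over a link of the given bandwidth."""
    return num_bytes / (bandwidth_gb_s * 1e9) * 1e3

buf_bytes = 256 * 1024 * 1024                      # hypothetical 256MB working set
round_trip_ms = 2 * transfer_ms(buf_bytes, 8.0)    # to the GPU and back: ~67ms

# If the GPU kernel itself finishes in a few milliseconds, the copies dominate
# the total time, which is the overhead a unified address space eliminates.
```

This is why GPGPU on PCs only paid off for work whose compute time dwarfed the transfer time, and why a shared address space broadens the set of viable workloads.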

  174. I don’t know about most, but that usage model does almost nothing to put processing power into perspective for me. ;-D

  175. Just to provide a little perspective, we used to run an entire city’s fully-windowed, map-based, real-time 9-1-1 call-handling and dispatch system on a 2.0 GHz dual-core Athlon with 4 GBytes of RAM. With double the horsepower, double the memory and a powerful GCN unit hanging on the side in an optimally-integrated architecture, developers should be able to do some amazing things.

  176. You are splitting hairs.

    Bobcat is only a little slower than Athlon 64. Athlon II / Llano are only a little faster than Athlon 64.

    But Jaguar is a new architecture, and this particular chip will use a significantly more power efficient manufacturing process.

    A bigger issue is the L2 cache clock, but that is entirely dependent on the chip. And in this case, the chip was designed exclusively for the PS4, so I don’t see why that would be an issue. We already know it has an astronomical amount of memory bandwidth.

  177. Kabini is hardly comparable to what’s in the PS4. It has the same type of CPU cores as the PS4 but only half the amount and probably clocked lower. The GPU is on a completely different scale too, not to mention the memory.

  178. Slow? A single 1.6 GHz Jaguar core will be faster than that single PPE core in the PS3’s Cell processor in general computational tasks. Jaguar has an IPC of around 1, whereas that PPE core has around 0.2 IPC.

  179. [url<][/url<] Wikipedia seems to agree. So if I was the hat-wearing kind of man, I would tip it.

  180. They only got there because of their ATI purchase. AMD continues to be known mainly for their CPU business, and this is the first time game consoles use AMD for their brains.

  181. IIRC Jaguar is an improved Bobcat which was a supposedly clean sheet design targeted for low power/cost devices.

    It’s meant to compete with Intel’s Atom but generally performed significantly better than Atom.

  182. While on paper it looks mediocre even vs. the older PS3/X360 CPUs’ FLOPS, in real-world console CPU workloads a 1.7-2GHz 8-core Jaguar will stomp the hell out of Cell/Xenon. IIRC, AMD listed Jaguar as having an IPC of about 1.1, and that is for general PC use.

    For reference, some developers mentioned publicly on B3D that Xenon had a real-world performance of around 0.2 IPC. I don’t know what the average IPC for Cell was, but it was probably nearly as dismal, if not worse than Xenon’s, for general console use. After all, it was only able to show some of its potential performance as long as you were doing something embarrassingly parallel, like making up for the shortcomings of the PS3’s GPU.
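The claim can be sanity-checked with the IPC figures cited in this comment (Jaguar ~1.1, Xenon ~0.2); treat this as crude arithmetic on quoted numbers, not a benchmark:

```python
def crude_throughput(cores, clock_ghz, ipc):
    """Very rough aggregate throughput: instructions retired per nanosecond."""
    return cores * clock_ghz * ipc

jaguar = crude_throughput(8, 1.6, 1.1)   # PS4 CPU: 8 cores at 1.6GHz
xenon = crude_throughput(3, 3.2, 0.2)    # Xbox 360 CPU: 3 cores at 3.2GHz

# jaguar / xenon comes out to roughly 7x on this metric, consistent with
# the comment's argument despite Jaguar's much lower clock speed.
```

The point of the exercise: clock speed alone says little; realized IPC dominates the comparison between these two designs.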

  183. Jaguar is not the same architecture as Bulldozer/Piledriver. I’m not sure off the top of my head, but it’s more similar to Phenom II if I’m correct. Definitely don’t quote me on that, though.

  184. The CPU’s single-threaded performance may be slow, but there are 8 of them. Hopefully we’ll see better utilization of multiple cores moving forward.

  185. “Supercharged PC architecture” my ass. It is an Atom-class CPU; true, it is better than Atom, but the cores are targeted at competing with Atom.

    If this were (8) K10.5 Stars cores running at a lower speed, I’d be more impressed.

    Now, that being said, I am glad AMD got the gig. It will provide a lot of SoC experience and help them in the long run to build smaller, faster, more IPC-efficient CPUs.

    But supercharged…. nope.

    The review of Kabini by TR will shed a lot of light… commmmeeeonnnnn Damage 🙂

  186. Interesting stuff. It’s still a little worrying to see that low clock speed with AMD’s recent lackluster IPC. Still, I’m looking forward to better-coded ports thanks to x86.

  187. I’m pleased to see AMD power the PS4. Who would’ve thought such a high-profile console would use both an AMD CPU and GPU? This is perhaps the first time AMD has ventured beyond the PC. Hopefully this will earn them big bucks and help them stick around.