Xbox One SoC rivals AMD’s Tahiti for size, complexity

Microsoft revealed some details of the Xbox One’s custom SoC earlier this week at the Hot Chips conference, and I’ve been a little negligent in writing it up—in part because I didn’t attend Hot Chips this year. Anyhow, Charlie at SemiAccurate has posted some pictures of key slides from the presentation; they offer better information than some of the other reports I’ve seen simply because they’re unfiltered.

We’ve known the outlines of the Xbox One’s AMD-designed SoC for a while now, including its eight Jaguar CPU cores and GCN-class integrated graphics. One of the primary revelations to come out of Hot Chips is the SoC’s sheer size: the thing is 363 mm² and comprises roughly 5 billion transistors; it’s produced at TSMC on a 28-nm HPM process.

To give you a sense of scope, the Tahiti GPU that powers the Radeon HD 7970 is also produced on a 28-nm process at TSMC, and Tahiti packs 4.3 billion transistors into 365 mm²—so the two chips are quite comparable in size.

I’ve seen speculation that this SoC might be among the largest chips ever produced, but that’s a bit off the mark. Nvidia’s GK110 GPU, also made on TSMC’s 28-nm process, has 7.1 billion transistors and is 551 mm². The Xbox One’s engine is a pretty hefty chunk of silicon, but it’s not going to set any records.

What makes the Xbox One SoC unusual is not just its size, but also how AMD and Microsoft decided to spend its transistor budget. The presentation says the SoC contains 47MB of internal storage. Presumably, that number encompasses all storage across the chip’s various cache levels and other SRAMs, including the 32MB of embedded SRAM that’s right there on the die. Memory cells tend to be denser than logic, which is probably why this SoC crams ~700 million more transistors into the same space as Tahiti.
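
For the curious, the density claim is easy to sanity-check from the two figures above. Here is a minimal sketch in Python, using only the publicly quoted transistor counts and die areas (nothing in it comes from the Hot Chips slides themselves):

    # Back-of-the-envelope transistor density check from the quoted figures.
    chips = {
        "Xbox One SoC": (5.0e9, 363),   # transistors, die area in mm^2
        "Tahiti":       (4.3e9, 365),
    }
    for name, (transistors, area_mm2) in chips.items():
        density = transistors / area_mm2 / 1e6  # millions of transistors per mm^2
        print(f"{name}: {density:.1f}M transistors/mm^2")
    # Xbox One SoC: ~13.8M/mm^2 vs. Tahiti: ~11.8M/mm^2 -- roughly a 17% density
    # edge, about what you'd expect from an SRAM-heavy layout on the same process.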

This chip’s use of DDR3 system memory presents an intriguing contrast to the otherwise-similar SoC in the PlayStation 4. Sony chose to employ much higher-bandwidth GDDR5 memory instead. The Xbox One’s architects sought to overcome this bandwidth disparity by putting quite a bit of fast eSRAM memory on the SoC die. This decision hearkens back to the Xbox 360’s use of an external 10MB eDRAM chip, but it also participates in an emerging trend in high-performance SoC architectures.

For instance, Intel incorporated 128MB of eDRAM onto the Haswell GT3e package to serve as an L4 cache, primarily for graphics, in order to overcome the bandwidth limitations of the CPU socket. The result is quite credible performance from an integrated graphics solution—and it comes in the context of a relatively modest 47W power budget, in part because local cache accesses have lower power costs than going out to main memory. The GT3e also happens to be a virtuoso in bandwidth-bound CPU workloads like computational fluid dynamics thanks to that massive L4 cache.

The benefits of such configurations are not lost on the architects of various upcoming x86 SoCs, if the rumors we’ve all been hearing are true. It’s quite possible we’ll see on-die or on-package caches rising into the hundreds of megabytes or even into the gigabytes over the next several years. If and when that happens, many of today’s thorny, compute-intensive workloads could begin to look very much like solved problems.

With that said, we don’t yet know whether the Xbox One’s eSRAM will provide enough oomph for Microsoft’s new console to keep pace with the PS4. The slides appear to indicate that the Xbox One’s eSRAM is arranged in four segments of 8MB each, with four 256-bit read/write data paths. Peak bandwidth is indicated to be 204 GB/s, with minimum bandwidth of 109 GB/s. Right now, we have to presume the PS4’s GDDR5 main memory, with 176 GB/s of throughput at 5.5 GT/s, will give it some advantage over the Xbox One. However, the eSRAM in the One’s SoC could confer some nice benefits itself in the form of lower access latencies—important for CPU-bound tasks—and increased power efficiency. In the end, this contest might prove to be much closer than folks first thought.
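
If you’d like to check the arithmetic behind those figures, it’s just bus width times transfer rate. Here’s a minimal sketch; the 256-bit interface widths, the 853MHz eSRAM clock, and the 2133 MT/s DDR3 speed are the numbers floating around in reports (and in the comments below) rather than anything spelled out in the slide photos I’ve seen, so treat them as assumptions:

    def bandwidth_gb_per_s(bus_bits, transfers_per_sec):
        # Peak bandwidth: width in bytes multiplied by the transfer rate.
        return bus_bits / 8 * transfers_per_sec / 1e9

    # Xbox One eSRAM: four 256-bit paths, assumed to run at an 853MHz clock.
    print(bandwidth_gb_per_s(4 * 256, 853e6))   # ~109 GB/s, the "minimum" figure
    # Xbox One main memory: 256-bit DDR3, reportedly 2133 MT/s.
    print(bandwidth_gb_per_s(256, 2133e6))      # ~68 GB/s
    # PS4 main memory: 256-bit GDDR5 at 5.5 GT/s.
    print(bandwidth_gb_per_s(256, 5.5e9))       # ~176 GB/s
    # The 204 GB/s "peak" claim implies the eSRAM can do reads and writes in the
    # same cycle part of the time; the slides don't say exactly how often.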

Comments closed
    • Airmantharp
    • 6 years ago

    Will Microsoft or Sony upgrade these consoles over time?

[quote="the"]I'm also wondering if MS has provisions to transition from DDR3 to DDR4 down the road when things become economical. DDR4 speeds would still be 2133 MHz effective to conserve bandwidth, but it'd be a cost-cutting and power-saving measure.[/quote]

    In addition to a memory interface upgrade, which should be easy to do, I'm wondering if MS didn't go with the ESRAM/DDR3L solution in order to get their basic platform set with the intention to upgrade over time.

    I think that it's perfectly feasible for Sony, MS, or both, to use future process nodes to make more powerful versions of the PS4 and/or Xbox One in the future, instead of just using them to shrink their current part for better margins, which they may do as well. Given that these consoles both top out at 1080p for game rendering, while they support 4K outputs (just like the PS3 and Xbox 360 rendered to 720p while supporting 1080p output), it stands to reason that they could easily upgrade a range of the components without breaking software compatibility, in a PC-like fashion, to support higher resolution or higher performance, at a price.

    And given the extremely focused architectures that the last generation of consoles had (and the one before that, save the original Xbox), I don't think that there'd be any impediments to MS or Sony ordering an upgraded SoC at TSMC's next stable node while also upgrading memory, storage, or other game-system components like networking or adding a CableCard slot or whatever. What do you guys think?

    • sschaem
    • 6 years ago

    The number that needs more explanation is the 109 GB/s.
    We know the XB1's RAM is rated at 68 GB/s. So if 109 GB/s is the eSRAM bandwidth under normal conditions.. ouch!

    BTW, the PS4 does have on board cache, just not as much.

      • nico1982
      • 6 years ago

      The 109 GB/s figure is the only certainty provided. 4 * 256 bit * 853 MHz equals 109 GB/s, period. Why they define it as being ‘minimum’ and provide a theoretical maximum is anyone’s guess.

      The 8 MB split is a novelty and more interesting. If the 1024-bit eSRAM interface is not shared among the modules, then the 109 GB/s is just the aggregate bandwidth. Does this mean that if the data on the eSRAM is not copied across multiple modules, it can only be accessed at roughly 28 GB/s?
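
      (For anyone checking that last bit of arithmetic: one 256-bit path at the 853 MHz clock used above does land right around that figure. A quick sketch, with the clock treated as an assumption:)

          # One 8MB eSRAM module over a single 256-bit path at an assumed 853 MHz.
          per_module = 256 / 8 * 853e6 / 1e9
          print(f"{per_module:.1f} GB/s per module")   # ~27.3 GB/s, i.e. "roughly 28 GB/s"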

    • eitje
    • 6 years ago

    xbox is huge lol

    • TheMonkeyKing
    • 6 years ago

    [quote]It's quite possible we'll see on-die or on-package caches rising into the hundreds of megabytes or even into the gigabytes over the next several years. If and when that happens, many of today's thorny, compute-intensive workloads could begin to look very much like solved problems.[/quote]

    Does this mean a bitcoin rush? (Like the old days of the gold rush period where everyone became a miner.)

    • Deanjo
    • 6 years ago

    I’ll wait for the steambox. Both the PS4 and XBox One are too locked down for me.

    • ronch
    • 6 years ago

    [quote]Memory cells tend to be denser than logic, which is probably why this SoC crams ~700 million more transistors into the same space as Tahiti.[/quote]

    Not only that, but it also has the CPU cores as well as more glue logic here and there. Does it also have the connectivity options found in Kabini such as USB and SATA controllers on-die? I don't really care much about the Xbone's extras though. I just want a no-frills game machine. If I were out for a next-gen console I'd probably get the PS4 although I'm certainly not ruling out the Xbone.

    • Ashbringer
    • 6 years ago

    Is that with or without the 8 core CPU? Cause the Xbox One has like 768 stream processors. The Radeon HD 7970 has like 2048 stream processors. The nearest likeness to the Xbox One is a Radeon HD 7770 which has like 640 Streams.

    Someone tell Microsoft to take their marketing someplace else. Also it’s a System On Chip, which means that chip has a lot of things other than a GPU.

    • brute
    • 6 years ago

    if u want hot chips u need try new doritos

      • Ashbringer
      • 6 years ago

      Now unlockable Doritos player.

      [url]https://i.chzbgr.com/maxW500/7742999808/hA25EB071/[/url]

    • spuppy
    • 6 years ago

    Has it been determined if the XB1 uses the same hUMA architecture as ps4? I know it’s confirmed for the latter, but I wonder if the ESRAM design makes it impossible for the former

      • BoBzeBuilder
      • 6 years ago

      I thought the former. But now that I see the former, I think that means the latter is true, which makes the latter truer than the former since the former is true.

    • sschaem
    • 6 years ago

    Size in terms of transistors… but not complexity.

    It's 'mostly' SRAM, 32MB worth. Something Sony didn't have to do with their use of GDDR5,
    and in turn they were able to put in a 50% faster GPU.

    Take the SRAM out, and what you have is not that impressive.

    That cache delivers 204 GB/s to 32MB. The PS4 gets 176 GB/s to the entire 8GB.

    • lycium
    • 6 years ago

    Excellent writeup Scott!

    I really wish desktop CPUs with large eDRAM caches would become available; it seems like it would be great for 3D rendering applications, too.

    • BaronMatrix
    • 6 years ago

    I have been telling people that this MAY be why so many chips are pushed back…

    AMD will probably need to supply 10M of these (XB1 and PS4) by the end of the year… That’s around HALF A BILLION in revenue at $50 apiece… But with all that stuff it’s closer to $75…

    They CAN’T mess that up, cause the chips are really already sold…

      • Airmantharp
      • 6 years ago

      AMD isn’t supplying any product- they’ve already supplied it, to Microsoft and Sony. That’s the designs for these things. TSMC has to supply them to Microsoft and Sony, and TSMC definitely has the capacity to do so.

      I don’t see any massive potential issues here. The design isn’t special, with components sourced from the computational units of products TSMC is already fabbing for AMD. If TSMC messes it up, well, that’ll be on them, as they’ve already proven that they know how to make these things in bulk.

    • gerryg
    • 6 years ago

    The question I have, based on the architecture differences between the One and PS4, is how they affect the cost of parts. I'm not keeping track, but I'm assuming GDDR5 is more expensive than DDR3. Depending on how much memory each has, and what the approximate cost of the main chips is, one of them is likely to be more expensive.

    It will be interesting if the PS4 proves to be more expensive, since IIRC it is being staged to sell for less than the One, which means Sony's margins will be slimmer (nil?) for the hardware. Microsoft of course has the next-gen Kinect bundled and has a bit more clout to charge a premium for the One bundle, but certainly not as much as the Apple tax (which I'm hoping will be repealed in the coming year or two as Google Android continues to crush them in the open market…).

    Anybody have a way to estimate part costs to compare the two systems? Can't wait until they're released…

      • Airmantharp
      • 6 years ago

      When talking about costs and margins, it’s important to understand that they’re not really that important for the hardware- the only important thing is that they don’t lose their pants selling the boxes.

      What matters is the game sales, and the royalties from them, along with the subscriptions to the online services Microsoft and Sony provide. They can stand to lose money on every console shipped if they move games, and that’s exactly what they’re both fighting for; for now, Microsoft seems to have the more versatile box and a pretty devout following for whatever reason, while Sony only seems to have a few exclusives, none of which are the multiplayer blockbusters that keep Xboxes flying off the shelves.

        • Srsly_Bro
        • 6 years ago

        GTA V. It’ll sell a ton of consoles by itself.

          • Airmantharp
          • 6 years ago

          No doubt- but not to me. Such games, for whatever reason, don’t pique my interest- and any game that I’m really interested in I want a high-resolution screen, a mouse and a keyboard, and the option for other input devices as well. Consoles feel like driving go-carts in comparison.

            • Srsly_Bro
            • 6 years ago

            Believe me, I’ll wait for GTA 5 to debut on PC. I hope the 360 controller is supported!

            • Airmantharp
            • 6 years ago

            It’d take some real, focused bureaucratic asshattery to make a 360 controller not work on a console port :).

            • Chrispy_
            • 6 years ago

            I’ll be updating my HTPC to the ex-bone pads as soon as they’re available for Windows.

            The 360 pads are good but my hands-on with the ex-bone pads last week just sealed the deal for me.

            As much as I want Sony to win this round with their anti-DRM, customer-friendly policies, I’m a Windows gamer and as such my only interest in their participation this round is to provide competition that keeps Microsoft in check.

            • Airmantharp
            • 6 years ago

            I’m going to have to look closely at those- but I don’t really have a whole lot of games that could really use the inputs.

            Right now, I use my 360 pad exclusively for flying helicopters in BF3; for which there is no equal. But it’s useless for jets, or anything else really, compared to just the keyboard, keyboard and mouse, and especially a real joystick.

          • CaptTomato
          • 6 years ago

          yo mofo dat shat da true!!!!

        • Narishma
        • 6 years ago

        Sony has plenty of AAA exclusives being worked on by their many first party studios. They haven’t announced them yet because they really have no reason to. They’ll sell every console they can produce from launch until maybe halfway through next year with just the first and third party games they have announced so far, so why announce more of them now when it won’t help them sell additional consoles they don’t have? When the PS4 stocks start accumulating after the launch demand has been satisfied, then they will announce them.

        • gerryg
        • 6 years ago

        Yes, the money is in the software, but if the hardware doesn't meet the price point consumers are willing to pay, they won't buy the software. If the markup and therefore margin on the hardware is high, then the manufacturer can run sales or drop prices to stimulate demand.

        The Wii, Wii U, and 3DS have proved that it's a difficult mix to get right: desired features at the right price with the right game titles. If any of those are off, sales suffer, and if sales suffer you can't sustain over the long term. Nintendo has been learning hard lessons.

        Anyway, just saying that doing some guesstimation and prognostication can lead to speculation as to how well the two boxes will sell and what the post-launch strategies might be, in order to declare a potential console winner or at least early leader. I'm going to guess it will be a tie, assuming the PS4 has a good crop of launch titles.

      • puppetworx
      • 6 years ago

      Estimating the cost of parts is very hard for this situation as you can’t just use the going market rates (if there are any). Sony and Microsoft are behemoths and will be buying parts on an unparalleled scale for these consoles. Over 75 million PS3s were sold worldwide since it launched and the XBOX 360 sold a couple million more than that since launch. When you’re dealing with that kind of volume and orders for tens of millions of parts companies are prepared to offer huge discounts in order to get the contract.

      • sschaem
      • 6 years ago

      #1 extra cost for the Xbox One is the Kinect you are forced to buy with it.

      The SoC should be near in cost, but what might make a difference is yield.
      Not sure if AMD got rear-ended on this one, and if it's AMD that will pick up the tab for manufacturing yield. (Looking at the GF contract, AMD is a very weak negotiator, so I wouldn't be surprised if AMD ends up losing money on either of those two deals.)

      Well, Sony set the price at $299; that's what matters to us.
      Even if MS's component cost is cheaper, they are still asking $399 for it.

      Either way, both companies will do fine. More profit on the console will not create better games.

        • Sahrin
        • 6 years ago

        The reason the GF deal was structured the way it was is that GF/ATIC absorbed a lot of debt from AMD and bought AMD out of GF (instead of having AMD slowly close it down). AMD didn’t get “nothing in return”: they got bailed out of $500M/year in fab research costs, plus about 50% of their outstanding debt (at the time) swallowed.

          • sschaem
          • 6 years ago

          Not part of my comment..

          But yes, AMD had to divest itself from its foundry business.. because it was failing.

          AMD did get a bailout… but that bailout was not in AMD's favor. AMD paid very, very dearly for this transaction.

          My point is, AMD can't negotiate anything to their advantage. Hence the 200+ million they still owe to GF, and have to pay in Q1 2014.. that will WIPE OUT all their 2013 and 2014 profits.

          GF got paid by AMD because AMD couldn't pay for chips it couldn't sell, and GF turned around and already sold all the free capacity that created.. to AMD's competitors.
          That's called being rear-ended really, really hard. (The grown-ups know what I'm saying here.)

    • superjawes
    • 6 years ago

    Man I can’t wait!

    Can’t wait to build a new gaming PC, that is. And maybe pick up an XBOne or PS4 after a significant price cut and several game releases.

      • Airmantharp
      • 6 years ago

      I’m really only interested in what they can do aside from gaming- and I’m not really that interested in actually buying one, just as a curiosity.

      For my gaming PC, though- I’m fairly set on a six-core Intel CPU, 32GB of RAM, and a pair of 8GB-12GB GPU’s with the combined horsepower to do 1440p@120Hz and 4k@60Hz with high settings.

      I’ll keep saving. I still have to feed my photography habit :).

        • DarkUltra
        • 6 years ago

        It’s amazing how fast current 3D cards are at 1440p if you turn down AA, shaders, shadows and ambient occlusion. 😛

          • Airmantharp
          • 6 years ago

          I have a pair of GTX670’s running at 1600p- and yeah, there’s not much that really challenges them if you back off the fill-rate- or memory-bandwidth-intensive AA settings; most of the other stuff you can leave cranked.

          One reason I’d want to upgrade, though, is that 1440p at 120Hz (if I actually wanted to hit 120, which I do) requires near twice the fill-rate of my current 1600p 60Hz setup, as would a 4k 60Hz setup. But the main reason is due to the incoming consoles having 8GB of RAM, with much of that available to games. I fully expect the PC ports of these games to support even higher fidelity settings (such as more, higher resolution textures) than the console versions, and with higher resolutions on the PC side, I expect 8GB of VRAM to become the new 2GB.

          And memory is cheap- that’s why these consoles have 8GB in the first place. Putting 8GB or 12GB on a GPU depending on the width of the memory controller (you don’t want 6GB on a 384bit controller, so you put on 12GB) is quick and easy if there’s a demand for it. And I’m demanding it!

          • Airmantharp
          • 6 years ago

          Note for my reply above- I do realize that you’re being sarcastic, but my reply is serious, because I think that even in jest you do bring up a good point 🙂

        • internetsandman
        • 6 years ago

        Pfffft, 16:9, lame

        In all seriousness though if you have a couple GPU’s each with 8GB of memory you should be able to do 3 displays with 1440p@120hz. I just have to wonder at the amount of bandwidth such a setup would use, whether DP would be able to drive such a config

          • Airmantharp
          • 6 years ago

          I mention 1440p @120Hz only because such panels can actually be bought- I abhor 16:9 as a computing aspect ratio just as much as the next enthusiast. I’d jump on 1600p @120Hz if such a thing existed, or was even feasible, which I don’t think it is, with current display interface technology.

          I love my HP ZR30w- it may have CCFLs and be hot and heavy, but nothing on the market has fully surpassed it, though I’d gladly take 120Hz, deeper blacks, higher contrast ratio, and faster pixel response. It has almost no input lag, which is why I bought it, and which Dell has yet to fully match in a monitor using a modern IPS panel. The U3013 has issues when set to its ‘gaming’ setting that I’d rather not deal with, especially since I also do a considerable amount of photo editing.

        • Srsly_Bro
        • 6 years ago

        A gaming PC?

        You need a 6 core CPU?

        You need 32GBs of RAM?

        You need 8GB-12GB of memory for each GPU?

        Do you use a Canon 1Dx to take pictures of your cat?

        [url]http://usa.canon.com/cusa/consumer/products/cameras/slr_cameras/eos_1d_x[/url]

          • Airmantharp
          • 6 years ago

          I use a 6D, but it’s functionally the same as the 1D X when talking about the RAW and 1080p30/720p60 video files.

          But let me respond line-by-line, as I understand that my ‘wishlist’ is a little ‘out-of-line’ for a pure gaming machine:

          Yes, I want a six-core CPU. I’m running a 2500k at 4.5GHz, and I regularly (I mean all the fricking time) top out all four cores. I’m actually CPU limited at 4.5GHzx4, and I’d be looking to hit a solid 5.0GHzx6 on Ivy. And I still don’t think that’d be enough, but that’s the best I could do.

          I have 16GB of RAM now- I reserve 1GB for the HD2000 GPU on my 2500k, which powers two additional (small, cheap, crappy TN) monitors that I use for various web pages and system readouts while working or gaming. Now, with 15GB, I regularly breach 10GB usage in Lightroom, and I don’t see that usage going down in the future- rather, I expect it to rise considerably, especially when you consider that Canon is working on a 25MP Foveon-X3 style sensor that actually records (at least) all three color channels on each pixel at at least 14-bit color, if not more, instead of the 22MP-max single-color-channel Bayer-style sensors they’re using now. I’m also looking forward to doing RAW video, be it at a reduced resolution on my 6D (I really should have saved up for a 5D III… but I’m definitely enjoying the 6D’s new features), or possibly at 1080p on the 70D or whatever the 7D II becomes, as I’d really like a cropper to serve as a long-reach, action, and video backup to the 6D.

          I don’t need 8GB-12GB per GPU- I expect future games that have been developed with the new consoles in mind to need more than 6GB of VRAM when ported to the PC in order to turn on all the eye candy, which I plan to be able to do at either ~4MP 120Hz or 4k 60Hz. Maybe both.

          I do apologize that my requirements seem out of whack- right now, with a 4.5GHz 2500k, a pair of 2GB GTX670’s, and a 2560×1600 HP monitor, I’m pretty well set for at least the next year or so, but my wishlist is a little forward looking :).

            • Srsly_Bro
            • 6 years ago

            lol I didn’t think you would respond. You get a +1 for cojones! I still shoot with the dependable D300 and MB-D10 grip.

            I’m running a 2700K and a MSI TF3 7950 and I leave everything at stock gaming on 1920×1200. Metro: Last Light beat up my system, but that’s the only game I have that taxes it.

            IVB CPUs run hot. The only way you might have a chance of running a 6 core @ 5.0 GHz for any extended period is on a high end WC loop. Still I have my doubts you’d be comfortable with the temps.

            I’m not a fan of the 6D or really any of Canon’s current offerings. The camera to beat in FF format is the D600. The 6D falls a bit short, but if you’re invested in the Canon system, Canon makes awesome lenses.

            Here is a pro-tip: Don’t build a PC for the future. Build it when the future comes or you’ll end up with an archaic POS.

            • Airmantharp
            • 6 years ago

            One thing for certain with Ivy-E- we can’t be certain what ‘paste’ or ‘solder’ they’re using on it. If it is soldered like SB/SB-E, then it quite likely will be easy to push to higher clockspeeds than the consumer Ivys. As for the temps, well, CPUs have gotten better at that- it’s actually not the heat I’m worried about, as I’d be able to reasonably account for that once reviewers and enthusiasts tighten the screws on retail samples, but rather the voltage. 80c under sustained load is perfectly fine (as is higher, actually), but not if getting to a speed that produces that kind of heat requires greater than my 110%-115% comfort zone for voltage.

            For cameras, there’s a constant back-and-forth; Canon’s advantage is in lenses, where they have the world’s best, their high-ISO performance, where the 6D is nothing short of absolutely amazing, the features that can be unlocked using ML, and in their ‘extra’ features like WiFi and GPS that are so very well implemented and can only get better.

            The 6D isn’t perfect- sure, I’d love some of Nikon’s DR, and the 14-24, but I wouldn’t give up my 6D for it. The camera to beat in the FF space is the 5D III, or 1D X, though it’s in a different league :).

            *Edit: I’m not building a PC for the future, I’m building a PC in the future- the parts I want to use don’t exist today, at least not on the shelves!

    • OneArmedScissor
    • 6 years ago

    I’m a bit surprised that the eSRAM is only 32MB. That’s enough for a 1080p frame plus eye candy, but it’s also how much eDRAM the Wii U has, and I doubt Nintendo had any aspirations for 3D or 4K TV.
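
    (Rough sizing, for reference. This is only a sketch assuming plain RGBA8 color plus a 32-bit depth/stencil buffer, not any particular engine's actual render-target layout:)

        # How much of 32MB does a basic 1080p render-target setup consume?
        width, height, bytes_per_pixel = 1920, 1080, 4
        color_mb = width * height * bytes_per_pixel / 2**20   # ~7.9 MB RGBA8 color
        depth_mb = width * height * bytes_per_pixel / 2**20   # ~7.9 MB 32-bit depth/stencil
        print(f"{color_mb + depth_mb:.1f} MB of 32 MB")
        # ~15.8 MB -- fine for a simple forward renderer, but a fat deferred
        # G-buffer or MSAA at 1080p starts crowding the 32MB pretty quickly.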

      • Airmantharp
      • 6 years ago

      Microsoft doesn’t have any aspirations for 4k- at least, they don’t with this version of the Xbone, though it’s not unreasonable for the SoC to be upgraded for 4k rendering without having to change all of the software along the way, like one would do with a PC by adding a second GPU.

      As for 3D- well, 3D doesn’t require more resolution, it requires higher framerates, and that’s not really an issue for Microsoft, rather for game developers.

      • Beelzebubba9
      • 6 years ago

      In Anand’s Iris Pro review he mentioned that Intel told him that 32MB was about the ideal size for a frame buffer for graphics workloads, but they went with 128MB to accommodate other design considerations and, I assume, to future proof the Crystalwell chip.

        • Airmantharp
        • 6 years ago

        Another chicken-and-egg scenario: Intel will have to get developers truly interested in their L4 cache setup to make a business case for it across the board, and to do that they have to ship it without a killer app and eat the costs (if there are any).

        But there’s a very big bright side to it- putting another level of memory/cache between L3 caches and main memory makes a whole lot of sense given the capacity, latency, and bandwidth disparity between the two. Intel’s goal is to improve efficiency, after all, and it makes sense to attempt to reduce CPU idle times when working with larger datasets, allowing the CPU to compete more effectively with compute-focused GPUs that are quickly negating the need for real CPUs in big iron deployments.

        Imagine if Nvidia started shipping Kepler GPUs with ARM ‘supervisor’ cores attached? It wouldn’t look much different than AMD’s ‘tablet’ APUs, if modern out-of-order 64bit ARM designs are sourced. Such cores could handle all of the IO and branching code requirements while the GPU’s cores do all of the real heavy lifting, and such ‘modules’ could be neatly packed into racks for big iron workloads.

    • jjj
    • 6 years ago

    It’s a bit bombastic to compare it with Tahiti since they waste a lot of die area on that SRAM.
    It’s also very different from what Intel did, and don’t forget Nvidia’s Volta with the RAM on an interposer.
    What Sony did seems more reasonable, though: they can afford to have a bigger GPU and they most likely have better yields, while paying a bit more for the RAM.
    As for what to expect in other devices, having a lot of RAM on the same die is very unlikely; same package, sure, but same die makes little sense, and there is a huge difference between the two solutions.
    In x86 you also need to look at the context. Intel is exceptionally greedy (watch and see how big a 6-core IB-E die is and how much it costs), so any added die area will cost, and cost a lot, and then it might as well not exist. AMD is in trouble and desperate to make a buck somehow, so they might be unlikely to go for big dies even if it makes sense. On the GPU side it’s another matter, ofc.

      • Damage
      • 6 years ago

      Bombastic, eh?

      Yeah, we’re just like BuzzFeed here. 18 reasons the Xbox One SoC is like AMD’s Tahiti–and one difference you won’t believe.

        • LukeCWM
        • 6 years ago

        [quote]Yeah, we're just like BuzzFeed here.[/quote]

        [i]Click [b]here[/b] for the XBone secret that [u]Sony[/u] [b]doesn't want you to know![/b][/i]

    • fredsnotdead
    • 6 years ago

    And now for the gratuitous automobile analogy…

    PS4 SoC w/GDDR5 = V8
    XB1 SoC w/DDR3 + SRAM = V6 + turbo

      • Thrashdog
      • 6 years ago

      Ah, but does the XBone EcoBoost chip give me V8 power and towing capability with V6 fuel economy?

        • slowriot
        • 6 years ago

        Sure, but there’s turbo lag and the noise just isn’t the same!

        • tfp
        • 6 years ago

        Yes but only for a limited time because the transmission isn’t heavy duty.

      • Beelzebubba9
      • 6 years ago

      PC: V8 + Twin Turbos.

      :smug:

        • willmore
        • 6 years ago

        More like V12 with supercharger.

          • Airmantharp
          • 6 years ago

          Not sure which I’d prefer more- but a modern boosted V10/V12 a la M5 with cylinder deactivation and variable boost along with a configurable high ratio transmission would really be interesting. Make it like a GTR, but, you know, with character.

            • entropy13
            • 6 years ago

            The GT-R is one of my dream cars. Another would be the Skyline GT-R…

            • Airmantharp
            • 6 years ago

            Which are the same car…

            And both are life-less. Really. As much has been said by countless reviewers. Get an M3/M5 instead, and actually enjoy driving!

            • entropy13
            • 6 years ago

            The same car? The R32, R33, R34, yeah there were only some differences, but the R35 is drastically different from any of them.

            • Airmantharp
            • 6 years ago

            GTR <=> Skyline. It’s a marketing badge, not a model :).

            • entropy13
            • 6 years ago

            So by your reasoning the new ‘Skyline’ is the same car as the GT-R… 😐

            • Airmantharp
            • 6 years ago

            Yes…

            • Scrotos
            • 6 years ago

            Also, from the official Nissan website:

            “The Nissan GT-R supercar, formerly known as Skyline, is the 2009 Motor Trend Car of the Year.”

            • Scrotos
            • 6 years ago

            I drove a few laps in one. It was really nice! Also did an R8 V10 and a McLaren MP4-12C.

            [url]http://www.exoticsracing.com/[/url]

            I don't know that I'd call the GT-R "lifeless". Granted, I didn't drive any "ultimate driving machines" or yuppie cars for a direct comparison, but it was very nice to take on the track. There's one in the parking garage where I work. The dude must love hitting the mountain roads or the plains, just going all-out when no one's around.

            • willmore
            • 6 years ago

            M5, most surely. I went to the dealer several times just to look under the hood. People must have kept asking to see it so much that they ended up leaving the hood open and putting some of the protectors over the sides so that people could lean in and get a good look. Drool…..

            That said, I still love my inline 6 with normal aspiration.

          • Jigar
          • 6 years ago

          with loads of headroom to increase the power.

        • superjawes
        • 6 years ago

          [url=http://www.dieselpowermag.com/features/1006dp_78_liter_colossal_cummins_engine/]The Double Stuff Workstation[/url]

          ...okay, not quite, but I couldn't help myself...

          • ALiLPinkMonster
          • 6 years ago

          That’s the ‘vette. Dual Xeon E5’s and quad GTX Titans would be the Bugatti.

            • Airmantharp
            • 6 years ago

            I’d agree that the Bugatti is up there- but you have to give the oncoming ZR1 a little credit. 800HP stock is well within range with the new V8 + blower, and GM’s suspension and handling technology is second to none (even if they use it so, so sparingly).

            • Fighterpilot
            • 6 years ago

            LOl…wut?

            GM’s handling and suspension technology?

            The company that relied on live rear axles and leaf springs?

            Oh please…stop it.

            Go learn about cars rather than reading the sides of your matchbox toy cartons.

            • Airmantharp
            • 6 years ago

            Go compare a Camaro ZL1 to a Mustang GT500 with a 100HP advantage- and watch the ZL1 tear up the ‘Stang around the track.

            Or compare a ZR1 to anything except the Veyron.

            Try reading up on recent cars for a change? Thanks!

            • Fighterpilot
            • 6 years ago

            Do you really need another smackdown….?

            Let’s see how your LOL.Corvette LOL…does against…

            1.La Ferrari
            2.Pagani Huayra
            or
            3.Porsche 918

            Try n get current dude…

            • Airmantharp
            • 6 years ago

            You do realize that the ZR1 is a $100,000 car, right? The outgoing model is still among the fastest in the world, and it’s about to get a significant update?

            No, it’s not from a supercar house- it’s from GM. But that’s the point I’m making (as an analogy for something else…)- that GM has world-beating suspension and handling technology, and that they’ve used it in a number of very nice cars, not all of them ‘sports’ or ‘muscle’ or ‘super’ cars, too.

            And yeah, I expect the incoming <$150,000 Super-Stingray to hang with your list of exotics above. Stay focused in your basement cockpit!

            • Waco
            • 6 years ago

            Are you stuck in the 80s or something?

            • Fighterpilot
            • 6 years ago

            No,but Corvettes are…

            • Airmantharp
            • 6 years ago

            No, no they’re not.

            The C5 was for impotent geriatrics- the C6 changed that, and the current C7 is a world-beater.

            Oh, and I don’t like GM, for the record. I’d rather drive a Ford every day of the week- if that’s the question. Though given the choice, it’d be neither. I’d take a Land Rover Defender, when they get those back over here, as long as they can make it reasonably comfortable by keeping down the NVH, and they put a solid diesel V8 in it.

            • Scrotos
            • 6 years ago

            Hey, remember that time that the crappy handling and suspension got a Corvette around Nurburgring in near record time? That was the best!

            [url]http://www.autonews.com/article/20110609/BLOG06/110609869/2012-chevy-corvette-zr1-laps-the-nurburgring-in-near-record-time#axzz2dV7zklmV[/url]

            • Airmantharp
            • 6 years ago

            Thanks for the backup- some people just never learn, lol. I was hoping that he’d grow a set of testicles so he could swallow his pride and actually go look this stuff up and educate himself, but instead he’d rather sit in his basement with Flight Simulator X and custom desk with his sticks and pedals, and reminisce about his daddy’s automotive glory days, since he obviously didn’t live them.

            Oh well- all we can do is try, right?

            • Fighterpilot
            • 6 years ago

            oh please…you got pwned and then tried to move the goalposts…
            FYI I have extensive experience with circuit racing, rallying, and drag racing, as well as actually being a pilot.
            What do you currently drive and …how’s your pilot skills?

            • Airmantharp
            • 6 years ago

            Not really ‘caught’, it was an analogy, after all, and a loose one at that.

            I’m more making light of your attitude in posts recently- you’re coming from a very specific perspective, and even though I mostly believe you’re right (opinions about cars are opinions, and I don’t disagree with yours, either, we’re just coming from different directions), it’s your attitude that I, and others apparently, take exception to. Try a smoother approach?

            • entropy13
            • 6 years ago

            Makes you wonder what’s the equivalent of the BAC Mono and Ariel Atom V8…

            • Beomagi
            • 6 years ago

            What would Maximum PC’s dream machine 2013 be?

            Core i7 3970X (6 cores, liquid cooled at 5GHz)
            4x Titan hydro (OC is @ 928MHz, boost at 980MHz)
            64GB DDR4 1600 CAS
            Raid 0 SSDs
            Asus PQ321 3840×2160 monitor

            • Airmantharp
            • 6 years ago

            If it’s in 2013, they can wait for:

            Ivy-E
            The new AMD cards (if they’re any improvement, if not, no change)
            DDR4 doesn’t exist in a shipping CPU package, but DDR3 can hit 2400 easily- though that may need to be dialed back a little for quad-channel
            RAID-0 SSDs would be fine, I’d think- just make sure you get four of the fastest capacity of whatever drives you’re using- probably 256GB Samsung 840 Pros, but I’d look pretty closely at the OCZ Vectors too
            I’d have one of these monitors, but really I’d prefer to debezel a triplet of 120Hz TN’s with Lightboost as well; try to get ones that don’t degrade in image quality significantly when Lightboost is used. I might also get a top-end Dell for critical photo work.

            And don’t forget to mention sound! I’d be using a Soundblaster ZX (for the upgraded DACs) with a pair of HD598’s, since the water-cooled GPUs should open up some slots, and it’d be getting a triple-channel (3×3 on 2.4GHz and 5GHz) 802.11ac card as well- and I might put in a basic RAID controller for a stack of eight WD Reds (or Hitachi/Seagate equivalent) in RAID6 with a hot spare- and I’d have a couple standing by in hot-swap trays.

            I might also look into setting up a 3.1 sound system (stereo plus center and sub) for non-headphone gaming, music, and movies, using high-quality discrete components.

            Only thing I’d put into question for you though- what case would you use? I’d want something that isn’t stupid huge, but that could reasonably space out all of the components, tubes and cables, and would provide a quiet environment for the radiator setup, both intake and exhaust, and that’s fully filtered.

            I really wish Silverstone would update the FT-02 design to accommodate nine slots instead of seven, so that you could put three or four blower-style GPUs in there evenly spaced, while still having room for ‘other’ stuff. They could even just put in a slim-line bay for optical drives (I’d use a slot-loading Blu-ray burner) and turn the whole front of the enclosure into a SATA3 hot-swap stack, so that there’d be no issue with optical drives being too long for a card in the ‘bottom’ expansion slot.

      • lilbuddhaman
      • 6 years ago

      Are you implying the XB1 will have better gas mileage too?

      • DeadOfKnight
      • 6 years ago

      Too bad the V6 + turbo comes stock with an overpriced rear-facing camera.

      • slaimus
      • 6 years ago

      I would say:

      PS4 SoC w/GDDR5 = I6 + supercharger
      XB1 SoC w/DDR3 + SRAM = I4 + Electric Motor
      Modern PC = V8

        • Srsly_Bro
        • 6 years ago

        Maybe your grandma’s E-Machine she bought 10 years ago would be a V8. You do understand desktop graphics and CPUs are still leaps and bounds ahead of the PS4, right??? right???????

        You get a -1 for your grossly uninformed opinion. I wish I could give you more for spreading ignorance.

      • moose17145
      • 6 years ago

      So… Then along the lines of Car Analogies… Would this basically explain what happened with MS and Windows 8 then?

      [url]http://www.youtube.com/watch?v=mrn3Yb3L_iU[/url]

        • jfreiman
        • 6 years ago

        One thing that no one is talking about is how important the performance of the x86-64 Jaguar cores is in relation to overall game performance.

        I am a huge AMD fan, but every benchmark shows that AMD's current cores hold back the GPU – it doesn't matter if it's Trinity, Richland, etc. – none can keep up with an AMD GPU + Intel CPU! Heck, you don't even need an i7 to shame a 6- or 8-core AMD system.

        So while the PS4 has more shaders than the Xbone does, we may see the bottleneck at the Jaguar cores and not the GPU.

        Microsoft's decision to boost the on-chip cache may be the best design decision. And if MS is pushing more compute cycles to their cloud, the bottleneck is going to be that latency, not the smaller shader count.

        IMHO

          • Airmantharp
          • 6 years ago

          The thing you’re really missing here, though, is that current games are designed around the current consoles’ hardware configurations- that is, the PS3 with its single-core, dual-threaded ‘master CPU’ and seven floating-point ‘signal processors/SPs’, and the 360’s three dual-threaded ‘master CPUs’. Essentially, they have to design games around few, highly-clocked (3.2GHz), extremely low-IPC cores; as this translates to desktop ports, the CPU with the greatest IPC across the first four cores always wins; that’s an Intel i5/i7 with four cores today.

          But that whole design ethos changes with these Jaguar-based consoles- now, we have eight (or six or whatever games actually get) fully out-of-order x86 cores running at 1.6GHz; and that’s actually a good thing!

          Because now, every game will have to be very highly multi-threaded to run smoothly on the consoles, and PCs are very, very good at making use of multi-threaded workloads. It means that even Intel’s dual-core + HT i3’s will see a performance boost, and it means that AMD’s Bulldozer CPUs will also likely see a performance boost as well- they excel stupidly at truly multi-threaded loads. Hell, even if the game is FPU-heavy, given that the consoles have eight cores with one FPU each at 1.6GHz, even a four-module Bulldozer with four 4GHz FPUs should keep up very well- and better than they’re doing in current games, I’d think.

          But the real winner here? Intel’s six-core SB-E and Ivy-E. With six high-clocked, high-IPC cores plus Hyper-Threading, these new super-multi-threaded games will really show off the advantage of Intel’s top-end consumer SKUs, where they really haven’t been truly relevant in the gaming space up until now, given the relatively small benefit they represented with such a large (+ ~$500) increase in cost.

      • ronch
      • 6 years ago

      If those PS4 and XBone SOCs are V6 and V8 engines, I’d say a fully-loaded PC would be the Batmobile.

        • Airmantharp
        • 6 years ago

        …except if you can be Batman, then always be Batman.

    • ssidbroadcast
    • 6 years ago

    [quote]In the end, this contest might prove to be much closer than folks first thought.[/quote]

    I have two words--just two words--for that sentiment: shader count.

      • Damage
      • 6 years ago

      In modern SoCs, the two biggest performance constraints are power and memory bandwidth. Shader count is practically just a tuning option to help you perform well within your power and BW constraints. Adding embedded memory may be a better use of transistors and die area than adding more shaders, in the right conditions.

      So yeah. The architectures are different in interesting ways. Shader count is one way, but not one of the most interesting ones. 🙂

        • ssidbroadcast
        • 6 years ago

        I *think* that I agree with you that the eSRAM config will certainly level the playing field in terms of loading/streaming performance. However, there ain’t no substitute for shaders when it comes to having complex light-passes on textures that are 12+ layers deep. Things like having strong SSS or SSAO, even Tessellation do better with more shader units available for the GPU to access.

        If the XBox One ends up with a higher or equal user base to the PS4 then it probably won’t matter much in the end because developers will just scale their games to the XBone. However, if the PS4 ends up with a distinctly larger user base then we could have a repeat of the Saturn versus PS1 era: where ports between the two were common but the Saturn had poorly optimized ports with cheap hacks to approximate the effects (think stipple-alpha versus true transparency) of the superior system.

          • Damage
          • 6 years ago

          I’d just say these things are about having the right balance. More shader units won’t do you much good if you’re heavily bandwidth constrained.

          The obvious substitute for more shader units is higher power efficiency + higher sustained shader clock speeds. A big cache may help you achieve that kind of gain and end up being a nice trade-off.

          Dunno that MS is going to achieve that entirely, but the substitute does exist. 🙂

            • ssidbroadcast
            • 6 years ago

            Do we know the clock speeds of each respective CPU/GPU for each console?

            • Action.de.Parsnip
            • 6 years ago

            Damage – I think I’ll poo-poo you there. In reverse order:

            Having an SoC using GDDR5, with its minging latencies and incredible bandwidth, is just as interesting as the Xbox One’s eSRAM implementation; for one, it’s a first AFAIK.

            For the Xbox One the memory size has grown by 16x, yet the eSRAM size has grown by only 3.2x. Given also that Intel gave Iris a 128MB L4 cache as a …. bandwidth ‘booster’ to hit a MUCH smaller GPU performance target: how much GPU performance can 32MB of eSRAM support? How usable is it really? Can you push many technical boundaries in shadow maps & alpha effects on the back of that???

            Are either of the next-gen console APUs looking a bit strained for bandwidth? …… ssidbroadcast’s “two words” …. valid enough point, I feel. 176 GB/s for the PS4 APU could feed a much nippier GPU than something in the 7850 range, so it’s a small leap to say there’s a goodly amount to go around between the GPU and the 2 x 4-core CPU clusters. A vanilla Jaguar quad-core has a 17 GB/s memory interface and 4 cores; double this for the 8 cores in the Xbox One and PS4 APUs and you get 34 GB/s …. hardly a king’s ransom of bandwidth.

            Then think that bespoke code will be running on both APUs that makes the very, very, VERY best of what potential exists in the different sets of hardware ….. the extra compute units in the PS4 chip look well placed to be utilised fully. Sony did, after all, choose to give it 16 ROPs; hard to imagine they’d often go slightly idle.
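
            (Written out as a budget, reusing the per-cluster figure quoted above, which is a theoretical DDR3 interface number rather than measured CPU demand, so take it as a sketch:)

                # Crude bandwidth budget for the PS4 APU from the figures in this thread.
                total_gbps   = 176       # PS4's 256-bit GDDR5 interface
                cpu_gbps     = 2 * 17    # two 4-core Jaguar clusters at the ~17 GB/s quoted above
                gpu_leftover = total_gbps - cpu_gbps
                print(f"CPU worst case: {cpu_gbps} GB/s, ~{gpu_leftover} GB/s left for the GPU")
                # ~142 GB/s left over; the same exercise against the Xbox One's 68 GB/s of
                # DDR3 leaves ~34 GB/s, which is where the eSRAM has to earn its keep.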

          • Prion
          • 6 years ago

          [quote]Saturn versus PS1 era[/quote]

          Obscenely different architectures and development environments. I'll add the Nintendo 64 into that as well. The PS1 was the most conventional design, but I'd argue none were completely superior. None of the showcase games on any of those systems would have been/were possible on the others without severe changes or cuts. As with the previous era, you basically had to go from the ground up for each system for respectable results.

          "Poorly optimized" ports these days just mean a little less shininess, less AA, lower resolution, or a less stable framerate on one system or the other. Usually even then only when farmed out to a developer unfamiliar with the console/programming tools, or just limiting the resources of the porting team too much.

        • HisDivineOrder
        • 6 years ago

        The important point will be if software makers (that is, game developers) are constrained in properly using the SRAM or not. If MS has the tools built to automatically manage that cache and that management turns out to be VERY optimized, then it may keep up with the PS4’s generally greater memory bandwidth and also have a few opportunities to excel in areas that the GDDR5 can’t.

        However, if MS lets the developers do the heavy lifting on using that SRAM properly, then I think you’re going to see third parties by and large ignore it and treat the system as if it has DDR3L and the PS4 has GDDR5.

        I think the GDDR5 was a necessity for the PS4 because Sony went with more shaders. We don’t really know what the cache is like on the PS4’s version of the chip, either.

        My prediction is going to be that developers waited so long for this new generation to show up that they used high-spec PCs to prototype what the next gen would be (i.e., Watch Dogs' developers said this, and BF4 did this with Frostbite for all of non-sports EA). This is why developers told Sony (and presumably MS) that they wanted the architecture to just be x86, have lots of fast RAM, and have a modern GPU from a PC-like design. They were already heavily prototyped for PC. Given that, I imagine this generation–especially at the beginning–will show up with PC titles ported to either console. As such, I imagine they'll tweak settings to get smooth gameplay for each console.

        For the PS4, they’ll probably leave the AA up a little higher and perhaps render truly at 1080p internally while with the Xbone they’ll likely turn off AA and/or reduce SSAO or ambient occlusion or the particle physics or even render at a sub-1080p resolution internally and convert it on output like they did with this generation’s consoles (lower than 720p often).

        You’ll get the same game and it might even look a lot alike, but porting from PC will have the advantage of hitting as great as each platform can do. The only thing will be that games designed in such a way would not be heavily predisposed towards using highly specialized caching routines very well unless they were heavily integrated into the porting tools that MS built and/or it was done automatically by the console on the fly.

        I’m more inclined to think the PS4’s more generalized architecture that requires less babying is going to be easier to get great ports out for and is going to become many developers’ lead platform. Especially when you look at the preorder numbers for each system and find that the PS4 is way, way ahead of the Xbone on every version of every game that is available for each. It’s the early days of a console that decides which platform developers aim at first and foremost for console game designs in two years and right now, PS4 is lookin’ mighty great.

        Remember, the 360 had a superior GPU far and away over the PS3, but it was still the lead platform for most games. The PS3 just got hobbled, choppy, often buggy ports.

        I do not think the PS4 will be getting hobbled, choppy, often buggy ports this time. If Xbone doesn’t do a damn fine job of using that SRAM properly, it may well get them, though. Sony made their console to be easy for developers to maximize after NOT doing so for two console generations.

        Especially when games are either being targeted at PC and then ported OR being targeted directly at PS4, it doesn't look good for Xbone. Only if MS can somehow convince developers to ignore all the bad press, all the false starts, all the preorder numbers that make the Xbone look far inferior in actual hardware preorders, all the preorder numbers that make it look like third-party games are going to do way better on PS4 than Xbone this holiday season, ignore the startling price difference of a Benjamin, ignore the (at least) on-paper superiority of the PS4, ignore the vitriol people are still spewing at the Xbone, ignore the fact that the name of the Xbox One has become and will forevermore be the Xbone to the point that I bet that is the name stenciled on the next update of this thing…

        Only if they ignore all that and FORCE (or coerce?) their developers to make it their lead platform for console games will it come out with games that don’t show advantages for the PS4.

          • Airmantharp
          • 6 years ago

          One point where you may be wrong- the GDDR5 on the PS4 has a much higher latency penalty than the DDR3L on the Xbone. Meaningless to GPUs that are designed around streaming the data that they know they’ll need next, but a very big concern for CPUs, and along with the SRAM on the Xbone version, could give games on the Xbone a distinct advantage when it comes to allowing for more dynamic games (or anything else), especially if there’s any compute stuff going on that’s tightly coupled with a game/application kernel running on the CPU.

            • Bubster
            • 6 years ago

            The latency difference between DDR3 and GDDR5 isn't all that great, due to GDDR5's much higher clock speed. Also, the memory controller on the Xbone may be tuned for maximum bandwidth rather than lower latency. The eSRAM obviously will be much faster than either. Jaguar is also a really weak CPU (from Kabini benchmarks it looks like it takes 4 Jaguar cores to match 2 Ivy cores at the same frequency), so 8 Jaguar cores at 1.6GHz would probably have about the same throughput as a 3.2GHz i3. And that i3 probably isn't going to be affected by more than 5% even if latency increases around 30%, due to the fact that today's CPUs have plenty of bandwidth and faster RAM makes no difference even with an i7 outside of a few applications (namely encryption and WinRAR).

            • Airmantharp
            • 6 years ago

            GDDR5 doesn’t have a higher ‘clock speed’, it has a higher per-clock transfer rate- it’s effectively QDR, but they didn’t want to call it that.

            That said, latency is a function of real clock speed, and GDDR5 has to run far looser timings to make its QDR capability happen than DDR3 does. There’s a real advantage for CPUs to DDR3 over GDDR5; note that desktop CPUs will be moving to DDR4, not GDDR5, which GPUs largely ignored.

            • Bubster
            • 6 years ago

            In a dual-channel system the effective clock of the module is 800 MHz for DDR3-1600. For quad-pumped GDDR5 at 5.0 GT/s the effective clock of the module is 1250 MHz, or more than 50% higher. So latency can be much higher (in terms of timings) on GDDR5 and still get the same absolute latency as DDR3 (in ms).

            • Airmantharp
            • 6 years ago

            It’ll be measured in ns, but you’re right, for the most part. Remember that the per-clock latencies of GDDR5 are significantly higher, though, so you can expect that if the base clocks of the DDR3 are close, the physical latency in nano-seconds will be quite a bit lower, likely approaching half that of the GDDR5, optimistically. So the question is, what are the base clocks for the DDR3 vs. the GDDR5?

            Personally, I’d think that they’d be comparable- Microsoft knows that while they have only two-thirds the shader power, DDR3 is generally half as fast from a bandwidth perspective, so they may be pushing for higher-clocked DDR3 to compensate. They should easily be able to get cherry-picked DDR3L modules that run at the lower voltage while hitting 1866MHz (933MHz base) or 2000MHz (1000MHz base) from the likes of Samsung, but even the standard-fare DDR3L-1600 C10 would likely be physically lower latency than the GDDR5.
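
            (If anyone wants to put numbers on this, here's a tiny converter. The DDR3 line uses a typical retail timing; the GDDR5 command clock follows the divide-by-four convention from the post above, and its CAS values are pure guesses, since neither console maker has published timings:)

                def latency_ns(cas_cycles, clock_mhz):
                    # Absolute CAS latency: cycles divided by the real command clock.
                    return cas_cycles / clock_mhz * 1000

                # DDR3-1600 CL11 runs off an 800 MHz command clock (typical retail timing).
                print(f"DDR3-1600 CL11: {latency_ns(11, 800):.1f} ns")                 # ~13.8 ns
                # GDDR5 at 5.5 GT/s -> ~1375 MHz command clock; CAS values are illustrative only.
                for cas in (15, 20):
                    print(f"GDDR5 CL{cas} (assumed): {latency_ns(cas, 1375):.1f} ns")  # ~10.9 / ~14.5 ns
                # Depending on the real timings, the absolute gap can be small either way,
                # which is why this sub-thread can't be settled from published specs alone.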

            • Action.de.Parsnip
            • 6 years ago

            You can – to a certain extent – code around high latencies but there’s a lot less room for maneuver if something needs bandwidth and you don’t have it.

            • Airmantharp
            • 6 years ago

            Sure, and that WILL be an issue, especially with large sets of assets. It's possible that Xbone games may run at lower texture and shader detail settings to account for fewer shaders and less real memory bandwidth for large transfers.

            On the other hand, the eSRAM may make up for the bandwidth issue, and it will help with latency for the CPU, but the Xbone will still be behind on actual fillrate.

        • WaltC
        • 6 years ago

        Memory bandwidth has been a “performance constraint” for as far back as you’d care to go; its relationship to performance isn’t “new” or “modern”…;) And for devices designed to be plugged into a wall socket 24/7, non-portable devices with no battery life to worry about and far more concern for performance (well, there’s cost, too) than any other factor, “power” need not be a performance constraint at all, just as it isn’t for the highest-performing CPU and GPU products in every generation.

        I think the largest limiting factor people are overlooking is cost. Both Microsoft and Sony are designing these consoles within the cost constraints they each want to maintain in order to (hopefully) show a profit at these price points, or at least break even. I think the days of massive subsidies for consoles are long gone; there’s no need for them anymore, since some very powerful hardware is now available at very low prices (hence both consoles going AMD x86).

        The way I look at it, Sony went for more gaming horsepower all around for their money, whereas Microsoft opted for less gaming power in order to hold down costs so that Kinect could be shoehorned in (not to mention the set-top-box pass-through stuff, but that’s much more minor in terms of cost, I think). DDR3 is less expensive than GDDR5, etc., and lower-performing too, hence the need for the band-aid 32MB eSRAM approach, which the Sony design wouldn’t benefit from because its native RAM is already faster than even the xBone’s cache. Hopefully, Microsoft’s gaming API for the xBone (pretty sure it will be a slightly modified version of Direct3D/DX?) will handle the 32MB cache automatically and efficiently so that developers won’t have to specifically code around it, but I’m not entirely sure that will be possible. Anyway, we could discuss the ramifications of the hardware ’til the cows come home…

        The designs of these consoles, imo, have far more to do with economic considerations than with any surmised limits in the performance potential of “modern SoCs” and so on. If you want to spend the money, you can get all the performance you want…;) But since the beginning, the console game has always revolved around economics more than any other guiding factor.

        That’s why I’ve always been puzzled when I hear people say things like “consoles are always technically a few years ahead of PCs when they launch,” because I’ve never known that to be true. I.e., I’ve never owned a personal computer less powerful in the gaming arena than *any* console at its launch, from any manufacturer. Consoles are all about shaving costs to the bone while delivering as much performance *as possible* under those economic constraints. Of course, in the past my PCs cost me thousands of dollars while the consoles could be had for hundreds. Indeed, the new consoles resemble current mid-range gaming PCs more than anything else in terms of the technology employed.

        The money, and only the money, is the *real* limiting factor in these particular SoCs, imo. If console owners were willing to spend twice as much, both Sony and Microsoft could deliver far more powerful consoles than the PS4 and the xBone, and do it with exactly the same current technology the shipping consoles are using.

        Summing it up: consoles are about economics first and performance second. Always have been, imo. I think that at launch the PS4 will clearly outclass the xBone in gaming performance and/or the special-effects graphics department: it will support more games at playable frame rates and higher resolutions, and so on. For $100 more, however, the xBone will provide a Kinect and a sort of strange set-top box for broadband television (no one actually knows how well *that* will work at this time). But both consoles, running far more like mid-range PCs than like 360s or PS3s in performance terms, will seem a *huge* jump in performance compared to what previous console-only owners are used to. It’s going to be a very interesting race between these two consoles, even if neither of them interests me as something I’d actually buy…;) We’ll be watching not only two competing hardware platforms, but also two competing console design philosophies. Very entertaining!

      • NeelyCam
      • 6 years ago

      I also have two words: Micro Soft

      • nafhan
      • 6 years ago

      Eh, the specs are close enough that cross platform devs will probably just target the slower machine. First party games will probably be the only ones to take advantage of the extra speed on the PS4.

        • Airmantharp
        • 6 years ago

        I agreed with this assessment the first time I heard it; it does make sense.

        And really, given how well current game engines are designed to scale, I’d bet that it will be a simple question of balancing additional fidelity vs. higher framerates when compiling for the PS4.

        Just keep in mind that the Xbone will have a more flexible CPU-side architecture, and that may make a big difference in what games can do. We might very well see cross-platform games offering different feature sets on each console to suit their respective strengths and cover for their respective weaknesses, if developers are so inclined. It certainly wouldn’t be hard, given that the engine work has all but been completed.

          • Action.de.Parsnip
          • 6 years ago

          [quote<] Just keep in mind that the Xbone will have a more flexible CPU-side architecture [/quote<] Sorry? How?

            • Airmantharp
            • 6 years ago

            By having the local eSRAM and by having lower-latency DDR3. Both of these work to increase the effective IPC of the CPU versus having no eSRAM (just L1/L2) and higher-latency GDDR5.

            It’s the same thing we see on desktops, really: lower-latency memory and large, fast, low-latency local caches (like the L3 on the Phenom IIs that the Athlon IIs didn’t get), except this is a very large cache, more like the eDRAM on Intel’s Iris Pro parts.

      • l33t-g4m3r
      • 6 years ago

      I agree. Remember when AMD first switched from DDR2 to DDR3? No benefit in games. I’m betting both of these systems will perform pretty similarly, but the PS4’s extra shaders will give it an edge.

        • Firestarter
        • 6 years ago

        RAM bandwidth for CPUs is a different beast from RAM bandwidth for GPUs.

        For CPUs, the benefit of more throughput is far more situational than it is for GPUs.

          • l33t-g4m3r
          • 6 years ago

          I doubt these CPUs are powerful enough to take advantage of it, plus it’s only 32MB vs. Intel’s 128MB. I really doubt it’ll make much of a difference unless programs or the compiler are heavily optimized for it.

          We’ll see when the games start coming out.

            • sschaem
            • 6 years ago

            Radeon 7770 using DDR3 vs. Radeon 7870 using GDDR5.

            There are plenty of benchmarks around to give you a pretty good idea of the performance difference.

            • Bubster
            • 6 years ago

            Not really.

            The Xbox One has a 256-bit interface vs. the 128-bit interface of the 7770 and 7790. At the same clock, GDDR5 delivers twice the bandwidth, so the Xbox One is roughly equal to a 7790 (in shader count) with 128-bit GDDR5 at whatever frequency MS chooses to run its RAM at. Furthermore, the Xbox One has its eSRAM, which helps reduce that 7790-like memory bottleneck. So really it’s like:

            7790 + eSRAM vs. 7870
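
            As a rough sanity check on those bus-width numbers, here's a quick Python sketch; the DDR3-2133, 6 GT/s GDDR5, and 5.5 GT/s GDDR5 transfer rates are assumptions for illustration, not confirmed specs:

                # Peak bandwidth (GB/s) = bus width (bytes) * transfer rate (GT/s)
                def peak_gbs(bus_bits, gts):
                    return bus_bits / 8 * gts

                print(peak_gbs(256, 2.133))  # ~68 GB/s: 256-bit DDR3-2133 (Xbox One main memory)
                print(peak_gbs(128, 6.0))    # ~96 GB/s: 128-bit GDDR5 at 6 GT/s (7790)
                print(peak_gbs(256, 5.5))    # ~176 GB/s: 256-bit GDDR5 at 5.5 GT/s (PS4)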

            • sschaem
            • 6 years ago

            Yes, I might have undercut it with a 7770 + DDR3.

            But I don’t see 7790 + eSRAM as accurate. The 7790 alone gets 96GB/s, which is very close to the ~110GB/s of the Xbox’s eSRAM bandwidth (and there’s only 32MB of it).

            Also, the 7790 has close to 20% more compute power.

            So to get a better general idea of the performance delta to come, take 80% of the 7790’s score vs. 90% of the 7870’s score.

            BF3: 35 vs. 54 fps
            Sleeping Dogs: 49 vs. 59 fps

            We see a 20 – 50% performance delta at 1920x.
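
            For what it's worth, those scaled figures imply stock results of roughly 44/60 fps in BF3 and 61/66 fps in Sleeping Dogs for the 7790/7870; the following Python sketch just applies the 80%/90% scaling above to those back-calculated (hypothetical) baselines:

                # Apply the 80% (7790 -> Xbone) and 90% (7870 -> PS4) scaling described above
                stock = {'BF3': (44, 60), 'Sleeping Dogs': (61, 66)}   # (7790, 7870) fps, assumed
                for game, (r7790, r7870) in stock.items():
                    xbone_est, ps4_est = 0.8 * r7790, 0.9 * r7870
                    print(game, round(xbone_est), 'vs.', round(ps4_est),
                          'fps, delta {:.0%}'.format(ps4_est / xbone_est - 1))
                # BF3 35 vs. 54 fps, delta 53%
                # Sleeping Dogs 49 vs. 59 fps, delta 22%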

            • l33t-g4m3r
            • 6 years ago

            Apples and oranges. Those are dedicated video cards, which prefer bandwidth over latency (remember VLIW5?), and the 7770 is a much lower-spec part than the 7870. Although, if you’re implying that the GDDR5 in the PS4 will give it an edge graphically because of the extra bandwidth, I can see that logic. However, the Xbox could be using the SRAM for graphics, which would offset that, but it would also limit the cache’s compute usefulness.

            Yeah, I think the PS4 won, generally speaking. Microsoft should have doubled the SRAM (or skipped it entirely); then maybe they would’ve had something.

    • derFunkenstein
    • 6 years ago

    The PS4 has more graphics capability than the Xbone as well, so maybe it’s the PS4’s SoC that’s pushing the envelope among the biggest chips? That should also help the Xbone give off less heat than its rival.

      • Airmantharp
      • 6 years ago

      I wouldn’t make any predictions yet, even though I agree that your assessment is reasonable, as it appears that the Xbone SoC has traded GCN cores for SRAM (which, while dense, is far less dense than DRAM). If that SRAM is actually exercised as heavily as the extra GCN cores on the PS4, then platform heat might be comparable. Microsoft’s use of lower-power, lower-latency DDR3 along with a more power-efficient, far faster, lower-latency local SRAM cache might actually prove as fast or faster across the board than Sony’s more traditional solution, even for graphics, and especially for any compute-focused workload. It’s possible that Xbone or Xbone/PC exclusives will be able to outshine anything on the PS4 by making extensive use of the SRAM for things like physics and advanced, dynamic sound and lighting that could require tight coupling with the game kernel.

        • derFunkenstein
        • 6 years ago

        I probably ought not to predict, because it’s also going to depend on what kinds of workloads developers come up with, what can be offloaded via GPGPU APIs onto whatever hardware winds up idle in Xbone ports, and so on.

      • Meadows
      • 6 years ago

      More graphics capability? Where did you get this information?

        • Airmantharp
        • 6 years ago

        All over the Internet bro. For like months.

        • derFunkenstein
        • 6 years ago

        [url<]http://www.anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4[/url<] On page 2 there's a nice table showing 1152 "cores" in the PS4 vs 768 for the Xbone.

        • sschaem
        • 6 years ago

        And not by a little, either: *** 50% *** more.

        Also, don’t believe MS’s hype about the 32MB cache; it really delivers ~110GB/s, plus 68GB/s to the 8GB main memory pool.

        The PS4 gets 176GB/s to its entire memory pool (and the PS4 SoC does have caches, just a lot less of them).

        It won’t matter much in the end. Side note: I can’t believe MS is still using a big power brick for this console!
        How come Sony can include the PSU inside the console and MS can’t?!

        And from the picture, the Xbox One doesn’t use a small power brick, either.
        [url<]http://www.dualshockers.com/2013/08/08/xbox-power-supply-huge-external/[/url<] Silly-looking… and it probably includes an active fan.

    • cal_guy
    • 6 years ago

    That 32MB of on-die memory on the Xbox One SoC is actually SRAM, not DRAM.

      • Damage
      • 6 years ago

      Noted! And corrected. Thanks!

      • the
      • 6 years ago

      I’m still surprised that they went with SRAM instead of the denser DRAM. I figure MS could have gotten 96 MB of eDRAM into roughly the same area, though it would have incurred a latency hit. Still, going from 32 MB to 96 MB would have allowed local storage of full-sized 1080p frame buffers with really high bandwidth, which would have taken the pressure off the vanilla DDR3-2133 memory controllers.

      I’m also wondering if MS has provisions to transition from DDR3 to DDR4 down the road when it becomes economical. DDR4 speeds would still be an effective 2133 MT/s to keep bandwidth the same, but it’d be a cost-cutting and power-saving measure.
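
      On the 96 MB point, here's a back-of-the-envelope area comparison in Python; the bit-cell sizes are assumed, illustrative 28-nm-class figures, not official numbers for either memory type:

        # 6T SRAM bit cells are roughly 3x the size of 1T1C eDRAM bit cells
        sram_cell_um2, edram_cell_um2 = 0.13, 0.045   # assumed cell areas (um^2/bit)
        bits_32mb = 32 * 2**20 * 8
        print(bits_32mb * sram_cell_um2 / 1e6)        # ~35 mm^2 for 32 MB of SRAM
        print(3 * bits_32mb * edram_cell_um2 / 1e6)   # ~36 mm^2 for 96 MB of eDRAM

      That ignores array overhead, but it shows why roughly 96 MB of eDRAM in the same footprint is a plausible trade.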

        • Airmantharp
        • 6 years ago

        It is surprising, but it’s also very telling: MS knows it has a significant bandwidth disadvantage going with DDR3L, and the two choices go hand in hand.

        I’m still very, very interested in seeing how these two consoles shape up with well-optimized cross-platform games. I’m thinking the Xbone will be very surprising, and that Sony is going to be two steps behind MS this generation, instead of last generation’s split of one step behind in gaming (due to MS’s far superior SDK) and one step ahead in home entertainment (with the PS3’s excellent Blu-ray player).

    • chuckula
    • 6 years ago

    Interesting note: the PS4 and Xbone SoCs are widely rumored to have GPUs that are roughly on par with the 7850… According to Wikipedia, the 7850 has 2.8 billion transistors, so even if the Xbone SoC has the same budget for its GPU*, there is a ton of other stuff on there, including a massive SRAM cache.

    * The rumors are that the Xbone’s GPU is actually cut down slightly, so that number may be an overestimate.

      • cobalt
      • 6 years ago

      The PS4 has 50% more shaders than the One — 1152 vs 768. And it’s rated at 1.8 vs 1.2 TFLOPS if I remember correctly. (The One’s recent clock speed bump changed that, but only by a few percent.)
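
      Those TFLOPS numbers fall straight out of the shader counts and clocks; a quick Python check, using the commonly reported 800 MHz and 853 MHz GPU clocks:

        # Each GCN shader does one fused multiply-add per clock = 2 FLOPs
        def tflops(shaders, clock_ghz):
            return shaders * 2 * clock_ghz / 1000

        print(tflops(1152, 0.800))  # ~1.84 TFLOPS (PS4)
        print(tflops(768, 0.800))   # ~1.23 TFLOPS (Xbox One at the original 800 MHz)
        print(tflops(768, 0.853))   # ~1.31 TFLOPS (Xbox One after the clock bump)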

      • Srsly_Bro
      • 6 years ago

      Interesting note: they are not widely rumored to have GPUs roughly on par with the 7850. You can’t say “widely rumored” and then cite one source that isn’t necessarily the most credible.

      Check out [url<]http://www.google.com[/url<] and give fact-checking a try.
