All signs point to Kaveri being an evolutionary upgrade

We talked a little bit about AMD’s upcoming Kaveri APU in our latest podcast. Some of the discussion centered on how Kaveri will compare with current offerings—a topic about which AMD provided several hints during its APU13 conference earlier this month. In case you missed the podcast, here’s some of what we learned.

To begin with, Adam Kozak, AMD’s marketing chief for client processors, told folks during a press briefing that Kaveri will be competitive with Intel’s Core i5-4670K processor. (That’s a $225 offering and the cheapest quad-core Haswell CPU with an unlocked upper multiplier.) When pressed for details after the briefing, however, Kozak clarified that Kaveri should only be equivalent in terms of combined CPU and GPU compute power. If one measures x86 performance on its own, Kozak said, "we’ll lose." However, Kozak expects Kaveri’s integrated graphics, bolstered with Mantle support, to be better than the latest version of Intel’s HD Graphics.

On the subject of power consumption, Kozak wouldn’t give us definite numbers, but he told us that he expects Kaveri to consume less power than the Core i5-4670K at idle and to draw more power under load. Earlier in the briefing, Kozak had mentioned that Kaveri would have a similar target TDP to "what you see today on the A series."

Kozak wouldn’t say much about pricing. However, AMD VP Manju Hegde offered a hint during an impromptu lunch chat. Referring to a stage demo in which a Kaveri APU was compared to a Core i7-4770K processor with GeForce GT 630 discrete graphics, Hegde said that Kaveri didn’t do badly for a product that’s "a third the cost." It’s not entirely clear whether Hegde was talking about manufacturing cost or retail pricing. Still, the i7-4770K and GT 630 sell for about $410 combined. A third of that would be around $137, which is in the same ballpark as the current high end of the A-series APU lineup.

In short, based on what we heard at APU13, it sounds like Kaveri may be an evolutionary rather than revolutionary upgrade over Richland. It also sounds like Kaveri may not take the A series into new pricing territory.

Kaveri nevertheless looks poised to offer decent performance improvements over the previous generation. AMD suggested during the opening keynote of APU13 that Kaveri’s integrated graphics would have 512 shader processors, up from 384 in Richland. During the aforementioned press briefing, there was also talk of "decent IPC [instructions per clock] gains" and support for DDR3-2133 memory. Finally, because Kaveri supports hUMA, it may deliver substantially better compute performance than Richland. Hegde claimed that usable gigaflops were "an order of magnitude higher" in Kaveri.
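
For some rough context, here is how those two specs pencil out. This is just a back-of-the-envelope sketch in Python; the dual-channel, 64-bit-per-channel memory layout is our assumption about the desktop platform, not something AMD confirmed.

```python
# Quick arithmetic on the specs above. Illustrative only; the dual-channel,
# 64-bit-per-channel memory configuration is an assumption.

def ddr3_bandwidth_gbs(transfer_rate_mts, channels=2, bus_bytes=8):
    """Peak bandwidth = transfers/s x 8 bytes per 64-bit channel x channels."""
    return transfer_rate_mts * 1e6 * bus_bytes * channels / 1e9

shader_gain = 512 / 384 - 1
print(f"Shader-count increase: {shader_gain:.0%}")                      # ~33%
print(f"DDR3-2133, dual channel: {ddr3_bandwidth_gbs(2133):.1f} GB/s")  # ~34.1 GB/s
```

Both figures assume a desktop configuration; a single-channel notebook setup would have half the memory bandwidth.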

For what it’s worth, leaked slides suggest that Kaveri will offer a 20% boost in CPU performance and a 30% boost in graphics performance over Richland. That’s probably at least close to the mark.

Comments closed
    • Johannesburg
    • 6 years ago

    It would be more interesting for them to put it up against the Intel BGA solutions promising twice the power of HD 4000 graphics.

    • itachi
    • 6 years ago

    From what I understood, when you play a game with an APU or an integrated graphics chipset like Intel’s AND have a graphics card, it doesn’t make use of the integrated chipset. My question is: why? It could be useful for us gamers… they could make the chip help with the physics or at least with the antialiasing or something, make it useful… I don’t know, correct me if I’m wrong.

      • chuckula
      • 6 years ago

      In a few more years the distinction between IGP and CPU will become less important and you will see future versions of the IGP being used for various processing tasks in games (and other software) even when a discrete GPU is being used to tackle the graphics.

      As for right now, the IGP parts are still largely devoted to graphics with compute tasks being done more as a sideshow than the main attraction for having an IGP.

      • maxxcool
      • 6 years ago

      Well, as strange as it may sound, Intel does not care about gaming a whole lot, so the incentive to make an integrated GPU active alongside discrete graphics is not a big deal. The second reason is that, as it stands, the CPU + discrete combo is good enough, so there is no demand to try to offset performance losses as seen on AMD CPUs.

    • Chrispy_
    • 6 years ago

    It’s Richland with a GCN update and a few general performance tweaks.

    I can live with that, and because these things are likely to be compelling i3 alternatives, so can AMD, I hope.

    • Bensam123
    • 6 years ago

    So keep in mind if what was hinted at by AMD before is true, this is a traditional four core chip. It isn’t the same as 8 modules AMD is currently using with Piledriver (which is the same as a four core chip).

    Also, if the power usage is true, we may be looking at being able to overclock the stuffing out of these processors, and they may offer an alternative to normal desktop processors that doesn’t exist yet.

    Another thing to consider is if the way Mantle is going is true, compute power will definitely matter as the API will be able to take advantage of it. I’m not sure if it’s capable of using power off a secondary adapter with a discrete card, but it’s definitely something to consider if AMD is serious about compute and they definitely seem to be.

    Overall, I’m not as disappointed with this as I originally was. As long as the architecture wasn’t designed from the ground up to simply be efficient rather than extremely fast (ARM), it will still be competitive. They sound like they’re serious too, comparing it to a 4670K, which is making a lot of people turn up their noses.

      • MadManOriginal
      • 6 years ago

      Pinning the hopes of success on Mantle is folly.

        • nanoflower
        • 6 years ago

        I can just see the clusterf*** that results when developers start using all of these features and bugs occur. Everyone will be pointing fingers at the other guys (game developers, AMD and Microsoft) with the customer left to suffer while they puzzle out where the problems actually are. Eventually Mantle may be a very good thing but I bet it’s going to have a rough first year.

          • Bensam123
          • 6 years ago

          How is this different from when any other piece of software comes out?

        • Bensam123
        • 6 years ago

        I’m pretty sure I had more than one point in there, but if you wish to just see Mantle, that’s up to you.

        • LostCat
        • 6 years ago

        I’m buying ASAP because of Mantle. I doubt I’m the only one.

      • ermo
      • 6 years ago

      [quote<]"So keep in mind if what was hinted at by AMD before is true, this is a traditional four core chip. It isn't the same as 8 modules AMD is currently using with Piledriver (which is the same as a four core chip)."[/quote<] According to [url=https://techreport.com/review/23485/amd-cto-reveals-first-steamroller-details<]this TR piece which deals with Kaveri's architecture[/url<], the above assessment appears to be slightly incorrect? Unless things have changed drastically since that piece was published, Kaveri still uses the Bulldozer module approach with 2 integer units (2 ALUs and 2 AGUs each) and one FP/Vector unit per module. To help alleviate the scheduling issues with Bulldozer, Kaveri now has one decoder for each integer core (this was previously also a shared resource between modules), yet the fetch stage and the L2 cache are both still shared resources on a module.

        • Bensam123
        • 6 years ago

        “The ‘net has been rife with speculation about the primary sources of Bulldozer’s problems. Looks like the shared front end of the dual-core Bulldozer “module” is indeed one of the culprits. Steamroller gets separate, dedicated decoders for each integer core, along with larger instruction caches.”

        [url<]https://techreport.com/review/23485/amd-cto-reveals-first-steamroller-details[/url<]

        It may not be completely a four-core chip, but it definitely sounded like they were going back to that design when they talked about Steamroller a bit. If not, this may function very similarly to a traditional four-core chip performance-wise, but if there are still eight-core variants on the horizon, they would have ridiculous performance (probably on par with or close to six or eight real cores).

        Given how AMD currently markets its APUs, and that it has only announced up to 'four cores', I assume that is the maximum for Steamroller APUs. Since the current high-end CPUs have eight cores on them, we can assume they aren't holding back an eight-core model and that they indeed went back to a more traditional four-core design. Which may lead to speculation that the performance variants that weren't announced for this coming year are hex- and octa-core processors, with performance fitting of real hex- and octa-core parts. That could also be why they aren't simply debuting the performance variants alongside these APUs: they're working the kinks out of them (probably the TDP more than anything).

          • ermo
          • 6 years ago

          At 2 ops per cycle, the integer cores are still fairly narrow by today’s standards though. If the integer cores could take 3 or even 4 ops per cycle (like recent intel cores), this thing would have been a beast.

          I guess I’d rather see AMD widen the integer execution units by slapping an extra ALU + AGU onto each ‘core’ and scale down the clock speed slightly. If they did this, I’m given to speculate that they’d have a design which would have the integer IPC of the Ph II per ‘core’, yet would be able to scale to higher speeds at 32nm and below than Ph II apparently could, while still reaping the benefits of shared resources in terms of transistor budget and thus die area.

          The question is if doing so would leave each module starved for FP/Vector Units…

            • maxxcool
            • 6 years ago

            Funny, that’s what they did with K10… then regressed to BD/PD/SR.

            • Bensam123
            • 6 years ago

            Even if it’s not ‘optimal’ it still qualifies as cores now instead of modules or pretty darn close. It really doesn’t matter if it’s not optimal if it performs quite well and works in practice and that’s what they’re promising.

            A six- or eight-core variant of this could very well perform extremely well.

            • maxxcool
            • 6 years ago

            I’m leaning closer to a real core… I’m curious to see the actual execution profiles/debuggers to see the actual hit/utilization rates for the "not so full" cores in this revision.

            I’m even more interested in the next shrink (too bad it’s a year away); getting away from shared fetching will likely yield more benefits than clock scaling.

        • maxxcool
        • 6 years ago

        If that’s true, that’s ‘better…’ One more die shrink and maybe they can get away from the shared fetch and pathways. *Then* maybe I will support AMD again.

    • maroon1
    • 6 years ago

    [quote<]AMD's marketing chief for client processors, told folks during a press briefing that Kaveri will be competitive with Intel's Core i5-4670K processor.[/quote<]

    If you're talking about CPU performance, then the i5-4670K is in a whole different league. Kaveri won't have any chance of competing.

    [quote<]Kozak clarified that Kaveri should only be equivalent in terms of combined CPU and GPU compute power. If one measures x86 performance on its own, Kozak said, "we'll lose."[/quote<]

    LOL, combined CPU and GPU compute!!! What a joke. How many applications that average consumers use take advantage of GPU compute anyway?! I have a high-end PC, and I don't remember ever using any of those few professional applications that can take advantage of OpenCL. GPU compute is not as important as x86 performance.

    [quote<]Referring to a stage demo in which a Kaveri APU was compared to a Core i7-4770K processor with GeForce GT 630 discrete graphics,[/quote<]

    LOL!!! AMD is only good at marketing these days. It's not hard for anyone to prove that a cheaper APU is better than an i7-4770K when you are using a biased demo and a stupid comparison. Any demo from AMD is going to be biased by default. It should not be taken seriously, especially when history proves that real-world results are not as good as AMD demos.

    Not to mention that their comparison is stupid. Whether they use an i3 or an i7 with a GT 630, it won't make any difference. A Haswell-based Pentium + GTX 650 is better for gaming than an i7 + GT 630, and cheaper at the same time.

    If AMD were smarter they could have used an Intel Extreme Edition + weak GPU to prove that a cheap APU is better than a $1000+ processor. Trust me, a lot of those average Joes are going to be convinced.

      • maxxcool
      • 6 years ago

      I am a notorious BD/PD/SR hater.. look for my – vote count… that being said :

      “”How many applications that average consumers use take the advantage of GPU compute anyway ?! I have high-end PC and I don’t remember if I ever used any of those few professional application that can take advantage of openCL.””

      With most photo-editing suites from Adobe (CS) and a ton of consumer bundled apps, OpenCL really does get a lot of use. And for power users, the defaults for a lot of ripping/encoding software make a call to see if OpenCL is available and use it unless told specifically not to. So, if this makes it into, say, a Dell or Gateway box, it will see more use than you might think.

    • ronch
    • 6 years ago

    I hope AMD changes their collective minds and offers Steamroller cores in 8-core configs. I felt more excitement over my motherboard’s Realtek audio codec than over these APUs.

      • ermo
      • 6 years ago

      Let me guess … you couldn’t care less about the Realtek audio codec on your motherboard?

        • ronch
        • 6 years ago

        Actually, I find those audio codecs quite interesting. It’s remarkable how quality digital audio practically comes free with all of today’s motherboards. I remember paying through the nose for a Sound Blaster Pro back in 1993. Back then, having one of those sound cards made you awesome. Now, nobody cares even if you have a $200+ Xonar Essence STX or SB ZxR.

        But still, considering the low cost of those Realtek audio codecs, it’s amazing what they can do.

          • l33t-g4m3r
          • 6 years ago

          If nobody cares anymore, it’s only because they all use the new software-mode audio stack, and there is little to no difference in audio quality, especially if you use digital out, in which case your video card can even output sound. This has nothing to do with Realtek per se, because the only thing Realtek has to do is write a compatible driver that supports MS’s audio stack.

          Back in the day brands mattered because they had different features, and therefore sounded different. Now, there is almost no distinction, unless you’re playing legacy games with EAX, trying different stereo upmixers, or need something for professional recording.

          Is this a “good” thing? IMO, not a chance in hell, because there is absolutely NO innovation or quality effects coming out of your speakers. We’re all getting basic non-HRTF stereo / 5.1 audio panning, with no environmental effects whatsoever because game devs are too lazy to program their own from scratch. If nobody is offering an SDK or easy sound-effect editor, then it’s not going to get implemented. THANK YOU, Haters. (Generally speaking.) You’ve eliminated 3D audio completely from the market, even though there was nothing particularly wrong with Creative’s products during the XP era. Creative’s products were never more expensive than the competition, and even now they remain price-competitive with alternatives such as Xonar. The public wanted “free” audio via onboard, and that’s exactly what they got, limitations and everything.

          The only light at the end of the tunnel I see now is AMD’s True Audio, which is essentially AMD’s version of an X-Fi built onto a video card. Will this help game audio progress past supreme blandness? Perhaps, but being locked into using AMD for your video card or APU(?) is infinitely worse than using Creative, and AMD’s drivers are buggier than Creative’s ever were. Hopefully True Audio is licensed out to third parties, so we’ll really have something. I love how people ripped on EAX, but Mantle is OK, while EAX was open to third parties up to 2.0, and OpenAL is open to both independent hardware and software modes. Yeah, let’s throw away a vastly superior API like OpenAL for vendor lock-in Mantle and True Audio. Good job, guys. Way to not be hypocrites.

          PC gamers are their own worst enemies.

          Advocating “free” onboard, is basically akin to advocating for “free” monitor speakers over dedicated speakers. Sure, you’ll save a few bucks, but you REALLY lose out when you take this route. We all do, because it destroys the market, and stagnates innovation. Imagine if monitor speakers were the industry standard, and that’s what onboard advocates have created.

          I’m not really happy people destroyed the dedicated sound card market and that this is the only way forward, but I’ll probably end up buying a low-end AMD GPU to go with my main graphics card just to get good audio, if that’s what it takes. Sigh. Not sure how well this would even work, if at all, but people did it for hybrid PhysX.

            • ronch
            • 6 years ago

            Well, not everyone uses digital out, including me. I still connect to analog 2.1 speakers. Between my board’s Realtek ALC892 and X-Fi, there’s an audible difference. I’d stick with my sound card but life won’t exactly be miserable if I had to go back to using the Realtek ALC892.

            As for EAX, I just normally disable it or don’t really even bother using Creative’s ALchemy. I just find those environmental effects somewhat too exaggerated for my tastes.

            • Concupiscence
            • 6 years ago

            With respect, I’m not sure you remember *how* buggy Creative’s drivers really were…

            • Waco
            • 6 years ago

            This. So very much this.

            They’re still terrible too!

            • l33t-g4m3r
            • 6 years ago

            Were. Past tense. Mostly on older Windows like 9x, which, DUH, was "buggy". That was more of an OS issue though. Creative really started making "good" drivers with XP-XP64 (better), then Vista came out and threw out the entire audio stack infrastructure for an inflexible software mode that wasn’t very robust.

            I like how people bring up Creative’s really OLD issues, when more recent Xonars have had horrible bugs, and Aureal never had a perfect driver either.

            Aureal only had one or two driver updates after A3D 3.0, and they were very buggy and limited to 9x OSes. Crash-prone, depending on what you were doing. The XP drivers didn’t even fully work with A3D, and newer games could cause the audio to distort in high pitches.

            For all that people rail on Creative’s drivers, they have released more updates and work better overall than anything else available on the market. The real problems came more from the physical limitations of the original SB Live/Audigy 1, out-of-spec motherboards, and the low build quality of the early X-Fi cards, which caused all kinds of trouble that was scapegoated onto the drivers.

            There were motherboard problems due to certain brands not adhering to PCI standards, making Intel the only safe choice. Via and Nvidia boards that ran out of spec caused all sorts of issues, which were incorrectly blamed on the drivers. There were workarounds, but because it wasn’t an actual driver problem, it took forever to address. There are a lot of misguided grudges being held from that, but once you understand the root cause, you know the sentiment is unjustified. There were build issues with the early X-Fis too, so it’s a complicated matter, but overall you can’t make a blanket claim that Creative had "buggy" drivers. It was a combination of several factors, which have been addressed over time.

      • USAFTW
      • 6 years ago

      I’m a bit sad that the only drop-in replacement for the Phenom II is the Vishera 8350.
      Because I don’t want the IGP, the GCN graphics is just a waste of silicon for me; it could serve me better with an L3 cache instead.
      And if I drop the ball and get Intel, I’m horrified about Intel’s get-everything-new-and-maybe-a-new-monitor-and-case-dropped-in-for-good-measure upgrade path after that. Moore’s law seems to be going strongly still.

    • BaronMatrix
    • 6 years ago

    I just want to know which chips will be 14nm and which will be 20nm… And will they have 20nm at TSMC and GF…

    TSMC is going 16nm, so the 14nm chips MUST be Opteron/FX… The best-case scenario is to shrink Steamroller and drop an 8-core APU…

    But either way it won’t shake out until they mention the schedule for ONE SOCKET…

    I would think it would be FM3 later in 2014… Though I guess there will still be two sockets, since the server chips have to have extra pins for the extra HT links…

      • jjj
      • 6 years ago

      There is a big difference between tape-out and retail.
      GloFo doesn’t mean no APU; up until now the mainstream APUs have been made at GloFo.
      What they call 14 and 16nm shouldn’t reach volume production in 2014, and we might not see parts hitting retail from AMD even in 2015.
      20nm should be available soon enough, but that doesn’t mean AMD will jump on it fast. They have indicated that they aren’t chasing the process anymore, so they might wait a bit for the process to mature (get cheaper, have better yields); whether that delay is a few months or a year, I have no idea.
      What is certain is that they have no 20nm parts on the public roadmap for 2014, so it’s safer to dial down our expectations. On the GPU side, in theory, it would be best if they have 20nm at least in laptops by the time Broadwell launches, but it remains to be seen if that is doable; it would also make sense to have 20nm desktop GPUs for the holidays, so they had better be aiming for that.

      • NeelyCam
      • 6 years ago

      [quote<]I just want to know which chips will be 14nm and which will be 20nm... And will they have 20nm at TSMC and GF...[/quote<]

      Sure, happy to help. TSMC 20nm (GPUs/FPGAs) are coming out towards the end of next year. There might be some 'deals' before that, but be careful with the supply. 14nm will be towards the beginning of 2016.

    • NovusBogus
    • 6 years ago

    Kaveri could be revolutionary in the high-level sense that APUs now become good enough for mainstream users, but both camps are locked into their architectures for a very long time and nobody should expect a big jump until at least the end of the decade. There’s been lots of talk over the years about what happens when computing reaches maturity, now we know.

    • chuckula
    • 6 years ago

    [quote<]When pressed for details after the briefing, however, Kozak clarified that Kaveri should only be equivalent in terms of combined CPU and GPU compute power.[/quote<]

    Theoretical single-precision performance for Kaveri: 856 Gigaflops.
    Theoretical single-precision performance for the A10-6800K: 779 Gigaflops ([url=http://www.amd.com/us/Documents/3513_AMD_Elite_Series_APU_Product_Summary_v1_May2013.pdf<]according to AMD[/url<]).

    So basically, unless the 4670K is right in between those two theoretical numbers, then the A10-6800K is already faster... in theory.

    I'm sure there will be some pre-fabbed HSA synthetic benchmarks that AMD pushes to all the review sites to make Kaveri look amazing. I'm more interested in real-world application performance though.

    P.S. --> The theoretical gigaflop boost comes to ~10%.
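
    For the curious, those combined-compute totals can be roughly reconstructed as CPU gigaflops plus GPU gigaflops. A quick sketch; the Richland inputs use the A10-6800K's published clocks, while the Kaveri clocks are guesses of mine that happen to land near AMD's figure, since Kaveri's clocks haven't been announced:

    ```python
    # Rough reconstruction of AMD-style "combined compute" numbers. The Kaveri
    # clock speeds below are guesses chosen to land near the quoted total;
    # they are not published specs.

    def cpu_gflops(cores, clock_ghz, flops_per_cycle=8):
        """Single-precision GFLOPS from the CPU cores (assumes 8 FLOPs/cycle/core)."""
        return cores * clock_ghz * flops_per_cycle

    def gpu_gflops(shaders, clock_ghz):
        """Single-precision GFLOPS from the iGPU (FMA = 2 FLOPs/shader/clock)."""
        return shaders * 2 * clock_ghz

    a10_6800k = cpu_gflops(4, 4.1) + gpu_gflops(384, 0.844)   # ~779 GFLOPS
    kaveri    = cpu_gflops(4, 3.7) + gpu_gflops(512, 0.720)   # ~856 GFLOPS
    print(a10_6800k, kaveri, f"{kaveri / a10_6800k - 1:.0%}")  # boost ~10%
    ```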

      • Chrispy_
      • 6 years ago

      I think the real difference is that GCN opens up many more compute doors than VLIW.

      GCN is an OpenCL champion and games/drivers are largely optimised for GCN – so whilst the theoretical gigaflops argument is sound on paper, I am expecting greater real-world benefits from updating the architecture.

      • maxxcool
      • 6 years ago

      If the decoder is truly decoupled… yeah, TR benchmarks just became a little more interesting for me.

    • Meadows
    • 6 years ago

    At least they’re honest.

      • chuckula
      • 6 years ago

      It’s a big step up from the (literal) smoke & mirrors that preceded Bulldozer’s launch. It’s nice to see that AMD is at least not trying to build up a huge hype machine that raises the fanboy expectations to ludicrous levels before they are inevitably let down.

        • ermo
        • 6 years ago

        Yeah.

        Here at TR, we have good reason to be a tough crowd when it comes to AMD’s recent hardware. The FX line is certainly decent value in specific applications, which is highlighted by the fact that surprisingly many (well, around 10 or so) people apparently use one in their systems according to the recent Friday night topic.

        Which reminds me: Didn’t we manage to chase that hapless AMD marketing guy away basically covered in virtual tar and feathers? I haven’t seen him around in a while…

          • chuckula
          • 6 years ago

          I asked him some tough questions regarding AMD’s Linux driver support, but at the same time I wasn’t hostile. I actually felt sorry for the guy since there’s all sorts of stuff we’ll ask that he can’t give us straight answers to.

          I’d be on the case of an Nvidia/Intel/etc. marketing guy who got onto the forums in a similar manner.

            • Fighterpilot
            • 6 years ago

            LOL…wut?
            You ARE an Intel/Nvidia marketing shill…just dumb enough to do it for free.
            Ouch.

            • chuckula
            • 6 years ago

            Hrmm… violation of forum rules with that post?

            • Fighterpilot
            • 6 years ago

            You and Neelycam have both admitted on this very board to being trolls…please ban yourself.

          • ronch
          • 6 years ago

          His name is Sam, isn’t it? Warsam. I don’t understand why AMD even thought of wandering the TR forums using him. It’s an impossible job knowing how people will try to get information from him about undisclosed product or company information, and it doesn’t help that AMD has been very tight-lipped lately, leaving fans with a lot of big questions.

        • ronch
        • 6 years ago

        Adam Kozak wrote a crazy blog when Bulldozer came out. Typical blog from AMD’s marketing department. Guess Adam’s a changed man.

    • windwalker
    • 6 years ago

    Are there any rational reasons to keep holding out hope for revolutionary improvements to desktop CPUs?

      • stmok
      • 6 years ago

      Unless AMD has a new CEO (an engineer with competent business skills) to take the company in a new direction and inject some serious money into it, the answer is: [b<]No[/b<].

      AMD's paradigm has completely changed.

      What they used to do...
      * Design performance microprocessors for enthusiasts.
      * Hand-craft and tune the design.
      * Strive for maximum performance. (Challenge Intel in head-to-head competition.)
      * Find whoever had the newest/best manufacturing technology or expertise to make them. (Eventually manufacturing processors themselves.)

      What they do now...
      * Design consumer mainstream microprocessors for the general public.
      * Design by automated tools. (Developed from their GPU side.)
      * Strive to maximize general computing market share.
      * Became fab-less.
      * Depend on TSMC and GlobalFoundries. (Both are behind Intel in manufacturing technology as they struggle with die shrinks and miss their promised transition schedules.)

      The market has changed...
      * Excitement is on tablets and portables.
      * The long-term threat to x86 is ARM.

      Because of market and rival changes, Intel has no reason to push things on the desktop. So they respond with incremental evolutionary bumps, while the focus of their engineering resources goes into reducing power footprint and pushing manufacturing technology. Intel's Atom has more exciting improvements in each major iteration! ie: Intel is holding back because AMD is NOT kicking ass like they used to. The threat to Intel's core existence is ARM.

      The only thing AMD is really pushing these days is GPU. It has forced Nvidia to respond (on the discrete GPU side) and Intel to focus more on their IGP technologies.

      Since Intel and AMD aren't keeping each other on their toes, excitement in the x86 desktop space has waned. So you get this "Death to Desktop Computing" nonsense. The computing world does need a serious rival to Intel. The only option is for AMD to have a new CEO with a completely different focus. (It's harder to introduce a third rival in the x86 space because Intel is very strict on licensing conditions... which is why most prefer to dive into the ARM space.)

      AMD no longer has the engineering talent it had on the CPU side during the late 1990s. So they're compensating by trying to get more GPU technology involved.

      As a result of the above, the desktop space chugs along incrementally... no longer exciting because they aren't pushing things like they used to. No aggressive competition means consumers don't see aggressive price cuts. Which means they'll upgrade less often. Which leads to a slowdown in the x86 market.

      Seriously, would you have gotten excited if AMD really had a processor in the works that challenged Intel's Core i5-4670K head to head?

      I'd rather AMD make performance CPUs with dedicated connections to discrete GPUs than make these SoC processors that they market to us as APUs. (In fact, make performance CPUs and then back-port the CPU architecture to APUs. They already do this with GPUs! They back-port Radeon technology to the APU.)

      APUs are good for markets where things are fixed for the life of the APU itself, ie: game consoles and budget desktops/notebooks that get used and discarded once they no longer fit the user.

        • the
        • 6 years ago

        The one thing missing here is the simple fact that the desktop market has been continually shrinking for the past several years. Combine that shift from desktop to laptop with an overall reduction of the PC market as a whole, and it is no wonder companies like Intel and AMD have been letting the desktop market languish. It is not a lack of competition but rather a lack of potential growth. The way Intel sees the future, the desktop market will simply get mobile parts with radically higher TDPs in a socketed form factor. Intel even went as far as skipping Broadwell on the desktop (though more recent roadmaps have it appearing in socketed form).

        AMD has been wise to focus on SoCs for mobile and consoles. There is a future in these markets, but they still need to execute (i.e., Kaveri is very late). AMD's performance segment, the FX line, has been using trimmed-down server chips. The server market is where Intel has been really flexing their manufacturing advantage. Even if AMD had a design that competed well against Intel in terms of IPC, such a design would have to consume more power and/or come with fewer cores. (Intel's high-end Xeon chips are manufactured with 15 cores, only 12 of which are enabled.) It is disappointing to see AMD exit this area by not even giving a Steamroller update to socket AM3+/C32/G34, but I understand the business logic behind it. If AMD attempts to return to this market, it'd be 2015 at the earliest with a new platform. Realistically I see 2016 as the time for AMD to make a return. That's over two years away, and at the pace the market moves, the 'good enough' nature of micro servers could easily close off this point of re-entry. In this context, AMD's strategy of focusing on SoCs makes more sense.

        In essence, it sucks as an enthusiast to see AMD basically drop out of the enthusiast circles but it is necessary for the long term health of the company.

          • DavidC1
          • 6 years ago

          “Intel’s high end Xeon chips are manufactured with 15 cores, only 12 of which are enabled.”

          Don’t lie.

          [url<]http://cdn4.wccftech.com/wp-content/uploads/2013/09/Intel-Ivy-Bridge-EP-Xeon-Die-Configuration.jpg[/url<]

          The 15-core Xeon will be Ivy Bridge-EX, which isn't even out yet and will use yet another different configuration. Intel is flexible enough to create multiple different dies for maximum cost optimization.

            • the
            • 6 years ago

            Here is supposed to be a die shot of the 12-core Ivy Bridge-EP chip:
            [url<]http://www.3dcenter.org/news/intel-stellt-ivy-bridge-ep-vor-12-kern-prozessoren-fuer-den-sockel-2011[/url<]

            As you may count, there are 15 cores on the die. Several of the register masks for determining how many cores are active on a die point toward a 15-core version too:
            [url<]http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-v2-datasheet-vol-2.pdf[/url<]

            The 12-core version does have the new ring bus topology and dual memory controllers.

            Ivy Bridge-EX was originally supposed to launch with a new socket. I'm starting to wonder if that'll turn out to be true. The EX lineup has traditionally supported more sockets and more memory than their EP counterparts. I was expecting Intel to update their memory buffer technology and set up a migration to DDR4 down the road. If the memory buffer chips are on a daughter card (as in several Westmere-EX servers), then swapping out the daughter cards would be a simple way to migrate from DDR3 to DDR4 without having to replace the processors.

      • the
      • 6 years ago

      Nope, because the design philosophies at Intel are all dictated by performance per watt. Right now Intel permits a 1% increase in power consumption only if it increases performance by 2% or more. The inverse is also true: a 1% decrease in performance is permitted if it saves 2% or more in power consumption. Any radically big change to IPC would in all probability require this design rule to be thrown out.

      There are a few big changes Intel could perform on their future cores: massive L1 and L2 trace caches, out-of-order instruction retirement, deep speculative execution, etc. These all would consume more power than what their performance benefits would provide. Die size would also noticeably increase, though this metric is of lesser concern as we’re entering the SoC era.
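
      Read literally, that 2:1 rule amounts to something like the check below; purely illustrative, and the function name and structure are mine, not Intel's:

      ```python
      # A literal reading of the 2:1 rule described above. Illustrative only.

      def passes_two_for_one_rule(perf_delta_pct, power_delta_pct):
          """A change that adds power must add at least twice as much performance;
          a change that costs performance must save at least twice as much power."""
          if power_delta_pct > 0:
              return perf_delta_pct >= 2 * power_delta_pct
          if perf_delta_pct < 0:
              return -power_delta_pct >= -2 * perf_delta_pct
          return True

      print(passes_two_for_one_rule(perf_delta_pct=2, power_delta_pct=1))    # True
      print(passes_two_for_one_rule(perf_delta_pct=-1, power_delta_pct=-2))  # True
      print(passes_two_for_one_rule(perf_delta_pct=1, power_delta_pct=1))    # False
      ```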

        • njsutorius
        • 6 years ago

        Very interesting conversation; 'the' makes some very good points. I also agree the industry is moving to lower power, which would make any moves by ATI in the other direction poor for the company. Ultimately, who wouldn't want a device that draws considerably less power but can still run modern graphics, compared to a device that creates increased heat for increased performance?

    • Andrew Lauritzen
    • 6 years ago

    So is Kozak saying that he expects Kaveri to beat the i5-4670k’s iGPU (not a surprise…) or high end Intel iGPUs (Iris Pro)? On paper the specs look similar or slightly better than Iris Pro 5200, but it’s unclear how badly a lack of memory bandwidth is going to hurt Kaveri.

      • smilingcrow
      • 6 years ago

      If they didn’t specifically mention an Iris Pro CPU then it’s safe to say they aren’t comparing with one.

    • KenLuskin
    • 6 years ago

    AMD CEO Rory Read said on the last CC, “that we are NOT leaning in to the PC area”.

    WHY?

    Because MSFT will be creating an ARM based full Windows OS for 2015.

    The RT will be cancelled, leaving only a proprietary mobile offering and the full Windows.

    ARM chips for full Windows will destroy Intel’s profitability.

    AMD is concentrating on HSA innovations, but the true pay off will come from their ARM offerings, starting with a server chip in Q1 2014.

    AMD is de-emphasizing large investments in general-purpose x86.

    AMD is using 64 bit X86 for game consoles because of the PC ecosystem.

    Nvidia will NOT have a 64 bit ARM chip until 2015.

    AMD will focus on Servers and semi-custom 64 bit chips.

    AMD will be able to leverage its 64 bit experience and server expertise to take a significant portion of the Server market away from Intel thru low power ARM chips.

    AMD’s 64-bit ARM server chip will give them a head start over the competition for 64-bit Windows ARM chips.

    The future is a Cloud connected model.

    Future Windows systems will only work in a connected mode to their cloud, just as the Google chrome books function.

    Corporations will save money by not being forced to install Office and other programs on each user machine.

    ARM chips from multiple manufacturers will greatly reduce the cost of Windows devices.

    MSFT knows they cannot compete when Intel is charging 3 times as much for the same chip that it sells for Android-powered tablets.

    It’s a total joke that Intel is selling a Bay Trail tablet version for $37, while almost the same Bay Trail chip is being sold to Windows OEMs for $130.

    MSFT is separating from Intel. The official divorce papers will be filed in late 2014 or early 2015.

      • Sahrin
      • 6 years ago

      “The RT will be cancelled, leaving only a proprietary mobile offering, and the full windows.”

      …RT is “the full Windows” – the thing that RT is missing that isn’t in x86 Windows is Win32…aka an x86-only API. Saying that RT isn’t ‘full Windows’ is like saying that Windows x86 isn’t “full Windows” because it doesn’t have ARM support.

      The one thing they could *change* in RT to make it more like "full Windows" is to allow developers to access low-level APIs (ie, make API calls that aren’t in WinRT). That doesn’t mean the APIs aren’t there – it means that MS blocks them.

        • windwalker
        • 6 years ago

        You are completely confused.
        RT is built on top of Win32 and exposes only APIs that Microsoft managed to design for enabling well behaved sandboxed Metro apps.

        There is nothing in x86 that makes it indispensable to Win32.
        It’s the design of Win32 that makes it likely for programs to squander system resources and undermine the low power potential of ARM.

          • JumpingJack
          • 6 years ago

          RT did away with the Win32 runtime entirely; this is why it was named RT: it is an entirely new runtime, and RT stands for the new Windows Run Time.

          [url<]http://www.anandtech.com/show/4771/microsoft-build-windows-8-pre-beta-preview/5[/url<]

          [quote<]With Windows 8 Microsoft will be introducing a new API: Windows RunTime (WinRT). Over the years much has been said about replacing Microsoft’s long-lived Win32 API, with some success. The .Net Framework supersedes Win32 to some extent, but at the end of the day Win32 is still significantly used in one way or another by many developers. WinRT is the API that will replace Win32 for application developers, and is the API developers will need to use to develop for Metro.[/quote<]

          [url<]http://winsupersite.com/blog/supersite-blog-39/windows8/winrt-replacing-win32-140605[/url<]

          [quote<]One of the confusing bits about WinRT is, well, everything. But after conferring with others and studying Microsoft's documentation, I can make the following general statement: WinRT (the new Windows Run Time) is not a replacement for Silverlight or .NET, it's a replacement for Win32.[/quote<]

            • BaronMatrix
            • 6 years ago

            It doesn’t totally replace Win32 since drivers and any kernel mode functions require C++ Win32…

            WinRT is mainly for desktop and Web…

            • JumpingJack
            • 6 years ago

            Wow, you don’t even know programming despite being a self-proclaimed programmer 🙂 … Actually, it is the other way around.

            WinRT was built specifically for Metro apps and as a common API base for coding both ARM- and x86-based applications. WinRT resides alongside Win32 for various reasons, particularly legacy application support (we still need desktop x86 applications to install and run).

            There are subset features/functions that RT can call from Win32, but they are a subset and not part of Win32 as a whole.

            MS is moving the world away from Win32 and toward RT.

            [url<]http://www.zdnet.com/blog/microsoft/heres-the-one-microsoft-windows-8-slide-that-everyone-wants-to-redo/10736[/url<]

            As you can see from the MS slides, Metro apps (mobile apps) are self-contained, RT-based applications. Desktop applications and web apps ride along with Win32. In the Windows-on-ARM part of the world, there is no doubt MS wants the world on RT and off Win32.

            [url<]http://www.infoq.com/news/2011/09/WinRT-API[/url<]

            [quote<]The basic concept: Windows 8 on x86/x64 is a dual-personality operating system. It will run Metro-style apps and Desktop apps. Developers writing Metro apps will use the new Windows Runtime (WinRT) application programming interfaces (APIs). Those writing and running Desktop Apps will not; they'll still use good old Win32 and the full .Net Framework. And that XAML box up there? Microsoft codename Jupiter lives (and sure sounds like what I described back in January, despite Microsoft exec's claims to the contrary.)[/quote<]

            So in my response above, I was off base when I said RT replaces Win32 entirely; this is not accurate for x86 desktop applications, but it is certainly true for Windows-on-ARM devices.

            • Ringofett
            • 6 years ago

            I think the “full Windows” point has been missed. The average Joe doesn’t care about programming language, API used, or underlying hardware. WinRT isn’t “full Windows” to most people not because of any of that nonsense; it’s not full Windows because they can’t open up their favorite program’s website, download their program of choice, and install and run it from the desktop as they used to, or still can in regular 8.1. If it’s not part of Microsoft’s new locked-down ecosystem, tough luck.

      • windwalker
      • 6 years ago

      Those are interesting hypotheses.
      The AMD related ones sound optimistic but somewhat reasonable.
      I don’t see how and why Microsoft could move all of its software so quickly though.

      • MadManOriginal
      • 6 years ago

      Yesterday I gave thanks for paragraphs.

      • chuckula
      • 6 years ago

      TL;DR version:

      Buzzword

      Buzzword

      Intel Sucks!

      Buzzword

      Buzzword

      AMD’S own marketing powerpoints are incredibly pessimistic and are obviously written by pro-Intel stooges!

      Buzzword

      Buzzword

      VICTORY!

      • Amgal
      • 6 years ago

      ….so sayeth the wise Alaundo (?)

      • just brew it!
      • 6 years ago

      ARM definitely has momentum, but IMO your MS-centric slant on things misses the point.

      Your post seems to assume that MS is the only game in town OS-wise. Hello… iOS and Android dominate in mobile, *NIX (including Linux and proprietary implementations) dominates in servers and scientific computing, and real-time is a mish-mash of mostly proprietary and in-house solutions, with MS having a minority share.

      I think x86 will remain king of the desktop for a long time yet, but desktop systems will continue to shrink as a percentage of the overall computing market, decreasing their overall relevance. Yes, MS still dominates the desktop. But they have little influence over the direction of the other market segments where ARM has the most potential.

      • the
      • 6 years ago

      “Because MSFT will be creating an ARM based full Windows OS for 2015.”

      Windows RT found in the Surface RT and Surface 2 tablets is a full iteration of Windows 8. The difference is that MS has sandboxed RT to Microsoft’s app store which has the requirement of being a Metro app. This is all business politics as Windows RT does contain the full Win32 API. Jail breaking Windows RT devices allows the usage of ARM based desktop applications, though developers do have to go through a few hoops to compile them.

      What 2015 will bring is an ARM-based version of Windows Server. Right now MS has dropped 32-bit support from Windows Server, and currently the only 64-bit ARM chip on the market has an Apple logo stamped on it. However, 64-bit ARM chips targeted at the server market are due in 2014, and MS wants this hardware to run a copy of Windows. Without a version of Windows available, MS would be ceding the emerging micro-server market to Linux. Here, it would be insane to keep the same Metro requirement for ARM applications as it does for consumer tablets.

      • maxxcool
      • 6 years ago

      Umm, no: ""AMD will be able to leverage its 64 bit experience and server expertise to take a significant portion of the Server market away from Intel thru low power ARM chips.""

      AMD knows jack S^^T about 'making ARM CPUs'… they have the process, but it is up to ARM to build the instructions and make them work.

      Besides, ARM has been around in the server world for 6+ years and has yet to do any sort of damage that Intel would care about. Besides, AMD makes Opterons; they do not want to cut their own throats.

      Edit: removed profanity, too harsh for a Monday after 2 weeks off. Will be a dick to AMD tomorrow.

    • UnfriendlyFire
    • 6 years ago

    I wonder how long it will take for apps that support Kaveri’s features to become common.

    • internetsandman
    • 6 years ago

    Fingers crossed for something that offers competition to Intel’s mid-range. This looks promising, but it’s too hard to be optimistic about AMD lately.

    • jjj
    • 6 years ago

    They can’t really do much more on 28nm. A huge chip with a lot more GPU might sound nice, but it would end up being rather costly, and it would be a hard sell on the desktop. In laptops it would be nice, but then AMD would lose on GPU sales, and it seems they don’t wanna do that.
    Now, if the thing were on 20nm, they would have so much more room for the GPU. A PS4-like GPU would be very doable in up to 200mm2 (I guess Richland is 250+ mm2). So at least we can look forward to 20nm bringing something much nicer. Of course, we are going 4K, so integrated graphics will be left behind in a big way; they’ll need a few more years to catch up (and by that time maybe mobile gaming grows up, so this won’t matter much anymore).

      • Game_boy
      • 6 years ago

      You’re assuming AMD will ever release a 20nm CPU.

        • jjj
        • 6 years ago

        They will if they don’t die before then, and that’s very unlikely. They might not go 20nm for APUs in 2014, but I never suggested they would; we just don’t know when yet.

          • nanoflower
          • 6 years ago

          Yes, we know that TSMC and GlobalFoundries are both working on 20nm (with TSMC claiming they will be producing products in 2014), so we may see some AMD products on 20nm in 2014 (though likely not until the end of the year). No matter what, it’s certain AMD is going to keep shrinking their APUs (and GPUs, so long as they make them) as new processes become available, so long as the company is viable. They likely won’t ever be true competitors to Intel, but at least they can keep Intel honest even if they are a process behind.

        • BaronMatrix
        • 6 years ago

        At APU13, Lisa Su said they will be taping out 20nm AND 14nm chips by Q2 ’14…

        We know they will be doing a GPU at TSMC, but only GF is going 14nm so I see a SICK Opteron at the end of next year… They have sped up time from tapeout to Engr samples..

        And maybe Sony and MS will do a refresh of the consoles with a shrink at the end of next year…

        I don’t see them screwing people by upgrading the perf of the chip but just the power…

          • BaronMatrix
          • 6 years ago

          Why’d you suckers vote me down for this comment…? Still under the thumb of Intel I guess…

            • chuckula
            • 6 years ago

            Why did we vote you down?

            How about: At CES 2013, Lisa Su said that Kaveri would be out in 2013.

            • nanoflower
            • 6 years ago

            I didn’t vote you down, but I know that when TSMC or Global Foundries says they will have a process ready in some year, it is best to add on a year. Even Intel has had some issues as processes get smaller, so why would you assume it’s going to be smooth sailing from tapeout to ramped production for AMD, when they depend on other companies to actually produce their chips? My guess is we might see low production of a 20nm product by the end of 2014, but 14nm is going to be 2015 at the earliest.

            • Bensam123
            • 6 years ago

            There are a few dedicated AMD haters on here, don’t mind them dude. 4 people is hardly a majority.

          • NeelyCam
          • 6 years ago

          You’re right; 20nm is coming soon.

          But 14nm is not.

          [quote<]And maybe Sony and MS will do a refresh of the consoles with a shrink at the end of next year...[/quote<]

          They won't. If they do, call me up; I'll buy you wings.

            • nanoflower
            • 6 years ago

            It depends on what you call a refresh. I could see them doing some internal changes in the consoles to reduce cost next year but it does seem unlikely we will see a new console released. Even though the rumors are out that at least Microsoft plans a new console next year, that just doesn’t make sense to me.

      • smilingcrow
      • 6 years ago

      It’s very hard to be more than evolutionary when using what is already a mature process node.

        • nanoflower
        • 6 years ago

        It would depend on the AMD engineers having a breakthrough in how to design their processors and given the situation they’ve been in that just isn’t going to happen. Even Intel with their huge staff doesn’t get that lucky except for every once in a great while (ala Intel Israel and what became Core/Core2.)

          • smilingcrow
          • 6 years ago

          Yes, Intel’s success is all down to luck.

          • tipoo
          • 6 years ago

          I think Intel could pull something like that again if their focus was still on performance, but for the last few generations they’ve been going the power draw above all else route.

      • Kaleid
      • 6 years ago

      Even with 2133MHz memory, the integrated GPU needs higher memory bandwidth.

        • maxxcool
        • 6 years ago

        What it needs is better/much lower latency.

    • l33t-g4m3r
    • 6 years ago

    Looking forward to a review / benchmarks.

      • NeelyCam
      • 6 years ago

      Exciting times! I have a bet riding on these.
