AMD to go ‘ambidextrous,’ grow another ARM?

Here at AMD’s Financial Analyst Day, a procession of top AMD executives—from new CEO Rory Read to new CTO Mark Papermaster and new Sr. VP and GM Lisa Su—has used a metaphor about being “ambidextrous” when it comes to architectures, meaning CPU core instruction set architectures. The somewhat oblique yet obvious implication: that AMD’s future products will eventually include chips with ARM CPU cores for markets where it makes sense.

Dr. Su made the boldest statement on this front, proclaiming that AMD will take advantage of all sorts of ecosystems, from Windows 8 to Android. She said AMD will not be “religious” about architectures and touted AMD’s “flexibility” as one of its key strategic advantages for the future.

Despite these declarations, the firm hasn’t made any specific commitments to ARM-based products on its roadmap, and indeed, the name “ARM” has only been mentioned once so far, almost in passing. AMD remains committed to much of its current, x86-centric product roadmap for 2012 and 2013. Looking further out, Su claimed AMD can “absolutely” take an x86-compatible SoC into the sub-2W ultra-low-power category.

Still, all three key executives have echoed the theme of an “ambidextrous” approach when talking about AMD’s future direction, and very much intentionally so.

Comments closed
    • stmok
    • 8 years ago

    This is what Techreport.com should have posted.

    [b]Understanding AMD's Roadmap & New Direction[/b] => [url]http://www.anandtech.com/show/5503/understanding-amds-roadmap-new-direction[/url]

    For those who are too lazy to click the link...

    2012 => 2013

    [u][b]GPU side...[/b][/u]
    Southern Islands => Sea Islands (New GPU architecture)

    [u][b]Desktop/Mobile Processors[/b][/u]

    [b]CPU (Performance; FX-series; 4 to 8 cores)[/b]
    Vishera (Piledriver cores) => Vishera "Refresh" (Piledriver cores)
    (Remains on the 32nm process in 2013.)

    [b]APU (Mainstream; A-series; 2 to 4 cores)[/b]
    Trinity (Piledriver cores) => Kaveri (Steamroller cores. GPU side is GCN-based.)
    (32nm => 28nm)

    [b]APU (Budget; E-series; 2 to 4 cores)[/b]
    Brazos 2.0 (Bobcat cores) => Kabini (Jaguar cores. GPU side is GCN-based. South bridge integrated.)
    (40nm => 28nm)

    [b]APU (ULV variant for tablets and fanless solutions; Z-series; 1 to 2 cores)[/b]
    Hondo (Bobcat cores) => Temash (Jaguar cores. GPU side is GCN-based. South bridge integrated.)
    (40nm => 28nm)

    [u][b]Server Processors: Opteron series[/b][/u]
    (All 2012/2013 processors will be Piledriver-based cores on the 32nm process.)

    [b]Socket AM3+ (1-processor server; 4 to 8 cores; DDR3 dual-channel)[/b]
    Zurich => Delhi

    [b]Socket C32 (1- to 2-processor server; 6 to 8 cores; DDR3 dual-channel)[/b]
    Valencia => Seoul

    [b]Socket G34 (2- to 4-processor server; 4 to 16 cores; DDR3 quad-channel)[/b]
    Interlagos => Abu Dhabi

    Notes:

    * AMD's original 2012 plan for the Socket FM2-based Komodo is dead. Same on the server side: the Socket G2012 platform with its Sepang and Terramar (10 to 20 cores) is sh*t-canned in favour of drop-in-compatible solutions for 2013.

    * You'll note that the FX and Opteron lines are not getting a major architectural overhaul until 2014.
    => The idea is that the A-series APU line (from Kaveri onwards) will act as the initial testbed for the next-generation architecture AND die shrink in 2013. The lessons learned from this will then be implemented in the FX/Opteron series in 2014.
    => This will mean that the Steamroller cores in the FX/Opteron series will be revised (more mature) versions compared to the A-series ones implemented the previous year... Better overclocks?

    * Heterogeneous Systems Architecture (HSA) is their future. This will be the software backbone. The overall goal is to reach a unification of CPU and GPU technologies by 2014/2015. (Shared memory space, acting as coprocessors in a smooth fashion, etc.)
    ...Their vision is to have their platforms flipping between CPU and GPU technologies based on workload in a seamless manner. The software developer just focuses on C++ and calling the software APIs in their applications.

    * AMD will adopt the ARM architecture based on where/when it's suitable. (Depends on customers' needs.)
    ...They're being flexible and pragmatic. They haven't offered any ideas on implementation, so it's a bit vague as to what they are considering.

    So there you have it... Socket AM3+ isn't going anywhere for 2012 and 2013.

    [u]2012[/u]
    CPU: Socket AM3+ => Piledriver-based (32nm)
    APU: Socket FM2 => Piledriver-based (32nm)
    VS
    LGA1155 => Ivy Bridge (22nm)

    [u]2013[/u]
    CPU: Socket AM3+ => Piledriver-based (refresh) (32nm)
    APU: Socket FM? => Steamroller-based (28nm)
    VS
    LGA1150 => Haswell (22nm)

    [u]2014[/u]
    CPU: Socket ??? => Steamroller-based (28nm?)
    APU: Socket FM? => Excavator-based (??nm)
    VS
    LGA1150 => Broadwell (14nm)

      • jdaven
      • 8 years ago

      Very nice. Maybe we can turn TR into a reader driven tech news site. For sure their coverage and time to post is lacking as of late.

    • Xenolith
    • 8 years ago

    You guys are lousy at reading between the lines. This is obviously about going into PowerPC, not ARM.

      • NeelyCam
      • 8 years ago

      Hmm… now, that’s an interesting suggestion.

        • BobbinThreadbare
        • 8 years ago

        I was actually thinking: what if AMD put some Cell cores on their chips?

        You have this hierarchy

        >most general x86
        >less general Cell
        >more specific GPU

    • ronch
    • 8 years ago

    Troubling to see that AMD doesn’t seem to want to compete aggressively in the high-end x86 segment anymore.

    Anyway, if you want a more comprehensive take on AMD’s FAD, check out Anandtech.

      • flip-mode
      • 8 years ago

      Honestly, that might be the smartest thing for AMD. Competing against Intel to see who can make the fastest x86-64 processor is a masochistic, losing proposition.

        • ronch
        • 8 years ago

        Good for AMD, bad for us.

          • A_Pickle
          • 8 years ago

          Not necessarily. What the hell is the point of the UB3R-HIGH-END x86 market? Seriously, who needs it besides enthusiasts who NEED their colossal e-peen? The Core i7-3960X is a jaw-dropping performance monster, but [i]who the hell needs it?!?[/i]

          I have a Phenom II X6 1055T and 8 GB of RAM. [i]Everything[/i] I throw at that sucker runs, it runs fast, and it runs well. The hardware on desktops (and even laptops) has FAST outpaced the software -- it's 40 years ahead, and the developers need to figure out a way to use it. I've thought of some ways, but hell, I run six or seven server programs in the background on my system (not counting my other non-Windows background processes) and my computer, with its aging 1055T, 8 GB of RAM, and Radeon HD 4850, runs Skyrim on Ultra at 1080p for hours on end.

          Everyone bemoaning that the old AMD vs. Intel tit-for-tat war is long gone doesn't get it. AMD did well during the Athlon 64 days for two reasons: One, they had a genuinely innovative architecture. Two, Intel took a bad step with NetBurst. A VERY bad step. They have since corrected that, and are focusing fervently on the best possible goal: performance per watt.

          In 2010, AMD pulled in $6.4 billion in revenue. In 2011, Intel pulled in [b]$54 billion[/b]. I'm sorry, but the expectation that AMD somehow "ought" to be able to compete toe-to-toe with a competitor with [i]NINE TIMES[/i] their annual revenue is fantastically unrealistic. They're playing smart: they're focusing on what consumers want. Consumers, on the whole, don't want or need ridiculously high-performance CPUs -- they want enough computing power to do what they want reasonably quickly, and they want it to be portable and long-lived.

          Where Intel [i]isn't[/i] getting it is with graphics; they're living in this 21st-century replay of the '90s, where everyone is looking forward to that next 100 MHz speed bump. That doesn't matter anymore, and AMD gets that -- we have quad-core processors, and for 95% of people that's all they'll need. Those processors will play Skyrim at acceptable detail levels and fluid enough frame rates to satisfy them, even if they don't extract 7-Zip archives ten seconds faster than any other CPU on Earth.

            • ronch
            • 8 years ago

            Ok, for now your 1055T is doing fine. Five years down the road, will it still be just fine? Intel keeps pushing the boundaries of what the PC can do, not just with more cores (which, as you implied, are taking devs ages to take advantage of), but higher IPC per core. Devs see this and will create more demanding games. We’re near the point where the PC is good enough, I agree, but that doesn’t mean progress will stop. Going back, when devs create more demanding games to take advantage of the available computing power (presumably from Intel CPUs) in the future, AMD CPUs will look even worse than they do now. And that’s when people have no choice but to pay megabucks for Intel, or peanuts for AMD’s inferior tech.

            • A_Pickle
            • 8 years ago

            [quote]Devs see this and will create more demanding games.[/quote]

            They might -- but what I'm saying is, [i]they haven't[/i]. The only games that stress modern components are ones whose developers are downright incompetent (see: the Crysis franchise). They don't run hard because they look good (though they do); they run hard because they're coded like crap. Did you SEE that tessellation article here? That's not stressing components towards a useful goal, that's stressing them [i]for the purpose of stressing them[/i].

            This is, in fact, my entire point: developers HAVE NOT bothered to make SOFTWARE that takes advantage of our computers. The computing power in a CPU that's now available for $85 (the Socket FM1 Athlon II X4 631), relative to how [i]the majority[/i] of users will put that power to use, is comparable to a nuclear reactor being dedicated to a small African village with a few lights and a village stove. Developers have yet to provide users with meaningfully powerful software that makes use of unused computing cycles -- and I'm not just talking about goddamn games. Computers can do more than play frigging games.

            [quote]Going back, when devs create more demanding games to take advantage of the available computing power (presumably from Intel CPUs) in the future, AMD CPUs will look even worse than they do now.[/quote]

            They won't do that. Their job, as software developers, is to make money. By cutting out an entire processor brand, they are unnecessarily cutting themselves out of profit. Additionally, AMD platforms are [i]superior[/i] to Intel platforms when it comes to gaming on integrated graphics -- so developers might actually target that, because those platforms are far cheaper, and thus far more accessible to new customers. People are somewhat averse to PC gaming due to the high upfront expense required to enter the market (buying a capable gaming PC).

            [quote]And that's when people have no choice but to pay megabucks for Intel, or peanuts for AMD's inferior tech.[/quote]

            And most people will be glad to pay peanuts for AMD's "inferior" tech. I prefer AMD's platforms, and I buy high-end components. They're fast [i]enough[/i], and their platform as a whole offers more value. Okay, yeah, a $200 Core i5 CPU probably [i]does[/i] get an extra 50 FPS in Skyrim, but my rig gets 60 FPS, [i]so I don't care[/i]. On the flip side, I have USB 3.0 and SATA 6 Gbps, and I don't have to deal with socket segmentation and overpriced motherboards.

            • ronch
            • 8 years ago

            Ok, you say that devs don’t take advantage of current hardware well enough and probably will not be able to (if they ever do) for a long time. You say that they code like crap, and yes, I agree with that for the most part. But don’t these arguments also suggest that it’s advisable to buy the most powerful hardware a user can afford? I mean, if it already runs like crap on a fast Intel machine, I guess it would run even crappier on a slower AMD machine, right?

            And if I understood your third paragraph correctly, you’re saying that devs only care about making more money (of course they do) and will not, or cannot, create more demanding games (contrary to what I said, which you also quoted). That’s like saying progress will stop, which is a little pessimistic on your part, don’t you think? Also, you seem to have forgotten about competition between software developers. The business of game development is a cutthroat business. Companies always try to outdo each other with better, prettier games, which will most likely put more pressure on hardware regardless of whether those games are coded well or not. As good as games already are, both in terms of graphics and gameplay mechanics, I don’t believe progress will stop.

            And although I agree that the value proposition of cheap APUs is undeniable for most people, I don’t imagine developers will sack all their demanding RPG/FPS titles in the works and instead just stick to games like Plants vs Zombies. Big titles such as Battlefield 3 and Skyrim make megabucks, and devs capable of pulling off those titles will continue pursuing those markets. And for those games, I believe more performance doesn’t hurt. Why settle for 60 fps when you can have 100 for not much more money?

            Have you checked Intel and AMD prices lately? I can build a Core i5-2500 or an AMD FX-6100 system for roughly the same price (diff. motherboards of course, faster memory for the FX-6100, and everything else the same for both systems). And given how much faster the 2500 is compared to the 6100, I’m not sure why anyone in their right mind would go for an FX-6100. Really, I love AMD but AMD urgently needs to price these things more competitively.

            Lastly:

            [quote]And most people will be glad to pay peanuts for AMD's "inferior" tech. I prefer AMD's platforms, and I buy high-end components. They're fast enough, and their platform as a whole offers more value. Okay, yeah, a $200 Core i5 CPU probably does get an extra 50 FPS in Skyrim, but my rig gets 60 FPS, so I don't care. On the flip side, I have USB 3.0 and SATA 6 Gbps, and I don't have to deal with socket segmentation and overpriced motherboards.[/quote]

            Oh, I get it now. No offense, but I take this to mean that you really love AMD, and there's nothing wrong with that. But make no mistake: I do too... but I try to keep an open mind. I'm not saying you don't, but this last paragraph makes me realize that I should stop this debate, because it may be difficult trying to convince you that more performance is better and that AMD should try to keep up or slip out. Remember, there were so many x86 players before, and if you fall behind too far, you're toast. And seriously, if AMD doesn't catch up, there will be no more AMD to keep the x86 industry in balance (or from being monopolized), and you'd be forced to buy Intel if you demand anything more than a low-end APU. So, really, I want AMD to try to keep up with, if not supersede, Intel. I know AMD can't be expected to overtake Intel given how much bigger Intel is, but for AMD to even keep playing in the x86 industry, I can't imagine them allowing themselves to fall too far behind.

        • dpaus
        • 8 years ago

        Yeah, it looks like ‘what AMD wants to do’ now is make money, and I’m OK with that.

    • Hattig
    • 8 years ago

    This is a very very small article to cover all the things talked about at the analyst day. The confusion in the comments is a result of missing huge swathes of information. I can only assume this means there will be a full article coming soon.

    • jensend
    • 8 years ago

    Really, TR? Zero coverage of the real announcements, updated roadmap, etc – just an article about a few hints and rumors?

    • HisDivineOrder
    • 8 years ago

    AMD is trying desperately to seem relevant, lest they seem blown out of the water. I mean, look at their latest products. They stall out on updating Bobcat this year, they release Bulldozer as a disaster, they can’t make enough Llano to serve demand, they fail to win Apple over with Fusion CPUs that are basically built as a dream list of what Apple would want, and then they deliver a high-end GPU on a cutting-edge fab that is only marginally better in performance than their only real competitor. Plus, they’re still flailing about trying to make a driver for their new card, AND they can’t seem to get drivers built for new games out on time, or such that they don’t crash.

    Rather than talk about the problems that are plaguing them, they’d rather bandy about ideas of them being a nimble, plucky go-getter of an opponent, agile and moving between the advantages of all their enemies as they make their way toward a great future led by a guy they brought from outside.

    Except of course that everything AMD’s done since he got there has tanked. Sure, AMD might go ARM, but they have to stay in business to get there.

    • odizzido
    • 8 years ago

    One of the problems I have with smartphones is the lack of x86 compatibility. Now, I am talking about years in the future, but if they were to include a Fusion APU and an ARM CPU in the same device, that would make smartphones much more attractive to me.

      • Airmantharp
      • 8 years ago

      I really don’t see lack of x86 compatibility as a hindrance. There’s very little that could be done on an ARM-based system that shouldn’t be recompiled, if not completely redesigned, before making the migration.

      Now that Microsoft has ported Windows to something other than x86 and IA64, there shouldn’t be much of a hindrance, and ARM SoCs are just now coming into the performance envelope necessary to make applications written for Windows on ARM useful.

      Essentially, I wouldn’t want anything that was compiled for an x86 CPU running on my ARM-based whatever to begin with, so I don’t really see the need for that technology to be part of SoC developers’ roadmaps. I’d rather they get on with making their designs faster and more efficient, like they already are.

    • --k
    • 8 years ago

    ARM has a much brighter future than AMD’s CPU effort as of late. Maybe they should get out while they can.

      • Airmantharp
      • 8 years ago

      I don’t disagree with the assessment of the ‘brightness’ of ARM’s future compared to AMD’s CPU business, but the loss of AMD in the x86 sector would be disastrous.

      As it is, Intel is largely competing only with themselves; AMD has devolved into a threat that might catch up should Intel stand still long enough. To take that away would destroy any reason for Intel to innovate on the CPU front.

    • destroy.all.monsters
    • 8 years ago

    I’m guessing they’re just trying to boost share prices since ARM is the flavor of the month, and that the board wants to be “me too” not only in the x86 world but in the phone and mobile-device (and possibly ARM) world as well (I say this as an AMD fan).

    Or it might just be a trial balloon to see if it gains any interest. Read’s vision speech is supposed to be in about two weeks.

    • Tristan
    • 8 years ago

    Maybe the Chinese will be interested in joining Alpha cores with GCN.

    • tay
    • 8 years ago

    AMD has always had the best PowerPoints.

    • dpaus
    • 8 years ago

    Has anyone called NeelyCam to make sure he’s not having convulsions over this?

      • chuckula
      • 8 years ago

      Medic: How many appendages am I holding up?
      Neelycam: EIGHT ARMS!!!!1!1!one!!1!eleven!

        • NeelyCam
        • 8 years ago

        Actually, I saw an article somewhere (can’t remember where – sorry), suggesting that AMD will start offering x86 to other companies. It almost seemed as if AMD would collaborate with others to bring in their customized IP, place them around an x86 (Bobcat-based, I assume) core and build a SoC for the customer.

        Seemed like a clever way of side-stepping the x86 license issues – with this scheme anyone could have an x86 SoC without paying an Intel tax… as long as they pay AMD to be the middleman. I thought that was way more exciting than this ARM rumor.

          • NeelyCam
          • 8 years ago

          I found it on SemiAccurate:

          [url]http://semiaccurate.com/2012/02/02/amd-opens-up-bobcat-to-3rd-party-ip/[/url]

      • Deanjo
      • 8 years ago

      Hit by a bus or abducted by aliens, that’s the only way I can see him not posting on this.

        • NeelyCam
        • 8 years ago

        Busy at work – sorry about not trolling.

        No – I don’t have much to comment about this. If AMD wants to compete with Qualcomm/NVidia/TI on an even playing field, let them do it. Probably easier than competing with Intel on x86, but still difficult as hell.

        I just don’t see what AMD could offer for the ARM market that the other, more experienced companies couldn’t.

          • Deanjo
          • 8 years ago

          OK, now I know you were abducted and they replaced you with a look alike clone.

            • NeelyCam
            • 8 years ago

            And the clone has a way better work ethic than the original.

    • Chun¢
    • 8 years ago

    What if this has something to do with them playing catch-up with Nvidia over GPGPU stuff? What if they don’t mean ARM, and instead mean GPU/CPU archs?

    • jdaven
    • 8 years ago

    After today’s AMD Analyst Day, it will be time for AMD and Intel fanbois to shift their thinking of the CPU/GPU market.

    AMD will no longer try to compete head-to-head with Intel for the CPU performance throne. This is completely gone.

      • NeelyCam
      • 8 years ago

      Yep – I’ve been shifting my targets towards the ARM fanbois already (ever since BD results came out).

      • A_Pickle
      • 8 years ago

      [quote]AMD will no longer try to compete head-to-head with Intel for the CPU performance throne. This is completely gone.[/quote]

      It's not that they won't. If AMD held 50% of the market and Intel held 50% of the market, you could bet your ass that the CPU war for the full performance crown would be in full effect -- but that isn't the market. AMD won't [i]because they can't[/i]. For every dollar AMD earns, Intel earns NINE. How the hell do you compete with that? It's simple: you don't. You give consumers what they want, and broaden into new markets where your gigantic competitor isn't quite so dominant.

        • clone
        • 8 years ago

        AMD reported a profit of approximately $0.5 billion last year; Intel reported $21 billion in profit.

        That would put Intel earning $42 for every $1 AMD makes. And yes, I agree that if AMD had 30+% of the market they’d compete directly with Intel, but they don’t; they tried when they reached 21%, but it didn’t work.
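
        For reference, a quick back-of-the-envelope check of the two ratios quoted in this subthread, using the commenters' own figures (revenue from A_Pickle's post above, profit from clone's):

        [code]
        # Sanity check of the ratios quoted above, using the commenters' own
        # figures: 2010 AMD revenue vs. 2011 Intel revenue, and approximate
        # 2011 profits. All values in billions of USD.
        amd_revenue, intel_revenue = 6.4, 54.0
        amd_profit, intel_profit = 0.5, 21.0

        print(round(intel_revenue / amd_revenue, 1))  # 8.4  -> roughly "nine times" the revenue
        print(round(intel_profit / amd_profit, 1))    # 42.0 -> "$42 for every $1" of profit
        [/code]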

    • Washer
    • 8 years ago

    Zero confidence that AMD can be competitive in all these areas at once. What advantages are they bringing to ARM, exactly? One might say human talent, but those people are already focused on righting their x86 ship. They have no form of fab advantage. I’m not sold that their graphics experience matters that much (see Nvidia, Tegra), and it’s also a fairly busy bunch of people.

    Just don’t see it. They seem like a company that’s already stretched thin. How does expanding further help?

      • chuckula
      • 8 years ago

      AMD does have very competitive GPU design *BUT* much like Nvidia, they may find that the high-performance GPU designs they know well don’t translate as well down into the power envelopes of mobile devices… and mobile devices are where ARM basically lives.

        • cynan
        • 8 years ago

        And on that note, a foray into ARM may be simply a knee-jerk response to Nvidia’s Tegra. With the relatively lackluster graphics performance of both Tegra 2 and 3, perhaps they can produce a Radeon-enhanced ARM-based platform that will really give Tegra 4 a run for its money. Besides, I don’t think it is very realistic for one of the top two GPU companies to sit back and let PowerVR get the majority of the market share in mobile graphics forever.

        Do we really know that “high-performance GPU” architecture isn’t useful in mobile platforms? Perhaps at the current power envelopes and die-size limitations associated with the 40nm process. At 28nm and smaller, things could get interesting. The GeForce in Tegra 4 (28nm) will be hugely improved over that in Tegra 3. Tegra 3 would probably also have been much more competitive with Apple’s PowerVR graphics if they had stuck to two ARM cores (with a third companion core) and saved the rest of the die space for the GPU. Their marketing-driven “first quad-core mobile platform” likely ended up robbing most users of performance where they actually wanted/needed it for most tasks.

        Knowing AMD though, if they actually commit to this, their Radeon ARM chips will show up 6 months after Tegra 4 (if they’re lucky) and be of equal or lower performance. Just getting in to ARM now puts them at a significant disadvantage. But you never know…

          • Game_boy
          • 8 years ago

          They HAD an ARM Radeon team. They sold it for a pathetically low amount to Qualcomm right before it became valuable.

            • khands
            • 8 years ago

            Yeah, AMD has had a horrible time timing things since like, 2000.

            • axeman
            • 8 years ago

            Since forever.

            • phileasfogg
            • 8 years ago

            “Since forever” is exactly right. At one point, AMD owned a 20% stake in Xilinx! 20 percent!!! If only they had kept that stake and not sold it. If only.

            • cynan
            • 8 years ago

            And that happened about 3 years ago now. More importantly, most of what was sold to Qualcomm was probably IP from the Imageon. Imageon was a line of media chips developed by ATI and produced and sold since 2003. It is unclear how much overlap, if any, there is between the Imageon tech AMD acquired during its purchase of ATI and later sold to Qualcomm at the end of 2008 and a modern Radeon.

            Now that mobile GPU demands have changed, maybe AMD is going back to the drawing board to compete with future Tegras, etc., after all. I wonder if AMD’s deal with Qualcomm precludes them from producing Radeon-ARM platforms? At any rate, this probably doesn’t show AMD’s ability to predict and respond to emerging markets in a good light. Or maybe it’s just the opposite, and AMD realized that most of the Imageon IP wouldn’t be of much use for future ARM hybrids, and that’s why they sold it.

            The Adreno GPUs in previous Qualcomm Snapdragon processors haven’t been much to sneeze at. As it is, Qualcomm is still shopping around for IP in this area, and just last month purchased “Display IP” from Imagination Technologies (owners of the PowerVR IP), though I don’t know whether this had anything to do with GPU architecture specifically.

            I’m willing to give AMD the benefit of the doubt, but if they’re not targeting mobile GPU segments, what else could they possibly enhance an ARM chip with (that would be of any real use)?

            • UberGerbil
            • 8 years ago

            It’s worth noting Intel also HAD an ARM team — and ARM-based products — in the form of the XScale line they got from DEC but eventually sold to Marvell. Those are still under active development and in use today in (IIRC) some Palm/BlackBerry phones and at least the earlier round of Kindle devices. I don’t know how profitable that business is for Marvell, especially by Intel’s standards for profitability, but I do wonder if that sale is now viewed as a mistake in some corners of the Intel executive suite.

      • BobbinThreadbare
      • 8 years ago

      Compared to other non-Intel fabs, GlobalFoundries is ahead. So they could indeed have an advantage there.

        • Airmantharp
        • 8 years ago

        That’s what I’m thinking.

        If they leverage their outstanding fabrication technology, they may be able to compete with Nvidia et al. I’d love to see a tablet with AMD-designed ARM cores and a Radeon-inspired GPU setup.

        • Washer
        • 8 years ago

        Does GlobalFoundries have the capacity? Wouldn’t other companies also have opportunities to contract GlobalFoundries? Intel is also competing in this sector (and is further along in fabrication and shipping product). Right now the only certain thing appears to be that AMD will be introducing a 4.5W APU based on Bobcat, and plans a 2W SoC — let alone going with ARM.

        Maybe my language was a little strong, but I still feel that AMD is chasing what’s hot at the expense of fixing their fundamental business.

    • fredsnotdead
    • 8 years ago

    GCN/Bobcat/Bulldozer — they already have too many architectures to work on. Better to find something and do it well. Being ambidextrous in chip fabs would be a better idea.

      • chuckula
      • 8 years ago

      AMD is already constrained on resources when it comes to design. They already canceled the (previous) next-gen Bobcat and now have to wait until 2013. Also, the team lead for Trinity no longer works at AMD.

      If AMD doesn’t watch out, they could be spread too thin. Also, while AMD likely thinks it will get a free ride with ARM designing the CPU cores, AMD still has to enter a market with several established players who are very experienced with ARM and already know how to go beyond the stock designs to get the best results. See Nvidia’s experience with Tegra: the Tegra chips don’t even have the best GPUs in the mobile sector, and that’s supposed to be Nvidia’s strength!

    • flip-mode
    • 8 years ago

    It would be interesting to see AMD release a chip that’s x86-64 and ARM and GPU all wrapped into one. I have no idea if that is possible.

      • chuckula
      • 8 years ago

      [quote]I have no idea if that is possible.[/quote]

      Possible: No problem.
      Practical: Maybe, maybe not.
      Actually Gonna Happen: If it does, I can pretty much guarantee it'll be for a hybrid mobile device that uses ARM for low-power mobile use and then turns on x86 for high performance.

      • BobbinThreadbare
      • 8 years ago

      Why would someone even make such a thing?

        • Washer
        • 8 years ago

        The idea is that you switch to ARM for power savings and x86 for performance. In the realm of a real product it makes no sense, if you consider how much effort it would take to make such a system a well-integrated experience.

          • BobbinThreadbare
          • 8 years ago

          This seems like a lot of work for something very silly.

            • SPOOFE
            • 8 years ago

            Tegra 3 works on the same principle, though the execution differs (its “fifth” core isn’t a totally different CPU architecture, for instance). However, the overall structure is pretty close: performance, power savings, and graphics are each distinct parts of the chip.

          • yokem55
          • 8 years ago

          This is basically just asymmetric multiprocessing. You just need the operating system to be able to handle binaries compiled for different architectures and route them to the right processors. You’d want one, probably the ARM, to be the main OS CPU; then, when a user wants to run an x86 binary, the OS activates the x86 core and runs the executable on it. The OS would have to provide facilities for IPC, and the memory setup would probably be tricky, but not that hard. This has largely been done already with PPC + Cell, and in a different way with GPU computing.

          Edit: Wrong word!
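
          A minimal sketch of the routing step described above, assuming ELF binaries on little-endian targets: read the e_machine field of the ELF header to see which ISA an executable was built for, then pick a core cluster. The actual scheduler hand-off is left as a comment, since that is where all the real OS work would live.

          [code]
          import struct

          # ELF e_machine values (from the ELF spec) for the ISAs discussed here.
          EM_386, EM_X86_64, EM_ARM, EM_AARCH64 = 3, 62, 40, 183

          def binary_architecture(path):
              """Report which ISA an ELF executable was compiled for."""
              with open(path, "rb") as f:
                  header = f.read(20)
              if header[:4] != b"\x7fELF":
                  raise ValueError("not an ELF binary")
              (machine,) = struct.unpack_from("<H", header, 18)  # e_machine field
              if machine in (EM_386, EM_X86_64):
                  return "x86"
              if machine in (EM_ARM, EM_AARCH64):
                  return "arm"
              return "unknown"

          def route(path):
              """Decide which core cluster should run this executable."""
              # In the scheme described above, the OS would power up the x86 core
              # on demand and hand the process to it; here we only report the choice.
              return "x86 cluster" if binary_architecture(path) == "x86" else "ARM cluster"

          print(route("/bin/ls"))  # e.g. "x86 cluster" on a typical x86-64 Linux box
          [/code]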

            • Pancake
            • 8 years ago

            It would be a monstrously complex piece of engineering, but possible. Not even sure if it would be efficient. It’s not just a case of farming out processing to an x86 core and letting it do its thing. All user applications run in a virtualised environment, but most applications have a lot of interaction where user space needs to talk to kernel space for I/O or whatever. Getting this to work efficiently across architectures would be challenging. I think a more desirable scenario for something like a tablet would be a low-MIPS/watt x86 core combined with a high-performance core like the Tegra 3.

            Reminds me of back in the ’80s when I was playing with a Tandy UNIX system (model name long forgotten). User code ran on a 68000, but the MMU and I/O were handled by a Z80 coprocessor (which could also boot into CP/M mode). Very bizarre. For trainspotters: it was a 68000, not a 68010. Not sure how they did it.

          • Deanjo
          • 8 years ago

          [quote]In the realm of a real product it makes no sense, if you consider how much effort it would take to make such a system a well-integrated experience.[/quote]

          Lenovo seems to think throwing both into the same product makes sense:

          [url]http://www.linuxfordevices.com/c/a/News/Lenovo-ThinkPad-X1-Hybrid/[/url]

      • Mime
      • 8 years ago

      Isn’t this idea part of what killed the Itanium? It’s like they’re trying to create One Chip to Rule Them All without considering the reasons why we have chips with different instruction sets in the first place.

        • demani
        • 8 years ago

        Reminds me of the fabled PPC 615 (I think) that was supposed to be both x86 and PPC. There is probably a reason that didn’t come to market (besides lack of market need).

          • BobbinThreadbare
          • 8 years ago

          IBM doesn’t have an x86 license?

            • Deanjo
            • 8 years ago

            Not one in recent history. IIRC, their license died with the Pentium generation (which would be useless anyway, as most if not all of those patents have expired).

        • UberGerbil
        • 8 years ago

        Actually, no. Itanium had a different instruction set because Intel was pursuing an experiment based on the theory that much of the complicated op scheduling required to extract instruction-level parallelism from real-world code could be pushed onto the compiler, and the resulting transistor savings could then be used to implement parallel execution resources that that parallelism would exploit. As with a lot of experiments, the results showed that what seemed elegant in theory proved problematic in practice (and demonstrated yet again that relying on software engineers to make hardware engineers’ jobs easier is exactly the opposite of how that particular relationship can be expected to work successfully).

        Intel added x86 execution hardware to Itanium strictly as a stop-gap, since they recognized that, initially at least, the ability to at least run x86 code (however poorly) was a necessary requirement for Itanium adoption, because the library of Itanium code would be empty at its introduction. Eventually Itanium was able to run x86 code in emulation as fast as contemporary x86 cores could run it natively, so they removed it from the hardware. (Contrary to what some folks here seem to think, it was never a full set of x86 resources, just hardware assist to speed up some of the more problematic aspects of x86 decode and execution.)

        This isn’t precisely the same situation, since there already exists a huge library of ARM code; nevertheless, the ability to run “legacy” (or “mainstream” if you prefer) x86 code would be a potentially desirable feature for some markets and customers (imagine how much more interesting Win8/ARM would be if it had that feature). And I think the Itanium experience suggests an interesting approach to implementing it as well: rather than attempting to include a full set of x86 execution resources, it might be better to simply do the work to make it possible for ARM to run x86 code in emulation in some minimally performant way. It wouldn’t be adequate for A-list games, but that’s not a requirement; it would be a successful feature if it could run things like TurboTax / Quicken and all the homegrown internal IT apps used by the F500 with adequate performance (which seems possible, since those things date to, and ran fine on, 1GHz-class Pentium 4-era CPUs).
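
        To make the emulation idea concrete, here is a toy sketch of the fetch-decode-execute approach being described, handling only two real x86 encodings (MOV r32, imm32 and RET). It is meant only to show the shape of a software emulator; a usable one would add memory, flags, ModRM decoding, and almost certainly dynamic binary translation for speed.

        [code]
        import struct

        # Register names in x86 encoding order (opcode 0xB8 + index selects one).
        REGS = ["eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"]

        def emulate(code):
            """Interpret a tiny x86 subset: MOV r32, imm32 (0xB8-0xBF) and RET (0xC3)."""
            regs = dict.fromkeys(REGS, 0)
            ip = 0
            while ip < len(code):
                op = code[ip]
                if 0xB8 <= op <= 0xBF:                      # MOV r32, imm32
                    (imm,) = struct.unpack_from("<I", code, ip + 1)
                    regs[REGS[op - 0xB8]] = imm
                    ip += 5
                elif op == 0xC3:                            # RET: treat as "stop" here
                    break
                else:
                    raise NotImplementedError(f"opcode {op:#04x} not in this toy subset")
            return regs

        # mov eax, 7 ; mov ebx, 35 ; ret
        print(emulate(bytes([0xB8, 7, 0, 0, 0, 0xBB, 35, 0, 0, 0, 0xC3])))
        [/code]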

          • destroy.all.monsters
          • 8 years ago

          I get that it would be a way to differentiate products – and that it could be helpful to the consumer (it being an ARM with x86 extensions or hardware emulation – or even cores).

          However it could become a very expensive boondoggle for AMD which to my mind isn’t really in the position to make many risky moves. It’s also far less elegant than Microsoft not segregating its versions of Win 8 (or making an emulation layer as Apple had done).

          It’s an interesting idea – and I appreciate the amount of thought you put into seeing how it could work – it just seems like more of a pipe dream.

            • UberGerbil
            • 8 years ago

            Yes, there’s the technical case and then there’s the business case. And there’s the business case in the abstract vs the business case for a particular company. I agree that AMD may not have the resources to attempt something like this; on the other hand, they’re getting close to the point where they have to make some kind of radical “bet the company” move to get out of the shrinking box they’ve found themselves in. (nVidia did something like that when they shut down their chipset business and went all-in on Tegra and GPGPU, even if gaming GPUs were paying the bills in the interim; whether this is a winning strategy is still unclear).

            It’s too bad, because of the few companies with the capability (legal and technical) to marry legacy x86 with ARM, AMD is the only one with any real motivation to do so.

        • thefumigator
        • 8 years ago

        No, Itanium was nothing like this idea; it’s the other way around. Itanium was unique in its class, with zero compatibility with other architectures.

        BTW, the Opteron killed Itanium, in short because the thing had 64-bit registers and was capable of running regular 32-bit x86 code at the same time.

      • tfp
      • 8 years ago

      Intel was able to do x86 and IA64 on the same chip. It made no real sense back then, but sure, AMD could do something similar…

        • Deanjo
        • 8 years ago

        The Itanium emulated x86; it did not have an x86 core (and was terribly slow doing so).

          • tfp
          • 8 years ago

          Originally x86 support was in hardware and was removed in later generations of the chip.
          [url]http://www.xbitlabs.com/news/cpu/display/20060120105942.html[/url]

            • Deanjo
            • 8 years ago

            It was still not native x86, just like Transmeta was not native x86. It used a hardware emulator to process x86 instructions.

            • tfp
            • 8 years ago

            Current AMD and Intel chips use translation and turn x86 instructions into “RISC” micro-ops internally; no one has made a pure x86 CISC chip in a long time.
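
            As a rough illustration of that translation step: a read-modify-write x86 instruction gets cracked into separate load, ALU, and store micro-ops inside the core. The micro-op mnemonics below are invented for this sketch; real decoders are far more elaborate.

            [code]
            # Rough illustration of x86 -> micro-op "cracking". A memory-destination
            # ADD becomes load + add + store micro-ops; a register-only ADD stays
            # as one. The micro-op mnemonics are made up for this sketch.
            def crack(instruction):
                if instruction == "add [mem], eax":
                    return [
                        "uop.load   tmp <- [mem]",      # read the memory operand
                        "uop.add    tmp <- tmp + eax",  # ALU work happens on registers
                        "uop.store  [mem] <- tmp",      # write the result back
                    ]
                if instruction == "add eax, ebx":
                    return ["uop.add    eax <- eax + ebx"]
                raise NotImplementedError(instruction)

            for uop in crack("add [mem], eax"):
                print(uop)
            [/code]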

      • Palek
      • 8 years ago

      Numerous SoCs have two (sometimes more) different types of CPU cores, both running different OSes in parallel. Typically you have a high-performance application CPU that runs some form of *NIX working alongside a smaller, simpler core that handles time-critical tasks running a real-time OS.

      • eitje
      • 8 years ago

      x86-64 that decodes into ARM instructions rather than standard microcode; run that through your X-many ARM cores. Interesting.

      • dpaus
      • 8 years ago

      [quote]a chip that’s x86-64 and ARM and GPU all wrapped into one[/quote]

      Initially, I was very intrigued by that possibility too, but the news that AMD is going to have an SoC with ‘sub-2W’ power draw that will be able to drive a Win8 tablet that can actually run real Windows software leaves me wondering if grafting an ARM core in the side would really gain anything. Knowing how you feel about Bette, I won’t repeat my favourite quote of hers, but, well, you know....
