AMD’s Read opens door to collaboration with ARM

This spring, AMD categorically denied a rumor that it would partner with ARM, saying it had "made a big bet on APUs, which are x86." The company has changed CEOs and literally decimated its staff since then; now, it suddenly seems more open to the prospect of offering ARM-based processors. MarketWatch got a choice quote on the subject from the chipmaker’s new CEO, Rory Read:

"At the end of the day, it has to be market driven and by the customer," Read said. "We have a lot of IP and a lot of capability. We’re going to continue to play those cards, but as you move forward, making sure that you’re able to be ambidextrous is definitely a winning hand."

Carefully worded as it might be, that statement represents quite a reversal from AMD’s previous position. Raymond James analyst Hans Mosesmann told MarketWatch he interpreted the statement like so: "I came away with, as AMD in PCs, we will continue with x86, but in low power stuff, if that’s what customers want, [AMD could adopt ARM]." That said, Mosesmann added that whipping up an ARM-based solution would likely take the chipmaker two to three years. We may not see AMD competing with the likes of Nvidia and Qualcomm right away, then.

ARM really could be AMD’s best bet for a quick entry into the mobile business. Intel has struggled to push x86 into handhelds for years, and that’s despite a sizable manufacturing lead and a massive R&D budget. AMD has neither of those luxuries, and its lowest-power x86 APU currently fits in a rather large 5.9W thermal envelope. (In contrast, Nvidia’s dual-core, ARM-based Tegra 2 chip is supposed to draw just 500 mW.)

Comments closed
    • ronch
    • 8 years ago

    I’m glad this is happening. Back when x86 was just taking off, Intel wasn’t as greedy and proud as it is today. It even allowed AMD to clone its processors at IBM’s request, and there were perhaps more than 10 x86 CPU suppliers. Some, like AMD, made direct clones. Others reverse-engineered Intel’s designs.

    But as the IBM PC/compatible industry boomed, Intel got richer and richer, and greedier and greedier, filing lawsuits all over the place to get a lock on the industry. Obviously this stifled everyone, not to mention making Intel look like a kid who wants all the toys to himself now that it already has what it wanted. And so it is: AMD is the only one left standing today, and Intel continues to strangle it. Of course, AMD’s situation today can’t be totally blamed on Intel; AMD also shoots itself in the foot once in a while, but Intel’s tick-tock is proof that it’s a big bully that always pins a far smaller competitor to the wall just because it can. The industry, as I see it, is out of balance. Nobody except AMD really wants to compete with a big bully. That may soon change.

    The thing is, x86 is an ancient design with all the extra baggage dating back to the first 8086 CPU. Heck, even undocumented instructions are still there, eating up clock cycles and die space. Even CISC-to-RISC decoding and back is a painful process which, in the grand scheme of things, shouldn’t be necessary in the first place if x86 were a leaner, more efficient architecture. Now I think everyone is starting to realize that x86 is not the best solution to everything, especially now that low-power devices are booming and environmental concerns seem to be all the rage.

    The kid who doesn’t want to share his toys with the other kids is now seeing that the other kids bought themselves new toys. Toys that are modern, not made of heavy die cast metal. No, these toys consume less power. Leaner. Sleeker. Far more efficient.

    Now what’s happening here is that Intel has always wanted x86 all to itself and now everyone’s backing a new architecture. Is this the right way to move forward? I certainly believe so. The heck with Intel, its greed, its legal department, its deep pockets filled with greed, and its ancient x86 architecture. Time to move on with something more efficient. And it just so happens that it’s not SPARC, it’s not PowerPC or MIPS, it’s ARM. I just hope ARM will not be like Intel and become greedy and proud because it can very well do so. What the industry needs is a consortium, a standards body, or whatever you wanna call it, to govern the ARM architecture. Nobody should have a stranglehold on ARM; it should be completely open just like some architectures out there. If this happens, the computer industry will become a much better place, with more options and more innovation.

    Intel has proven time and again that it can shove x86 anywhere it wants, especially with their deep pockets. But we all know that nothing lasts forever. The industry yearns to get balanced again.

      • ludi
      • 8 years ago

      This post has been written ten thousand times since the latter days of Usenet, yet Intel continues to enjoy record-breaking quarters. I like a little David & Goliath as much as the next guy, but be realistic: Intel has been trying to push x86 into the mobile space for years without ever succeeding, because other design IP (ARM in particular) is always about two steps ahead of x86 in terms of power consumption, and in handsets and similar devices, battery life is king. Meanwhile, x86 has been wildly successful everywhere else, and even Intel wasn’t able to derail it (IA-64).

      To the extent that ARM grabs marketshare, it will be on account of mobile devices gaining marketshare at the expense of other PC options. However, the notion that some massive rebalancing is at hand doesn’t survive the acid test of the past 20 years of PC history.

        • ronch
        • 8 years ago

        Not just ten thousand times. Twenty thousand times. The difference with ARM nowadays, though, is that unlike some architectures that took a shot at overthrowing x86, ARM is actually achieving critical mass in the consumer space. PowerPC, MIPS, IA-64, etc. didn’t, partly because they (PowerPC in particular) tried to take on x86 head-on or got relegated to niche markets such as automotive, aerospace, or game consoles, which usually never stick with the same architecture from one console generation to the next. ARM’s relative simplicity also helps keep power down and development costs lower, both of which are very handy in today’s energy- and cost-conscious world.

        • OneArmedScissor
        • 8 years ago

        “To the extent that ARM grabs marketshare, it will be on account of mobile devices gaining marketshare at the expense of other PC options.”

        But they will now be in PCs, too. Once Windows 8 is out, it’s all in the hands of OEMs.

          • NeelyCam
          • 8 years ago

          It’s just as difficult for ARM to break into “PCs” as it is for Intel to break into mobile devices. Both have strong ecosystems in their own spaces that are hard to overcome.

          Two fundamental differences: 1) Intel has a significant manufacturing advantage. 2) Intel has been trying to break into mobiles longer than ARM has been trying to break into PCs.

          It’ll be an interesting battle, but I think Intel will be the winner.

      • NeelyCam
      • 8 years ago

      When I saw how long the post was, I was sure it was from WaltC. Imagine my surprise…

      …unless, of course, WaltC is multiplaying

    • HisDivineOrder
    • 8 years ago

    ARM is the future. x86 is the past. Once AMD changes its focus from high performance to “just enough,” changes its power usage from Whoa nelly! to frugal sippers, dumps the idea of the mainstream discrete video card, and goes all in with low-power, decent performers that include solid GPUs…

    ARM is going to be all around Intel, closing in. Suddenly, x86 is going to look like 16-bit support looks now in future iterations of Windows. AMD knows this, which is why they’ve almost certainly begun work on their ARM CPU tech while making do with what they’ve got, cutting any extra projects that would require too much work and focusing solely on die shrinks of what is currently designed.

    By the time Intel realizes the market’s left them and there’s no way they’re going to make their turkey fly as frugally as an ARM chip…

    Project Denver arrives, blotting out the sun and casting a shadow across the entire CPU landscape. Intel weeps as they fall back on their fabs, slutting out their manufacturing ability to stay afloat, doing the only thing they can to make money during the dark times when they have no ARM CPU designed and x86 is shunned. Eventually, they sell their x86 patents to nVidia for $2 and a half-chewed stick of gum. Also, they spin their fabs off into Worldwide Winning Fabrication (WWF), which promptly merges with TSMC, creating the world’s premier fabrication company. Meanwhile, Intel exists merely as a patent troll that occasionally makes noise from under its CPU bridge, but otherwise is non-existent. Eventually, Rambus buys them outright to add to its trolling capabilities.

    …or Read being vague about ARM could be a new CEO spewing CEOspeak to avoid having egg on his face in a couple of years when his company produces ARM chips, like it has likely intended to do ever since its big announcement with ARM a few months back. Either/or.

      • NeelyCam
      • 8 years ago

      x86 is nowhere near “the past.” ARM will have serious trouble scaling up to the performance level people will learn to expect in the future without giving up its one selling point: low power.

    • shank15217
    • 8 years ago

    Unless AMD buys an ARM manufacturer, there is no way AMD is going to go into that business; they can’t afford it. Rory Read is either a very good politician or he’s a moron, and I’m betting he’s the former. He wants to keep the investors happy and tell them that AMD is interested in emerging market segments. AMD has a lot of IP, and none of it is in ARM. AMD can do wonders with Brazos; in fact, if AMD does things right, x86 will win out yet again. ARM may have a stranglehold right now in low-power devices, but x86 has utterly destroyed big iron. All historical patterns indicate x86 will tear a new one into the ARM ecosystem in two years or so.

      • ronch
      • 8 years ago

      Many far smaller, far more obscure companies design ARM chips. It’s a lot simpler than x86. AMD is a formidable design house that’s just been living in Intel’s shadow. They’ve succeeded where no one else has for so long. They have the talent, they have enough money, they have their established channel partners, they have the brand value and recognition. All they need is an ARM license.

      x86 is an old dinosaur, flogged only by Intel’s bajillions. The industry is in imbalance and is balancing itself again.

        • BabelHuber
        • 8 years ago

        Developing an ARM SoC may be easier than developing an x64 APU, but it is not a trivial task either.

        Also, the competition there is already fairly stiff. Honestly, I expected NVIDIA to do better, but the current Tegra 3 is still slower GPU-wise than e.g. the PowerVR GPU Apple uses in its iPad 2.

        So an AMD GPU for ARM would not necessarily be the winner from the start.

        CPU-wise, a lot of Cortex A15 designs will be released in 2012, so it would be tough to get to the top here, too.

        OTOH, if AMD were to spend resources on ARM, where would it take those resources from? Llano? Bobcat? Bulldozer?

        You cannot just conjure resources from thin air; something has to give.

        In the end, the outcome could be an ARM SoC that is just a me-too product, while x86 development would be further hampered.

        Then AMD would be even less competitive in the x86 market than now.

          • Silus
          • 8 years ago

          Tegra 3 is roughly 80 mm² with a quad-core CPU (+1 companion core) and a 12-core GeForce ULP; the A5 is 120 mm² with just a dual-core CPU and massive die area devoted to the GPU. Tegra 3 was never going to beat the A5 on graphics (yet it still comes very close), because Tegra 3 was designed to be more power efficient while boosting performance over Tegra 2 quite a bit, and they achieved that. Just look at reviews.

            • BabelHuber
            • 8 years ago

            I think you misunderstood me a little bit here: I personally think that Tegra 3 is a good SoC for tablets.
            Also, I know that Apple focuses on GPU power because they want the iPad to also be a replacement for mobile gaming consoles.

            But when NVIDIA started designing ARM SoCs, I thought that they would be leaving all other mobile GPUs in the dust performance-wise with their know-how.

            But this didn’t happen; PowerVR is not defeated yet.

            My point simply was that it is not as easy to compete in the ARM market as some people think. AMD cannot expect to release an SoC that surpasses the competition.

            If it was so easy, NVIDIA would have achieved this by now.

            • ronch
            • 8 years ago

            I think it’s also about tradeoffs. I’m not sure if PowerVR is inherently more power efficient than Nvidia’s current designs, though, especially their Fermi-derived designs, although I would think they could dig up their old Riva schematics and get some ideas from them.

            • Silus
            • 8 years ago

            What does Fermi have to do with GeForce ULPs ?

            Just look at reviews of Tegra 3 and check its power efficiency relative to its performance. It’s 2 to 3 times faster while consuming the same as a Tegra 2 SoC. Compared to other SoCs, it’s also around those values in performance increases, while being much more power efficient. It loses to the A5 in GPU tests because the A5 has tremendous silicon area at the GPU’s disposal. Not to mention that it’s always unfair to compare any ARM SoC for Android with anything Apple does, because any software running on an Apple SoC will be optimized for it, while almost nothing is optimized for a specific SoC on Android.
            Would be interesting to see one of those optimized Tegra 3 games, running on an A5 optimized version as well. Now that would be a far more accurate comparison, even though considering the silicon advantage A5 has, I’m sure A5 would still win, but surely by a smaller margin than it does now.

            And there’s all this buzz about Krait and how powerful it is and how it will smash everything in sight, but it’s nowhere in sight and will only come out in 5-6 months… the quad-core version is actually a full year behind. Tegra 3 was smart on time to market, since everyone else is betting on 28 nm designs when 28 nm is still problematic, yield-wise.

            • ronch
            • 8 years ago

            I’m not really familiar with what goes on inside NV’s ARM SoCs, but it’s possible that the graphics part bears some resemblance to their desktop/laptop parts. I don’t know. Or perhaps they’re entirely new designs. Regardless, NV is no stranger to graphics, and if anyone has mastered the art, it’s Nvidia (and ATI/AMD). They can very well design low-power parts based on what they think the market needs.

            • Silus
            • 8 years ago

            But it has no similarities. It’s basically a non-unified design, much like GeForces from pre-8800 days.

            As for designing low-power parts, they already have… GeForce ULP = Ultra Low Power.

            • destroy.all.monsters
            • 8 years ago

            I don’t actually see how they can compete in this space without buying Imagination Tech.

            Since they sold off their previous ARM division to Qualcomm there’s presently no in-house talent (to the best of my knowledge) that’s worked extensively on ARM. Building a new low-power graphics chip for phones and the like is a non-trivial task. At least if they found a way to buy IT they’d get a lot of licensing dollars.

            Which isn’t to say that AMD has the money for IT unless they start playing footsie with ATIC.

            Who it hurts: Intel, Nvidia. Gains them a whole raft of customers.

            • Silus
            • 8 years ago

            Yes, it’s not easy! Many considered Tegra 2 a failure because it didn’t make it into many actual products. But they forget that Android wasn’t designed for tablets, and a version for them had to be developed. Many companies had to wait a long time to release their tablet products because of this. It’s no coincidence that the Android tablet market is so small. It’s certainly not just because Apple floods the market with iPads… it’s also because Android had a very rough start on tablets, and that wasn’t the SoC designers’ fault. The hardware was there and ready… the software wasn’t.

            As for Tegra 3, designing chips is about targets and objectives for a certain market. Tegra 3 isn’t a big departure, GPU-wise, from Tegra 2. It’s the exact same architecture, with more resources thrown in. Tegra 3’s major selling point is really the quad-core CPU (plus a companion core, for power efficiency), which ICS is supposed to take better advantage of. That’s why many reviews point out that Tegra 3 is somewhat wasted under the Transformer Prime’s Honeycomb OS. The hardware is there and ready (once more), but the software is playing catch-up…

      • Silus
      • 8 years ago

      AMD buying something? With what? They barely have the money to keep up their operations, much less buy another company…

      And good luck with x86 doing anything to the ARM ecosystem. That would only happen if the ARM folks stood still. If anything, ARM will punch another hole in the x86 ecosystem, especially in the server market. With NVIDIA’s Project Denver, and HP (I believe) also talking about ARM-based servers, you can bet that x86 will have a hard time in the near future.

    • chuckula
    • 8 years ago

    There’s nothing that Rory Read said that any Intel CEO hasn’t said in the last few years. Sure, Intel pushes x86, but they are more interested in making money than anything else. Intel has been a large producer of ARM chips in the past, and I believe it still holds a license that is inactive right now… but doesn’t have to remain that way.

    • khands
    • 8 years ago

    Honestly I’d really like to see them pushing Zacate and the like further down.

      • NeelyCam
      • 8 years ago

      Me too, but TSMC screwed them over on that.

        • Duck
        • 8 years ago

        The key engineers working on that sort of thing were fired or left. That’s why Brazos’ successor has been cancelled.

    • jdaven
    • 8 years ago

    Looks like we are going to get a Tegra 3-like chip with an ARM Cortex CPU coupled with a Radeon GPU. The safer bet for low power is ARM rather than x86. AMD has better leverage to do this than Intel, in my humble opinion. The following story from AnandTech about the ARMv8 architecture comes to mind:

    http://www.anandtech.com/show/5098/applied-micros-xgene-the-first-armv8-soc

    Part of the story about the Sandy Bridge comparison is crossed out, since APM retracted that part, but if the performance is close (4 ARMv8 cores comparable to 2 Sandy Bridge cores), then ARM will give Intel a run for its money. I stick with my previous stance that the future competitive fight is ARM versus x86, not AMD versus Intel. Whatever happens, it is a good time to be a consumer.

      • Goty
      • 8 years ago

      Saying a “Radeon” GPU is kind of silly since it certainly won’t be anything derived from their current graphics architectures.

        • Game_boy
        • 8 years ago

        AMD’s graphics worth isn’t in its architecture; it’s in its patents. I doubt any company in the world could make a high-performance graphics architecture without licensing AMD or Nvidia patents.

          • Helmore
          • 8 years ago

          What about Imagination Technologies and maybe even ARM Holdings?

        • just brew it!
        • 8 years ago

        I don’t think that’s a foregone conclusion at all. A drastically cut down (chop the number of pixel pipelines to a fraction of the number used for a typical desktop GPU) and downclocked Radeon could conceivably be competitive in the low-power mobile space, both in terms of performance and power consumption.

          • TO11MTM
          • 8 years ago

          Indeed. In fact, due to the very short development cycles of most video cards, they are typically designed in a hardware description language. This allows designers to easily tweak things, as the pieces (i.e. core, memory controller, etc.) are basically ‘functions’; they can piece everything together and possibly tweak some things manually in the final layout if required.

          My understanding is that in the case of CPU architectures (namely x86), things are typically much more hand-tuned at the transistor level, hence the long development time between architectures.

          Also, my understanding is that based on actual computational throughput per watt, the current crop of GPUs is actually extremely efficient…

    • flip-mode
    • 8 years ago

    It’s pretty rare to hear the term decimate used accurately in the literal sense. Bravo. You failed to work in the use of the term vertiginous for an epic win, though.

      • Alexko
      • 8 years ago

      Well, kind of. AMD did remove 10% of its staff, but—I hope—those people are still alive.

        • Palek
        • 8 years ago

        The shocking truth according to Fred: https://techreport.com/discussions.x/22009

        • ronch
        • 8 years ago

        They’re gonna revive Cyrix.

    • Myrmecophagavir
    • 8 years ago

    “We’re going to continue to play those cards, but as you move forward, making sure that you’re able to be ambidextrous is definitely a winning hand.”

    Corporate metaphor overload!

      • bittermann
      • 8 years ago

      Translation: “After the BD fiasco we’ll go whichever way the wind blows, because honestly we have no idea what we’re doing.”

      • shiznit
      • 8 years ago

      No kidding. Where do these people learn to speak like that? I hear it at work every day but this is another level.

        • Wirko
        • 8 years ago

        Wanna learn? Learn from the best:

        http://www.theinquirer.net/inquirer/news/1597046/sap-pull-socks

      • Palek
      • 8 years ago

      That’s not corporate language overload, just plain bad English. There wasn’t enough “exploiting new synergies” or “maximizing shareholder value” or “achieving sustainable growth.” 🙂

      • ronch
      • 8 years ago

      Yeah, this overloads all eight cores!!!
