AMD’s ARM core will be licensed, not AMD’s own creation

Yesterday’s announcement from AMD about its plans to build ARM-compatible Opteron processors is very big news, but it shouldn’t come as a shock, since AMD execs strongly hinted at this move with talk of an "ambidextrous" ISA strategy during the firm’s analyst day earlier this year.

One of the questions that lingered after yesterday afternoon’s press conference was the exact nature of the CPU cores to be used. ARM offers licensees two basic options: rights to use a pre-fab CPU core and build it into a chip, or rights to build your own CPU core that’s compatible with the ARM ISA. Most familiar ARM-based devices these days use the first option, incorporating something like the Cortex-A9 into a larger SoC. A few use the latter option, licensing the ISA and building their own CPU cores; examples include Qualcomm’s Krait, Nvidia’s secretive Project Denver, and apparently Apple’s A6.

AMD’s talk of bringing its expertise with 64-bit CPUs to the ARM ecosystem might have led you to think that it had taken the second path, licensing the ISA, but that’s not the case. Instead, AMD is licensing a 64-bit CPU core from ARM and building it into a chip—which AMD calls an SoC, or system-on-a-chip—that’s compatible with the server-oriented Freedom Fabric interconnect AMD acquired when it purchased SeaMicro. This fabric interconnect will most likely be incorporated into the silicon along with the ARM cores.

We’d expect to see the ARM-based Opterons riding on a modular server card like the one AMD showed off for its x86 Opterons at Hot Chips. These cards plug into high-density servers following the SeaMicro template, and such cards will likely be the basic unit of computing for AMD’s cloud and enterprise solutions going forward. Thus, the, uh, Opter-ARMs may not have any need for compatibility with x86 Opteron sockets or interconnect standards like HyperTransport.

These ARM cores are unlikely to match today’s Xeons and Opterons in raw performance, but they’ll probably consume relatively little power, allowing server makers to pack lots of chips into a single cabinet. The decision to target cloud computing installations, where lots of relatively low-performance cores can serve effectively, seems to fit with that profile.

Since AMD plans to bring its first ARM-based Opteron to production in 2014, the decision to license an ARM core makes sense. Going forward, AMD could potentially decide to build its own ARM ISA-compatible CPU core, but doing so would take time. Also, licensing a core saves on engineering resources, a major constraint at a cash-starved firm. AMD can pursue this avenue while still dedicating resources to its own x86-compatible Opterons, which are not going away.

AMD is unlikely to bring ARM server chips to market unopposed, given the sheer number of ARM licensees in the world. In fact, former Opteron chief Pat Patla defected to Samsung earlier this year, along with some other former AMD employees, amid rumors that Samsung might be building ARM server chips.

Meanwhile, Nvidia was briefly very open about its plans for Project Denver, which include the development of a "fully custom" ARM-compatible core targeted at desktop PCs and servers. We expect this CPU to have relatively high performance targets and to include 64-bit support. If Nvidia has done its job well, Project Denver silicon ought to outperform ARM’s licensed CPU cores. This chip will also have an Nvidia GPU on the same die, possibly making it a candidate to serve in GPU-driven supercomputing clusters, a business where Nvidia and AMD already both play, often together.

Of course, AMD has some natural advantages over such potential competitors in the ARM server space, including its own formidable GCN architecture for GPU computing, an existing Opteron business with all of the attendant relationships with enterprise OEMs, and the SeaMicro infrastructure for dense servers. What it does not have is a head start, given that Nvidia announced Project Denver nearly two years ago and has been developing its Tesla business diligently since then.

The really intriguing larger question is what the creeping competitiveness of processors based on the ARM ISA means for Intel, the x86 stalwart whose high per-chip prices and margins are being eroded on various fronts by commodity SoCs costing $25 or less. The relative openness of the ARM ISA, combined with the constantly plunging costs predicted by Moore’s Law, may be a bigger threat to Intel’s business than any challenge AMD ever mounted with the K7 and K8. Why? Because this time, everybody from Apple to Qualcomm to Nvidia is ganging up with AMD to take on the giant.

Comments closed
    • HisDivineOrder
    • 7 years ago

    Hopefully, ARM performance will continue to improve and MS will change their minds on how they’re treating the ARM-based versions of Windows with consumers. If they built in some emulation of x86 code for ARM-based versions of Windows, that’d be awesome, too.

    • ronch
    • 7 years ago

    Is it just me, or are AMD and ARM using the same font in their corporate logos? I could take ‘AMD’, drop the D, and put an R between the A and M. Voila. Maybe these two companies have been secretly connected all along?

    • NeelyCam
    • 7 years ago

    I’ve been a bit puzzled by this. What is the reason for AMD to go ARM? AMD is one of the few companies with an x86 license and a lot of heavy-hitting IP to support it..

    I guess they have been watching Intel run away in perf/watt, largely because of Intel’s process advantage, and realized they can’t compete there. Maybe at the same time they have realized that their low-power, high-efficiency chip development is lagging behind Intel’s or even the ARM licensees’. With a lack of cash and resources, killing in-house development of low-power chips might make sense (i.e., Jaguar=dead?) since there is an easy-to-obtain alternative available.

    In a way, switching to ARM for low power is a reasonable bet that the marketplace will move to a cheap, open, widely available, and standardized platform. Even if the ARM software ecosystem isn’t there yet in servers, it will be in the future.

    And, although chuckula thinks AMD is in way over its head battling the plethora of ARM licensees, I think AMD has something the others don’t: actual server experience and existing server relationships. For instance, major ARM licensees (other than nVidia) don’t have much experience with high-speed interconnects or high-performance memory interfaces, while nVidia doesn’t have much background in servers yet. ARMAMD vs. nVidia Denver will be an interesting battle for cloud servers, but I think AMD has an advantage…

    Meanwhile an ecosystem battle is fought between x86 (Intel) and ARM (everyone else), where I still think Intel will claim the win because of current infrastructure and the process advantage. But for AMD, it may be better to fight for 20-30% of ARM-based server market than for 1-5% of x86-based one.

      • Geistbar
      • 7 years ago

      I had been thinking on it some as well…

      [quote<]But for AMD, it may be better to fight for 20-30% of ARM-based server market than for 1-5% of x86-based one.[/quote<]
      And this is more or less what I concluded. We’ve been hearing of AMD’s financial issues for many years, but in the last year or so their troubles have really started to take a more concrete toll. The basic cost of competing in x86 might be too high for AMD to sustain with enough strength going forward, while the cost of competing with ARM seems significantly lower. It might not be so much that AMD expects to excel with ARM as that its management believes it’s the only viable path to avoid dying unceremoniously to Intel’s various advantages.

        • NeelyCam
        • 7 years ago

        What I don’t get, though, is why not focus [i<]everything[/i<] towards this new plan, instead of splitting resources between ARM and x86 chip projects?

          • just brew it!
          • 7 years ago

          Because if they do that, and the ARM thing doesn’t pan out, they are left with nothing?

          • Geistbar
          • 7 years ago

          I think that, even if they wanted to put all of their resources into ARM and give up on x86 altogether, the amount of turnover that would require is too large to pull off quickly. They’ll need a lot more people experienced with ARM than they currently have. Also, I expect they concluded that they might as well coast along on their current designs as long as they can; the true test of their dedication here will be whether we end up seeing another brand-new x86 architecture to succeed Bulldozer (instead of what they do now, modifying Bulldozer and working from there).

      • just brew it!
      • 7 years ago

      [quote<]Even if the ARM software ecosystem isn't there yet in servers, it will be in the future.[/quote<]
      It is mostly ready to go now; it shouldn’t take much to ramp up. Linux runs on ARM, and the codebases of all of the major Open Source software stacks are 64-bit clean (Alpha, Itanium, and x86-64 paved the way for this years ago by forcing developers to fix any code that was 32-bit specific).
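      For anyone curious what “64-bit clean” actually means, the classic offense was stashing a pointer in a plain 32-bit integer. Here’s a toy Python sketch of the truncation those porting efforts had to root out (the pointer value is made up for illustration; the real cleanups happened in C code):

```python
# A hypothetical 64-bit pointer value with a bit set above bit 31.
ptr = 0x1_0000_0008

# "Unclean" pattern: storing the pointer in a 32-bit integer field,
# modeled here by masking to the low 32 bits. The high bits are
# silently dropped, so the round trip corrupts the pointer.
stored_in_int32 = ptr & 0xFFFF_FFFF
assert stored_in_int32 != ptr  # pointer is now garbage

# "Clean" pattern: a pointer-sized integer (C's uintptr_t), modeled
# as a 64-bit field, round-trips the value intact.
stored_in_uintptr = ptr & 0xFFFF_FFFF_FFFF_FFFF
assert stored_in_uintptr == ptr

print(hex(stored_in_int32), hex(stored_in_uintptr))
```

      Code like this worked by accident on 32-bit machines, which is why the 64-bit RISC and x86-64 ports had to flush it out of every major codebase.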

      • ronch
      • 7 years ago

      I’m puzzled as to why they didn’t just use the Jaguar/Hondo/whatever Bobcat-derived CPU core they already had for this purpose, namely microservers. Perhaps Bobcat would’ve ended up bigger than an ARM core, but they wouldn’t have to pay any licensing fee, and the value of the x86 ecosystem is there.

        • NeelyCam
        • 7 years ago

        Maybe it was because of power efficiency…? Or some major bugs? We’ve been waiting for true Bobcat follow-up for a long time now..

    • willmore
    • 7 years ago

    Haven’t both TSMC and GF partnered with ARM to develop cores for their processes? If so, I guess we don’t know who will be producing these chips. Come to think of it, what fab doesn’t have an ARM license? Anyone bigger than Microchip?

    I guess that doesn’t help narrow down where the silicon is going to be made.

    Edit: Okay, I think we should all review this as it contains a lot of info:
    [url<]http://www.arm.com/products/processors/hard-macro-processors.php?tab=Why+Hard+Macros?[/url<]
    Check the other tabs as well; there’s a lot in there. Looks like the only fab support they offer is TSMC, but they don’t list the Cortex-A5X processors yet.
    List of licensees: [url<]http://www.arm.com/products/processors/licensees.php[/url<]
    Also, AMD is missing from the list entirely. So maybe it’s not complete or up to date? I’d swear they licensed a core back in June for some security work.

      • Game_boy
      • 7 years ago

      They did. Future AMD processors will have an A5 core in them to implement TrustZone.

        • willmore
        • 7 years ago

        Do you see anything on the ARM website about it, though?

    • Bensam123
    • 7 years ago

    Ahhh, I didn’t know that AMD was simply licensing premade chips… But that most definitely doesn’t mean that they have to continue doing that into the future. My guess is that’s sort of like testing the water. They can hit the ground running with some ready-mades and then pound it with their actual engineering talent when it has had time to spin-up.

      • Game_boy
      • 7 years ago

      Cutting 15% of staff is a great way to ramp up designing new things in parallel with your existing things.

      • just brew it!
      • 7 years ago

      Nit pick: They’re not licensing pre-made chips. ARM doesn’t make chips, they [i<]design[/i<] chips. AMD is licensing a chip [i<]design[/i<] (and presumably adding their own interconnect tech to it).

    • jdaven
    • 7 years ago

    Great write-up of A57 and A53 – ARMv8 64-bit over at Anandtech.

    [url<]http://www.anandtech.com/show/6420/arms-cortex-a57-and-cortex-a53-the-first-64bit-armv8-cpu-cores[/url<]
    I look forward to an x86-free world.

      • sweatshopking
      • 7 years ago

      Y

        • MadManOriginal
        • 7 years ago

        Because that’s what all the cool kids are saying.

          • mnecaise
          • 7 years ago

          yeah. RISC is cool.

          I believe I heard this story before, in the 1990s, when SPARC, Power, Alpha, PA-RISC, and even MIPS were the cool kids on the block, all hanging out together talking about how fuggly x86 was.

            • just brew it!
            • 7 years ago

            And it [i<]was[/i<] pretty darned fugly from a technical perspective. The RISC camp fell into the trap of assuming that the technically superior solution automatically wins. And if you think about it, while the IBM/Intel/Microsoft unholy trinity is what pushed x86 to the top of the heap, it was AMD who actually ensured its continued relevance for desktops/servers in the new millennium, by grafting 64-bit extensions onto an ISA that has roots going all the way back to the 8-bit days of the 1970s.

            • ronch
            • 7 years ago

            [quote<]And if you think about it, while the IBM/Intel/Microsoft unholy trinity is what pushed x86 to the top of the heap, it was AMD who actually ensured its continued relevance for desktops/servers in the new millennium, by grafting 64-bit extensions onto an ISA that has roots going all the way back to the 8-bit days of the 1970s.[/quote<]
            Not to mention, it was also largely because of AMD that Intel kept moving forward and kept its prices sane, which made x86 more affordable to most people and kept competing ISAs from seeping into the desktop space (yeah, ARM is a different topic). Too bad AMD is in shambles today. I hope AMD has learned their lesson and won’t ever again just sit on their butts like they did from 2003-2007.

            • Geistbar
            • 7 years ago

            It’s my understanding as well that most x86 chips today are, internally, more RISC than CISC anyway. I could be wrong however, as I’ve never really read any significant details on that, just bits and pieces.

            • ermo
            • 7 years ago

            It is my understanding that modern x86_64 CPUs decode x86 instructions into micro-ops, which are specific to the chip design in question.

            So modern x86 CPUs are neither purely CISC, which implies an instruction set with ‘a lot’ of instructions, all hardcoded in silicon for a gain in speed at the expense of complexity in chip design, nor purely RISC, which implies comparatively few instructions that serve to simplify and streamline the hardware design, enabling higher clock speeds while relying to a greater extent on compiler optimization.

            And FWIW, I believe that PowerPC (automotive) and MIPS (networking) designs are quite ubiquitous in the embedded world. The Linksys WRT54G series of routers used to be based on a Broadcom MIPS design, and my NetGear WNDR-3700v2 WiFi router is built on an Atheros MIPS design.

            • mnecaise
            • 7 years ago

            All true.

            MIPS missed the boat in the workstation and server world and was relegated to embedded applications, where it works quite well.

            You can still find SPARC in Oracle’s Sun derived servers, and Fujitsu mainframes. Power continues to exist in game consoles and IBM servers (and some mainframes). Both ISAs rule in the big database and banking transaction worlds.

            They’re not gone, but they definitely lost the desktop, workstation, and server wars; x86 and x86_64 clearly have the lead currently.

            • willmore
            • 7 years ago

            Well, they got booted from DEC’s workstations when the ‘future of the VAX’ AXP architecture was developed. That was a huge loss, as they didn’t have many other large customers. Microsoft backed away from supporting them, and that left them in a bad situation in both hardware and software. They’ve always had a ton of embedded wins: PS1, N64, PS2, GC, etc. Oh, I guess they had SGI, so that was something, until SGI left for Intel, too.

            • just brew it!
            • 7 years ago

            SGI actually [i<]owned[/i<] them for a while. Which made SGI's defection to Itanium all the more inexplicable.

            • willmore
            • 7 years ago

            Agreed. You tend to see some (please forgive me) synergy when the company using the chips is also the company making them. I wonder if it was a fab issue.

            • just brew it!
            • 7 years ago

            [quote<]MIPS missed the boat in the workstation and server world and was relegated to embedded applications, where it works quite well.[/quote<]
            More like they were already [i<]on[/i<] the boat, and SGI threw them overboard when Itanium came along. (Anybody who's been following the workstation/server industry for a while knows how that worked out for SGI.)

            As an aside, a project I worked on recently had a 100 MHz MIPS "soft core" processor compiled into an FPGA...

            • willmore
            • 7 years ago

            Sweet. I keep meaning to do more with the little Spartan-3 board I have.

            • just brew it!
            • 7 years ago

            That’s a fair assessment. In a modern x86 CPU, a front-end decoder turns instructions into RISC-like “micro-ops” for execution by the CPU core(s).

            Interestingly, there is also a hidden advantage to doing things this way. The irregular instruction encoding scheme used by x86 is quite efficient at packing machine instructions into the smallest amount of space. And although DRAM [i<]capacity[/i<] has continued to rise rapidly over the years, DRAM [i<]bandwidth[/i<] hasn't come anywhere near keeping up with the increasing speed of CPU cores. So the archaic instruction encoding used by x86 actually helps mitigate the impact of limited DRAM bandwidth on instruction fetching. What we think of as the x86 ISA is actually more like a hardware-based code compression algorithm!
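            To put a rough number on that density argument, here’s a back-of-the-envelope sketch. The x86-64 byte counts below are the actual encoded lengths of those instructions, while a classic fixed-width RISC encoding (like 32-bit ARM) spends 4 bytes on every instruction; the instruction mix itself is arbitrary:

```python
# Compare the encoded size of a small, arbitrary instruction mix under
# x86-64's variable-length encoding vs. a fixed 4-byte RISC encoding.
x86_lengths = {
    "push rbp":      1,  # 55
    "ret":           1,  # C3
    "mov rbp, rsp":  3,  # 48 89 E5
    "add eax, 1":    3,  # 83 C0 01
    "mov eax, 1000": 5,  # B8 E8 03 00 00
}

x86_bytes = sum(x86_lengths.values())
risc_bytes = 4 * len(x86_lengths)  # fixed 32-bit instruction words

print(f"x86-64: {x86_bytes} bytes, fixed-width: {risc_bytes} bytes")
```

            For this (admittedly cherry-picked) mix, the x86 stream is 13 bytes against 20 for the fixed-width encoding. That’s the “compression” effect: denser code means fewer bytes fetched per instruction, which stretches both instruction caches and DRAM bandwidth further.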

            • mnecaise
            • 7 years ago

            I recall that at the time, the RISC machines screamed. Alpha and UltraSPARC were where we would go for performance.

            Problem was, the output of the unholy trinity was cheaper. It took two Pentium II workstations to match the performance of an UltraSPARC; but, I could buy those two workstations with big disks and a ton of memory for less than the cost of a single UltraSPARC. You came out ahead cost-wise, and still got the work done.

            • helix
            • 7 years ago

            Huh, that’s a very interesting perspective.

            The IBM PC was an open design (abandonware, really) that created the whole PC ecosystem. As more of the computer system moves onto a single chip, that chip needs an open design to still allow for an ecosystem of competing compatible parts. (Open to participation, not necessarily gratis or libre.) The alternative would be to move further and further away from the openness of the PC. This strategic advantage may yet again be more important than technical superiority.

            Perhaps Intel’s best move would be a few more x86 licensees. Give up some of the obscene profit margins to keep its position as designer of the dominant architecture, while forestalling the FTC’s big cleaver. Only their quarterly reports keep them from doing it. Of course, they should have done this while Microsoft was still firmly standing one-legged on x86.

    • ptsant
    • 7 years ago

    I’m still not convinced there is an intrinsic advantage in the ARM ISA over the x64 ISA. I mean, low-power ARM chips existed at a convenient time, during the explosion of iDevices and smartphones, while the x86/x64 chips were mainly focused on maximum performance. As x64 chips become low-power and ARM becomes “high performance,” I’d expect the two to converge on many metrics (i.e., high-performance ARM and low-power x64 will be competitive with each other in perf/watt and absolute perf).
    Finally, for what it’s worth, I think most people greatly overestimate ARM performance in absolute terms. The smartphone ecosystems are highly optimized, monolithic designs (kind of like consoles), yet general compute performance is abysmal compared to even the lowest of x64 chips. Performance-per-watt is maybe competitive, and user experience is certainly satisfactory (by design), but intuitively I feel that the whole ARM story is also driven by buzzwords instead of pure technical prowess. If it brings more competition, I’m fine with that…

    • ronch
    • 7 years ago

    On the other hand, Intel disses ARM’s chances.

    [url<]http://www.extremetech.com/computing/130552-intel-dismisses-x86-tax-sees-no-future-for-arm-or-any-of-its-competitors[/url<]
    Well, of course ARM has no future, given Intel's marketing dollars and strong-arm (get it?) tactics. I like being sarcastic, especially when I feel tired.

      • chuckula
      • 7 years ago

      I believe Intel executives who say that ARM is doomed in mobile about as much as I believe ARM executives who say that the PC is dead and that Intel is doomed in servers. Which is to say, not at all.

    • chuckula
    • 7 years ago

    Anand just posted some interesting slides on the topic of 64-bit ARM: [url<]http://www.anandtech.com/show/6420/arms-cortex-a57-and-cortex-a53-the-first-64bit-armv8-cpu-cores[/url<]
    Of course, based on the PowerPoint slides, Intel should be out of business by 2014. In the real world, though, Anand has an interesting observation on the high-end A57 cores:
    "Architecturally, the Cortex A57 is much like a tweaked Cortex A15 with 64-bit support. The CPU is still a 3-wide/3-issue machine with a 15+ stage pipeline. ARM has increased the width of NEON execution units in the Cortex A57 (128-bits wide now?) as well as enabled support for IEEE-754 DP FP. There have been some other minor pipeline enhancements as well. The end result is up to a 20 - 30% increase in performance over the Cortex A15 while running 32-bit code. Running 64-bit code you'll see an additional performance advantage as the 64-bit register file is far simplified compared to the 32-bit RF."
    Basically, what Anand just said is that the 64-bit ARM CPUs in 2015 will have performance that finally outstrips an equivalently clocked Core 2 from about 2006. If you think Intel can't improve Atom to be competitive with these chips, then I've got a bridge to sell you.

      • Game_boy
      • 7 years ago

      They don’t even have to; they’ll have a large process lead (15nm vs 28nm), meaning they can just put in 2x the cores or clock speed and still be cheaper and smaller than anything an ARM vendor can match.

    • chuckula
    • 7 years ago

    [quote<]These ARM cores are unlikely to match today's Xeons and Opterons in raw performance, but they'll probably consume relatively little power, allowing server makers to pack lots of chips into a single cabinet. The decision to target cloud computing installations, where lots of relatively low-performance cores can serve effectively, seems to fit with that profile.[/quote<]
    In late 2014 (at the earliest), it will be questionable whether the ARM cores can match next-generation [b<]Atoms[/b<] in performance, and frankly there's a good chance they won't really draw any less power either. I still find it amusing when the same people who think it is impossible for x86 to scale into a cellphone chip (when it already does, and can even beat ARM chips at battery life) think that the very first iteration of an ARM chip targeted at servers will suddenly take the market by storm.

      • MadManOriginal
      • 7 years ago

      Yup, ARM isn’t a magic instruction set that defies the laws of silicon physics. ARM chips are very low power right now because they’re also very low performance. Even if we set aside process differences, an ARM chip scaled up to deliver equal performance may not be any more power-thrifty than an x86 chip. x86 decode takes very little silicon anymore. Cost? Well, let’s say you use 16 $25 low-power/low-performance ARM chips to equal one $400 high-power/high-performance x86 chip. Is there any real net advantage?

        • Game_boy
        • 7 years ago

        Intel also employs 50x the people, so any realistic efficiency deficit can be overcome quickly.

        • ronch
        • 7 years ago

        If the decode units (CISC x86 to uop converters) are not any concern in terms of power draw, then why did Intel bother adding the uop cache in Sandy?

          • MadManOriginal
          • 7 years ago

          First, I said they ‘take very little silicon’; I suppose you can extrapolate that to power draw if you like. Plus, I didn’t say they are ‘not any concern,’ just that they are a small concern because they take very little silicon. An argument in favor of RISC is that no decode silicon is needed. That made sense back in the 90s, when decode silicon took up a lot of die space; the argument just refuses to die even though decode doesn’t take up much space now.

          uop cache adds overall efficiency and speed, that’s why it was added. Does it save power? Hard to say, because while it means less activity in other parts of the chip, it means more activity in the otherwise non-existent uop cache.

          • just brew it!
          • 7 years ago

          Even if the power draw is small in the overall scheme of things, doing the decode still takes time.

    • ronch
    • 7 years ago

    That Arm & Hammer logo.. Why didn’t Jerry think about an ARM partnership in 1999 when they were working on K8?

      • Scrotos
      • 7 years ago

      Because Intel would have crushed them. Plus, ARM wasn’t as big then, I don’t think. MIPS and PowerPC were all the rage for embedded stuff. The timeframe you’re wondering about…

      [url<]http://en.wikipedia.org/wiki/XScale[/url<]
      [url<]http://en.wikipedia.org/wiki/StrongARM[/url<]
      [i<]DEC agreed to sell StrongARM to Intel as part of a lawsuit settlement in 1997.[3] Intel used the StrongARM to replace their ailing line of RISC processors, the i860 and i960. The XScale effort at Intel was initiated by the purchase of the StrongARM division from Digital Equipment Corporation in 1998.[17] Intel still holds an ARM license even after the sale of XScale.[17][/i<]

    • dpaus
    • 7 years ago

    You know you’re going to get sued for using that ‘Arm & Hammer’ logo without permission, right?

      • no51
      • 7 years ago

      Not if they make it into a clickable image that sends them to the Arm & Hammer website. Then it’s advertising.

      • BobbinThreadbare
      • 7 years ago

      I don’t think that’s actually true.

      A trademark doesn’t grant those rights. TR is only not allowed to use a TM in a way that would create customer confusion or something like that.

        • Scrotos
        • 7 years ago

        Obviously you’ve never visited the Arm & Hammer Technology Blog!

        • dpaus
        • 7 years ago

        Obviously you’ve never visited [url=http://www.techdirt.com<]the TechDirt website[/url<] either!

          • BobbinThreadbare
          • 7 years ago

          I do; they talk about the difference between copyright and trademark a lot.

          I guess you *can* be sued for using a trademark, but they can’t win.

          Copyright is a whole other issue.

        • BIF
        • 7 years ago

        Under fair use, I believe TR can use that image. It’s not like TR has made it into TR’s own label or masthead.

        Besides, every time I see “ARM” in a CPU context, I think of the iconic Arm and Hammer image anyway. I guess maybe I should be sued for my photographic memory.

          • ludi
          • 7 years ago

          Fair use (in this case, parody) is an affirmative defense. So AFAIK if the Church & Dwight Company issued a cease and desist letter and TR refused to take it down, they could still be sued and have to go through the entire process of defending the suit in order to obtain a favorable ruling. IOW it’s probably not worth the trademark owner’s time and money to file a suit in this instance, but if they did, it’s probably not worth TR’s time and money to defend it.(1)

          (1) As always, legal advice on the Internet: worth what you paid for it.

    • Sam125
    • 7 years ago

    Thanks for the clarification, Scott!

    • codedivine
    • 7 years ago

    I am not very sure that there will be ARM+Radeon APUs. My interpretation of the slides was that APUs will remain x86-only; they did not say so explicitly, though, so it is open to interpretation.

    Anyway I am a little tired of seeing AMD roadmaps, future technology announcements and vision statements. We have seen a lot of those over the years, which are then silently updated/delayed. I hope this time will be different and hope that the new executive team will execute on their promises.

      • sschaem
      • 7 years ago

      You need a slow ARM server chip ?

      • chuckula
      • 7 years ago

      You are right: the APU side is still x86, and I would estimate it would take a minimum of 4 years to get out an ARM-based APU that is competitive with Trinity (maybe 3 for a Brazos-like APU). Throwing ARM cores in doesn’t magically bypass the design process for a complex chip. Just ask Nvidia how long it’s taken for Denver to show up….

      EDIT: Again with the down thumbs: Please ARM fanbois, tell me how I’m factually wrong and how *exactly* AMD can come up with a complex ARM based APU in 1/2 the time it took Nvidia to do Tegra… seriously.

        • NeelyCam
        • 7 years ago

        [quote<]Again with the down thumbs[/quote<]
        You know whining about downthumbs will only get you more..

          • chuckula
          • 7 years ago

          Just like begging for them only gets you more 😉

          I don’t even mind the downthumbs if there is a rational response, but usually the logical train of thought is: ARM put up a Powerpoint slide, therefore WIN!

        • MadManOriginal
        • 7 years ago

        I don’t know that people are so much ARM fans as they are Intel and x86 haters. The only rationalizations I ever see for that are ‘Intel is big,bad,evil!’ (meanwhile, they drove performance/price and performance/watt to incredible levels the last 6 years) and the vague ‘competition is teh good!’ (yes, it is or at least can be, but that’s no reason to hate Intel.)

    • ronch
    • 7 years ago

    The last sentence there pretty much confirms my comment on yesterday’s article, when I said the industry is kind of resetting itself. I’d love to see the industry transition to something non-x86. Now, I’m not going to go ranting about how Intel is monopolistic and all, but that’s really what’s been happening. Intel’s efforts to lock everyone out of x86 are now starting to backfire as the entire industry gangs up against them. Personally, I’d love to have my future computers powered by ARM (actually, for some strange reason I want SPARC.. it kind of intrigues me, plus I think it’s an open architecture), as long as someone provides an excellent emulator to run my legacy Windows/DOS apps for as long as I care to use them.

      • chuckula
      • 7 years ago

      [quote<]I'd love to see the industry transition to something non-x86.[/quote<]
      Why exactly? As a Linux user, I'm much, much happier installing a single distro image onto a boring standardized x86 box with a standardized driver model than hunting around for the perfect image that kind of works with whatever ARM board I'm trying to use. I have personal experience in this area hunting around for images that install on my Raspberry Pi. It is a much less flexible process than installing an OS on any standard PC, and the driver issues make PCs look like a picnic in comparison.

      What's the draw for ARM? It's only "open" insofar as companies can pay for licenses and then make fragmented, only partially compatible ARM-flavored platforms that often have much more proprietary interfaces than any x86 box. The power-saving features are nice on cellphones, but there is absolutely no evidence that a high-performance ARM box will use less power than what we already have. As for price, it's the same thing: small ARM SoCs are certainly cheap, but if you scale them up to larger sizes, the prices are going to have to go up too.

      If you think that ARM is such a beautiful instruction set, then what are you doing that requires you to spend so much time writing assembly? Have you seen the complaints from long-time ARM developers about ARM64? That new instruction set is by no means meeting with universal acclaim from people who deal with ARM software development. Also, if you want ARM to have performance like x86's, then get ready for massive changes in the ISA, which will be required to introduce real vector instruction sets.

      ARM is a good platform in certain areas, but unless and until it can show that it is truly open and standardized in the way that x86 is, I have no desire to replace any serious piece of computing hardware with an ARM chip. It's fun in a cellphone or a Raspberry Pi for experimenting, but there's no way I want those limitations standing in my way in a real computer.

      EDIT: I'm willing to bet good money that I've hacked more ARM hardware and written more software for ARM than all of the people who down-thumbed me combined... and people call me a fanboy? Really?

        • ronch
        • 7 years ago

        I only mentioned ARM because it’s what’s gaining traction today. SPARC is actually what intrigues me, perhaps even PowerPC which some claim has little in the way of old baggage. And no, I totally ignored the Linux camp in my earlier comment because I’m talking hardware here and what I think the hardware industry should transition into. What do you think, Chucky?

          • chuckula
          • 7 years ago

          I’m a fan of x86 not because it is theoretically beautiful, but because it is so standardized and easy to work with in an open manner. I can and have taken a drive with Linux out of one x86 box with an Intel CPU, slapped it into another box with an AMD CPU, and booted up just fine.

          Are there fewer players in the x86 world? Sure, but the amount of compatible hardware available is vast and, more importantly, the entire platform is designed from the get-go for interoperability. Don’t believe me? Plug that AMD GPU into a PCI express slot on a system made by Big Bad Intel… and watch it work just fine. I dare you to try the same move with a “project Denver” processor or try putting an Nvidia GPU into a next-generation AMD ARM system. Even if the physical interconnects are present (which they likely won’t be) the software issue will make writing drivers for the PC look like a cakewalk.

          I wouldn’t mind if you could get the PC experience from Power, SPARC, ARM, etc. In some ways you can with Power & SPARC, assuming the hardware is relatively similar, but those platforms operate in very niche markets. You really can’t get a PC-like experience with ARM outside of cloning software between identical pieces of hardware. Love or hate x86, it is a standardized platform that has done wonders for a large number of companies, AMD included. You might not like Intel for whatever reason, but be careful what you wish for with Apple & Samsung dominating the ARM market: you might find that things really weren’t as bad as you thought they were.

            • ludi
            • 7 years ago

            In fact, ARM meets those same requirements, just not quite in the place you’re looking for it (traditional desktop PC). The majority of all smartphones and tablets on earth are running some variant of an ARM processor and are rapidly replacing the desktop PC in the consumer space.

            Also, the markets that are being targeted for ultra-dense cloud computing clusters are not beholden to x86 because the data correspondence with the user is primarily over the Internet. The OS and ISA used on each end is arbitrary.

        • bjm
        • 7 years ago

        I’m not sure why your posts are getting thumbed down as much as they are. I, for one, completely agree with you. ARM is only a standardized CPU instruction set; that’s it. It doesn’t have standardization for booting, motherboards, basic initialization and I/O, etc. That’s the exact reason why CyanogenMod versions are released for each individual phone (on top of other Linux ABI issues).

        Personally, I’m rooting for x86. I love the fact that x86 has an awesome set of applications already running on it. It took Windows 8 to *really* bring the PC to the tablet world, but it’s still awesome to play Starcraft 2 on that same tablet. Or even get an emulator and copy your Playstation 1 and 2 games to the tablet, grab a controller, and play it on an airplane.

        And despite the outcry over SecureBoot, it’s the Windows 8 x86 tablets that you can install traditional Linux on. While Linux still has ways to go in the touch capability department, I still want to use it and an x86 tablet is the best device to do it on. Ubuntu can now be installed on the Nexus 7, but it lacks the power I want.

        And unlike in the x86 world, due to the lack of platform standardization in ARM, Ubuntu’s work on the Nexus 7 doesn’t translate to being able to install it on something like the Samsung Galaxy Tab. Each ARM device is on its own island. Once you achieve compatibility on one device, it’s another swim and another round of hard work to do the same for the next (again, look at the CyanogenMod project).

        As it stands now, x86 offers the best combination of flexibility, compatibility, and power. ARM is still winning in the power efficiency/mobile department, but I’m not counting Intel out of that race either. Valleyview is going to be awesome for Linux and Windows 8.

        Ideally, I would’ve liked both AMD and Intel to be representing x86 with good products, but I’m not going to fault Intel for AMD’s failure to execute. If Intel is going to be the lone forerunner for x86, then so be it. Until ARM either thoroughly trounces Intel in performance or offers something seriously compelling, I’m not jumping ship from x86 on any tablet, desktop, or server.

        (Man, did I really type all that?!)

    • dpaus
    • 7 years ago

    ‘OpterARM’ – love it!

    What’s not clear is the extent to which AMD can leverage what it has learned in multi-core power gating to apply to its new ARM chips. And I think it’s becoming even more clear that OpenCL-based GCN will be the back-end; they can bring a lot of power-control technology to that area too, which could still give them a significant competitive advantage vis-a-vis Nvidia.

      • ronch
      • 7 years ago

      Opterarm… Sounds like a name that can only be conceived by some crappy company in China.

        • dpaus
        • 7 years ago

        Hmm, makes me think of Shiva 🙂

        • MadManOriginal
        • 7 years ago

        Or Sunnyvale, CA. Bazinga!

    • chuckula
    • 7 years ago

    You called it perfectly Scott. It’s not really a surprise since these things are supposed to be “in production” in 2014, which is the absolute earliest we’ll be seeing any 64-bit ARM products on the market. AMD wouldn’t have time to make a customized core even if it wanted to on that schedule.

    The term “in production” could be spin too… Intel says that Haswell is going into production this year but we all know that you won’t be able to buy one until well into 2013. The same might apply for these ARM chips.
