ARM, GlobalFoundries join forces on 20 nm and FinFET devices

Last month, ARM announced a partnership with TSMC involving sub-20-nm chips with FinFET, or non-planar, transistors. The results of that partnership are expected in 2015, when TSMC ramps up its 16-nm fab process.

Now, ARM has announced a similar partnership with GlobalFoundries. This latest deal covers both next-gen FinFET chips and chips manufactured at 20 nm. GlobalFoundries expects to have 22- and 20-nm fab processes ready for “product introduction” next year, and the company already taped out a 20-nm ARM test chip last December.

Here’s what the new partnership entails, in the jargon-laden words of the press release:

GLOBALFOUNDRIES plans to develop optimized implementations and benchmark analysis for next-generation, energy-efficient ARM Cortex™ processor and ARM Mali™ graphics processor technologies, accelerating customers’ own SoC designs using the respective technologies. The comprehensive platform of ARM Artisan Physical IP for GLOBALFOUNDRIES’ 20nm-LPM and FinFET processes and POP IP products provide fundamental building blocks for SoC designers. This platform builds on the existing Artisan physical IP platforms for numerous GLOBALFOUNDRIES’ process technologies including 65nm, 55nm and 28nm, as well as the Cortex-A9 POP technology for 28nm SLP, now available for licensing from ARM.

Put more simply, ARM and GlobalFoundries say their new alliance will “promote rapid migration to three-dimensional FinFET transistor technology.” Simon Segars, ARM’s Executive VP and General Manager of Processor and Physical IP Divisions, weighs in: “Customers designing for mobile, tablet and computing applications will benefit extensively from the energy-efficient ARM processor and graphics processor included in this collaboration.”

FinFETs are field-effect transistors with a fin-like conducting channel. They’re similar to Intel’s tri-gate transistors—except, of course, those are already in production, powering Intel’s 22-nm Ivy Bridge processors. GlobalFoundries and TSMC are both a ways behind Intel in that respect.

Correction: This story originally stated that ARM and GlobalFoundries planned to collaborate on FinFET devices at 20 nm, which is incorrect. The official announcement mentions collaboration both at 20 nm and on FinFETs. To our knowledge, GlobalFoundries doesn’t plan to use FinFETs at 20 nm.

Comments closed
    • NeelyCam
    • 7 years ago

    [quote<] Now, ARM has announced a similar partnership with GlobalFoundries. This latest deal also involves next-gen FinFET chips, but at the 20 nm node[/quote<] Actually, their announcement doesn't say that their FinFETs will be 20nm.. if you look carefully, they imply 20nm and FinFET aren't the same process (just that there's an easy migration path from 20nm to FinFET). Previously, GloFo has said they'll stay planar at 20nm (although some random comments have pointed to FD-SOI). I would be very surprised if they have 20nm FinFET, except if they make a FinFET-updated version of the 20nm process later, in which case I wouldn't expect it to be available in volume until 2014.

    • ronch
    • 7 years ago

    If GF is gonna have 22nm next year I hope it means we’ll see AMD chips using that process node. They’ll really need it I guess.

    • DarkUltra
    • 7 years ago

    Whaaaaat??? There’s always use for more performance and resources! AI, physics, local voice recognition, sub-pixel rendering, battery life, text-to-speech… Just wait ’til 6+ cores is the majority of gamers’ desktops.

      • Theolendras
      • 7 years ago

      Right, there is always room for improvement, but tell me, what would sub-pixel rendering bring to the table? Why render something you can’t see on the display?

        • Airmantharp
        • 7 years ago

        Sub-pixel rendering is less about static images than it is about properly rendering motion, and it allows more things to be tied to the pixel level properly, like physics. It’s like SSAA but reaches back further into the rendering engine naturally.

          • Theolendras
          • 7 years ago

          Thanks for the straight answer. Still seems somewhat inefficient given we are still far below Pixar quality and even much farther from photorealism. Deep color channels would get you more splash, not to mention that in a tablet format battery life is probably better. Still, I’m sure we’ll get there in time…

      • rrr
      • 7 years ago

      Make it happen now, buy Bulldozer, and enjoy having 6-8 cores which underperform Intel’s 4 ones.

    • jdaven
    • 7 years ago

    I’m looking forward to ARMv8 64-bit processors with 8 or more cores running at 2+ GHz on a 20 nm or less manufacturing process. Talk about a lot of performance in a low power envelope.

      • Farting Bob
      • 7 years ago

      My phone doesn’t need 8 cores, and I can’t think of many things that would benefit from having that many on a handheld, 4″ device. Seems a waste, even in tablets, where the chances of wanting that many cores are slightly higher.

        • WillBach
        • 7 years ago

        Your phone doesn’t need 64-bit addressing, either. He’s looking at either HPC or microservers. That said, the success of SoCs in the server space will have more to do with I/O efficiency and the skill put into the “fabric” than whether the process node has FinFET transistors.

        • Sahrin
        • 7 years ago

        True. The reason x86 outperforms ARM isn’t because x86 is built on a better process, it’s because an x86 CPU’s execution hardware is wider and more powerful (note distinction – it has nothing to do with the ISA, AMD and Intel design mega-wide and ‘ambidextrous’ CPUs that can quickly execute any instruction, no matter how complex). You need complex cores to get the best performance, and ARM’s existing pipelines don’t cut it. The ISA can only take you so far, you have to be willing to spend power and transistors to be able to execute big, fat instructions very quickly.

          • bcronce
          • 7 years ago

          I’m not sure about other bottlenecks, but most servers only need strong integer performance coupled with strong IO. ARM may actually have a slight advantage for x86/x64 for these types of cases.

          Only time will tell.

            • Sahrin
            • 7 years ago

            >ARM may actually have a slight advantage for x86/x64 for these types of cases.

            You’re speaking too generically. It’s possible to build an Intel or AMD-style CPU using any architecture (see IBM’s Power 7) – it is about the actual execution hardware you build, not the language it uses (ISA) to do the math. There is no ARM CPU in existence that can match an AMD or Intel CPU in this way, because ARM CPUs are designed for too simple of workloads.

            Look at a GPU – loads and loads of simple adders. But it can’t outpace a CPU in certain tasks, why? Because sometimes it’s better to be able to do the whole operation in a single, massive “bite” than to try to break it up into smaller tasks. There’s no doubt that there are small, specific workloads for which a given ARM processor running at a given clockspeed on a given process is faster than a given x86 CPU on the same conditions. But it’s not true for all cases, because it can’t be.

            That’s not to say Intel and AMD are savants, either; they have 30 years of building CPUs to run the tasks that software designers demand under their belts, with billions spent analyzing the codebase and developing special solutions in hardware to solve them. A new instruction is great, but remember that the ISA is just an abstraction layer – it tells you how to talk to the CPU, it doesn’t tell the CPU how to perform a given instruction. There’s a reason Intel and AMD (and IBM, come to that) make the best-performing CPUs in the world. It’s not because of market inertia.

            • UberGerbil
            • 7 years ago

            The history of Sun’s Niagara suggests that’s not as true as you think. And “strong IO” requires more than just a handwave; it requires some serious engineering. There were several companies like Smooth Stone that made a lot of noise about ARM-based servers a couple of years ago, got tens of millions in funding to make it happen, and AFAIK haven’t delivered anything yet.

        • BobbinThreadbare
        • 7 years ago

        I would guess server applications would be the place to use it. Of course, Intel gives you more performance per watt anyways.

        • jdaven
        • 7 years ago

        Five years ago we were saying the same thing about desktops, yet here we are.

        And yes, I was mostly talking about servers but I will not deny the opportunity of mobile OSes to leverage that kind of power if they can.

        Would you?

          • BobbinThreadbare
          • 7 years ago

          My desktop still doesn’t need 8 cores. Rarely is more than 2 helpful, really.

            • Majiir Paktu
            • 7 years ago

            You don’t need eight cores for the sake of having eight cores, but you [i<]do[/i<] need eight cores if you want quadruple the performance of a dual-core CPU. We've hit a wall on clock rates where it's easier to just add more cores. It's not inconceivable that a similar (and lower) threshold will be met on mobile devices. The threshold where parallelism is the best option becomes lower as applications increasingly take advantage of it.

      • bcronce
      • 7 years ago

      Sounds like a great firewall/fileserver/webserver/etc.

        • Airmantharp
        • 7 years ago

        Yup. Actually, I don’t really see a place for Intel’s x86 interpretation, if AMD can get their act together with the BD architecture. ARM makes a great case for ‘utility’ servers that literally do nothing except ‘serve’; a proper BD implementation would cover databases better, and anything HPC related will have a GPU-like co-processor, even one from ARM itself.
