ARM links SoC components with terabit interconnect

ARM has its sights set on the high-performance computing market. To help with that mission, it’s developed a new interconnect to link multiple cores on a single SoC. The CoreLink CCN-504 Cache Coherent Network is capable of offering up to a terabit per second of bandwidth to as many as 16 processor cores based on the existing Cortex-A15 or upcoming ARMv8 designs. CoreLink isn’t just for CPU cores, though; the interconnect supports graphics processors, DSPs, and other "accelerators," too. ARM lays out the possibilities in the block diagram below.

CoreLink allows connected processors to access each other’s caches, even in heterogeneous implementations that combine CPU and GPU cores. To bolster that core-to-core communication, the interconnect has 16MB of shared L3 cache of its own. ARM has also designed a new DMC-520 memory controller that plugs right into the interconnect. This controller supports not only existing DDR3 memory, but also the next-gen DDR4 standard.

According to ARM, the CoreLink makes extensive use of power gating and other techniques to curb energy usage. ARM isn’t straying from its low-power roots. The firm has plans for multiple versions of the CoreLink interconnect, which has already been adopted by Calxeda and LSI. According to the official press release, the first products to use CoreLink should arrive next year.

Comments closed
    • rgreen83
    • 7 years ago

    Two things:

    1. A “terabit of bandwidth” is nonsensical; bandwidth is measured as data over time, i.e. megabits per second. I'll assume "terabit per second" is intended.

    2. I realize it's maybe just a matter of personal opinion, but I find it annoying to have to do byte/bit conversion every time a company wants to use a bigger, fancier prefix like the almighty "TERA!" For correctness' and simplicity's sake, "128 GBps" would be better and quite sufficient.
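
    For anyone who wants to check the arithmetic, here is a minimal Python sketch of the bit-to-byte conversion described above (the function names are just for illustration; the 128 figure assumes binary prefixes):

        # 1 terabit per second expressed in gigabytes per second,
        # using decimal (SI) and binary (IEC) prefixes.
        BITS_PER_BYTE = 8

        def tbit_to_gbyte_decimal(tbits: float) -> float:
            """1 Tbit = 10**12 bits, 1 GB = 10**9 bytes."""
            return tbits * 1e12 / BITS_PER_BYTE / 1e9

        def tibit_to_gibyte_binary(tibits: float) -> float:
            """1 Tibit = 2**40 bits, 1 GiB = 2**30 bytes."""
            return tibits * 2**40 / BITS_PER_BYTE / 2**30

        print(tbit_to_gbyte_decimal(1.0))   # 125.0 GB/s
        print(tibit_to_gibyte_binary(1.0))  # 128.0 GiB/s -- the "128 GBps" figure above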

      • cygnus1
      • 7 years ago

      Sorry math hurts your poor little head, but most, if not all, interconnect transfer rates are not measured in bytes per second. They're usually bits, transactions, or symbols per second, definitely not bytes per second. Memory transfer rates, SATA transfer rates, network transfer rates, PCIe transfer rates: nobody measures any of those in bytes.

        • rgreen83
        • 7 years ago

        Not so fast, chief.

        Memory transfer rates are measured in MB/s; see http://en.wikipedia.org/wiki/DDR_SDRAM (the table under spec standards).

        SATA is specced by SATA-IO at Gb/s rates because bigger numbers sound better, but those figures are also misleading due to the standard's use of 8b/10b encoding, so it is equally or more common to see the rates listed as 150 MB/s, 300 MB/s, and 600 MB/s: http://en.wikipedia.org/wiki/Serial_ATA

        PCIe actually references its rate in GT/s (gigatransfers); its data rate has always been described in MB/s: 1.0 - 250 MB/s per lane, 2.0 - 500 MB/s, 3.0 - 1 GB/s. See http://en.wikipedia.org/wiki/PCIE and https://techreport.com/news/14925/pci-express-3-0-to-be-backward-compatible-with-2-0

        Network rates, you got me there; they like their power-of-10 progression, lol.

        And the closest equivalent to this CoreLink, Intel's ring bus introduced with Sandy Bridge, is referred to as being 384 GB/s at 3 GHz, which by the way is three times faster than this interconnect and has been in silicon for over 3 years now. See http://www.anandtech.com/show/3922/intels-sandy-bridge-architecture-exposed/4 and https://techreport.com/review/20188/intel-sandy-bridge-core-processors
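
        As a rough illustration of the encoding-overhead point above, here is a minimal Python sketch (a hypothetical helper, using the commonly quoted SATA and PCIe figures rather than anything from the article):

            # Convert a quoted line rate (Gb/s or GT/s) into usable MB/s,
            # accounting for the link's line-encoding overhead.
            def effective_mb_per_s(line_rate_gbit: float, payload_bits: int, encoded_bits: int) -> float:
                payload_gbit = line_rate_gbit * payload_bits / encoded_bits  # strip encoding overhead
                return payload_gbit * 1e9 / 8 / 1e6                          # bits -> bytes, giga -> mega

            # SATA 3 (6 Gb/s line rate, 8b/10b encoding) -> 600.0 MB/s
            print(effective_mb_per_s(6.0, payload_bits=8, encoded_bits=10))
            # PCIe 2.0 x1 (5 GT/s, 8b/10b) -> 500.0 MB/s
            print(effective_mb_per_s(5.0, payload_bits=8, encoded_bits=10))
            # PCIe 3.0 x1 (8 GT/s, 128b/130b) -> ~984.6 MB/s, i.e. roughly 1 GB/s
            print(effective_mb_per_s(8.0, payload_bits=128, encoded_bits=130))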

    • Game_boy
    • 7 years ago

    How does this compare to the A6’s new interconnect/memory subsystem?

      • Goty
      • 7 years ago

      I don’t think the two are even remotely comparable. I believe the new interconnects are more point-to-point, rather than hub-based like this one.

    • blastdoor
    • 7 years ago

    This wouldn’t be such a threat to Intel if it weren’t for the potential that some very big companies with very big piles of cash might take this kind of thing and run with it. Naturally, I’m thinking of Samsung and (especially) Apple.

    I have to believe that today’s combo of ARM+Apple+$$$$$$ is a much bigger threat to Intel than the 1990s combo of IBM (PowerPC)+Apple+$.

      • kalelovil
      • 7 years ago

      I don’t think Apple cares about the server market any more, and Samsung never has as far as I recall.

      It will be the likes of Google, Amazon, Facebook, HP Enterprise Business, and Dell Enterprise Solutions who will be most interested in this particular solution.

        • blastdoor
        • 7 years ago

        iCloud

          • BobbinThreadbare
          • 7 years ago

          That’s just a service that Apple probably runs on Linux servers if they even run the servers themselves.

            • A_Pickle
            • 7 years ago

            Linux servers? Lol. More like Windows Azure and Amazon Web Services…

            • rgreen83
            • 7 years ago

            Unix surely?

    • dpaus
    • 7 years ago

    Last paragraph, I think you meant:
    "ARM isn't straying from its low-power roots" ("from", not "for").

    • chuckula
    • 7 years ago

    TL;DR version: It’s a shared L3 cache that is about 1/3 the speed of the L3 in a Sandy Bridge.

      • AlvinTheNerd
      • 7 years ago

      It's unusual ground for Intel. They are losing market share to a competitor that provides inferior performance. Intel's method of relentless tick-tock pushing toward more performance isn't necessarily going to help them.

      The competition is now over whether we have rather locked-down CPUs built into very customizable end-user systems, or rather customizable CPUs (from the OEM standpoint) in locked-down end-user systems. In this race, ARM doesn't need to be first to market with any new feature.

      What Intel really needs is a killer app that actually uses what SB can offer.

        • blastdoor
        • 7 years ago

        It's a little premature to say Intel is losing market share to ARM in the server space, isn't it?

          • chuckula
          • 7 years ago

          In a theoretical sense, any ARM server that gets at least one sale means that Intel has lost marketshare. Of course, if Intel sells even one Medfield phone or a single Atom tablet it means that ARM has lost marketshare in the mobile space.

            • willmore
            • 7 years ago

            No. It's not a zero-sum game. If there is no Intel-based server that satisfies a need, then no server gets sold and the need goes unsatisfied.

            Not all decisions are "we must buy a server, buy whatever makes the most sense".

            Edit: News flash, I still can’t spell worth crap.

            • chuckula
            • 7 years ago

            Actually, marketshare most certainly is a zero-sum game:

            1. If there is a world market for 100 servers and Intel chips are used in all 100, then Intel has 100% marketshare.

            2. If Intel still sells the 100 servers, but ARM comes in and sells another server, then Intel no longer has 100% marketshare (it has 100/101 ≈ 99.0099% marketshare).

            3. The entire market (i.e. size of the total pie) can grow in a non-zero sum manner, but marketshare (i.e. your relative slice of the pie) is most certainly zero sum.
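
            As a trivial Python sketch of that arithmetic, using the hypothetical 100-server market from the comment above:

                # Marketshare as a fraction of total units sold.
                def marketshare(own_units: int, total_units: int) -> float:
                    return own_units / total_units

                print(marketshare(100, 100))  # 1.0 -> 100% share before the ARM sale
                print(marketshare(100, 101))  # ~0.990099 -> Intel's share after one ARM server sells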

            • cygnus1
            • 7 years ago

            "3. The entire market (i.e. size of the total pie) can grow in a non-zero sum manner, but marketshare (i.e. your relative slice of the pie) is most certainly zero sum."

            So... like willmore said, it's not a zero-sum game. Market share is a silly thing to quantify anyway; you can divide up sales of stuff into "markets" and make those numbers look like anything you want.

            • BobbinThreadbare
            • 7 years ago

            You’re assuming all servers sold are one market.

            Also, I think Intel would rather have higher margins and make more money with 80% marketshare, so while it might be a zero-sum game, it’s a stupid game to play.

      • Goty
      • 7 years ago

      I don’t think it’s appropriate to compare it to traditional CPU cache. It’s an interconnect with its own cache (i.e. separate from that of the CPU) that can be used to mirror the contents of the various other caches on the SoC. This is obviously a move toward the HPC space for ARM, but even then, is there really enough traffic on an SoC to need an interconnect that wide?
