Inside Intel’s Atom C2000-series ‘Avoton’ processors

We covered some of the basics of the chip known as Avoton, Intel’s low-power and low-cost system-on-a-chip based on the next-generation Atom CPU core, earlier this summer. Today, Intel is officially unveiling the Atom C2000-series products based on Avoton, so we have the opportunity to offer a little more detail about this distinctive new SoC.

Although Avoton is based on a low-power CPU core, its mission is not to power a new generation of mobile devices. Intel has another SoC, code-named Bay Trail, for that market. Instead, Avoton is aimed at various spots in the data center where Intel’s Xeon processors are either too big or too expensive to serve. Among them: the emerging breed of rack-based systems known as microservers and enterprise storage applications. Avoton also has a sister chip, known as Rangeley, that’s based on very similar silicon but is intended for networking and communications devices.

The chip

Intel produces this SoC on a custom-tuned variant of its 22-nm fabrication process, which has some of the finest geometries in the industry and is the first process to adopt a “3D” or FinFET-style transistor structure. We’ve already seen quite a few bigger cores manufactured at 22 nm, but the benefits of this process are arguably most notable for low-power chips like Avoton. Intel is taking full advantage of its celebrated manufacturing edge here.

Avoton is a true system on a chip, with everything one might expect from a traditional server-class system integrated into a single die. Nevertheless, Intel has a separate name for the platform on which Avoton resides: Edisonville. Thanks to Avoton’s extensive integration, the Edisonville platform’s footprint is about the size of a credit card, including memory and external I/O connections.

You can expect to see fully functional Avoton-based computing nodes mounted on compact cards that will plug into microserver enclosures. Part of the appeal of such systems is the ability to cram lots of nodes into a single rack. With the right balance of resources and the right application, such a deployment has the potential to offer higher compute density and better power efficiency than a rack of traditional Xeon-based servers.

The block diagram above is the most granular look we have from Intel at the layout of the Avoton SoC. You can probably pick up the basics just by scanning it, but we’ll cover some of the highlights in a little more detail.

Each of Avoton’s eight CPU cores is based on the brand-new Silvermont microarchitecture, which Intel unveiled earlier this year. Silvermont is the first true reworking of the Atom microarchitecture since its beginnings, and it is a new-from-the-ground-up design with a renewed emphasis on per-thread performance. Gone is the simultaneous multithreading used in the old core; Silvermont extracts instruction-level parallelism via out-of-order execution instead.

Of course, Silvermont retains full compatibility with Intel’s x86 ISA, including support for newer instructions up to the level of the big Westmere core, and it’s capable of true 64-bit addressing. Intel likes to emphasize both of those attributes, an indicator that ARM-based SoCs are the true competitive target for Avoton and Rangeley.

Silvermont’s cores are grouped into dual-core modules with 1MB of shared L2 cache, as shown above. Avoton has four of these modules, and each one talks to the rest of the world via a bit of glue known as the Silvermont system agent. The SA enables multi-core Atom SoCs like Avoton; it is a modular design that can scale up and down as needed. (We can presumably expect the Silvermont SA to make an appearance in SoCs like Bay Trail, as well.) The system agent maintains coherency among the modules’ four L2 caches and routes requests to the rest of the system.

The front-side bus used in prior Atoms is now well and truly gone, replaced by an Intel-developed interconnect known as IDI. This point-to-point link has been used in Intel’s larger cores since Nehalem, and in Avoton, it delivers up to 25.6 GB/s of bandwidth via a crossbar-style fabric.

That 25.6 GB/s figure is no accident; it’s also the peak throughput of Avoton’s dual-channel memory interface, which supports DDR3 and DDR3L DRAM at speeds up to 1600 MT/s. External memory gets robust data integrity protection, including SEC-DED ECC. When configured with two DIMMs per channel, a single Avoton node can support up to 64GB of physical memory.
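That peak figure is simple arithmetic: two 64-bit channels moving 1600 million transfers per second. A quick sketch of the math:

```python
# Peak bandwidth of Avoton's dual-channel DDR3-1600 memory interface.
channels = 2
bus_width_bytes = 64 // 8       # each DDR3 channel is 64 bits wide
transfers_per_sec = 1600e6      # 1600 MT/s

peak_bw = channels * bus_width_bytes * transfers_per_sec
print(peak_bw / 1e9)            # 25.6 (GB/s), matching the IDI figure
```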

Avoton’s integrated south bridge, which Intel has dubbed a “south complex,” packs in quite a bit of connectivity, including 16 lanes of PCI Express Gen2. The Avoton team chose to use the older PCIe Gen2 standard for a quick time to market, at least in part. I suspect the 64 Gbps of effective bandwidth provided by those lanes will suffice for the vast majority of roles this chip will play. The chip has four separate PCIe root ports, each with four lanes, that can be combined into a single x16 connection, a dual-x8 config, and so on.
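The 64 Gbps figure falls out of the Gen2 link parameters: each lane signals at 5 GT/s, and 8b/10b line coding leaves 4 Gbps of payload per lane, per direction. A quick sketch:

```python
# Effective bandwidth of Avoton's 16 lanes of PCIe Gen2, per direction.
lanes = 16
signaling_gtps = 5.0        # PCIe Gen2 runs at 5 GT/s per lane
encoding_eff = 8 / 10       # 8b/10b coding: 8 data bits per 10 line bits

effective_gbps = lanes * signaling_gtps * encoding_eff
print(effective_gbps)       # 64.0 (Gbps)
```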

That PCIe bandwidth shouldn’t be needed for Ethernet networking, since Avoton’s south complex also includes four GigE connections. In fact, those connections support an early pseudo-standard Ethernet speed of 2.5 Gbps, so they can offer 10 Gbps in aggregate across four ports when connected to a switch that supports that data rate.

The communication-focused Rangeley variant of the SoC includes an extra bit of logic, as well, represented in the diagram above as “QAT accel” and better known as Intel QuickAssist Technology. This is a hardware block dedicated to the acceleration of a host of popular data encryption algorithms, to allow for higher throughput while relieving the CPU cores of the burden. Intel says it provides an API for making use of this hardware and has already enabled direct access to the acceleration hardware via open-source frameworks. Although this is the first iteration of QuickAssist of which we’re aware, the technology is a scalable “building block” and could be expanded in future implementations in order to achieve higher throughput.

Perhaps the most intriguing part of the Avoton south complex, though, is the interconnect that ties everything together. Called IOSF, for Intel On-Chip System Fabric, its presence is a clear indication of how far Intel has moved toward the SoC-style of modular chip design. This common communication fabric enables the company to re-use functional blocks across multiple chip designs. Any component that speaks this common language should, in theory, drop into a new SoC layout with relative ease. Today’s Haswell client chips make use of IOSF, as do all of Intel’s platform controller hubs. Going forward, Intel says “everything being developed now” will employ IOSF.

IOSF fully supports PCI Express headers and ordering rules, and it looks like a PCIe device to software, so it shouldn’t require any special support. We don’t have all of the details, but IOSF is apparently a fairly wide interface that runs at clock frequencies up to 400MHz. It can be scaled back to save power when needed, and in Avoton, it has been. In fact, two separate speeds of IOSF fabric are deployed in the south complex.

In addition to PCIe, Ethernet, and SATA, the south complex supports a host of different types of legacy PC I/O. The block that facilitates this legacy I/O also includes the chip’s power management controller. Having the power control unit on-chip should allow for faster state transitions and more granular power gating than would be possible with the external PMIC used by most SoCs.

In fact, Avoton truly can “shift power around” on the die depending on the current usage scenario, to make full use of its power envelope. For instance, the chip can take better advantage of its Turbo clock speed headroom when fewer I/O connections are in use.

Speaking of power management, although Avoton is a low-power solution, Intel makes some sharp distinctions between this chip’s dynamic voltage and frequency scaling behavior and the way a chip for mobile devices might be tuned. The Avoton team wasn’t as willing to trade additional wake-time latency for incremental savings in idle power. For instance, capturing a savings of 50 mW at the expense of 100 microseconds of wake latency might work well for a phone, but it can mean dropped packets in a server. Avoton’s DVFS policies are very similar to the Xeon’s, in order to avoid such problems.
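The tradeoff can be framed as a simple break-even calculation: a deep idle state that saves 50 mW only pays off when idle periods are long relative to the energy spent waking up. A hedged sketch using the 50 mW and 100 µs figures from the text; the power drawn during the wake transition is a purely illustrative assumption:

```python
# Break-even sketch for a deep idle state (illustrative numbers only).
power_saved_w = 0.050        # 50 mW saved while in the deep idle state
wake_latency_s = 100e-6      # 100 us to wake back up

# Assume (hypothetically) the chip burns an extra 1 W during the transition.
wake_power_w = 1.0
wake_energy_j = wake_power_w * wake_latency_s    # 100 uJ per wake

# How long the chip must stay idle for the saving to repay one wake:
break_even_s = wake_energy_j / power_saved_w
print(break_even_s * 1000)   # 2.0 (ms)
```

For a phone that dozes for seconds at a time, a 2 ms break-even is trivially cleared; for a server fielding packets every few microseconds, the state never pays for itself, which is the tuning difference Intel is describing.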

Unlike the Xeon, though, the Avoton SoC is intended only for single-socket systems. When asked about the prospects for multi-socket systems, Intel’s Brad Burres allowed that multi-socket SoCs of this class are possible in the future. He cast doubt on their prospects, though, by pointing out that the socket-to-socket interconnect burns 5-10W of power, a cost that is “easy to amortize” on a big Xeon but more difficult to justify in this class of chip.

The Atom C2000 series

Intel is offering a host of Atom C2000-series products based on Avoton and Rangeley. As you can see, the power envelopes range from 6W to 20W, with four to eight CPU cores. All of these models are based on the Avoton and Rangeley dies, which natively have eight cores. Those with lower core counts just have one or two dual-core modules disabled. The fastest versions have base clocks of 2.4GHz and Turbo peaks just a smidge higher, at 2.6GHz.

We haven’t tested an Atom C2000 ourselves (yet?), but Intel has provided a few performance numbers that offer a sense of what to expect. The rise in Stream performance compared to the older “Centerton”-based Atom S1260 is substantial. I expect the gain comes in part from Avoton’s dual channels of memory at 1600 MT/s and in part from architectural changes, with more cores and more internal bandwidth via the system agent and IDI.
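For context, Stream scores boil down to timing simple memory-bound kernels and counting bytes moved. A minimal sketch of the “triad” kernel, as an illustration of what Stream measures rather than the official benchmark:

```python
import numpy as np
import time

n = 10_000_000
b = np.random.rand(n)
c = np.random.rand(n)
a = np.empty_like(b)
scalar = 3.0

start = time.perf_counter()
a[:] = b + scalar * c                # the STREAM "triad" kernel
elapsed = time.perf_counter() - start

# Triad touches three arrays per element: read b, read c, write a.
bytes_moved = 3 * n * a.itemsize
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s")
```

A number like this is bounded by the memory subsystem rather than the ALUs, which is why wider, faster DRAM channels show up so directly in the score.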

Since these performance numbers are only relative, we can’t compare bandwidth to Xeons and Opterons running Stream, unfortunately.

Here’s a look at integer computation performance. Again, the improvement from the prior generation is over 4X. Obviously, the Marvell SoC based on quad ARM Cortex-A9s is overmatched, partially because it simply can’t accommodate enough RAM to run four threads simultaneously.

Then again, one gets the sense that Avoton’s true competition will be based on ARM’s Cortex-A57 core, with true 64-bit addressing via the ARMv8 ISA and copious amounts of bandwidth courtesy of the truly impressive “uncore” complexes ARM is now licensing to its partners. AMD and others have products in the works that should match up much better against Avoton, at least on paper.

I suppose that’s a big part of the story here: by delivering Avoton-based products today, Intel is well out ahead of its competition in a market where it has perceived a threat to its business. The interesting question now is whether Avoton’s apparent advantages in terms of compatibility, performance, and availability will be enough to head off the threat from a host of ARM-based SoCs that will surely be inexpensive, power efficient, and tailored exceptionally well for specific uses. At the very least, Intel isn’t making it easy for them.

Comments closed
    • bowman
    • 6 years ago

    ‘The Avoton team chose to use the older PCIe Gen2 standard for a quick time to market, at least in part.’

    HAHAHAHAHA.

    No, the only reason for this is market segmentation. Intel is the world champion of screwed-up market segmentation. Oh, you want feature x? Better get the other server, the one that’s twice as expensive.

    They did it back in the early 2ks before AMD mopped the floor with them, and since Nehalem they’ve been back at it again.

    All ARM needs to compete with Intel is for their 64-bit arch to finally get ready, and to actually supply all modern features to people who want them. You know, what companies do when they want to attract customers.

      • chuckula
      • 6 years ago

      One part about market segmentation is that there needs to be a market to segment.. where is the Intel microserver part with PCIe 3.0?

      Oh and where are all those amazing ARM server parts with PCIe 3.0 and 64 bit? What was that? Something about showing up in late 2014?

        • Flatland_Spider
        • 6 years ago

        This would be an ideal CPU for when the work is going to be offloaded to something else, like a GPU, so it’s valid criticism.

      • Flatland_Spider
      • 6 years ago

      The one that caught my eye was the QAT accel feature. I like how they assume that’s not needed in a server.

    • kamikaziechameleon
    • 6 years ago

    ARM sure has been an interesting experiment. I think that intel is fighting on so many fronts. Meanwhile their Ivy bridge-E line is a joke.

    If AMD puts together a reasonably priced and decently performing desktop part, it could undercut what Intel has in the 100-300 dollar market. The high-end consumer market seems void of real products right now; Xeon seems the way to go there.

    Mobile AMD has a decent offering that was only recently overcome by Haswell, and now Atom is actually becoming a not-useless consumer device component to rival ARM.

    The consumer wins 🙂

    • Unknown-Error
    • 6 years ago

    Bye bye ARM or should I say ARM-less

      • HisDivineOrder
      • 6 years ago

      ARM isn’t going anywhere so long as Intel refuses to lowball on pricing.

    • btb
    • 6 years ago

    Would love to see Synology or Qnap start using the Atom C2550 in their NAS boxes ([url<]http://ark.intel.com/products/77982/Intel-Atom-Processor-C2550-2M-Cache-2_40-GHz[/url<]). The AES-NI instruction set should help raise their encryption performance a lot. ECC support is a nice bonus as well, for peace of mind.

    • Wirko
    • 6 years ago

    The die shot vs. the [url=http://en.wikipedia.org/wiki/Flag_of_England<]flag of England[/url<]: is it a sort of a sarcastic tribute?

      • Chrispy_
      • 6 years ago

      I would imagine that the technical expertise responsible for this Silvermont is Israeli, since that’s where all Intel’s brand new architectures have been developed since the days of Core2, I think.

      Ireland is the closest Intel facility to England, unless there’s one in England that I haven’t heard about.

        • NeelyCam
        • 6 years ago

        [quote<]where all Intel's brand new architectures have been developed since the days of Core2, I think.[/quote<] I thought Nehalem and Haswell were developed in Oregon..?

          • Chrispy_
          • 6 years ago

          I’m not saying you’re wrong, but all the articles I read on Intel’s architecture development usually involve Shlomit Weiss or Yoav Hochberg, both of whom are directors of Intel microprocessor divisions, and both of them are based in Israel.

          A quick Google pulls up plenty of matches for Haswell and Israel.
          A quick Google of Haswell and Oregon just brings up articles about how Haswell was [i<]named[/i<] :meh: I can't be arsed to look it up in too much detail, I have beers to drink before the sun goes down!

            • NeelyCam
            • 6 years ago

            At least Nehalem-EX was developed in Oregon:

            [url<]http://blogs.intel.com/technology/2009/03/from_pentium_pro_to_nehalem_th/[/url<] Some Daily Tech comments claim Nehalem was developed in Oregon: [url<]http://www.dailytech.com/Intel+Slates+Nehalem+for+Q4+2008/article9417.htm[/url<] This article talks about Intel's Israel design team: [url<]http://www.zdnet.com/israel-inside-a-history-of-intels-r-and-d-in-israel-7000003122/[/url<] They don't mention Nehalem, and say Haswell was "largely" designed in the USA. Wow, that was a waste of my time.

            • chuckula
            • 6 years ago

            Since the days of Core it has flipped back & forth:

            Core & Core 2: Israel.
            Nehalem/Westmere: Oregon.
            Sandy/Ivy Bridge: Israel.
            Haswell/Broadwell: Oregon.
            Sky Lake/Sky whatever [Neely insists it isn’t Skymont]: Israel.

            • Damage
            • 6 years ago

            The Atom architectures are designed in Austin, Texas.

            • travbrad
            • 6 years ago

            I guess that’s why Atom was still stuck on 32nm. They say everything is bigger in Texas.

            • NeelyCam
            • 6 years ago

            Some Intel engineer said in an AMA that it’s not Skymont.

            To me it would make sense it’s not Sky-something, considering how the naming has been since Sandy Bridge. If anything, I would guess it’s “Something Lake”

            • chuckula
            • 6 years ago

            Ah… then it’s Groom Lake [better known as Area 51].

            • Diplomacy42
            • 6 years ago

            how about Montlake?

            • NeelyCam
            • 6 years ago

            It integrates EVERYTHING

        • Wirko
        • 6 years ago

        The competitor’s headquarters is there, that’s what I was referring to.

          • Chrispy_
          • 6 years ago

          Oh, ARM? 😀

          I don’t think Intel have to worry too much as long as they hold the x86 license. It would take Apple and Microsoft to simultaneously screw up before x86 architecture is worthless.

          Oh wait, Microsoft have screwed up big time, and the RDF was lost with Steve!

      • willmore
      • 6 years ago

      It’s Denmark, you insensitive clod!

        • NeelyCam
        • 6 years ago

        Oh. I thought Denmark was the northern region of Germany

          • willmore
          • 6 years ago

          Oh, snap!

    • Aliasundercover
    • 6 years ago

    Intel clearly intends for no one to make any money selling ARM for server applications. This is no wait and see, more like mobilize the panzers to kill them on the beach.

    It will be interesting to see how much Intel lets this compete with their high margin server processors. I guess the obvious move is make single threaded performance expensive and lightweight cores cheap. ARM will be stuck fighting over the bargain bin much like AMD.

    If your application is well served by a flock of pedestrian CPUs you will see nice bargains while the battle rages. In the near term my money is on Intel. In the long term they will smash in to the Good Enough Wall where any CPU is good enough so no one cares.

    • ronch
    • 6 years ago

    Wasn’t Kabini supposed to come out months ago? With OEM adoption of Kabini this slow I bet Avoton-based devices would be flooding the market before AMD could even ship a truck-load of Kabinis. What’s going on, AMD? Is TSMC holding up your orders? Is Intel up to their old tricks again? Are you concentrating too much on consoles while neglecting PC-bound Kabinis? I’d personally buy a Kabini 2.0GHz machine if I were out for that sort of machine. Hey, Warsam… what’s up?

    Edit – Point is, guys, that Kabini/Temash are based off the same Jaguar cores and they’re rarer than hen’s teeth. The first chips based on Intel’s next Atom core are starting to appear and Jaguar is really yet to make a strong presence.

      • Klimax
      • 6 years ago

      Would Kabini even fit the market Avoton is going for?

        • Hattig
        • 6 years ago

        Kabini has ECC support, so yes. Kabini does also provide an on-board OpenCL co-processor.

        But let’s be honest here, Intel has made a low power, multi-threaded chip that’s perfectly suited for what it is targeted at – low power servers that aren’t doing OpenCL/GPGPU. It turbos to a higher speed than Jaguar can achieve, and probably uses a lot less power because of 22nm, although probably not that significant.

        AMD hasn’t made a server-centric SoC. It’s not even making enough of the damn generic things for consumer products. AMD should have done an octo-core GPU-less Jaguar chip for servers, but it didn’t. There’s just no vision in the company. They have all the ingredients for a range of killer SoCs each targeting specific markets, and they make one generic SoC instead.

          • Klimax
          • 6 years ago

          That was my point. The markets these chips go into have somewhat different requirements and expectations. GPU/OpenCL not included. (The HW block still murders it on efficiency and performance.)

      • chuckula
      • 6 years ago

      Kabini is available, although it’s not in hundreds of products and Kabini is really intended to be used inside of a finished product as opposed to being a DIY platform (although those do exist).

      The real issue isn’t Kabini as a whole but Temash, which is supposed to be targeted at tablets just like BayTrail. Aside from a couple of demos at CES, there have been basically no announcements of Temash tablets even though Temash was purportedly launched in Q2.

        • raddude9
        • 6 years ago

        You missed this announcement then:
        [url<]https://techreport.com/news/25318/temash-apu-pops-up-in-toshiba-convertible-tablet[/url<] ...it's like that announcement popped up just 2 hours after your post just to contradict you!

          • chuckula
          • 6 years ago

          If a $600 tablet with a 720p display and mechanical hard drive is “success” then I’d hate to see what failure looks like…

            • raddude9
            • 6 years ago

            I didn’t say it was a good tablet, but it most definitely is a “Tablet Announcement”.

        • NeelyCam
        • 6 years ago

        I saw a cute Temash netbook (Asus? Acer?) in Best Buy a few weeks back.

        Temash tablets are nowhere to be found, though, and yes – I saw the Toshiba announcement, and I also saw the TR article showing Temash tablets from Gigabyte and Quanta (=AMD reference design). But still waiting to see one available at Best Buy, Newegg, Amazon…

          • NeelyCam
          • 6 years ago

          Actually, it was the same as what Anandtech is giving away here:

          [url<]http://www.anandtech.com/show/7282/amd-center-giveaway-acer-v5-116-quadcore-temash-notebooks[/url<] Looks like AMD's marketing is getting more active than in the past... First WarSam, now Anand's AMD Center

        • frogg
        • 6 years ago

        Well, I’m still waiting for all those Kabini motherboards announced at Computex in June; Kabini itself was announced in May. I was waiting for an Asus XS-A motherboard that looked tempting. I feel I will end up with a BayTrail board, just because I can buy one!

      • HisDivineOrder
      • 6 years ago

      AMD does not release a product without a lot of delays. When in doubt, they say, “Always Must Delay!”

    • not@home
    • 6 years ago

    So what kind of integrated graphics does this have? How well would it work for a DIY NAS/HTPC combo?

      • NeelyCam
      • 6 years ago

      Bay Trail is supposed to have 8 (?) EUs of Ivy Bridge graphics (HD4000 is 20EUs), if I remember right. So, I’d call it insufficient for a HTPC..

        • cal_guy
        • 6 years ago

        I believe it’s actually 4 EU.

          • NeelyCam
          • 6 years ago

          Mm.. even worse..

      • Damage
      • 6 years ago

      Graphics? What graphics?

        • chuckula
        • 6 years ago

        Yeah, none to be found in the Avoton parts. The Baytrail-D parts are more desktop oriented since they include a (weak for the desktop) IGP + 4 silvermont cores. I also think they get some form of PCIe for a discrete GPU if you want one.

      • ronch
      • 6 years ago

      I don’t think it will run Crysis.

      Enough said.

    • poohbah10
    • 6 years ago

    And missing from the comparison is….Jaguar based chips. I fully expect that Avoton is stronger in most respects, and we don’t really have a comparable chip (except in the Xbox One and PS4) but it would definitely be a much closer contest than the other chip comparisons provided.

    Of course, AMD is not able to get even 4-core desktop parts delivered (probably due to fab capacity).

      • NeelyCam
      • 6 years ago

      [quote<]And missing from the comparison is....Jaguar based chips. [/quote<] Not exactly Avoton, but here's some Cinebench scores on Silvermont from Anand, comparing it to Jaguar: [url<]http://www.anandtech.com/show/7263/intel-teases-baytrail-performance-with-atom-z3770-cinebench-score[/url<]

        • f0d
        • 6 years ago

        ouch 4 cores gets massacred by IBE dual core
        so it would take an 8 core of one of these to be around equal to a dual core IBE?

        i just dont see the point of these highly threaded cpu’s on a desktop where single threaded performance matters more for most workloads

          • Klimax
          • 6 years ago

          That’s why there won’t be much of problem with ported games from consoles on mainstream parts.

          • OneArmedScissor
          • 6 years ago

          That IB can turbo up to 2.4 GHz with 2 cores. Bad comparison. There’s faster Atom and Jaguar, and slower IB and Haswell.

          The only thing of interest there is that Atom and Jaguar both stomp on the higher clocked Core 2, which is still a perfectly capable CPU for that type of computer.

            • f0d
            • 6 years ago

            ah yes i did forget they could turbo
            i still dont see the point of lots of cores (8) for low end desktop use (and i know this wont be for desktop – but some are saying it would be an interesting desktop cpu)

            i still think a celeron IB would stomp on even a quad core one of these for most workloads on the desktop – they might be ok in laptops or small computers like the intel nuc but on the average low end desktop system which doesnt need such low power usage i think i would still rather a celeron

            either way we will see when they do release these as low end desktop cpu’s to replace the celeron/pentium

        • poohbah10
        • 6 years ago

        Good find. So, silvermont core efficiency is basically equal to jaguar and power looks roughly equivalent (IIRC the jaguar power draw). A better showing by AMD in this space than I thought.

          • chuckula
          • 6 years ago

          Uh…. you got the part about per-clock efficiency right, but you are comparing a 15 watt TDP AMD part to a ~3 watt TDP Intel part… that A4-5000 is *not* intended for tablets, it is intended for ultrabook/nettop type devices that are in a much higher power class.

            • raddude9
            • 6 years ago

            3 Watt TDP? I hadn’t realized that the TDP values for the z3770 had been released, only the “SDP”. Are you using that to come up with a TDP guesstimate? Also, if you look at the review where Anand got that cinebench score:
            [url<]http://www.anandtech.com/show/6981/the-kabini-deal-can-amd-improve-the-quality-of-mainstream-pcs-with-its-latest-apu/2[/url<] You can see that the entire platform only used 11.5W while running that multi-threaded benchmark. I've no doubt the z3770 will use less power in total when it's released, but the real-world difference will likely be a lot closer than a faux comparison of a TDP value to an SDP one.

            • chuckula
            • 6 years ago

            [quote<]You can see that the entire platform only used 11.5W while running that multi-threaded benchmark.[/quote<] Which is to be expected when running a CPU-intensive and GPU-light benchmark with a SoC. Of course, I'd expect the A4 to have a stronger GPU both because it has more hardware devoted to the GPU and because it has a higher power envelope to use the GPU. The z3770 is a flavor of Baytrail that is specifically aimed at tablets. There are higher TDP models for netbook/nettop type applications as well.

            • NeelyCam
            • 6 years ago

            [quote<]I've no doubt the z3770 will use less power in total when it's released[/quote<] Shall we make a bet..? C'mon - you're an engineer! You've seen the Silvermont efficiency promises from Intel, right? You know that Silvermont is 22nm while Kabini/Temash are 28nm, and you know what a node difference does to power efficiency. You know that Silvermont is FinFET while Kabini/Temash are planar, and you've seen Intel's slides (and actual numbers from IEDM) showing how efficient FinFETs are at low voltages compared to planar. There are a lot of reasons why Silvermont, without question, should be a lot more efficient than Kabini/Temash, and [b<]no reason[/b<] why they should be equal. The data-driven engineer in you should force you to look at the evidence, and make a better-informed prediction... But if you refuse to do that, I'm more than happy to make a bet. Wings/Beer?

            • chuckula
            • 6 years ago

            There has been some confusion with the different power/performance profiles with Bay Trail as well. It looks like these chips are theoretically capable of boosting all the way to 2.4GHz in some situations, where the TDP jumps to ~7.5 watts. During battery operation, however, it is unclear how or if they jump above their stated 1.46 GHz clock speed with a correspondingly lower TDP.

            This may come as a shock to some people, but overclocking a chip raises the TDP. Once again, *in a tablet* I’d expect to see the power consumption numbers at ~ 3 watts.

            • raddude9
            • 6 years ago

            Sure I’ll take that bet.

            So seeing that I said that the z3770 will use less power than the AMD’s A4-5000. You must think that (despite all the reasons you gave) it will use more power? Really?

            Are you sure you read my sentence correctly?

            • NeelyCam
            • 6 years ago

            Um… No I didn’t… [s<]Readuling[/s<] [i<]Reading[/i<] comprehension fail on my part... I was taking issue with your statement that in real world their power consumption is close (it's not). But didn't quote that part because I was stupid EDIT: "Readuling".. wtf is wrong with me today? I blame the cell phone keyboard..

            • raddude9
            • 6 years ago

            [quote<]I was taking issue with your statement that in real world their power consumption is close (it's not).[/quote<] Actually I didn't say they would be "close", all I said was that it would be "closer" than what Chuckie was implying. He said that the Z3770 has a 3W TDP (maybe?) and compared it to a 15W TDP chip, implying there was a 5x difference between the chips when running cinebench. I don't think the real-world whole-system difference will be anything like 5x, particularly for a CPU-bound benchmark like Cinebench. Mainly because the AMD chip comes nowhere near its TDP when its GPU isn't being taxed, and the z3770 likely makes full use of its TDP with its turbo clocks. Also, there's no getting around having to power other system components like memory etc. So, my guess is that on that benchmark where the A4-5000 system used 11.5W, the z3770 system would come in somewhere around 6W.

            • NeelyCam
            • 6 years ago

            [quote<]So, my guess is that on that benchmark where the A4-5000 system used 11.5W, the z3770 system would come in somewhere around 6W.[/quote<] So, 2x better efficiency.. Maybe. I'll guess 3x (i.e., 4W) because of those slides where Silvermont was 3x more efficient than anything else

    • chuckula
    • 6 years ago

    Comparing this article and the review of the 4960X makes one thing very obvious: For better or worse, Intel is pouring a *lot* of R&D effort into low-power systems from mobile all the way to microservers right now.

    It’s not all that interesting for people who want the fastest overclocks, but it’s very interesting* for the world of mobile and low-power systems to see some real alternatives coming on to the market.

    * ARM is learning all about the ancient Chinese curse: May you live in interesting times.

      • MadManOriginal
      • 6 years ago

      Sort of…I’d say it’s not just about absolute low power but increasing the performance/power ratio as much as possible.

      Also, +1 for the Chinese proverb mashup.

    • DPete27
    • 6 years ago

    Anybody else interested in the Silvermont desktop chips? I sure am.

      • Waco
      • 6 years ago

      I am. The 8 core 20 watt part is extremely tempting.

      If Intel has half a brain they’ll bring these to the desktop market with increased power envelopes and speeds…unlocked would be nice, but that’s a pipe-dream at this point.

        • derFunkenstein
        • 6 years ago

        I thought that’s what the new Celeron and Pentium processors were going to be, and that Haswell would only make its way down to i3 at the low end.

          • Chrispy_
          • 6 years ago

          Celeron and Pentium processors will be perfect on Silvermont for the low-price mom & pop internet browsing laptops.

          The cheaper those devices become, the better.

      • f0d
      • 6 years ago

      not really, as the performance would be low

      for things where you need 8 threads you need high performance, and i suspect a quad-core haswell would murder one of these even in highly threaded applications

        • Klimax
        • 6 years ago

        Not exactly; this is for non-CPU-bound workloads, where throughput is bounded by I/O (web servers, caches, routers). Also in quite a low TDP, and with some extra stuff integrated.

          • f0d
          • 6 years ago

          exactly (i agree with you) – but thats why i was saying they wont be good for desktop use (as the original thread starter wanted)
          “Anybody else interested in the Silvermont desktop chips? I sure am.”

          they have their uses and are pretty exciting for those uses but for desktop? i dont really see the point in them for desktop

            • Klimax
            • 6 years ago

            Agreed. (Also, I failed to recheck the subthread. On the other hand, it might be useful for some Office loads involving Excel and VBA…)

        • Waco
        • 6 years ago

        I’m not so sure. If they even get it up to the level of a Pentium M, it could easily be a desktop chip for the vast majority of the population.

        I have an EeePC 1025C with the N2600…it’s zippy enough with an SSD, but I would buy a netbook/laptop with one of these in a heartbeat.

          • f0d
          • 6 years ago

          why not get a laptop with a dual core ivy bridge in it? around the same performance as an 8 core one of these and better single threaded performance by far

          even a low end ivy bridge pentium would probably thrash one of these in quad core configuration (the most likely model to come out for laptop/desktop)

          i just dont see the point in these when dual core ivy would probably be better in every way

          either way we will see when they do actually come out – as derfunkenstein said, they will be replacing pentium and celeron at some future point with these cpus, so you will get your chance to own one (a 4-core version anyways)

            • Waco
            • 6 years ago

            Because the entire system is a single chip and power consumption would be minimal. The rub with getting a full-fat IB or Haswell is that you need the chipset to go along with it, which consumes a non-trivial amount of power.

            Task energy, I would bet, will be lower with CPUs like this than with a very low clocked / low voltage desktop CPU.

            I’d like an overclockable 8-core version just for funsies.

            • f0d
            • 6 years ago

            the only way i could think that task energy “could” be lower is if you could fill all the cpus with a workload like encoding videos or doing rendering work, and even with something like that im still doubtful, as a lot of the time finishing the task earlier and doing it faster reduces overall power consumption

            most desktop workloads are single-thread limited, so unless you are putting all 8 cores to work all the time, anything else would probably use more power, as a single core has to work harder for longer than, say, a low-powered haswell or ivy with better IPC

            either way everything is really just speculation until it comes out for desktop (or even laptop) and TR reviews it (these are coming with 4 cores on desktop to replace pentium/celeron) and it will be an interesting read nonetheless

            i agree an overclockable 8 core would be fun to play with (i overclock everything i can get my hands on lol) but i very much doubt that would happen, as intel like to lock most cpus down except for the high end – still we can wish cant we?

            • NeelyCam
            • 6 years ago

            [quote<]the only way i could think that task energy "could" be lower is if you could fill all the cpu's with a workload like encoding videos or doing rendering work and with something like that im still doubtful as a lot of the time finishing the task earlier and doing it faster reduces overall power consumption[/quote<] It can be more complicated than that. (And sorry if this gets too theoretically boring...)

            For a given task, a certain amount of computation needs to be completed. For the same exact computational block (assuming there is no leakage), the total amount of energy used for the task is the same if you clock it slower and don't change the supply voltage, as the same number of charges/discharges of the capacitors inside the computational block will have to happen, and this is what consumes the energy.

            Now, if you clock the circuit slower, you can (generally) lower the supply voltage too. This means each charge/discharge event consumes less energy. So (again assuming no leakage), clocking the circuit slower and reducing the voltage means you need less energy to complete your computation. Furthermore, if the circuit doesn't have to run that fast, you can design it differently: you can increase the load/driver ratio ("fan-out"), which means you need fewer and/or smaller circuits. Fewer/smaller circuits mean less capacitance to charge/discharge, and less energy required to complete the task. This is why slower circuits can operate more efficiently (think ARM A8 vs. Core 2 Duo).

            But there is also leakage, which means that if you go slower, the task takes longer to complete, and the circuits will leak longer before you can shut them down through power gating. Intel has usually gone with the 'hurry-up-and-go-idle' approach, probably partly because of the leakage thing, but probably also partly because people like speed. But if you can minimize/eliminate the leakage, running slower (and at a lower voltage) will generally give you better efficiency and lower task energy.
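            [Editor's note] The tradeoff NeelyCam describes can be sketched with a toy model: dynamic energy per task scales with C·V² and is independent of clock speed, while leakage energy grows with how long the task runs before power gating. Every number below (capacitance, voltages, leakage power) is a made-up illustration, not data from the article.

```python
# Toy model of the task-energy argument above. Dynamic energy per task
# scales with C * V^2 and does not depend on clock speed; leakage energy
# grows with how long the task takes to finish. All numbers are
# illustrative assumptions, not measurements.

def task_energy(v, f_ghz, cycles=1e9, cap_nf_per_cycle=1.0, leak_w=0.5):
    """Total energy in joules to complete a fixed task.

    v                -- supply voltage (V)
    f_ghz            -- clock frequency (GHz)
    cycles           -- cycles needed to finish the task
    cap_nf_per_cycle -- effective switched capacitance per cycle (nF)
    leak_w           -- leakage power while the block is powered on (W)
    """
    dynamic = cap_nf_per_cycle * 1e-9 * v ** 2 * cycles  # E_dyn = C * V^2, per task
    runtime = cycles / (f_ghz * 1e9)                     # seconds until done
    return dynamic + leak_w * runtime                    # leak until power-gated

# With modest leakage, the slower, lower-voltage run uses less total energy:
print(task_energy(v=0.8, f_ghz=2.0) < task_energy(v=1.2, f_ghz=3.0))  # True

# With heavy leakage, "hurry up and go idle" wins instead:
print(task_energy(v=1.2, f_ghz=3.0, leak_w=10.0)
      < task_energy(v=0.8, f_ghz=2.0, leak_w=10.0))  # True
```

            The model assumes frequency can drop roughly in step with voltage, which is only a first-order approximation near nominal voltage, but it captures why the crossover point depends on how leaky the process is.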

      • NeelyCam
      • 6 years ago

      Maybe for a cheapo NUC-like box, but I would still prefer Haswell on one of those

      • HisDivineOrder
      • 6 years ago

      Yes. I doubt Intel will give it to us in any form we’d actually like, though. Like with the original Atom and netbooks, they’ll give it to us only in a form that makes us realize we “have to” upgrade to something a lot more expensive.

        • NeelyCam
        • 6 years ago

        Intel isn’t supposed to be the one “giving” us end products – it’s the OEMs. It’s just that when OEMs can’t get their act together, Intel is forced to develop something themselves (like the NUC).

    • chuckula
    • 6 years ago

    Biggest difference between Avoton & 64Bit ARM: Avoton exists in real products.

      • HisDivineOrder
      • 6 years ago

      I remember a time when people scoffed at the very idea of ARM even getting anywhere close to challenging Intel. Now Intel is creating entire product lines to counter them.

      I don’t think I’d count ARM out on the eve of the next step in their potential rise…

        • chuckula
        • 6 years ago

        [quote<]I remember a time when people scoffed at the very idea of ARM even getting anywhere close to challenging Intel.[/quote<] I remember a time.. last week... when people scoffed at Intel ever being able to produce mobile parts that could compete with ARM. Now ARM is panicking and trying to diversify out of mobile parts because it knows there will soon be competition from Intel. I don't think I'd count Intel out on the [s<]eve[/s<] [u<]actual product launch[/u<] of the next step in their potential rise against the ARM hegemony...

    • dpaus
    • 6 years ago

    First ‘real’ shot across ARM’s bow….
