The next Atom: Intel’s Silvermont architecture revealed

Intel’s Atom processor debuted five years ago as the first x86-compatible CPU from Intel tailored explicitly for low-power operation. At that time, the iPhone was less than a year old, and Asus had only recently introduced the first-generation Eee PC. Intel was talking about a new class of products, known as MIDs or “mobile Internet devices,” as the natural home for the Atom.

You know what happened next. Without robust touch interfaces, MIDs never took off. Instead, the netbook craze came and went, and tablets became an outright phenomenon. Smartphones grew in size, tablets shrank, and “phablets” bridged the gap between the two.

Something else happened along the way—or perhaps didn’t. The Atom never really replicated its initial success in netbooks among other consumer devices. Intel revved the various incarnations of the Atom, reducing power envelopes and physical footprints. It integrated ever more functionality into true SoC products like Moorestown, aiming for smartphones, and won some business along the way. But the Atom has captured only a handful of high-profile design wins among smartphones, and the few Windows 8 tablets based on the current Clover Trail platform have seen only modest adoption to date. Instead, the great majority of mobile computing devices for consumers are based on ARM’s CPU technology—or are compatible with it.

Another thing that didn’t happen is a change to the Atom’s microarchitecture. Intel wrung out some performance and power efficiency gains through integration, improved process tech, and higher clock speeds, but the CPU cores themselves remained largely the same.

That fact seems a little odd, since it’s been clear for several years that Intel views ARM as its biggest competitive threat. But the world’s biggest chipmaker hasn’t been idle. It has been pushing its highest-profile Core processors into ever-lower power envelopes, and the hotly anticipated Haswell chip is expected to hit the market early next month with TDPs reaching down to 10W or less. Meanwhile, the firm’s Austin-based design team has been hard at work on a clean-sheet redesign of the Atom microarchitecture, code-named Silvermont. Today, we can reveal the first details about Silvermont, and they look very promising indeed. Intel is claiming that this new architecture, when combined with its 22-nm fabrication process, will enable chips that offer up to three times the performance of the prior-generation Saltwell Atom, or the same performance at “5X lower” power consumption.

Silvermont isn’t just a new architecture; it’s also the beginning of an accelerated update schedule for Intel’s low-power processors. Going forward, the Atom will be getting the same sort of “tick-tock” cadence that Intel has employed to great effect with its Core processors. As before, the Atom will be shrunk to a new process node roughly every other year. In between, the CPU architecture will be revised, as well. As you can see in the image above, “Airmont” will be a shrink of Silvermont to 14 nm. After that, we should see a revamped microarchitecture on this same fab process, although Intel isn’t ready to reveal its codename.

It goes without saying, perhaps, but the move to a tick-tock cadence in the low-power segment means Intel is dead serious about winning in this part of the market.

Before that new plan can take hold, Intel has to deliver Silvermont-based products. That’s slated to begin happening later this year, in several system-on-a-chip (SoC) configurations intended for different market segments.

The Bay Trail SoC will replace Clover Trail and offer more than double the compute performance for tablets. Bay Trail should also make its way into entry-level notebooks and desktops. It’s slated to arrive inside of new systems ahead of the holiday buying season. Merrifield, the phone chip, should start shipping to smartphone makers by year’s end, and products based on it should be announced in the first quarter of 2014. Avoton is targeted at micro-servers and is already sampling, with an official launch coming in the second half of 2013. Rangeley, the communications infrastructure part, will also launch in the year’s second half. Intel intends to address other parts of the embedded CPU space, such as automotive infotainment systems, with additional Silvermont-based platforms that have yet to be announced.

The Silvermont story
Despite driving its Core architecture into power envelopes of 10W and lower, Intel is making a big commitment to the separate development of a low-power architecture going forward, because the Atom can go places Core cannot: into power envelopes measured in hundreds of milliwatts, into smaller physical footprints, and into much lower-cost platforms. The list of SoCs being created with Silvermont tells that tale. Necessarily, then, this low-power architecture must accept a different set of compromises than Core, with a focus on operating at very low voltages using a more modest transistor budget.

Within the scope of these limitations, Silvermont’s architects have reached for a much higher performance target, especially for individual threads. The big news here is the move from the original Atom’s in-order execution scheme to out-of-order execution. Going out-of-order adds some complexity, but it allows for more efficient scheduling and execution of instructions. Most big, modern CPU cores employ OoO execution, and newer low-power cores like AMD’s Jaguar, ARM’s Cortex-A15, and Qualcomm’s Krait do, as well. Silvermont is joining the party. Belli Kuttanna, Intel Fellow and Silvermont chief architect, tells us the new architecture will achieve lower instruction latencies and higher throughput than the prior generation.

Interestingly, Silvermont tracks and executes only a single thread per core, doing away with symmetric multithreading (SMT)—or Hyper-Threading, in Intel’s lingo. SMT helped the prior generations of Atom achieve relatively strong performance for an in-order architecture, but the resource sharing between threads can reduce per-thread throughput. Kuttanna says SMT and out-of-order execution have a similar cost in terms of die area, so the switch from SMT to OoO was evidently a fairly straightforward tradeoff.

This decision makes a lot of sense in the context of Silvermont’s new fundamental building block, which is a dual-core “module” with a single, shared L2 cache. Intel talks of the two cores being “tightly coupled,” echoing the way AMD describes the dual-core module used by its Bulldozer architecture, but no logic is shared between the two cores—just the cache. A single SoC can incorporate up to four of these modules, for a total of eight cores. With core counts like that possible, Silvermont-based systems ought to exploit thread-level parallelism sufficiently without the use of SMT.

Silvermont will have additional opportunities for parallelism thanks to its expanded ISA support, which brings Intel’s low-power architecture largely up to parity with Westmere-class desktop processors. That means support for the SSE4.1 and 4.2 extensions along with AES-NI encryption. AVX isn’t supported, which is no great surprise given AVX’s die-area and power requirements and the Atom’s mission in life. The architecture does include expanded virtualization support, including extended page tables and the rest of Intel’s VT-x2 suite, which could benefit those SoCs targeted at micro-servers. Another new feature is real-time instruction tracing, to aid with debugging.
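
As an aside for developers, the presence of these extensions can be checked at runtime with the CPUID instruction. Here’s a minimal sketch using the <cpuid.h> helper shipped with GCC and Clang; the bits shown are the standard CPUID leaf 1 ECX feature bits, and the snippet is illustrative rather than exhaustive.

```c
/* Minimal runtime check for the ISA extensions Silvermont adds over
 * earlier Atoms (SSE4.1, SSE4.2, AES-NI), using CPUID leaf 1.
 * Illustrative sketch; compile with GCC or Clang on x86. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }

    /* Feature bits reported in ECX for CPUID leaf 1 */
    printf("SSE4.1: %s\n", (ecx & (1u << 19)) ? "yes" : "no");
    printf("SSE4.2: %s\n", (ecx & (1u << 20)) ? "yes" : "no");
    printf("AES-NI: %s\n", (ecx & (1u << 25)) ? "yes" : "no");
    return 0;
}
```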

The dual-core Silvermont module talks to the rest of the chip via a new system fabric architecture, which should offer higher transfer rates and easier integration than the internal front-side bus used in prior Atoms.

The new core

Although Silvermont is a brand-new, clean-sheet design, Kuttanna tells us it carries over certain key principles and concepts from the last Atom. Indeed, the new architecture sometimes seems like an evolutionary step. For instance, the core retains the same 32KB L1 instruction cache and 24KB L1 data cache sizes as before.

Another attribute carried over is what Intel calls the “macro-op execution pipeline.” Most x86 processors break up the CISC-style instructions of the x86 ISA into multiple, simpler internal operations, but Silvermont executes the vast majority of x86 instructions atomically, as single units. Certain really complex legacy x86 instructions are handled via microcode. Compared to older Atoms, such as the prior-gen Saltwell core, Silvermont microcodes substantially fewer x86 instructions, which should translate into higher performance when those instructions are in use. We’d expect Silvermont to tolerate the vast amounts of legacy code in consumer applications better than current Atoms do.

Kuttanna shared the above block diagram of the Silvermont core with us. We haven’t had time to map out the new architecture in any great detail, but we can pass along the highlights he identified.

In the front end, Silvermont can decode two x86 instructions per clock cycle, like its predecessor. However, the branch predictors are larger (and thus, presumably, more accurate), and they include an improved facility for the prediction of indirect branches. Also upgraded is the loop stream buffer, which detects loops that will repeat, buffers the decoded instruction sequence (up to 32 macro-ops in Silvermont), and feeds the sequence into the execution engine. The chip can then shut down its fetch and decode units while the loop executes, to save power.
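
To make the loop buffer’s role concrete, consider a hypothetical hot loop like the one below. The function is our own illustration, not an Intel example; the point is simply that its body compiles down to a handful of macro-ops, comfortably under the 32-entry limit, so the decoded sequence can be replayed from the buffer while the fetch and decode hardware naps.

```c
/* Hypothetical example of a loop small enough for a 32-macro-op loop
 * stream buffer: the body typically compiles to a load, an add, an
 * increment, a compare, and a branch. Illustrative only. */
long sum_array(const int *a, long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += a[i];    /* tight, predictable back-to-back iterations */
    return sum;
}
```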

The execution units have been redesigned with a different mix of resources. The FPU is largely 128 bits wide, but the floating-point multiplier is 64 bits wide, mirroring the prior-gen Atom architecture.

Out-of-order loads are now supported, naturally. Note the presence of only a single address generation unit, with a reissue queue ahead of it. In a bit of dark magic, the architecture can handle a load and a store in parallel when that queue comes into use. The caches have larger translation lookaside buffers, which should allow for quicker accesses. And store-to-load forwarding has been enhanced, as well.

One consequence of the move to OoO execution is that the pipeline is effectively shorter for instructions that don’t need to access the cache. The penalty for branch misprediction in Saltwell’s in-order pipeline was 13 cycles, but that penalty is reduced to 10 cycles in Silvermont.

The end result of all of Silvermont’s enhancements, from the fetch and decode to retirement, is a roughly 50% increase in instruction throughput per clock compared to the generation before. That improvement will be compounded, of course, by higher clock speeds and integration into SoCs with faster complementary subsystems, such as improved memory controllers.
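
As a rough back-of-the-envelope illustration of that compounding (the frequency figure below is our own placeholder, not a number Intel has given):

```c
/* Back-of-the-envelope compounding of per-clock and clock-speed gains.
 * Only the ~1.5x IPC figure comes from Intel's disclosure; the 1.33x
 * frequency factor is a hypothetical placeholder. */
#include <stdio.h>

int main(void)
{
    double ipc_gain   = 1.5;    /* ~50% more instructions per clock */
    double clock_gain = 1.33;   /* assumed frequency increase       */

    /* Prints roughly 2.0x, in the ballpark of Intel's single-thread claim */
    printf("combined speedup: ~%.2fx\n", ipc_gain * clock_gain);
    return 0;
}
```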

Speaking of which, each dual-core Silvermont module connects to the SoC fabric using a dedicated, point-to-point interface known as IDI. This interface has independent read and write channels, and it features higher bandwidth and lower latency than the old Atom bus, along with support for out-of-order transactions. In the example above, a pair of Silvermont modules connect to the system agent via a pair of IDI links. The system agent then routes requests to the memory controller for access to DRAM.

Oh, I should mention that the L2 cache in each Silvermont module, shared between two cores, is 1MB in size. L2 access latencies have been reduced by two clocks compared to Saltwell, whose L2 cache was smaller at 512KB.
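
Once hardware arrives, a latency difference of a couple of clocks is the sort of thing a generic pointer-chasing microbenchmark can tease out. The sketch below is our own illustration of that common technique, not an Intel tool; it sizes its working set at roughly 1MB so that, in principle, most of the dependent loads miss the 24KB L1 data cache and land in the module’s shared L2.

```c
/* Generic pointer-chasing sketch for estimating cache load-to-use
 * latency. The ~1MB working set is chosen to roughly match a
 * Silvermont module's shared L2. Illustrative methodology only; real
 * measurements need more care (CPU pinning, warm-up, huge pages). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WORKING_SET (1 << 20)                    /* ~1MB */
#define N (WORKING_SET / sizeof(void *))
#define ITERS 10000000L

int main(void)
{
    void **buf = malloc(N * sizeof(void *));
    size_t *order = malloc(N * sizeof(size_t));
    if (!buf || !order)
        return 1;

    /* Build a random cyclic permutation so hardware prefetchers
     * can't guess the next address. */
    for (size_t i = 0; i < N; i++) order[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < N; i++)
        buf[order[i]] = &buf[order[(i + 1) % N]];

    struct timespec t0, t1;
    void **p = &buf[order[0]];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERS; i++)
        p = (void **)*p;                         /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg latency: %.2f ns per load (chain end: %p)\n",
           ns / ITERS, (void *)p);
    free(buf);
    free(order);
    return 0;
}
```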

Better burst and power management

The new architecture has gained quite a bit of flexibility and capability in its dynamic frequency and power management schemes. The headliner here is a more capable “burst” mode, as the Atom guys call it, similar to the Turbo Boost feature in Core processors. The prior-gen Atom’s boost feature was fairly simple; it exposed an additional P-state to the operating system to allow higher-speed operation when thermal headroom allowed. The frequencies for Silvermont’s burst mode are managed in hardware and take into account the current thermal, electrical, and power delivery constraints, both locally and at the platform level. We don’t yet have many specifics about the SoCs that Silvermont will inhabit, but we assume an on-chip power microcontroller will be calling the shots.

Silvermont’s more sophisticated power management opens up several notable new capabilities, illustrated in the images above. The example on the left shows power sharing between two cores, where an unoccupied core drops into a sleep state, ceding its thermal headroom to the busy core, which can then operate at a higher frequency than its default baseline. In the middle example, the two CPU cores share power with the SoC’s integrated graphics processor; since the graphics workload is light, both cores can burst up a couple of steps beyond their default speed. In the example on the right, the cores can temporarily step up to a high frequency even under relatively full utilization, so long as platform-level thermals will allow it. All of these behaviors are familiar from larger Intel chips like Ivy Bridge, but the exact algorithms and mechanisms are distinct.

Each Silvermont module is fed by a single voltage plane, but oddly enough, each core in the module can run at its own frequency, independently of the other one. When speeds differ, the shared L2 cache will run at the higher of the two frequencies. The existence of this capability seems rather odd, since we’ve seen a number of x86 processors run into performance problems when threads hop around onto cores running at low frequencies. Still, architects keep building fully independent clocking into their processors. Our understanding is that independent core clocking within a module probably won’t be used in the Bay Trail platform that’s most likely to run Windows or other desktop-class operating systems. Instead, Intel tells us independent clocking schemes might be used in specific scenarios, such as very-low-cost parts where one of the two cores might not operate perfectly at higher frequencies or as an enabler for custom TDPs chosen by the system vendor.
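
For the curious, this kind of per-core clock behavior can be watched from user space on a Linux-based system, assuming the platform’s cpufreq driver exposes it; here’s a minimal sketch:

```c
/* Print the current operating frequency reported for each CPU via the
 * Linux cpufreq sysfs interface. Assumes a cpufreq driver is loaded
 * and exposes scaling_cur_freq; purely an observation aid. */
#include <stdio.h>

int main(void)
{
    for (int cpu = 0; ; cpu++) {
        char path[128];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                      /* no more CPUs (or no driver) */
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu%d: %.2f MHz\n", cpu, khz / 1000.0);
        fclose(f);
    }
    return 0;
}
```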

Good power management is largely about taking advantage of the idle time between user inputs, and Silvermont is definitely geared to do that. Each core can drop into the C6 “deep sleep” state independently. When it does so, a power gate will shut off power to the core completely.

Silvermont modules can choose from a suite of C6 sub-states depending on the status of their two cores, as shown above. The L2 cache can be kept fully active, partially flushed, or shut down entirely, with each step into a lower-power state carrying a longer wake-up time.
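
Idle-state residency can be inspected in a similar fashion through Linux’s cpuidle sysfs interface, again assuming kernel support for the platform; a quick sketch:

```c
/* List the idle (C) states the kernel exposes for cpu0 and how long
 * the core has spent in each, via the Linux cpuidle sysfs files.
 * Illustrative only; the states reported depend on the platform's
 * idle driver and won't map one-to-one onto Intel's C6 sub-states. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    for (int s = 0; ; s++) {
        char path[128], name[64] = "";
        long long usec = 0;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", s);
        f = fopen(path, "r");
        if (!f)
            break;                              /* no more states */
        if (fgets(name, sizeof(name), f))
            name[strcspn(name, "\n")] = '\0';   /* strip newline */
        fclose(f);

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/time", s);
        f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%lld", &usec) != 1)
                usec = 0;
            fclose(f);
        }

        printf("state%d: %-10s %lld us\n", s, name, usec);
    }
    return 0;
}
```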

The 22-nm advantage

One of the great things about being Intel, of course, is having the lead in chip fabrication tech. The firm was first to market with a 3D transistor structure, or FinFET, when it shipped products based on its 22-nm process last year. To date, the company says it has shipped over 100 million processors built on its 22-nm process, and it claims defect densities are now lower than they were with its 32-nm process two years ago. In short, Intel appears to be well over a year ahead of the rest of the industry in terms of process geometries—and even further ahead in productizing FinFETs.

The firm’s 22-nm process technology offers some advantages that seem almost ideally suited to low-power processors. Those start with a threshold voltage for transistor operation that’s about 100 mV lower than with the 32-nm planar transistors on Intel’s older process node. At relatively low voltages, the 22-nm tri-gate transistors can switch up to 37% faster than those 32-nm planar transistors. Alternatively, at the same switching performance, they consume about half the active power.

What’s more, Silvermont-based chips will be built using a variant of this 22-nm process tailored for SoCs. In fact, Intel says the Silvermont architecture and its SoC process variant have been “co-optimized” for one another. Compared to the P1270 process used for Ivy Bridge chips, the P1271 SoC process offers several additional points of flexibility. The SoC process provides more tuning points in the form of lower speed, lower leakage transistors better suited for low-power devices. At the same time, it adds the high-voltage transistors needed for external I/O. These transistors have increased oxide thickness and gate length, and they support both 1.8 and 3.3V operation. Also, the process can be tweaked to provide a range of density options, from 9 to 11 metal layers, at different costs. Interestingly enough, Intel says the 22-nm tri-gate process is better suited for analog devices than the last three generations of planar transistors, as well.

Performance claims: Silvermont versus the world

The combination of a relatively long and frequency-optimized pipeline, dynamic clocking flexibility, and very hospitable process tech apparently adds up to good things for Silvermont cores in silicon, because Intel is already making some strong performance claims. Above is a comparison versus an unnamed ARM big.LITTLE asymmetric core implementation. Intel expects Silvermont to deliver superior power-efficient performance across the whole range of operating conditions.

The slide above shows how Silvermont stacks up against the prior-gen Saltwell Atom core. Single-threaded, single-core performance is shown on the left. The combination of IPC increases, higher frequency, and other improvements has yielded a 2X gain in peak single-thread performance and a doubling of performance at the same power consumption level. Silvermont can deliver the same level of performance as the older core at “4.7X lower power,” as well.

The charts on the right compare a dual-core, quad-threaded Saltwell implementation with a quad-core, quad-threaded Silvermont-based chip. Again, the gains are substantial, whether the comparison is at similar levels of power or performance.

Silvermont’s true target, naturally, is the ARM-based competition. One point Intel wants to underscore in this context is that a dual-core Silvermont has the potential to outperform a quad-core ARM-based SoC both in peak performance and in power consumption at comparable performance levels. The three competitors aren’t named here, but I expect SoCs from big names like Apple and Qualcomm are the basis for these results.

Here’s a set of estimates matching up Silvermont against its likely ARM-based competitors in the tablet space. The comparisons are made at equivalent core counts, and the workload is SPECint_rate_base2000. Although these are still just projections, you can see that Intel expects Silvermont to outperform its competition by some wide margins at a core power level of 1.5W while drawing much less power at peak performance levels.

On paper, then, Silvermont looks more than competitive at this early stage of the game. We’ll have a better sense of how it matches up once we know more details about the SoCs based on it. If those products turn out to be as strong as Intel projects, then they will surely find some traction in the PC space. Bay Trail should do well in Windows-based tablets and in low-cost laptops and desktops, although it will arrive with similar incumbents already in the market in the form of AMD’s quad-core Temash and Kabini SoCs.

The tougher question is whether Silvermont-based SoCs can make further inroads into smartphones and tablets based on the most popular touch-based mobile operating systems, like iOS and Android. The trouble here is that the ARM instruction set has become the standard in that space, a barrier to be overcome. ARM’s practice of licensing its IP and letting customers integrate it as they choose may be even more of an obstacle, because it means firms like Apple and Samsung, who together own a huge proportion of the phone and tablet markets, can roll their own solutions. Intel appears to be fully capable of creating superior SoC technology. But how much better does that technology have to be in order to woo device makers away from their current freedom and into an x86 standard that now feels strangely proprietary by comparison? With Silvermont on deck and a tick-tock development cadence underway, Intel may soon be testing the limits of that equation.

Comments closed
    • vvas
    • 6 years ago

    Thanks for the overview, but I wonder if it was perhaps a bit hastily written? I noticed two points in particular:

    [quote<]Intel is claiming that this new architecture, when combined with its 22-nm fabrication process, will enable chips that offer three times the performance of the prior-generation Saltwell Atom with "5X lower" power consumption.[/quote<]

    Except of course it's, if I'm not mistaken, [i<]either[/i<] 3x the performance at the same power consumption, [i<]or[/i<] 5x lower power consumption at the same performance. If Intel had managed to achieve both at the same time, it would be quite a feat. :^)

    [quote<]At relatively low voltages, the 22-nm process with tri-gate transistors can operate up to 37% faster. At higher voltages, it can offer similar switching performance to the 22-nm planar process while consuming about half the active power.[/quote<]

    Surely it's the other way round? I.e., up to 37% faster with similar power consumption at higher voltages, and half the active power with similar performance at lower voltages? The way that sentence is written doesn't make much sense to me, but maybe that's just me being thick.

    Also, while I'm at it, isn't the comparison supposed to be with the [i<]32[/i<]-nm planar process?

      • NeelyCam
      • 6 years ago

      [quote<]Surely it's the other way round? I.e., up to 37% faster with similar power consumption at higher voltages, and half the active power with similar performance at lower voltages?[/quote<]

      Yes. Same performance, lower voltage -> half the power. Same voltage, same power -> 37% more performance.

      [quote<]Also, while I'm at it, isn't the comparison supposed to be with the 32-nm planar process?[/quote<]

      Yes.

    • NeelyCam
    • 6 years ago

    You know, I’m thinking that even though these 22nm Atoms offer superior power efficiency and performance over anything ARM, inertia will probably help ARM hold on to most of its marketshare in phones. But where this will really hurt ARM is microservers.

    Avoton is going to hit the market before solid ARM competitors, will have superior performance, and will allow Intel to establish the ecosystem using first mover advantage and previous server customer relationships. This will create a huge barrier of entry for ARM.

    And this was supposed to be one of those great growth stories for ARM… Warren East really knew when to exit stage left

      • Flying Fox
      • 6 years ago

      More like Intel defending the server turf from the ARM invasion, but the result is sort of the same.

        • NeelyCam
        • 6 years ago

        Well, yes – in a way they are defending the “server turf”. The curious thing is that the ARM community sort of invented these microservers, only to miss the boat themselves. Was Intel just so nimble that they could undercut ARM in TTM for something ARM folks came up with…? Or did Intel have something in flight for this for years already, and they finally got it done?

        I doubt we’ll ever know what truly happened, but as a biased INTC owner, I’d like to think that the AMD assault of mid 2000’s and ARM assault of early 2010 has transformed Intel into a fast and fierce competitor that will take over [i<]all[/i<] the markets with strength and precision. The socialist in me is thinking that this is awfully bad for the people. I'm conflicted.

      • maxxcool
      • 6 years ago

      +1 and with x86 AND now out of order execution in the back pocket software transitions will be a snap…

    • Bensam123
    • 6 years ago

    I’m not sure if I buy into turbo features. Every time new processors come around, both AMD and Intel talk about boosting cores when other cores are idle. At least in Windows this next to never happens. There is always something happening on the other cores so others never really reach a full blown boost stage. Of course companies like Asus implement their own enhancement tech and that completely messes anything like this up and it just turns it into another multiplier level almost no different then simply not having a boost feature.

    When playing something like a game or running any sort of program that puts a load on your processor it completely washes this away. The more active the computer the less useful boosting functionality seems to be, it doesn’t seem to operate on a level where it can respond to programs under load, even heavily single threaded ones…

    If you’re interested in looking at something like this, install Argus Monitor. You can watch boost states for separate cores, and it even has a graph setup for it, among other monitoring features. It’s a really neat app.

    Something else I noticed is AMD jumps around a lot more and seems to more meaningfully use boost stages, compared to it being another random tick the processor goes through. Meaning that Intel usually ramps up to whatever load your computer is under, whereas AMD seems to spike more frequently depending on what you’re doing.

      • [+Duracell-]
      • 6 years ago

      Boost is intended to squeeze out a bit of extra performance within certain limits. Just because you don’t see it boosting up doesn’t mean it’s not working.

      [quote<]Of course companies like Asus implement their own enhancement tech and that completely messes anything like this up and it just turns it into another multiplier level almost no different then simply not having a boost feature.[/quote<]

      Actually, the highest boost level is a multiplier itself. Think of the boost technology as dynamic overclocking, where the processor or mobo is tweaking the multiplier depending on the load or thermals of the processor. On an i5-3570K, the base multiplier is 34x. If the processor detects that it is able to boost at this moment, it adjusts the multiplier per core up to its rated boost multiplier depending on the type of load, which would be 38x.

      [quote<]The more active the computer the less useful boosting functionality seems to be[/quote<]

      This is correct, and that's how it's designed to be. Both Intel and AMD solutions are designed around thermal, power, and current limits. If only one core is being loaded, the other cores can idle and the frequency of the loaded core can possibly go up to its rated boost speed. If all four cores are loaded, the boost is still there, but since you're significantly loading the cores, boost would be limited by your cooling solution.

      If you want to see it working, I bet you can use Prime95.

        • Bensam123
        • 6 years ago

        I never said it didn’t work, I was discussing it’s actualized benefits verse perceived benefits.

        You’re stating it as dynamic overclocking, I’m stating that it’s not very dynamic at all as it doesn’t do anything once the computer is under full load or even small loads.

        I know full well how turbo core and turbo boost work.

        Prime95 would be one of the cases where you wouldn’t see it work well if at all (in AMDs case).

          • [+Duracell-]
          • 6 years ago

          [quote<]You're stating it as dynamic overclocking, I'm stating that it's not very dynamic at all as it doesn't do anything once the computer is under full load or even small loads.[/quote<]

          It [i<]is[/i<] dynamic. It is designed to work within a certain spec, and it's why Intel and OEMs can advertise a speed of up to 3.1GHz on a CPU with a 2.1GHz base clock and 35W TDP (i7-3612QM). It attempts not to break those thermal, power, and current limits, and if it does, it scales back the clock until it's within limits. If your computer is under full load, you're probably pushing against those limits and the clock speeds turn down to at least the base clock to get the processor within spec.

          REAL overclocking pushes your CPU out of spec. Enthusiasts overclock to squeeze every bit of performance out regardless of thermals or current. Boost is a way to squeeze more performance out of the chip without having to resort to real overclocking. This tech is MUCH more useful in the mobile space where thermal design limits come into play. Boost allows the processor to run for a short time above the base clock to get to idle faster.

          [quote<]Prime95 would be one of the cases where you wouldn't see it work well if at all (in AMDs case).[/quote<]

          I don't have an AMD CPU handy, but I decided to do a quick run of Prime95 on my i5-3570K.

          Benchmark (only uses first core) - CPU goes between 3.7 and 3.8GHz
          Small FFTs on 1 worker thread (bounces between cores) - CPU goes between 3.6 and 3.8GHz
          Small FFTs on 4 worker threads (loads all four cores) - CPU is at 3.6GHz

          I have a Corsair H60 cooling it, so it's likely that the thermal limits aren't met and it can constantly run higher than the base speed. If I were on air cooling, I would expect to see the speeds fluctuate a bit more.

          [quote<]I never said it didn't work, I was discussing it's actualized benefits verse perceived benefits.[/quote<]

          Which are you arguing? It's a way for Intel to differentiate its product. Also, if a chip can run at 3.4GHz but can't turbo up to 3.8GHz as often as another chip, Intel can still sell it as such or bin it somewhere else. To the end user, it's free performance and the only way to really get more performance out of a locked chip (outside adjusting the BCLK). You seem to be arguing "I don't see it working, therefore it must not be worth it."

      • chuckula
      • 6 years ago

      I disagree. I have video evidence from numerous episodes of Knight Rider where Turboboost was extremely effective at getting KITT and Michael out of all sorts of sticky situations. The look on the bad guys faces when a 1982 Trans Am flies over their heads says it all.

      • MadManOriginal
      • 6 years ago

      If only it were possible to benchmark CPUs under real-world load in order to compare their performance with things like Turboboost happening in the background…

    • ronch
    • 6 years ago

    Let’s see if Tick-Tock will do to the ARM bandwagon what it did to AMD.

    Beating on AMD was easy and fun. AMD is small, poor, loves dozing off at the beach, and it’s just one company trying to get a piece of the x86 pie.

    The ARM bandwagon is comprised of a big bunch of big, powerful companies with huge money bins, heavily entrenched on a hardware/software ecosystem that is gonna be difficult to penetrate with another ISA.

      • NeelyCam
      • 6 years ago

      [quote<]Let's see if Tick-Tock will do to the ARM bandwagon what it did to AMD.[/quote<]

      It will, and here's why. At the moment, there are only four [b<]big[/b<] players developing ARM chips for tablets/cellphones: Apple, NVidia, Qualcomm and Samsung.

      Apple is focusing most of its R&D budget into making the end products. Their chip division is pretty good, but the main reason A5 or A6 are so compelling is the OS. They can keep developing their own chips, but they just simply don't dedicate enough budget for that to compete with Intel.

      NVidia has a lot of focus on graphics products. The Tegra line is an interesting revenue generator, particularly because NVidia has made an informed decision to try to stay off the bleeding edge of process technology - this has served them well, considering the way others have tripped on TSMC's mishaps. However, they aren't developing their own cores (unlike Apple/Qualcomm), and their budget is below Intel's.

      Qualcomm has been very successful, largely because they have the CPU, GPU [i<]and[/i<] communication technology expertise. They have developed their own ARM cores, and they have been funding this R&D more than Apple/NVidia. They have a ton of IP on LTE, and on older technologies as well. They were definitely the first to market with solid LTE solutions, which is why they scored a ton of product wins in the last year or so. However, others are coming up with their own LTE solutions now (NVidia/Icera, Intel/Infineon..), so the full monopoly is about to disappear (plenty of IP revenue will still flow Qualcomm's way). Qualcomm has a more fundamental problem compared to Intel, though: reliance on TSMC. This is something NVidia (and, in the future, maybe Apple) shares with Qualcomm. TSMC has been tripping in delivering high-quality silicon on time, and is significantly behind Intel in process technology (2 years or so). Not only does this imply higher cost per chip, it also means lower power efficiency. TSMC has lately started ramping up R&D funding for future processes, but is well behind Intel on this funding.

      Finally, Samsung is an interesting beast. Like Intel, they develop both the chip architecture and the process technology to produce those chips. Like Intel, they have the capability to tweak the process to optimize it for a particular low-power CPU. Unlike Intel (or Qualcomm/Apple), they don't really develop their own ARM core - they take what's available, and merely tweak it a bit to reach higher clocks. And although they do develop their own process and fund that R&D heavily, they are quite a bit behind Intel on that front.

      Finally, the 'grandfather' of the movement: ARM itself. ARM develops the core IP (and graphics around it). They don't focus on end products, particular ARM "APUs", or process technology. Their revenue model is based on license and royalty fees for ARM technology. Compared to the others mentioned here, their revenue is puny, as is their R&D budget for future technology. Their R&D spending is nowhere near that of Intel's.

      So, this is the ARM ecosystem. There are three main parts: 1) Core product, 2) Product development, 3) Process development. No member of the "ARM ecosystem" does all these - Intel does. That gives Intel the ability to optimize everything to make a compelling product. Moreover, a lot of these ARM members are doing parallel work on the same things (e.g., Qualcomm, Apple and ARM itself are all developing cores). That's a lot of re-inventing the wheel... those R&D dollars aren't really that 'additive'. Meanwhile, Intel can put a ton of resources on developing the next-gen Atom, and there would be very little wasted work on multiple teams working on the same thing. This is why combining all those ARM ecosystem R&D budgets and claiming there is more research going on doesn't make sense.

      Finally, and maybe most shockingly, considering all that parallel work and all the work done in various different companies, Intel [i<]still[/i<] spends more money on R&D than all these other companies I mentioned [b<]combined[/b<].

      This is why Intel's tick-tock will beat the ARM ecosystem. They are too far ahead in process technology, they've been doing CPU development longer than any ARM ecosystem member, and they have [b<]huge[/b<] budgets to keep that advantage. This is why Silvermont is promising to be so much better than anything from the ARM camp. This is why AMD couldn't compete with Intel's "big cores". And this is why my Intel stock will double in value in the next 18 months

        • ronch
        • 6 years ago

        Intel obviously has a good shot at gatecrashing the ARM party but hey, Intel is trying to creep into the core businesses or major businesses of these companies. I’m not optimistic these guys are gonna band together, but they certainly won’t stand still while Intel kicks the door. Of course, some of these ARM licensees’ dependence on ARM is not as critical as the others (Samsung and Apple may decide to switch to x86 whenever they feel like it and reallocate R&D spending somewhere else), while ARM (Holdings), Nvidia, and Qualcomm have a lot more at stake with the ARM ISA mostly because they can’t do x86 or doing some other ISA like MIPS doesn’t make a lot of sense at this point. For ARM itself, Nvidia, and Qualcomm, fending off Intel is more important than it is for Apple and Samsung and I’d imagine they’re already aware of the serious threat Intel poses and are doing everything they could to help fend them off as we speak.

        It’s gonna be fun to watch, that’s for sure.

        • Mr. Eco
        • 6 years ago

        TL;DR version: Intel has and can spend a lot more money than all of them, hence will win.
        Probably correct; not fair in a sense. Whoever has more money, keeps making even more money.

          • chuckula
          • 6 years ago

          Lots of people talk about how Qualcomm has a bigger market cap than Intel, and Samsung is an absolute giant. Intel is by no means some huge Goliath going up against poor hapless Davids here.

          • bjm
          • 6 years ago

          Yeah, because Samsung, Apple, and Qualcomm are broke.

            • Mr. Eco
            • 6 years ago

            OK, I see, I didn’t think it through. Just make a quick check – Apple, Samsung, and Qualcomm are of the same relative size as Intel.
            I am not from US and not interested much in shares and stuff, so a bit ignorant 🙂

            • NeelyCam
            • 6 years ago

            I’m not saying Apple/Qualcomm can’t spend on R&D – I’m saying they just don’t. Both Qualcomm and Apple spent a bit over $3bil on R&D in 2012; Intel spent over $10bil.

            • Anonymous Coward
            • 6 years ago

            [quote<]Qualcomm and Apple spent a bit over $3bil on R&D in 2012; Intel spent over $10bil.[/quote<] Process tech can probably absorb any amount of cash, but it seems to me that tiny processor cores reach saturation at a reasonably small sum. I think Intel's only significant advantage is their fabs.

            • NeelyCam
            • 6 years ago

            Well, I think we can all agree that Intel’s 32nm chips were completely and utterly beating AMD’s 32nm chips – that wasn’t because of a fab advantage… so Intel definitely has other advantages as well.

            Kanter mentioned towards the end of his Silvermont writeup that Silvermont as an architecture is just superior to ARM, and would beat ARM even if Intel didn’t have a process advantage. The bottom line is that Intel has a much larger CPU R&D budget than, say, ARM, and decades of experience. I think it would be a mistake to dismiss Intel’s superiority as simply a process advantage – their CPU design teams are also better than those of the competition.

            • Action_Parsnip
            • 6 years ago

            “Kanter mentioned towards the end of his Silvermont writeup that Silvermont as an architecture is just superior to ARM”

            Not remotely true. You made that up.

            • NeelyCam
            • 6 years ago

            Here:

            [quote<]"Silvermont microarchitecture might be even more efficient than some ARM cores when normalized for Intel’s advantage in process technology. That seems an unlikely and counter-intuitive outcome, but could be explained by Intel’s larger teams and superb physical design. "[/quote<] [url<]http://www.realworldtech.com/silvermont/8/[/url<]

            • Anonymous Coward
            • 6 years ago

            So Kanter doesn’t say what you said he did.

            • NeelyCam
            • 6 years ago

            I said:
            “Kanter mentioned towards the end of his Silvermont writeup that Silvermont as an architecture is just superior to ARM, and would beat ARM even if Intel didn’t have a process advantage.”

            He said:
            “Silvermont microarchitecture might be even more efficient than some ARM cores when normalized for Intel’s advantage in process technology. That seems an unlikely and counter-intuitive outcome, but could be explained by Intel’s larger teams and superb physical design.”

            So, the difference is that he said “some” and “might”; that doesn’t really change the point, wouldn’t you agree?

            • Action_Parsnip
            • 6 years ago

            It does change the point. Your point.

            “[b<]Even if Intel’s projections are modestly optimistic, Silvermont should conclusively dispel the myth that the x86 instruction set is a barrier to power efficient microarchitecture.[/b<] If Intel’s claims hold true, the Silvermont microarchitecture might be even more efficient than some ARM cores when normalized for Intel’s advantage in process technology. That seems an unlikely and counter-intuitive outcome, but could be explained by Intel’s larger teams and superb physical design.”

            He meant power efficiency. He didn’t mean “is just superior”. The paragraph above that, he wrote:

            “The usual caveats apply to any vendor supplied performance projections, as benchmarks are rather sensitive to software tuning and rely on numerous various assumptions about the future and system configurations. [u<]At this point, it would be far wiser to take any such numbers with a large grain of salt until real hardware is available and benchmarked.[/u<] Missing performance or power projections by 10-20% is not uncommon, particularly for a new microarchitecture that may be challenging to model. But the gains claimed for Silvermont over Saltwell are too large to be lost to error bars.”

            It certainly changes your point: “So, the difference is that he said ‘some’ and ‘might’”. You willfully misrepresented what he said in his article to suit your own point.

            • NeelyCam
            • 6 years ago

            [quote<]He meant power efficiency. He didn't mean "is just superior".[/quote<] When I said "superior" I meant power efficiency, which is the main thing that matters when considering cellphones and tablets.

            • KarateBob
            • 6 years ago

            I logged in just to thumbs up Parsnip and thumbs down NeelyCam warped sense of reality

            • Anonymous Coward
            • 6 years ago

            I won’t hold AMD up as an example of flawless chip design. They are under huge pressure and, crucially, the designs they are making are big.

            I expect that the smaller and simpler cores used in power-sensitive applications can be perfected at much lower cost. I do not expect that there is much wiggle room on the basic parameters of these designs. Probably a fairly constrained mix of execution units, a fairly constrained length of pipeline, a fairly constrained size of cache, a fairly constrained number of instructions issued per clock, etc. There simply has to be an optimal way to do this, if the objective is optimal power usage per unit of “mobile device” performance. Intel might implement a clever idea first, but I doubt the smart engineers at the ARM competitors would be far behind.

            So, this line of thought is why I say Intel’s only advantage is process technology.

            • Anonymous Coward
            • 6 years ago

            Three down votes and no counter arguments.

            • chuckula
            • 6 years ago

            Made it -2 for you! Although while you are right that in general “smaller & simpler” has a lower development cost, ARM is rapidly moving away from “smaller & simpler” when it comes to the types of cores that it wants to see in smartphones/tablets and micro-servers. From the evidence I have seen over the past 5 years, Intel has done a much much better job of scaling down than ARM has of scaling up.

            • NeelyCam
            • 6 years ago

            Scaling down is easier than scaling up.

        • Beelzebubba9
        • 6 years ago

        I agree with you overall but:

        1. Apple’s A6 was both the fastest and the most power efficient chip at the time of it’s launch (according to Anand’s deep dive) and with their massive capital there’s no reason to sell them short. Not that I think Intel will need to fear them, especially as the process gap grows, but I think they have a good shot at being #2 just because of their sheer size.

        2. nVIdia certainly is developing their own high performance ARM core called Project Denver. We should see the first shipping projects by the same time Airmont hits.

        But yeah, overall I agree that now Intel has begun to stretch their legs and lean hard on their massive process advantage they’ll start to walk away from the ARM camp. Whether this translates into marketshare remains to be seen, but by 2015 I’d wager it will.

        • Action_Parsnip
        • 6 years ago

        What a load of old tosh. You run away with yourself at the end.

        1. “the main reason A5 or A6 are so compelling is the OS”

        No. They were fastest.

        2. “However, they aren’t developing their own cores (unlike Apple/Qualcomm), and their budget is below Intel”s.”

        Their strength isn’t in CPUs. Tiny detail.

        3. “Something about dazzling successful Qualcomm being doomed”

        David Kanter said: “this is merely the starting point of a long battle for design wins in phones, especially given that the software ecosystem is ARM-centric.”

        4. “Their R&D spending is nowhere near that of Intel’s”

        Yet they are stupendously successful. Examine theory again.

        5. “those R&D dollar’s aren’t really that ‘additive’.”

        You don’t know that. You’re guessing. You don’t know what research from outside arm get’s shared back to arm to influence the direction the ip takes in the future for the benefit of those doing the research in the first place.

        6. “So, this is the ARM ecosystem. There are three main parts: 1) Core product, 2) Product development, 3) Process development.”

        And: 4) Arm software ecosystem, which should probably be the first thing on that list, because that’s the most pertinent reason why breaking into the market will be a challenge.

        7. “Finally, and maybe most shockingly”

        Give it a rest.

        8. “And this is why my Intel stock will double in value in the next 18 months”

        You’re a charlatan

          • someuid
          • 6 years ago

          -1?

          No, you guys need to get off the Intel Float Parade truck and re-read Action_Parsnip’s post.

          Intel can make the grandest mobile CPUs in the world. If there’s no in-demand OS to run on it, it’s shelfware. The two biggest mobile OSes – iOS and Android – are ARM ecosystem OSes. Neither of those camps are going to dump ARM and flip over to x86, especially when they’ve put their own R&D into ARM chips.

          Don’t even mention Windows 8 or Windows Blue. Microsoft is entirely lost and flopping all over the place with that crap. I don’t care if you downloaded a free app to bring back your start menu and manage to live on the desktop all day long. Microsoft’s grand transition to The Mobile OS is failing and they refuse to see the writing on the wall.

          The only people who are going to pick this up are the usual PC ecosystem people for small laptops , and then we’re going to see reports about how 7-Zip, video encoding, 3DMark, and other benchmarks return dismal results compared to i3 and i5 chips.

          I don’t hate Intel. They’ve built a wonderful themepark of CPUs but their main partner, Microsoft, is busy playing in a mud puddle by the ticket booth.

          Intel should partner with Ubuntu for the OS and Enterprise stuff and Steam for the apps and games for the consumers. If anything, it would give Microsoft the slap-in-the-face-wake-up they need.

            • chuckula
            • 6 years ago

            [quote<] Intel can make the grandest mobile CPUs in the world. If there's no in-demand OS to run on it, it's shelfware. The two biggest mobile OSes - iOS and Android - are ARM ecosystem OSes. Neither of those camps are going to dump ARM and flip over to x86, especially when they've put their own R&D into ARM chips.[/quote<]

            Android is not an "ARM ecosystem" OS. It is running a freakin' frankensteined Java VM over the Linux kernel, and I have a bridge to sell you if you think that is an "ARM ecosystem." There are already multiple Android products on the market *right now* that run on Atom just fine.

            iOS is also *not* an "ARM ecosystem" OS... it is an [b<]APPLE[/b<] OS. The only ecosystem Apple recognizes is one that it controls. As I correctly stated below, every single iOS application that is sold in the Apple store has been natively compiled to x86 and executed on an x86 CPU!

        • Stonebender
        • 6 years ago

        “And this is why my Intel stock will double in value in the next 18 months”

        Don’t hold your breath. What did Intel’s stock do when they kicked AMDs ass with Core? Nothing. What did the stock do when the company had record quarter after record quarter? Nothing. You’ll be lucky to see the stock hit $30.

        • oldog
        • 6 years ago

        I believe the error of your argument is that the only important companies that make mobile devices are building their own CPUs based on ARM.

        It doesn’t really matter that an Intel processor or process is better. It seems unlikely that Apple or Samsung will use them in their devices. Why would they do so?

        So unless the next Apple is lurking out there somewhere with killer mobile designs and an optimized OS for Atom it would seem to me that Intel will suffer from the same downward drag as Microsoft in mobile.

        Honestly, I believe that Intel’s best hope for the future would be to buy AMDs graphics division and kill all other manufactures on the desktop both CPU and GPU. It would leave them in an enviable long term position when users realize that the best computing device is still a “desktop”.

        Heck they can still lease their fabs to smaller ARM companies and make a killing.

      • beck2448
      • 6 years ago

      Competitors who have underestimated Intel in the past are either out of business or on life support.

    • ronch
    • 6 years ago

    I wonder just how Jaguar will stack up against this. And honestly I don’t see how Intel/AMD can really take on the ARM bandwagon with their respective offerings. I mean, performance and power look good but the industry has become aware of what sort of abusive monopolistic company Intel can be. Of course business is business but having a monopoly isn’t good for OEMs and system builders. Add to that the fact that Android and iOS are pretty much stuck with ARM. Does Intel think all those devs out there are just gonna port all their programs to x86?

      • NeelyCam
      • 6 years ago

      [quote<]I mean, performance and power look good but the industry has become aware of what sort of abusive monopolistic company Intel can be.[/quote<] If anything, that should make them [i<]more likely[/i<] to steer happily towards new options and away from ARM monopoly

        • ronch
        • 6 years ago

        Yes, but making x86 tablets would encourage devs to support the platform and move away from ARM (assuming Intel will blow ARM’s technology out of the water), thereby repeating what Intel did with x86 PCs. The industry knows Intel wants to keep the industry all to itself.

      • beck2448
      • 6 years ago

      AMD is not the competition in smartphones and tablets. ARM is.

    • UberGerbil
    • 6 years ago

    Kanter’s in-depth article is up now at [url=http://www.realworldtech.com/silvermont/<]RealWorldTech.com[/url<]

      • NeelyCam
      • 6 years ago

      Awesome! I was hoping Kanter would write up something about this

      • chuckula
      • 6 years ago

      Kanter’s words of wisdom are always greatly appreciated.

        • Action_Parsnip
        • 6 years ago

        The forum comments he makes are extremely illuminating between articles being released. Linus Torvalds also comments there too.

    • tipoo
    • 6 years ago

    This makes me wonder if companies that make in-house SoCs (I guess Apple in specific, since Samsung also sells them to others while Apple just does it for themselves) will ever switch mobile devices to Intel if they just can’t match the performance per watt of future Atom cores.

    • Chrispy_
    • 6 years ago

    Hell, it’s about time.

    Atom has been a joke for almost its entire life, thanks to terrible in-order Bonnell architecture in an OoO world.

      • NeelyCam
      • 6 years ago

      Then the real joke must be on those ARM chips that Anandtech’s review showed got beaten by the terrible 32nm Atom chip.

        • Action_Parsnip
        • 6 years ago

        [url<]http://citavia.blog.de/2013/02/19/isscc-2013-and-next-gen-consoles-15549512/[/url<]

        Atom is a lot larger than anything else it competes against, even normalised for process node. In that respect it is 'terrible'.

          • A_Pickle
          • 6 years ago

          Yeah, but I can still buy an ASUS tablet that runs desktop Windows with one of those “slow, big, power hungry” Atoms and still get competitive battery life and [i<]way[/i<] better performance...

      • Anonymous Coward
      • 6 years ago

      [quote<]thanks to terrible in-order Bonnell architecture in an OoO world.[/quote<] Considering that nothing was compiled for the Atom's particular instruction ordering requirements, and no effort was made to improve performance, you should in fact be impressed how well it did. Bobcat could hardly beat it.

    • Anonymous Coward
    • 6 years ago

    Seems to me that AMD is gonna get hurt in a battle between the big boys.

    Jaguar vs Silvermont will be interesting. Probably the first time AMD and Intel have faced off with designs that are both very similar and intentionally kept modest.

      • chuckula
      • 6 years ago

      [quote<]Jaguar vs Silvermont will be interesting[/quote<] There's minimal overlap between the two. Jaguar is not aimed at smartphones/low power tablets at all. There could be overlap between the lower-end Jaguars and the highest-end Baytrail tablet SoCs with pretty predictable results: Jaguar will win at GPU but have higher power draws in the process.

        • NeelyCam
        • 6 years ago

        Actually, it’s very interesting; Temash is sitting sort of in that small pocket between Bay Trail and Haswell in terms of power consumption. Maybe it’s the sweet spot..? Too bad it doesn’t use 22nm trigate

        [quote<]Jaguar will win at GPU but have higher power draws in the process.[/quote<] Jaguar doesn't have a GPU [/nitpick]

          • chuckula
          • 6 years ago

          [quote<]Jaguar doesn't have a GPU[/quote<] I knew it! AMD has perfected the technology to put the 7990 in a tablet! (yes, yes, I know what you meant with your nitpick)

        • Anonymous Coward
        • 6 years ago

        [quote<] There could be overlap between the lower-end Jaguars and the highest-end Baytrail tablet SoCs with pretty predictable results: Jaguar will win at GPU but have higher power draws in the process.[/quote<] I expect Intel to have a product that competes with AMD wherever it is that Jaguar is launched, and I'll be looking forward to those benchmarks, especially the CPU ones. I'm thinking quad cores at about 2ghz is a likely place they'll compete. No doubt Intel wins the wattage contest.

      • moog
      • 6 years ago

      AMD and Nvidia are going to suffer. (Tegra sales were already non-existent)

        • UberGerbil
        • 6 years ago

        Really? So none of those Nexus 7 sales actually happened?

        (Yes, they may be selling out of inventory now pending a non-Tegra refresh, so NVidia may be scrambling for new business, but Tegra had a pretty good run in just that one device)

        • Prion
        • 6 years ago

        But what about all of those OUYAs? 😛

      • ronch
      • 6 years ago

      It’s also interesting to note how AMD is doing the ‘ambidextrous’ approach here, rolling out Jaguar soon as well as planning to get ARM chips on the market. Intel has no choice but stick with x86 because it’s their pride and joy, while AMD is really just tagging along for the x86 ride and is somewhat seen as less tied to x86. Also interesting to note that AMD is doing two ISAs with what few resources it has while almighty Intel is concentrating on just one ISA.

        • NeelyCam
        • 6 years ago

        [quote<]Also interesting to note that AMD is doing two ISAs with what few resources it has while almighty Intel is concentrating on just one ISA.[/quote<] Maybe AMD decided that they don't have the resources to come up with a low-power architecture that's competitive with Intel/ARM in the same TDP space, and figured it's easier and cheaper to just buy it from ARM.

          • Anonymous Coward
          • 6 years ago

          It seems like it would be impossible for AMD to make a profitable world-class ARM core at the same time as they try to keep up with Intel.

            • ronch
            • 6 years ago

            They’re having enough trouble keeping up with Intel without adding the burden of developing a world-class ARM core.

          • ronch
          • 6 years ago

          They need to reduce their dependence on x86 fast, so I can imagine they evaluated their options when they got on the ARM list a few years ago. Designing a new ARM core from scratch involves a lot of risk, especially with their diminishing bank balance, their lack of experience developing ARM cores, the possible lack of manpower to design and release an ARM core on time given their reduced workforce, and the uncertainty over whether their ARM product would succeed. I’d imagine it would be cheaper and far less risky to just license ARM cores from ARM Holdings, because the cost of ARM Holdings’ R&D can be spread out across more chips produced by more licensees, instead of developing an ARM core yourself and shouldering all the costs and risks that come with such a decision. Also, if AMD’s ARM-based products don’t succeed (knock on wood!), they can probably just reduce production (by reducing orders from TSMC or some other foundry) and cut their losses (if this is part of their agreement) instead of writing off the whole cost of developing the core.

            • Anonymous Coward
            • 6 years ago

            [quote<]They need to reduce their dependence on x86 fast[/quote<] It would be pretty funny if AMD manages to transform into an ARM-centric business just as Intel manages to torpedo ARM. Not that I believe Intel will meet with complete success in their endeavor, but anyway there is nowhere to hide.

    • gamerk2
    • 6 years ago

    Sounds good on paper; we’ll see going forward.

    • drfish
    • 6 years ago

    Anyone else picturing AMD as Captain Hook and Intel as Tick-Tock the crocodile…?

      • MadManOriginal
      • 6 years ago

      I wasn’t, but I am now. Thanks.

      • maxxcool
      • 6 years ago

      add a peg-leg to the hook… mebee … 😉 and make Smee ARM, with a musket full of nails and buckshot…

      • drfish
      • 6 years ago

      Wow, I must be even less funny than I thought…

    • maxxcool
    • 6 years ago

    Finally, a little competition (from ARM) and Intel gets off its ass… keep it up, Qcom…

    • Flying Fox
    • 6 years ago

    Looking forward to the Silvermont-based Atom in the ThinkPad Tablet 2 form factor. Maybe we will have a decent Win8 (or 8.1?) tablet after all.

    • Star Brood
    • 6 years ago

    This is amazing. Maybe one day we can see Core 2 performance out of a 1W CPU.

      • tipoo
      • 6 years ago

      2015, by Intel’s own count, IIRC? Not 1W specifically, but low enough power for phones.

        • Star Brood
        • 6 years ago

        And then I’ll buy an x86 tablet.

    • smilingcrow
    • 6 years ago

    This is what Intel need to have a chance to stop the ARMslaught. A decent convertible tablet really appeals to me as I don’t generally require much performance in portable devices. A convertible obviously because then I only require one device.
    If Wintel along with the OEMs can pull this off they have a good shot at stealing market share from the premium tablet sector at least.

    I compared the Cinebench R10 32-bit figures for the Atom Z2760 versus an i3-3110M and arrived at this in terms of equivalency to an Ivy Bridge 2C4T mobile chip:

    Z2760 (2C4T) 550MHz
    Silvermont (4C4T) 1.4GHz (assuming the 2.5x gain is real at the same TDP)
    14nm (4C4T) 1.8GHz (assuming a 30% gain over Silvermont at the same TDP)

    [url<]http://www.notebookcheck.net/Review-Dell-Latitude-10-Tablet.88933.0.html[/url<]

    Not sure if the fact that it was a 32-bit test will skew the results at 64-bit? There are Ultrabooks with 1.8GHz IB i3 chips, so to have that in a convertible fanless tablet in 18 months sounds great.

    Looks like Intel are on track, so when can we expect to get an idea of what AMD have to offer this year for tablets? These have the potential to kill Intel’s margins, though, and more significantly their profits.
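
    For what it’s worth, the arithmetic behind those equivalency figures is just straight scaling from the 550MHz starting point. Here’s a minimal sketch of it; the 550MHz baseline is my own Cinebench-derived estimate, the 2.5x and 30% multipliers are the claimed gains, and linear scaling with clock speed is an assumption, not a measurement:

    [code<]
    /* Back-of-the-envelope scaling only -- assumptions, not measurements. */
    #include <stdio.h>

    int main(void) {
        /* Z2760 (2C4T) scores roughly like an Ivy Bridge 2C4T mobile chip at 550MHz
           (estimated from Cinebench R10 32-bit results vs. the i3-3110M). */
        double z2760_ivb_equiv_ghz = 0.55;

        /* Assume the claimed ~2.5x multi-threaded gain at the same TDP. */
        double silvermont_equiv_ghz = z2760_ivb_equiv_ghz * 2.5;

        /* Assume the 14nm shrink adds another ~30% on top of Silvermont. */
        double airmont_equiv_ghz = silvermont_equiv_ghz * 1.3;

        printf("Silvermont ~ %.2f GHz IVB-equivalent\n", silvermont_equiv_ghz); /* ~1.4 */
        printf("14nm       ~ %.2f GHz IVB-equivalent\n", airmont_equiv_ghz);    /* ~1.8 */
        return 0;
    }
    [/code<]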

    • Arclight
    • 6 years ago

    TBD stands for throttled Bulldozer. Am i rite guise? Anyone? Please respond.

      • chuckula
      • 6 years ago

      No No No, it stands for [b<]T[/b<]iny Bulldozer. As I explained above, Intel stole the designs for Bulldozer and then shrank it to run as an Atom.

      • ronch
      • 6 years ago

      After the AMD Bulldozer fiasco which left folks scratching their heads as to how AMD could even think of code-naming their new baby ‘Bulldozer’, Intel decided to call their next Atom…

      [b<][u<]THE[/u<][/b<] BULLDOZER

    • NeelyCam
    • 6 years ago

    My god, those performance numbers are even better than I expected…

    So, under load the 22nm Atom kills all competition in power efficiency. What remains to be seen is how good the idle power and power management are – if they are anywhere near as good as the performance, Intel has a tablet/cellphone chip winner on its hands.

      • MadManOriginal
      • 6 years ago

      Tablet, yes, cellphone…maybe. Still needs an integrated modem to make a top-tier cellphone SoC.

      *It does say something about analog circuits in the article. I get a little confused about the difference between a modem and ‘baseband’. modem is the ‘network interface’ so to speak, and baseband is the radio? Analog circuits would be the baseband then?

        • NeelyCam
        • 6 years ago

        [quote<]Still needs an integrated modem to make a top-tier cellphone SoC.[/quote<] It helps, yes, but isn't absolutely necessary, as long as the solution performs well and is low power.

        [quote<]I get a little confused about the difference between a modem and 'baseband'. modem is the 'network interface' so to speak, and baseband is the radio? Analog circuits would be the baseband then?[/quote<] Generally (or, rather, 'traditionally'), a receiver "baseband" refers to the circuitry that handles the signal after it has been translated down from radio frequency (e.g., 900MHz) to 'baseband' frequencies (from about DC to a few MHz or so); it can include analog stuff like filters, or an ADC to convert the analog signal into a digital one that can then be operated on with a digital demodulator.

        In practice, the lines can get a bit blurry. For instance, in a "software radio", an ADC might sample the RF signal directly, so all the "baseband" stuff happens in the digital domain. Alternatively, old-school radios had a lot of modulation/demodulation functionality done with analog circuits. At some point, the signal has to be translated from analog to digital or digital to analog, and where that translation happens is a design choice.

        As far as I know, no practical cellphone product has integrated the RF section into the same chip as the application processor, so all these systems are still at least two-chip solutions. Qualcomm is famous for integrating the modem with the processor, and maybe even the baseband, but RF is on a separate chip, while Infineon/Intel has RF, baseband, and modem integrated into one chip, and the application processor is on a different chip. (Somebody please correct me if I'm wrong.)
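
        To make the RF-to-baseband step a little more concrete, here’s a toy sketch of a mix-then-filter downconversion. The frequencies are made up and scaled far below real cellular bands, and nothing in it models any actual cellphone chip:

        [code<]
        /* Toy "RF -> baseband" downconversion: mix with a local oscillator, then low-pass filter. */
        #include <math.h>
        #include <stdio.h>

        #define PI    3.14159265358979323846
        #define N     4000        /* 1 ms of signal at fs = 4 MHz */
        #define FS    4.0e6       /* sample rate */
        #define F_RF  900.0e3     /* pretend "RF" carrier (scaled way down from ~900 MHz) */
        #define F_MSG 5.0e3       /* 5 kHz message tone: the actual baseband content */
        #define TAPS  101         /* moving-average filter length */

        int main(void) {
            static double mixed[N], baseband[N];

            for (int n = 0; n < N; n++) {
                double t = n / FS;
                double message = cos(2 * PI * F_MSG * t);
                /* AM-modulated "RF" signal... */
                double rf = (1.0 + 0.5 * message) * cos(2 * PI * F_RF * t);
                /* ...mixed with a local oscillator at the carrier frequency: products
                   land at baseband (difference) and at 2*F_RF (sum). */
                mixed[n] = rf * cos(2 * PI * F_RF * t);
            }

            /* Crude low-pass filter (moving average) removes the 2*F_RF image and
               keeps the slowly varying baseband term. */
            for (int n = 0; n < N; n++) {
                double acc = 0.0;
                int count = 0;
                for (int k = -TAPS / 2; k <= TAPS / 2; k++) {
                    if (n + k >= 0 && n + k < N) { acc += mixed[n + k]; count++; }
                }
                baseband[n] = acc / count;
            }

            /* 'baseband' now tracks the 5 kHz message (plus a DC offset); a real
               receiver would digitize and demodulate it from here. */
            printf("baseband mid-sample: %f\n", baseband[N / 2]);
            return 0;
        }
        [/code<]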

          • NeelyCam
          • 6 years ago

          Hmm… maybe I should add the “Was NeelyCam right… again? YES!” tidbit there, because -1 is nowhere near what I expected to get

        • Stonebender
        • 6 years ago

        Intel has already developed a multi-band LTE modem that will be paired with the cellphone SoC.

        [url<]http://software.intel.com/en-us/blogs/2013/03/07/intel-android-at-mwc-2013-new-4g-intel-mobile-lte-modem-intel-xmm-7160[/url<]

          • NeelyCam
          • 6 years ago

          Is that the one that some third party comparison found to be the best LTE solution out there?

          [url<]http://newsroom.intel.com/community/intel_newsroom/blog/2013/02/25/chipshot-intel-s-lte-chipset-captures-top-honors-in-signal-research-group-s-benchmark-test[/url<]

      • Peldor
      • 6 years ago

      There’s an old Confucian saying that applies here:

      “Everybody’s PowerPoint numbers look good.”

      Edit: I’m not really knocking Intel or you, I just prefer to wait for the actual reviews.

        • chuckula
        • 6 years ago

        Ooh! What did Confucius have to say about Sharepoint server?

          • MadManOriginal
          • 6 years ago

          “Sharepoint? More like FAILpoint! lolol”

          (He was drunk at the time)

        • NeelyCam
        • 6 years ago

        Intel has a pretty long history of delivering what they promise, unlike some companies whose names have three letters and start with “A”

          • Peldor
          • 6 years ago

          That’s true, but I’d still make the assumption that the relevant competition by the time these Atom chips actually make it onto retail shelves is somewhat better than the best shown here. It looks like Intel will have a serious contender, but nobody else is standing still either.

            • NeelyCam
            • 6 years ago

            According to the footnotes, those comparisons were done against a prediction of what the competition will have available this year and the next. At least that’s what I’ve seen others with better eyesight/slides state…

            As we’ve seen from the A15, it’s very difficult to improve performance without sacrificing efficiency. Intel achieved that partly because the previous architecture wasn’t well tuned for cell phone power envelopes (low-hanging fruit ARM doesn’t have), but mainly because of the process shrink and – particularly – trigate (something ARM licensees won’t have anytime soon).

            I made a prediction maybe a year ago, saying that Intel will dominate everyone in power efficiency until 2015 (when others get FinFETs), and I am still confident that will be the case.

    • chuckula
    • 6 years ago

    Peeling back the onion layers of this design: Silvermont puts two cores in a module with shared L2 cache… hrmm… modules, shared L2… SILVERMONT IS A CARBON-COPY KNOCKOFF OF BULLDOZER! I knew Intel would have to steal from AMD to improve its chips!

      • smilingcrow
      • 6 years ago

      Intel are using Spanish onions so their modules don’t make you cry when you see their performance.

      • Antimatter
      • 6 years ago

      More likely a carbon copy of Conroe.

        • chuckula
        • 6 years ago

        Exactly! Intel stole Silvermont FROM ITSELF!

          • smilingcrow
          • 6 years ago

          That sounds like a Philip K. Dick novel, and as Intel’s old chief used to say:

          “Only the paranoid survive.”

      • ronch
      • 6 years ago

      Well, AMD did copy Intel from the 8086 to the 486. And as if that wasn’t enough, AMD has the nerve to continue copying Intel’s latest instructions such as MMX, SSE, AES-NI, and AVX!

      So, it’s PAYBACK TIME!!!

    • bjm
    • 6 years ago

    “If Atom bites into Apple’s marketshare, it will be banished forever from the walled garden.”

    Jobs, 19:84.

    • chuckula
    • 6 years ago

    Is Intel the fastest company at adapting to new markets? Of course not.
    Is Intel dangerous once it gets pointed in the right direction and starts to build momentum? Yes.
    Intel isn’t the flashiest outfit in existence, but once it decides that something is important it can be relentless in pursuing its goals.

      • ronch
      • 6 years ago

      Someone once said 50% of the world’s Ph.D.’s work for NASA. The other 50% works for Intel.

        • bjm
        • 6 years ago

        …and the remaining 33% work in stock market forecasting.

          • ronch
          • 6 years ago

          Er… for a total of 133%?

            • chuckula
            • 6 years ago

            Exactly! What happened to the other 24.3% I wonder?

            • ronch
            • 6 years ago

            Well, don’t you know? [url=http://techreport.com/image.x/2011_11_15_Execution/comic-20111114-big.png<]Here, of course.[/url<]

            • NeelyCam
            • 6 years ago

            That was funny

            • bjm
            • 6 years ago

            Impressive valuation given the present market conditions, I know. It had surprised me, too.

        • Peldor
        • 6 years ago

        That’s quite impressively stupid.

    • smilingcrow
    • 6 years ago

    Up to 8 (real) cores and Haswell is still stuck on up to 4 + HT!
    I suppose the 8 core chips are for servers?

      • CampinCarl
      • 6 years ago

      Go buy a Xeon Phi and have 60 real cores. With Linux. And MPI.

        • smilingcrow
        • 6 years ago

        I’d rather be in Philadelphia!

          • CuttinHobo
          • 6 years ago

          I hear it’s always sunny there!

          • NeelyCam
          • 6 years ago

          Once I accidentally found myself in the “bad side” of Philadelphia. The way all the people on their porches were staring at me, I was 95% confident I would die if the car stopped moving

            • smilingcrow
            • 6 years ago

            Do you mean the side where they use AMD processors?
            They’d have stripped your car down and used the engine block as a heat-sink on a Bulldozer running at 5GHz.

            • chuckula
            • 6 years ago

            The bad side of Philadelphia… oh wait, that would be: ALL OF IT!

            • willmore
            • 6 years ago

            I think he meant to say “the worse side”.

    • WaltC
    • 6 years ago

    Get ready to wave bye-bye to ARM…;) “Nice knowing you, fella’. Too bad you couldn’t stay longer!”

    Intel will keep relentlessly plugging away here until it gets it right, even if this architecture isn’t quite there yet.

      • Ushio01
      • 6 years ago

      Yeah, no. Qualcomm design their own ARM chips and they’re as big as Intel in market cap and net income, and Apple also design their own ARM chips and they’re even bigger.

      The question I want answered is how much Intel will charge for their SoCs, as the Qualcomm SoC in the Samsung S4 is only $20 according to iSuppli, and I don’t see Intel being that cheap.

        • willyolio
        • 6 years ago

        you clearly don’t know how humongous Intel is. if they want to brute-force their way into a lucrative market, they can and will. ARM could go the way of AMD.

        • NeelyCam
        • 6 years ago

        [quote<]I don't see Intel being that cheap.[/quote<] Why not? They could surely make the chip at lower cost than Qualcomm. And it's not like Qualcomm isn't making any money with those chips

        • Farting Bob
        • 6 years ago

        Intel is much bigger than Qualcomm in chip production. They have a big lead in manufacturing, and their design teams are widely regarded as the best in the industry. If Intel want to enter a market, not much can stop them.

    • chuckula
    • 6 years ago

    Looks extremely interestly although it won’t help Neelycam for this simple reason: As you note on the last page of the article, Apple & Samsung each own huge chunks of the market. Even if their chips aren’t as technologically advanced, the control and vertical integration that they get from having their own ARM SoC is too great for Intel to overcome.

    Now, smaller players in the market and comparative newcomers like Lenovo may have more flexibility in choosing Intel over an ARM chip that may be made by one of their competitors….

      • End User
      • 6 years ago

      [quote<]Even if their chips aren't as technologically advanced, the control and vertical integration that they get from having their own ARM SoC is too great for Intel to overcome.[/quote<] Bingo.

      Get ready to wave bye-bye to Intel...;) "Nice knowing you, fella'. Too bad you couldn't stay longer!"

        • chuckula
        • 6 years ago

        Well… remember that there’s no reason that Apple/Samsung have to remain on top… There are plenty of other manufacturers who can churn out products, and there is high turnover and relatively little loyalty in the world of mobile devices.

          • Anonymous Coward
          • 6 years ago

          [quote<]Well... remember that there's no reason that Apple/Samsung have to remain on top...[/quote<] There is no loyalty, but there is blind trend-following. Also, I'm not sure many people would really care if a competing product got somewhat better battery life. I've just received a new work phone (a Lumia 920) and was surprised to discover it needs charging daily, whether I use it or not.

      • NeelyCam
      • 6 years ago

      [quote<]Even if their chips aren't as technologically advanced, the control and vertical integration that they get from having their own ARM SoC is too great for Intel to overcome.[/quote<] I disagree. If their chips are too far behind Intel's, they are forced to switch. Apple might be OK for a year or two because their iOS is smooth enough to hide the slowness of the chip, but Samsung may have to switch by mid 2014.

        • chuckula
        • 6 years ago

        I’m actually betting on other players gaining big marketshare in the short to mid-term instead of Apple/Samsung having an immediate conversion. For example, Lenovo has shown the ability to grow rapidly in the PC market and they already are putting out some pretty nice Atom mobile phones.

        Of the big two, I definitely see Samsung being much more flexible in trying out an Intel chip since Samsung would still get the money for the device and since it isn’t as rigid as Apple (Samsung is already using Qualcomm chips in many phones anyway).

          • NeelyCam
          • 6 years ago

          Lenovo maybe, or HTC. Sure, there is a chance that Apple will lose market share to hard-hitting competition. Kings come and go.. remember Nokia?

            • Anonymous Coward
            • 6 years ago

            [quote<]remember Nokia?[/quote<] I think it's hard to compare the situation of Apple or Samsung to what Nokia experienced. We're talking about a processor here, and it's not even clear if it will have a meaningful edge.

            • NeelyCam
            • 6 years ago

            [quote<]I think its hard to compare the situation of Apple or Samsung to what Nokia experienced.[/quote<] In a way, I think we're already seeing it with Apple... They destroyed Nokia with iPhones, and became the king of the new smartphone market. Now, Apple is slowly getting killed by Samsung. In the future, it's completely plausible that low-cost competition from China/Taiwan like HTC beats Samsung, especially if they have something Samsung doesn't, like Intel's new power-efficient superchip that probably also comes with that superfast digital image engine for taking pictures. HTC's cameras are already well liked - it would make sense for them to drive the brand further with Intel's chip.

      • PixelArmy
      • 6 years ago

      I don’t think Apple is too invested in their own SoCs… They’ve only really designed the latest A6/A6X, and even then only the CPU side. IMO, it’s all about fabs. They go to Samsung for that, and I can only assume they want out of that arrangement; if so, who else has that manufacturing capability? Unless they want to build their own (they certainly have the $$$)…

      So long as Intel meets their requirements and they can put some resources into porting iOS and supporting the running of old ARM apps on x86 (emulation/VM) while transitioning, it comes down to whether Apple would deem it worth those trade-offs (obviously).

        • chuckula
        • 6 years ago

        [quote<]So long as Intel meets their requirements and they can put some resources into porting iOS and supporting the running of old ARM apps on x86 (emulation/VM) while transitioning, it comes down to whether Apple would deem it worth those trade-offs (obviously).[/quote<] Interesting factoid that not many people realize: every single iOS application that you have ever run on your phone has, at some point, been executed natively as an x86 application.

        The iOS development environment (which obviously runs on x86 processors) implements an iOS *simulator* to test & debug iOS apps. Note the careful use of the word "simulator" there. There's no virtual ARM CPU running the iOS app in a VM that simulates ARM opcodes. Instead, the iOS app is compiled down to x86 for testing, then cross-compiled to ARM for loading on the phone.

        So basically, yes, Apple could certainly port over to x86 with minimal technical fuss.
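
        A tiny illustration of that point: the same source builds natively for whichever architecture the compiler is targeting, x86 for the simulator or ARM for the device. The preprocessor macros below are standard clang/gcc ones; the example itself is just a sketch, not anything from Apple’s toolchain:

        [code<]
        /* One source file, two native builds -- which branch gets compiled depends
           on the architecture the compiler is targeting. */
        #include <stdio.h>

        int main(void) {
        #if defined(__x86_64__) || defined(__i386__)
            puts("compiled as a native x86 binary (what the simulator runs)");
        #elif defined(__aarch64__) || defined(__arm__)
            puts("compiled as a native ARM binary (what the phone runs)");
        #else
            puts("compiled for some other architecture");
        #endif
            return 0;
        }
        [/code<]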

          • Peldor
          • 6 years ago

          Given your recent posting history, I have no idea whether this is fact or crap.

            • chuckula
            • 6 years ago

            I assure you it is fact and it is very easy to verify BTW. Is it really hard to tell the difference between my trolls and my 100% factual posts? I must be getting better at trolling!

        • BoilerGamer
        • 6 years ago

        Apple is [url=http://www.tomshardware.com/news/TSMC-iPhone-Phase-5-Samsung,22423.html<]moving all of their 2014 iPhone processor production to TSMC's 20nm process[/url<]. So losing Samsung as a fab isn't going to matter to Apple.

          • NeelyCam
          • 6 years ago

          A risky play by Apple.. Didn’t they see AMD struggle with initial 40nm (2009) and 28nm (2011) availability? Trusting TSMC to somehow magically break that trend could really hurt Apple

          • moog
          • 6 years ago

          ARM and TSMC taped out a 16nm chip recently. Reading between the lines, looks like 20nm was a dud.

          22nm and 28nm were also worthless with companies like Nvidia very vocal with their complaints.

          How do you think 16nm is going to turn out for TSMC?

            • NeelyCam
            • 6 years ago

            [quote<]How do you think 16nm is going to turn out for TSMC?[/quote<] Better than 20nm or 28nm, but it'll take a long time to get there.

      • Andrew Lauritzen
      • 6 years ago

      That strategy is risky in the long run, though. The further you let performance/battery life/etc. drift, the more you open up the market for someone to come in and provide a better alternative. Some might have said that Apple’s position and lock-in with iOS was unassailable a few years ago… turned out it wasn’t 😉

      • ronch
      • 6 years ago

      ‘Interestly’ just wouldn’t die, would it? 🙂

    • MadManOriginal
    • 6 years ago

    Darn, I was hoping we’d see the first comprehensive real-world benchmarks today, not just a pre/over-view.

      • nico1982
      • 6 years ago

      Same here. I’m skeptical of any benchmarks involving unnamed competitors and lacking actual numbers. Still, it looks like the long overdue update of an underrated platform is quite solid.

        • smilingcrow
        • 6 years ago

        Best to ignore those and focus on the benchmarks where they compare against their own chips, as that is tangible.

          • nico1982
          • 6 years ago

          Same issue: no actual figures. Is it best case? Average? Of what benchmarks/applications? I’m sure the new Atom will run circles around the old one (it’s half a decade old, after all); it is just that the data they presented did not satisfy my curiosity in the slightest 🙂

            • smilingcrow
            • 6 years ago

            “All of the Intel comparisons report the geometric mean performance advantage over a spectrum of benchmarks. The benchmarks used include SPECint2K, CoreMark, SunSpider, web page load tests in IE/Chrome/Firefox, Linpack, AnTuTu and Quadrant (ugh) among others. The point here isn’t to demonstrate absolute peak performance in one benchmark, but to instead give us a general idea of the sorts of gains we should expect to see from Silvermont/Baytrail tablets vs the competition.”

            The above is taken from Anandtech, where they have deeper coverage of this event.

            Considering that products aren’t expected to be on sale until Q4, I wasn’t even expecting this much info. I think it shows that Intel know they need to be aggressive in getting the word out early that they are serious about mobile.
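
            For anyone unfamiliar with the term, a geometric mean is just the n-th root of the product of the individual ratios. A quick sketch with invented per-benchmark speedups (not Intel’s actual numbers):

            [code<]
            /* Geometric mean of hypothetical per-benchmark speedups -- invented numbers. */
            #include <math.h>
            #include <stdio.h>

            int main(void) {
                double speedups[] = { 2.1, 3.4, 1.8, 2.9, 2.4 };
                int n = sizeof speedups / sizeof speedups[0];

                double log_sum = 0.0;
                for (int i = 0; i < n; i++)
                    log_sum += log(speedups[i]);   /* sum of logs == log of the product */

                printf("geometric mean advantage: %.2fx\n", exp(log_sum / n));
                return 0;
            }
            [/code<]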
