Intel turns off its tick-tock metronome

Those who follow the CPU industry will be familiar with Intel's long-running "tick-tock" product development strategy. The chip maker alternated between releasing products built on a smaller transistor process (a tick) and chips with a new architecture based on that process node (a tock). That strategy served the company well for many a year, but Intel is now turning off that proverbial metronome. In its latest 10-K filing, the chip giant said the clockwork-like cycle is coming to a close. In the company's own words:

We expect to lengthen the amount of time we will utilize our 14nm and our next generation 10nm process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions.

Intel is now moving to a three-step R&D cycle: process, architecture, and optimization. Generally speaking, shrinking transistor size creates challenges in the areas of heat output, power delivery, and electromigration. Intel plans to hold on to the 14-nm process node a little longer for the release of Kaby Lake, an "optimization" that the company describes as "[having] key performance enhancements as compared to our 6th generation Intel Core family."
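
To make the shift concrete, here's a quick illustrative sketch (ours, not anything from Intel's filing) of how the repeating release pattern changes under the new cadence; the step labels are just placeholders:

```python
# Illustrative sketch of the cadence change; step labels are placeholders,
# not taken from Intel's filing.
from itertools import cycle, islice

def cadence(steps, n):
    """Return the first n releases under a repeating step pattern."""
    return list(islice(cycle(steps), n))

tick_tock = cadence(["process", "architecture"], 6)
pao = cadence(["process", "architecture", "optimization"], 6)

# Under tick-tock, every other release rides a new node; under the new
# scheme, only every third release does.
print(tick_tock)
print(pao)
```

Either way, note that Intel says the yearly market cadence of product introductions stays; what stretches is how often the node underneath changes.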

The company still wants to keep Moore's Law alive in an amended form, though. When discussing its silicon manufacturing acumen in the K-10 filing, the firm had this to say:

We continue executing to Moore's Law by enabling new devices with higher functionality and complexity while controlling power, cost, and size. In keeping with Moore's Law, we drive a regular and predictable upgrade cycle—introducing the next generation of silicon process technology approximately every two to three years.

The filing reveals that Intel spent a staggering $12.1 billion on research and development during 2015 to keep the dream alive. That's up from $11.5 billion in 2014 and $10.6 billion in 2013.

Comments closed
    • BIF
    • 4 years ago

    I can’t wait for the first Intel/Rolex ir7 processors.

    • Coyote_ar
    • 4 years ago

    It's been a long time since I even cared what's up with the latest CPU. I got an old Ivy Bridge, and it's still good enough to cope with anything I throw at it. It's the GPU that's the limiting factor.

    I just wish Intel would enter the discrete GPU business in full force; that would be interesting. But right now … at least on desktop CPUs … it's really boring.

    • FireGryphon
    • 4 years ago

    Is it 10-K or K-10? You have it written both ways.

      • lordcheeto
      • 4 years ago

      10-K

    • CScottG
    • 4 years ago

    Yup. Broadwell was a financial “wake-up call”.

    14 nm for a while for a broader range product grouping.

    10 nm for most mobile + high-density.

    7 nm for a broader range again, but silicone’s last “wafer” (or at least single wafer rather than a stacked vertical design).

    I also suspect during this time that greater emphasis will be placed on new tech to improve thermal performance for Intel to offer higher clock-speeds to get more “headroom” out of their product range to enhance their new architecture. It should also be in prep. for a vertical design.

    At some point I’ll also expect the architecture to effectively inverse itself, from a synthetic multi-core to a synthetic single/dual core design.

    • BitBlaster
    • 4 years ago

    Tick. Tock. $Dough$

    • Legend
    • 4 years ago

    “In keeping with Moore’s Law…..”

    Trick, Talk, Tough.

    • WhatMeWorry
    • 4 years ago

    Here is the long term R&D cycle:

    PROCESS > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > ARCHITECTURE > OPTIMIZATION > …

    At least there is 3D XPoint (Optane) to fall back on.

    • wingless
    • 4 years ago

    They have to wait for AMD to come out with some more innovative ideas so they can improve upon them and stay #1 for the next decade again. AMD, honestly, has been slacking in that department.

      • reever
      • 4 years ago

      This article could come out every year for the next decade and people like you would still believe the laws of physics mean nothing to making integrated circuits. The age of regular die shrinks is ending, and may be over already.

      • ronch
      • 4 years ago

      I’m just grateful AMD is still even around. With the increasing difficulty of producing better performing processors coupled with having to compete with a far larger and more resourceful competitor who themselves are straining under the laws of physics and practically out of ideas to increase IPC, it’s amazing how AMD can even come out with a bleeding edge core called Zen and how TSMC and Samsung could even come close to Intel’s nodes for AMD.

        • jihadjoe
        • 4 years ago

        TSMC (and other dedicated fabs like Samsung and Glofo) have the resources for R&D because they’re effectively being funded by their customers. AMD, Nvidia, Qualcomm, Apple, Mediatek… Every single one of these companies jointly contribute to TSMC’s R&D funds.

        IMO it’s far more amazing that Intel is able to sustain both chip making and fabrication on its own, and still beat the combined resources of everyone else on both ends.

        Edit: Let’s add some concrete figures.

        In 2014, TSMC made $23.4B in revenue, $11.6B in profits, and spent $8.78B in "Production Facilities, R&D and Production Equipment". That's almost as much as Intel, and it's all going into chip fabrication. ([url=http://www.tsmc.com/download/ir/annualReports/2014/english/e_6_2.html]sauce[/url])

      • Klimax
      • 4 years ago

      I’d say you should learn more about history of CPUs. There are not many (useful) technologies AMD got first. (Also depends on how narrow view one takes – x86 only or wider) And lastly, none of those they got early did them much good.

        • jihadjoe
        • 4 years ago

        Aside from x64 most of AMD’s greatest hits actually come from cannibalizing from DEC Alpha’s corpse.

          • Klimax
          • 4 years ago

          Not only Alpha. Their most notable chips in Pentium era were acquisition too.

    • djayjp
    • 4 years ago

    “…introducing the next generation of silicon process technology approximately every two to three years.”

    3 years is not Moore’s law but something else entirely….
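
    For scale (back-of-the-envelope arithmetic, not from the filing): doubling density every two years versus every three implies noticeably different annual growth rates:

```python
# Annualized density growth implied by doubling every n years.
def annual_growth(years_to_double):
    return 2 ** (1 / years_to_double) - 1

print(f"2-year doubling: {annual_growth(2):.0%} per year")  # ~41%
print(f"3-year doubling: {annual_growth(3):.0%} per year")  # ~26%
```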

    • ronch
    • 4 years ago

    If you thought being stuck at 28nm was bad….

      • Voldenuit
      • 4 years ago

      [quote]If you thought being stuck at 28nm was bad....[/quote]

      GeForce 680 (28 nm) vs 980 Ti (28 nm)
      [url]http://www.anandtech.com/bench/product/1494?vs=1496[/url]

      Radeon 7970 (28 nm) vs Fury X (28 nm)
      [url]http://www.anandtech.com/bench/product/1495?vs=1513[/url]

      So it hasn't been all bad. Certainly bigger gains than Sandy Bridge to Skylake.

      EDIT: Sandy Bridge (2500K) vs Skylake (6600K)
      [url]http://www.anandtech.com/bench/product/288?vs=1544[/url]

        • chuckula
        • 4 years ago

        If we look at IGP benchmarks between the 2500K and the 6600K the numbers might not look so bad for Intel…

        • Firestarter
        • 4 years ago

        With massively larger dies AND significantly higher power consumption, you’d expect much better performance. The GTX 980 Ti is almost exactly twice as large as the GTX 680!

        • synthtel2
        • 4 years ago

        Where would we go from here if 14/16nm weren’t on the horizon, though?

        The 7970 was only 352 mm[super<]2[/super<], and AMD has done better since then by going to 596 mm[super<]2[/super<] and improving power consumption (a combination of improvements in AMD's skill with 28nm and power management tweaks, I expect). Mostly, that's dependent on 28nm being mature. You don't make 596 mm[super<]2[/super<] chips at the beginning of a process cycle unless you've got some seriously high-margin stuff lined up for them, because they're not going to yield well without a whole lot of help from luck. You certainly don't try to push the density too hard with the first chip on a new process, for the same reason [i<]and[/i<] because it's likely to wreck your power efficiency. Nvidia went all the way from 294 to 601 mm[super<]2[/super<] in that time, and had to drop features and change things around enough that their chips from the beginning of the 28nm era perform sometimes less than 2/3rds as well as they did in their prime. Large parts of their gains were architecture hacks (because we gotta still be showing improvement, right? not that it worked out badly for them or anything) and again, maturity of the 28nm process. At 1000 MHz and 250W, Tahiti ran 2048 SPs, Hawaii ran 2816, and the Nano can run 4096 SPs at that clock in only 175W. Hawaii increases transistors/mm[super<]2[/super<] over Tahiti by 15.5%, and Fiji increases it another 5.5%. Maxwell is about 6.9% denser than Kepler (avg of G???4/G???0), despite the huge performance/watt delta between them (which usually likes low-density transistor layout). 2015's 28nm is pretty impressive compared to 2012's version of it, one way or another. How many spins do you think they can keep getting major gains out of a process? Mind that 600 mm[super<]2[/super<] is basically the limit - the fabs just can't make chips much bigger than that, period, so there's no ability to keep increasing parallelism to make faster chips. 
It's just an opinion, but I think we're really up against the limits of what 28nm can do by now.
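
        As a quick check on the density deltas quoted above (using commonly cited transistor counts -- Tahiti ~4.31B, Hawaii ~6.2B, Fiji ~8.9B -- which are public spec-sheet figures assumed here, not taken from this thread):

```python
# Sanity-check the transistors/mm^2 deltas quoted above. Transistor counts
# are commonly cited public figures, assumed here rather than taken from
# the comment itself.
chips = {
    "Tahiti": (4.31e9, 352),  # (transistor count, die size in mm^2)
    "Hawaii": (6.20e9, 438),
    "Fiji":   (8.90e9, 596),
}

density = {name: t / area for name, (t, area) in chips.items()}

hawaii_gain = density["Hawaii"] / density["Tahiti"] - 1  # ~15.6%
fiji_gain = density["Fiji"] / density["Hawaii"] - 1      # ~5.5%
print(f"Hawaii over Tahiti: {hawaii_gain:.1%}")
print(f"Fiji over Hawaii:   {fiji_gain:.1%}")
```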

        • ronch
        • 4 years ago

        Yes, but you gotta admit it would've been nice if 20nm and smaller nodes came out earlier, right? Wouldn't it be nice if the 980 Ti were built on 20nm or 16nm?

    • Anonymous Hamster
    • 4 years ago

    and turned on its tick-tock-tuck metronome.

    Really not too surprising – given the increasing time and expense of developing new silicon process technology, it makes more sense to continue and refine the architecture for any given technology node.

      • Wirko
      • 4 years ago

      They’d rather “stop the metronome” than let us common people invent weird names for the thing we hear after a tock.

      Toinnnng!

    • willmore
    • 4 years ago

    How is this going to work? With tick/tock, you start with the architecture from the previous generation but on a new process node. Then you introduce a new architecture on that node. When you go to the next tick, you keep that architecture and just change the process.
    1122334455 <=process
    ABBCCDDEEF <=architecture

    With this new scheme, you have an extra ‘optimize’ step where the new architecture introduced in the process is optimized *for that process*. Those optimizations don’t carry over to the next process, so is that work effectively thrown out for the next process cycle?
    111222333
    abBbcCcdDd

    Where lower case is the new architecture and the capital is the optimization of it.

    Or by ‘optimize’ are they referring to the architecture? I’m just trying to see parallels to what AMD did with their APUs, where they went to more optimized designs for many of the important functional units.

    • ronch
    • 4 years ago

    So the next time you buy a new set of parts, make sure you pick the best, most durable, most long-lasting motherboard you can buy because you’ll be using whatever CPU you buy next for a long, long time. I’m especially wary of boards that crap out after just 1 to 2 years.

    As someone who’s like most of you here who feels excited about buying a new set of parts to put together, it’s sad how the interval between new rigs has been growing longer and longer over the past decade. On the bright side, though, at least we get to use what we spent a lot of money on for longer, the world [u]may[/u] somehow minimize electronic waste dumped in countries like Ghana because people won’t throw out perfectly usable machines just because they have something new, and of course, lower expenses.

    • Anovoca
    • 4 years ago

    The second the tempo of releases switches to a nice waltz they drop the whole musical metaphor.

      • morphine
      • 4 years ago

      Intel is now doing triplets on the hi-hat.

    • tanker27
    • 4 years ago

    I haven’t followed hardware tech closely for quite some time. But wouldn’t this mean longer lifespans for mobo chipsets? To me, that’s a win.

      • ronch
      • 4 years ago

      No. CPUs may move forward more slowly, but if new connectivity standards come out, it would be nice to have newer chipsets that support them. Of course, mobo makers can simply put in auxiliary chips like those separate USB 3.0 controllers, and people can plug in expansion cards that add functionality, but as I’ve said, it would be nice if updated chipsets still came out, as they’re deemed to be more elegant.

    • blastdoor
    • 4 years ago

    Ironic and perhaps appropriate that Moore’s Law is ending at the same time as Andy Grove’s unfortunate passing. Would it be fair to say that Grove is the one primarily responsible for making Moore’s prediction come true?

    Perhaps if the “only the paranoid survive” attitude were still at Intel, we’d still have tick-tock?

    • chuckula
    • 4 years ago

    The fabs were completely wasted, out of EUV and down
    On the forums it’s so frustrating as Krogoth puts you down
    Feel as though nobody cares if I tick or tock
    So I might as well begin to put less action in my roadmap!

    Breaking Moore’s law, breaking Moore’s law
    Breaking Moore’s law, breaking Moore’s law
    Breaking Moore’s law, breaking Moore’s law
    Breaking Moore’s law, breaking Moore’s law

    So much for the tiny future, I can’t even resolve
    I’ve had every schedule broken, there’s anger in my shareholders
    You don’t know what physics is like, you don’t have a clue
    If you did you’d find yourselves fabbing the same chips too!!

    Breaking Moore’s law, breaking Moore’s law
    Breaking Moore’s law, breaking Moore’s law
    Breaking Moore’s law, breaking Moore’s law
    Breaking Moore’s law, breaking Moore’s law

      • kuraegomon
      • 4 years ago

      To be sung to the tune of…?

        • chuckula
        • 4 years ago

        [url]https://www.youtube.com/watch?v=L397TWLwrUU[/url]

      • TwoEars
      • 4 years ago

      This forum post goes to 11.

        • MOSFET
        • 4 years ago

        [quote]This forum post goes to 11.[/quote]

        I want to thumbs up this but you're currently at 11.

          • jihadjoe
          • 4 years ago

          I hereby bestow upon you the thumb-up that I was going to give to TwoEars.

            • TwoEars
            • 4 years ago

            Let’s make them all go to 11 !

      • K-L-Waster
      • 4 years ago

      Waiting for the follow up about process nodes: “You’ve got another Shrink coming”

        • NeelyCam
        • 4 years ago

        Flawless!! +1

        • drfish
        • 4 years ago

        Also awesome! 🙂

        • Srsly_Bro
        • 4 years ago

        Where was Chuck with “turbo lover?”

      • TheMonkeyKing
      • 4 years ago

      Great! And now for…<twin guitar duo of K.K. Downing and Glenn Tipton>

      \m/ ( -_- ) \m/

      • emredjan
      • 4 years ago

      I started singing involuntarily

      • dstrbd
      • 4 years ago

      +1
      This made my day

      • drfish
      • 4 years ago

      Awesome, just awesome. 🙂

      • radializer0
      • 4 years ago

      When I saw this, I had to try to dig out my old defunct TR account … and when that didn’t work, I re-registered afresh just to say “holy Judas Priest batman!!”

      you Sir, have won the internets 🙂

      • anotherengineer
      • 4 years ago

      I think you know who is going to sign chuckula

      [url]https://www.youtube.com/watch?v=qpMvS1Q1sos[/url]

    • bfar
    • 4 years ago

    The problem with Moore’s Law is that people think it was a law.

      • Srsly_Bro
      • 4 years ago

      Lay people hear law and keep reusing it. Shun the non believers.

      • Krogoth
      • 4 years ago

      It was nothing more than an observation of a trend. It has been invalidated for a decade now.

        • ronch
        • 4 years ago

        So it should’ve been called Moore’s Observation?

          • chuckula
          • 4 years ago

          More like: Moore’s practical joke that got way WAY out of hand.

            • ronch
            • 4 years ago

            Well, I wouldn’t exactly call it a joke because it was pretty steady for a while.

        • Stonebender
        • 4 years ago

        Intel’s definition of a process node change is predicated on Moore’s law. 14nm is twice as dense as 22nm, etc. So, no it hasn’t been invalid for a decade.

          • Krogoth
          • 4 years ago

          Just one of many misconceptions of “Moore’s observation”.

          It has little to do with IC and transistor density and clockspeeds.

            • Stonebender
            • 4 years ago

            [url]https://en.wikipedia.org/wiki/Transistor_count#/media/File:Transistor_Count_and_Moore%27s_Law_-_2011.svg[/url]

            • chuckula
            • 4 years ago

            It has absolutely everything to do with transistor density.
            It has nothing to do with clockspeeds, that’s Dennard scaling, not Moore’s law.

    • Tristan
    • 4 years ago

    With 3 years per process, we have Moore's Law / 2. Below 7 nm, / 3.

      • JustAnEngineer
      • 4 years ago

      Moore’s law has been dead for over a decade.

    • TheJack
    • 4 years ago

    $12000000000 for a 5% improvement? Not impressed.

      • Pwnstar
      • 4 years ago

      Yup. Intel’s performance increases have been really pathetic for the amount of money they blew on it.

      AMD’s haven’t been good either, but at least they wasted less money on it.

        • Stonebender
        • 4 years ago

        “AMD’s haven’t been good either”

        Understatement of the year right here lol

        • Klimax
        • 4 years ago

        As long as you go by absolute performance. Include TDP and you get a different picture.

    • Krogoth
    • 4 years ago

    Not surprising at all.

    The laws of physics are catching up fast and R&D costs are escalating. We are in the “end times” for good old silicon and perhaps semiconductor-based computing in general.

      • chuckula
      • 4 years ago

      You just invoked Krogoth’s lesser-known second law: Krogoth is never surprised.

    • Metonymy
    • 4 years ago

    I’m trying to understand the $12.1 billion in R&D last year in context… Was that more than AMD?

      • chuckula
      • 4 years ago

      [quote]Was that more than AMD?[/quote]

      Technically yes, but only because Intel counts the non-dairy creamer for coffee in its R&D budget numbers. Once you remove the non-dairy creamer offsets, AMD is clearly in the lead.

        • James296
        • 4 years ago

        yes, yeesss, all our computers are powered by non-dairy creamer fuel processors.

      • nanoflower
      • 4 years ago

      LOL. Given that chuckula just posted AMD’s total revenue was about 4 billion for 2015 I think it’s safe to say Intel outspent AMD on R&D a bit. Though that’s really not surprising given Intel has to spend money on researching new process tech which is very expensive while AMD doesn’t pay directly for that research and development any more since selling off Global Foundries.

      • K-L-Waster
      • 4 years ago

      The number 1 reason AMD doesn’t spend as much on R&D as Intel is because even the people who think buying AMD’s bonds is a good investment plan aren’t masochistic enough to lend them *that* much money…

        • Prestige Worldwide
        • 4 years ago

        Guilty.

      • Tirk
      • 4 years ago

      Apples and oranges: AMD no longer directly owns fabs, hence the obvious lack of research into producing smaller fabrication nodes.

      Maybe adding in a Fab’s R&D plus a chip designer’s R&D would resemble Intel’s R&D costs but even then it greatly depends on what the end goal of the product is. As the “computer” market remains severely lopsided towards one company or another in market niches there’s no clear direct comparison to any company: be that Intel, ARM, Samsung, Qualcomm, AMD, Mediatek, TSMC, etc. Some companies are more broad than others but none compete directly in the way different supermarket companies compete for example.

      Be that as it may, yes Intel is doing very well for itself. AMD isn’t broken yet, but it is certainly not the same scale of company that Intel is and I doubt it will ever be in the foreseeable future.

      • GatoRat
      • 4 years ago

      $947 million, according to their last annual filing.

    • chuckula
    • 4 years ago

    I still prefer Tic-Tac-Toe.

    [quote]The filing reveals that Intel spent a staggering $12.1 billion on research and development during 2015 to keep the dream alive. That's up from $11.5 billion in 2014 and $10.6 billion in 2013.[/quote]

    So much for the notion that Intel stopped spending money on R&D.

    For context, Nvidia + AMD's total revenue for 2015 was about: $4.68B + $3.99B = $8.67 Billion.

      • 223 Fan
      • 4 years ago

      You are on a roll today. The coffee must be exceptionally good this morning. More interesting would be how much Intel’s R&D spending compares to Samsung and TSMC since we are talking about silicon process development.

        • chuckula
        • 4 years ago

        Samsung: $13.8 Billion in total R&D… but that money is being pumped into all kinds of areas that Intel doesn’t particularly care about including everything from OLED displays to making sure the Galaxy smartphones with curves don’t break too easily. Even in the semi end of things, people forget that Samsung is a RAM manufacturer first, and an SoC manufacturer second. Plus, at least some of that $13.8 Billion sounds more like CAPEX than R&D… they aren’t the same thing:

        [quote]Samsung Electronics has received its yearly audit, and the results released the other day show a breathtaking amount of research and development spending. How much? Well, how do $13.8 billion sound to you, almost three times as much as Apple invests here. Yep, that jaw-dropping amount has been ploughed into everything - from new memory tech and factories to make it, through flexible AMOLED displays, to state-of-the-art mobile chipset production.[/quote]

        [url]http://www.phonearena.com/news/Samsung-breaks-R-D-spending-records-invests-14-billion-in-new-tech_id67054[/url]

        As for TSMC, they have promised to massively outstrip Moore's law. I'll believe it when I see it and relabeling your process every year -- like they've already done with "16nm" parts -- doesn't impress me.

          • 223 Fan
          • 4 years ago

          So all of Intel’s $12.1 billion is process research? Or is it an aggregate number like Samsung’s? I take TSMC’s claims at face value: a bald faced lie.

            • chuckula
            • 4 years ago

            Intel has a separate CAPEX budget for things like building a new fab or installing the equipment that actually produces commercial products. CAPEX happens *after* the R&D has come through and they are ready to make a product commercially.

            There was a huge deal made about how Intel was cutting CAPEX since it didn’t open as many 14nm fabs as it had opened for 22nm. People then incorrectly took the reduction in CAPEX to mean that there was no R&D taking place.. and Intel’s own numbers from 2013 – 2015 show that to be incorrect.

            • rems
            • 4 years ago

            Still, the CAPEX figure does not imply a clean separation between pure CPU R&D and the rest, which is what 223 Fan (#1) is getting at, and that makes sense; companies do not release detailed breakdowns of their R&D. But if you have one to share, be my guest!

            • the
            • 4 years ago

            Part of the CAPEX reduction was also due to the lack of 450 mm wafer production. Intel at one point had planned for 14 nm to use the larger 450 mm wafers to maintain output levels even if it had to start double or triple patterning.

            • Zizy
            • 4 years ago

            EDIT: wrong.

          • ronch
          • 4 years ago

          Rebadged process nodes?? Damn you GPU makers!!!

      • the
      • 4 years ago

      With that amount of expenditure, I’d love to see what Intel has in their labs both in terms of tools and the experiments they’re running. Attempting to break the laws of physics isn’t cheap.

      Much has to effectively be gambled away on the off chance they get something ground breaking like a room temperature superconductor. Such a discovery would lead to a revolution and let Intel move beyond lithography for chip design. Intel has to make such low probability gambles now since they clearly see an end to lithography coming.
