Intel defends its process-technology leadership at 14nm and 10nm

At its Technology and Manufacturing Day event in San Francisco this week, Intel delivered a stern rebuke to the growing chorus of questioners asking whether it’s lost its process-technology lead. The company brought several luminaries from its manufacturing division on stage to talk about how its 14-nm process technology compares to its competitors’ 16-nm, 14-nm, and 10-nm offerings. It also offered some projections about how its upcoming 10-nm process will stack up.

From the top, Intel came right out and said what many in the industry have understood for some time: that the names of recent process nodes have become unmoored from the actual characteristics of the underlying technologies they claim to represent. A brief refresher: Intel’s cutting-edge Broadwell, Skylake, and Kaby Lake CPUs are all manufactured on its proprietary 14-nm FinFET process. AMD’s Ryzen CPUs and Polaris GPUs are produced on GlobalFoundries’ 14-nm FinFET technology, and Nvidia fabricates many of its Pascal GPUs on TSMC’s 16-nm FinFET tech. If one were to take those numbers at face value, one might believe that Intel, TSMC, and GlobalFoundries are all on relatively even footing when it comes to process tech.

The worry among investors and the analyst community, in turn, seems to be that Intel's competitors will beat it to the next process-node advance. For just one example of this perceived threat, Qualcomm's upcoming Snapdragon 835 chip will be fabricated on Samsung's 10-nm process, and both that chip and Samsung's Exynos 8895 SoC will ship soon in Samsung's Galaxy S8. MediaTek has previously announced plans to make its Helio X30 SoC on TSMC's 10-nm process, as well. That SoC could arrive sometime this year.

Not so fast, Intel says. The company points out that pitch measurements are just one characteristic of a semiconductor product, and it feels that using these measurements alone to characterize process capabilities isn’t painting a complete picture.

Instead, the company argues that a more useful measure of process advancement is to consider the density of logic transistors that a given process can achieve, independent of its node. With modern process technologies, the company suggests that measuring logic density using megatransistors per square millimeter (henceforth MTr/mm²) offers a better picture of what a given process can do.

Intel Senior Fellow Mark Bohr says the company is modeling this metric using a theoretical logic block comprising 60% NAND gates (not to be confused with a NAND flash cell) as a simple structure and 40% scan flip-flops to represent more complex structures.
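
To make the arithmetic concrete, here is a minimal sketch of how a weighted-density figure of that sort could be computed. The cell areas and transistor counts below are hypothetical placeholders rather than Intel's published library data; only the 60/40 weighting comes from the company's description.

```python
# A sketch of the weighted logic-density metric Intel describes: 60% small
# NAND2 cells, 40% larger scan flip-flop cells, expressed in megatransistors
# per square millimeter (MTr/mm^2). Cell figures below are made up for
# illustration and are not published library data.

def logic_density_mtr_per_mm2(nand2_transistors, nand2_area_um2,
                              sff_transistors, sff_area_um2):
    nand2_density = nand2_transistors / nand2_area_um2  # transistors per um^2
    sff_density = sff_transistors / sff_area_um2
    weighted = 0.6 * nand2_density + 0.4 * sff_density
    # Transistors per um^2 is numerically identical to MTr/mm^2
    # (1 mm^2 = 1e6 um^2 and 1 MTr = 1e6 transistors).
    return weighted

# Hypothetical cells: a 4-transistor NAND2 and a 32-transistor scan flip-flop.
print(logic_density_mtr_per_mm2(4, 0.2, 32, 1.6))  # 20.0 MTr/mm^2 for these made-up cells

# The figures quoted in the next paragraph can then be compared directly:
intel_14, tsmc_16, glofo_14 = 37.5, 29.0, 30.5
print(intel_14 / glofo_14)  # ~1.23x, roughly the "at worst" advantage Intel cites
print(intel_14 / tsmc_16)   # ~1.29x
```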

The company introduced this metric using its 14-nm FinFET process as an example. That process can achieve 37.5 MTr/mm², compared to 29 MTr/mm² in what is presumably TSMC’s 16-nm process and 30.5 MTr/mm² in GlobalFoundries’ 14-nm FinFET technology. Intel says that at worst, its 14-nm process is still 25% denser than what its competitors can achieve, something that traditional n-nm feature sizes alone don’t capture.

Intel extended this comparison to its best guesses at how its competitors' 10-nm-class products will stack up on the transistor-density measure, since the company openly admits that it hasn't had the opportunity to reverse-engineer any shipping 10-nm products yet. Still, the blue team estimates its competitors' 10-nm processes will offer only slightly higher logic density than its own 14-nm FinFET process, three years after chips fabricated on that tech began shipping.

The point of all this horn-tooting is that Intel believes its 14-nm process has plenty of life left in it. As we first saw with its Kaby Lake CPUs, Intel isn’t producing just two generations of chips on a given process node any longer. As part of the company’s “process-architecture-optimize” product strategy, Kaby chips are fabricated on what Intel calls a “14nm+” process, and it intends to perform another round of optimizations to produce a “14nm++” version of that process.

14nm++ is claimed to offer 25% greater performance at a given power level than the unoptimized 14nm process first used to produce Broadwell and Skylake chips, or as much as 52% less power consumption for the same level of performance. In fact, Intel’s projections show that the transistor performance of 14nm++ will actually exceed that of its first generation of 10-nm products. Expect to see 14nm++ underpin Intel’s rumored Coffee Lake CPUs later this year.

 

Popping the hood on Intel’s 10-nm process

Intel being Intel, the company also offered a full-throated defense of its advancements versus the unflinching curve of Moore's Law, which until recently was defined as a doubling of the number of transistors in a given area every two years.

The company says that while its node transitions have been taking place over longer periods of time these days, it’s also been exceeding that 2x density improvement over those periods using an armory of innovations that you’ll now hear referred to as “hyper-scaling.” That marketing-friendly catch-all refers to fabrication techniques the company began using at 14nm to achieve higher density, including a proprietary technology it calls self-aligned double patterning. On average, Intel says its “hyper-scaling” achievements allow it to keep up with the density increases that Moore’s Law would dictate, even if they’re not happening on a strict two-year cadence.

Back to those “hyper-scaling” techniques, though. Multi-patterning in general is just one way foundries are overcoming the challenges of laying down chips at the extremely small feature sizes of today’s process technologies, but Intel thinks its self-aligned techniques offer greater control and a higher-quality end result compared to what it describes as its competitors’ multiple-stage lithography-etch processes. Intel claims that the self-aligning multiple-patterning process it uses allows it to lay down features predictably, resulting in high-quality circuit elements and predictable performance in the end product.

In contrast, Intel thinks the multi-stage litho-etch process used by its competitors for multi-patterning can only be as good as the precision of the masking laid down before each etching step, and it claims that process is prone to imprecision, resulting in lower-quality features and less consistent electrical characteristics in the final chip.

More broadly, Intel is defining its “hyper-scaling” innovations as proprietary technology that will allow it to achieve greater density increases than a pure process shrink alone would. As it moves to 10-nm production, Intel says it’ll be using three of these techniques to achieve those “hyper-scaling” gains.

For one, Intel's 10-nm FinFETs will increase fin height and decrease fin pitch, two improvements that are hallmarks of higher-performance 3D transistors. Fin pitch will decrease from 42 nm on the 14-nm process to 34 nm at 10nm, while fin height will grow from 42 nm to 54 nm. To lay down those fins, 10nm will use an industry-standard 193-nm light source paired with another advance in Intel's multi-patterning technology, called self-aligned quad-patterning.

Additionally, this method will allow Intel to achieve a 36-nm interconnect pitch, which it claims is the tightest in the industry. 

The logic cell height of Intel 10-nm will also be shrinking compared to 14-nm technologies. A basic logic cell in Intel's 14-nm process is 399 nm tall, while 10-nm will decrease that figure to 272 nm, a scaling factor of 0.68x.

Aside from the changes in feature sizes, Intel will also be using two new techniques to reduce the area occupied by each logic cell on the chip. The first of these is called contact-over-active-gate technology. In the past, Intel says contacts have had to be laid down beside a transistor. Contact-over-active-gate allows Intel to place these elements directly above the transistor, saving space and increasing density. Intel says the contact-over-active-gate approach allows it to achieve 10% greater logic density in a given area than it would without the tech.

Another change in Intel’s move to 10nm is the use of single dummy gates at the edges of its logic cells. Dummy gates are used to isolate logic devices from one another on the chip, and Intel’s 14-nm node used two of these features at each edge of each logic cell. Reducing the number of dummy gates per cell from two to one lets the company enjoy 20% greater area scaling compared to its approach at 14nm.

Altogether, Intel claims the four improvements it's disclosing for its 10-nm process let it scale logic area on die to 0.37x that of its 14-nm process, a 2.7x logic density improvement, well above the 2x scaling that Moore's Law would lead us to expect.

Taken as a whole, a hypothetical Intel chip with logic, I/O, and SRAM circuitry is purported to be 0.43x the size of a similar chip fabricated on Intel 14-nm FinFET.

Using the MTr/mm² metric it introduced yesterday, Intel claims its 10-nm process will deliver an eye-popping 100 MTr/mm² for logic, and it believes its competitors’ 10-nm processes will deliver roughly half that density. In remarkably pointed commentary, Intel says that density deficit represents a three-year lead for its own process technology.
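
As a quick sanity check, the disclosed numbers hang together; the snippet below re-derives the headline density claims purely from figures quoted elsewhere in this article, plus one labeled assumption about how the individual savings combine.

```python
# Cross-checking Intel's 10-nm density claims against each other, using only
# figures quoted in this article.

intel_14nm_density = 37.5   # MTr/mm^2 for Intel's 14-nm process, quoted earlier
logic_area_scaling = 0.37   # claimed 10-nm logic area relative to 14-nm
chip_area_scaling  = 0.43   # claimed scaling for a mixed logic/I/O/SRAM chip

density_gain = 1 / logic_area_scaling                 # ~2.7x, matching the claim above
projected_10nm_density = intel_14nm_density * density_gain

print(f"logic density gain: {density_gain:.1f}x")                # ~2.7x
print(f"projected 10-nm density: {projected_10nm_density:.0f}")  # ~101, i.e. the ~100 MTr/mm^2 Intel quotes
print(f"whole-chip shrink: {1 / chip_area_scaling:.1f}x")        # ~2.3x once I/O and SRAM are included

# Treating the contact-over-active-gate (~10%) and single-dummy-gate (~20%)
# savings as independent area multipliers (an assumption, not something Intel
# spelled out) leaves roughly 0.37 / (0.9 * 0.8) ~= 0.51x to come from the
# pitch and cell-height shrinks themselves.
```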

Over the life of the 10-nm process, Intel further plans to deliver two refinements to the node that will improve performance, just as it will for its 14-nm tech. For its first round of refinements, the company expects 15% better performance at the same power level, or similar performance for 30% less power.

As we noted earlier, however, the company doesn't expect its 10-nm transistors to meet or exceed the performance of its third-gen 14-nm FinFETs until that first round of refinements, or perhaps even the second. The advantages of 10nm could primarily come from increased density and decreased power consumption, at least at first.

 

The future of Moore’s Law

Taken together, the better-than-Moore’s-Law density improvements Intel claims from its “hyper-scaling” techniques seem to point to another redefining of Moore’s Law. Instead of considering its improvements using a rigid two-year cadence, the company suggests that the longer intervals between process nodes combined with the greater-than-2x-improvement in logic density per node should be viewed as averaging out to the expected improvement over time.

Fair enough, I suppose, presuming that Intel's future move to a seven-nanometer process can happen quickly enough and the scaling benefits such a move provides are great enough that the "hyper-scaling" trend continues to hold. In any case, the details Intel has shared about its 10-nm process have me excited for the potential of chips produced on that node. Furthermore, if the new MTr/mm² metric is accurate, Intel will simply be able to pack far more transistors into a given logic area than its competitors' 10-nm-class processes will.

That said, Intel doesn't seem ready to introduce any 10-nm products before sometime in 2018 at the earliest. That release window coincides with TSMC's plans to begin producing 7-nm-class products, which could be more competitive on the density metric Intel is touting. Without setting up electron microscopes and some kind of X-ray tomography system in the TR labs, we can't verify any of these claims independently, but we look forward to the performance and power-saving advancements these new production techniques herald when they do finally yield production silicon.

Comments closed
    • DavidC1
    • 3 years ago

    This is pure crap. Before they were thinking of going into the foundry business, they just delivered. Not so much talk.

    Now they are in the marketing of selling their foundry services, they spout nonsense like this.

    Let me clarify. Their advantages are shown in the individual transistor level, but actual products are at a *significant* density disadvantage.

    Atom? Less dense than ARM.
    Core? 14nm Skylake has less dense caches than 14nm Ryzen. That should not be possible. They should have been, at most, equal.
    Their graphics? Nvidia beats them significantly in both performance and power use (see Iris Pro), despite being discrete graphics. I know Intel was never known for graphics, but this is a new level of sad.

    "ARM" also beats Intel chips in perf/clock; with Apple, they also reach Core-level performance with Atom-level power use. Apple chips beat Intel chips so thoroughly that it's embarrassing. Investments don't pay off right away, but eventually they do. All the ideas, talent, and money are in the ARM and foundry (non-Intel) camp right now.

    Micron was talking about how DRAM consistently beat DRAM-alternative technologies despite the potential of the latter and the gloomy projections foretold for the former. The massive interest and investment in DRAM overcomes so-called limitations over time.

    In this case, it's massive investment and interest in *everyone else* versus Intel.

      • DancinJack
      • 3 years ago

      You’re very confused David.

        • DavidC1
        • 3 years ago

        You should explain your opinions rather than just state them Jack.

      • Klimax
      • 3 years ago

      Some mishmash of cherry picked stuff, unbacked assertions and nonsensical comparisons.

    • xeridea
    • 3 years ago

    Jeff, I would like to see a task energy comparison, like was done in the Bulldozer review. Compare the 1700 or 1700X to the 7700K and the 8-core Intel chips in total joules used for various tasks. This would be a good comparison to go with this article.
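
    For what it's worth, that kind of comparison boils down to integrating power draw over the time a fixed workload takes. A rough sketch with entirely made-up wattages and runtimes, just to show the arithmetic:

```python
# Task-energy comparison: energy (joules) = average package power (watts)
# multiplied by the seconds a fixed workload takes. All numbers below are
# made-up placeholders, not measurements.

workloads = {
    # chip: (average package power in watts, seconds to finish the task)
    "hypothetical 8-core chip": (110.0, 95.0),
    "hypothetical 4-core chip": (80.0, 150.0),
}

for chip, (avg_watts, seconds) in workloads.items():
    joules = avg_watts * seconds
    print(f"{chip}: {joules:.0f} J for the task")
```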

      • Andrew Lauritzen
      • 3 years ago

      While those results might be interesting, it’s worth noting that stuff lower down the freq/V curve will always win, so unless the performance of all of the chips is almost identical, it’s not necessarily telling us a lot of useful information.

      Ex. Xeon D will likely crush everything on that metric, but does that mean you buy one for gaming? Probably not.

        • wingless
        • 3 years ago

        I don’t believe Xeridea is talking about gaming.

    • xeridea
    • 3 years ago

    They tout transistor tech all day long, but still refuse to offer more than 4 cores to consumers. 6-core desktop CPUs were available at 45nm; they are at 14nm now and still only give you up to 4 cores. What is the purpose of transistor density if you don't do anything with it? What really matters is power efficiency, which obviously Ryzen and smartphones are pretty good at.

      • mganai
      • 3 years ago

      Coffee Lake is what you’ve been waiting for.

      And AMD’s only managed to begin catching up to Intel in terms of power efficiency. ARM smartphones aren’t in the same performance class much less bracket, so no real comparison.

        • xeridea
        • 3 years ago

        I know the Bulldozer line was a disappointment; I am talking about the here and now, and the future. Intel no longer appears to be better on power efficiency, and their supposedly superior transistors aren't being utilized properly. My main point is that Intel is touting density but doesn't do anything with it. ARM is in a different performance class, but Intel tried smartphones and couldn't compete.

        Coffee Lake, 6 cores, hurray I guess? By then there may be 12- or 16-core HEDT CPUs.

          • blastdoor
          • 3 years ago

          From Intel's point of view, they are doing the most important thing imaginable with that higher density: pumping up their profit margins.

          • Andrew Lauritzen
          • 3 years ago

          Uhh… I love Ryzen, but Intel’s stuff is almost certainly still more power efficient. It’s nice to see Ryzen in the same league, but comparing desktop chips @ stupid frequencies doesn’t tell you *anything at all* about power efficiency.

          When Ryzen has 15-45W parts then you'll be able to make a more direct comparison. For now various Xeons – particularly Xeon D – still run circles around Ryzen in power efficiency and I suspect they will have a hard time competing @ 15W ultrabook levels. I'd love to be proven wrong on that though, and will happily admit Ryzen was priced more aggressively than I imagined so anything is possible!

        • danw
        • 3 years ago

        AMD has passed Intel on both price/performance and power/performance, at least in multi-threaded applications. Most applications that really need performance are multi-threaded, these days.

        I think Intel is trying to create FUD; they don't have anything in the pipeline real soon that will beat AMD. They are throwing out buzzwords (i.e. "hyper-scaling") that sound good to the press, but aren't going to dig them out of the performance hole they are currently in. They need to increase their instructions per clock, like AMD did. But they don't have that in the pipeline.

        They are making statements based on what they think the other fab companies are doing. But, they don’t know what improvements they have coming down the pike for their processes. They claim they are keeping up with Moore’s Law, but that is clearly not the case. You don’t make statements like that unless you are scared. Also, they don’t mention process improvements they don’t have that other fab companies are using.

          • the
          • 3 years ago

          Intel has plenty that could beat AMD if they chose to release it, but doing so would eat into profits. Desktop parts with eDRAM would be a simple, easy means of claiming a performance lead due to it being used as a large L4 cache. Similarly, the high-end desktop market could be flooded with high-core-count chips that are normally found in the server space. None of these are new products, just existing ones moved around and priced differently.

            • djayjp
            • 3 years ago

            Correct me if I’m wrong, but all of Intel’s greater than four core chips have greater than 100W TDPs. So I don’t think it’s such a simple matter.

            • BurntMyBacon
            • 3 years ago

            I don’t think you are wrong, but I don’t think this is as big a deal as you seem to think. The FX-8000/FX-9000 series processors from AMD were all 95W and up. Many were 125W and some were even 220W. While it was hard or impossible to find a board for under $100 that could support these 220W atrocities, it could certainly be done for less than $200 (some less than $150 if my memory serves me correctly).

            Sandy Bridge had mainstream processors at 95W. Some Core 2 Quad processors breached the 100W mark. Once upon a time Intel released Pentium 4’s for the desktop at 115W. The point is, releasing processors over 100W, while not necessarily the most desirable option for many, isn’t power prohibitive or even new. Intel sports a bunch of 140W processors on their current HEDT platform right now. While certainly not mainstream, that is still considered a desktop platform.

            • DancinJack
            • 3 years ago

            That’s wrong.

            [url<]https://ark.intel.com/Search/FeatureFilter?productType=processors&MaxTDPMin=15&MaxTDPMax=100[/url<]

            • the
            • 3 years ago

            No. Actually, the Xeon D chips go all the way up to 16 cores and are all sub-100W. The catch is that they don't clock that high. They exist in an optimal voltage/clock scaling range.

            Several of the Xeon E5s with more than six cores are also below 100W. This isn't universal, as the mix of clock speeds, core counts, and power consumption varies widely here.

            • danw
            • 3 years ago

            What does any of that have to do with price/performance or power/performance? Adding eDRAM would require more power and transistors, or higher cost. It isn't that Intel couldn't beat AMD on cost, it is that they are not willing to beat AMD on cost.

            As software is updated to take advantage of AMD-specific instructions, this gap will get bigger. It is already happening in a few games that have been updated.

          • Andrew Lauritzen
          • 3 years ago

          > AMD has passed Intel on … power/performance

          Just… no. I don't mean to be rude, but don't go throwing around statements like that; they're blatantly false to anyone who actually understands how to measure that sort of thing. Hint: it's a bit more complicated than picking arbitrary chips and comparing TDPs.

            • cegras
            • 3 years ago

            So how *is* power/performance measured? Any reviews out yet that measure energy used per task?

            • Andrew Lauritzen
            • 3 years ago

            The server guys do a pretty decent job. I haven’t seen anything I’d consider great in the client space, because typically reviewers only really focus on maximum performance on performance optimized SKUs. If you wanted to get into more power-optimized SKUs you’re talking Xeon on desktop, or stuff that is power limited in client (ultrabooks, etc).

            It’s worth noting that it’s not a question that has a single answer without additional qualifications. Ex. at what performance level? At what power level? What task? You can come up with *some* general results, but it’s important to understand that these chips are created for different design points, and comparing across design points or along axes that aren’t optimized in a given design point isn’t very useful.

            • Ninjitsu
            • 3 years ago

            Tom’s Hardware should have some pretty good efficiency measurements.

        • Kougar
        • 3 years ago

        The same Coffee Lake that Intel claims is a “15% IPC performance increase”, the same “15%” claimed of Kaby Lake over Skylake. And even Intel confesses there is zero IPC increase.

        At this point it sounds like odds are Coffee will get most or even all of its performance from yet another clockspeed bump.

      • maxxcool
      • 3 years ago

      *NORMAL* consumers do not actually need more than 4 cores …

    • ronch
    • 3 years ago

    Just a crazy thought. What if Intel agreed to fab Ryzen? Then on the box it’ll say, “Designed by AMD, built by Intel.”

      • blastdoor
      • 3 years ago

      I predict that will not happen.

        • ronch
        • 3 years ago

        It will NEVER happen.

      • Goty
      • 3 years ago

      I’d buy that.

    • ronch
    • 3 years ago

    As the article says, world+dog has known this for a while now, but apart from calming down investors, what's the point of all this? It's not like poor fabless chipsters can choose Intel to fab their trinkets for them instead of TSMC. And their leading-edge nodes are almost exclusively used for their CPUs, where folks simply look at the competition in terms of power efficiency, price, and performance and decide from there.

    Still, a good read for those so inclined.

      • Jeff Kampman
      • 3 years ago

      Intel has a custom foundry business now, and it would doubtless like to win customers for it.

        • ronch
        • 3 years ago

        But why aren’t more fabless companies using their services if they’re so good?

          • Jeff Kampman
          • 3 years ago

          Cost? Suitability? Services? It’s an interesting question.

            • wierdo
            • 3 years ago

            Could be that a lot of potential customers are in some way competing with Intel, since it's not purely a fab. So understandably they may not be comfortable with a competitor in charge of manufacturing their chips for them – then again, Samsung falls into a similar boat, so hmm.

            Perhaps if Intel the chipmaker separated from its fab business then it would make more sense for companies to contract with their fab arm. That's not gonna happen, probably.

            • derFunkenstein
            • 3 years ago

            That hasn’t stopped Apple and Samsung from getting into bed.

            In fact, Apple seems like the perfect partner for Intel. They already buy Intel products without really producing things that compete with Intel. Apple’s SoCs would be a great BUSINESS fit for Intel and I’m curious why they’re not already doing that. Maybe Intel doesn’t want to make THAT many custom designs.

            • RAGEPRO
            • 3 years ago

            Could also be something in Intel’s licensing agreements. Perhaps Intel wants more access to its partners’ designs than Apple (or other companies) is comfortable with.

            You’d think if anyone it’d be AMD chomping at the bit to get at Intel’s fabs. I’d love to see Ryzen on Intel 10nm. (Or even that 14++!)

            • derFunkenstein
            • 3 years ago

            OK that might be a great point, and another thing to wonder about. Just something else we don’t know.

            I wonder how small Ryzen could be on even Intel 14nm and how much extra headroom it might have on that process. If it had its existing 8 cores (quirks with CCXes and all) and could still go toe-to-toe, clock-for-clock with the 7700K the lightly-threaded (relatively speaking) gaming tests would be far more interesting.

            Of course it could just be that the design is capped out and no Intel magic could fix it.

            • tsk
            • 3 years ago

            They are using Intel's 10nm fab for their next-year chip (A12?), according to rumors.

          • Redocbew
          • 3 years ago

          You are so weird.

            • ronch
            • 3 years ago

            Could you just get off my back please?

          • Klimax
          • 3 years ago

          Most likely cost (money and time) of redesign and validation for a different process. Same as moving between TSMC and GloFo/Samsung.

          There are likely also different libraries of cells so it takes time to get anywhere.

          Edit: moat -> most. Bit embarrassing mistake..

          • the
          • 3 years ago

          I think it comes squarely down to cost.

          Intel has companies like Cisco* and Altera as early adopters because they were willing to pay the premium. The result for them was denser or lower-power chips. The costs Cisco and Altera incurred for switching to Intel as a fab could simply be pushed on to their own customers (read: businesses and three-letter agencies).

          What we have not yet seen is any mass market 3rd party IP come out of Intel’s fabs.

        • blastdoor
        • 3 years ago

        No doubt, but foundry customers are presumably a bit more sophisticated than your average bear. I doubt anyone at Qualcomm is reading this and thinking "wait, what? You mean Samsung 14nm isn't the same as Intel 14nm? I've been lied to?!?!"

    • davidbowser
    • 3 years ago

    I tend to think that Intel is accurately representing most of this, so they continue to have a process advantage, with a roadmap to continue that advantage.

    The part that I will quibble with is the idea that investors and analysts are only concerned with the process advantage. I believe they are actually concerned with Intel's ability to maintain its profit margins (and thus stock valuation) in the face of increased competition. So far, Intel's reactionary stance has been "WE have the best tech" and "here is our tech plan!", but the company has thus far not publicly shown a [b<]business[/b<] plan to address the competition that analysts or investors feel comfortable with.

      • w76
      • 3 years ago

      I’d disagree just a little. If I were an investor (I’m not), I’d be worried not about their margins due to competition. I’m sure they can keep AMD more or less in their place. I’d be worried about growth. Growth is what would earn Intel stock a higher PE ratio, but Intel has missed the boat on the smartphone SoC market and the desktop market is shrinking.

      I think it's rather telling that INTC currently has a 3.06% dividend yield, competitive with treasuries and other "income" plays. It's becoming boring, like how utility companies used to be and still sort of are. Utilities generally aren't expensive because they never grow; they're just bought for the dividend. Which might be the only imaginable reason one might buy INTC right now.

        • K-L-Waster
        • 3 years ago

        The desktop market may be shrinking, but the data center isn’t, and Intel has a strong (or even dominant) presence there. The data center market will if anything continue to grow.

          • NoOne ButMe
          • 3 years ago

          Yes, but competition is piling on.

          I think we’ll see Intel between 80 and 90% marketshare by 2020, and 50-60% by 2025.
          With a pessimistic view of ARM/AMD/POWER.

          Probably with lower margins due to increased competition.

          It will still be profitable, but much less so than currently I think.

    • blastdoor
    • 3 years ago

    We’ve known for a while that TSMC node N is a tad better than Intel node N-1, but definitely inferior to Intel node N. So in many ways this is old news, but Intel is providing a nice summary here.

    I’m not sure that the lines project into the future quite the way they’re implying, though.

    TSMC has received a massive injection of capital from Apple and Qualcomm. It takes time to translate that capital into node progress, but the progress will come / is coming.

    TSMC’s “10nm” (products shipping now) is a little better than Intel’s 14nm.

    Near the end of this year, Intel will start shipping their 10 nm; next year TSMC will allegedly be shipping their “7nm”, whereas Intel’s 7nm will most likely be coming out three years later — possibly about the same time as TSMC’s “5nm”.

    Presumably TSMC 7nm will be noticeably better than TSMC 10nm, but how will it compare to Intel 10nm? If it’s ballpark equal, that’s pretty bad news for Intel.

    • NoOne ButMe
    • 3 years ago

    really, this gets back to a few truths:
    1. Intel has the highest density transistors
    2. Intel has the lowest cost transistors
    3. Intel has the highest performance transistors
    4. Intel has *blah blah blah*
    and so forth

    Now, Intel, try putting an "AND" in there and linking some of those statements together?

      • derFunkenstein
      • 3 years ago

      I guess you could say they have the highest-density, lowest-cost, highest-performance transistors.

        • NoOne ButMe
        • 3 years ago

        Just not at the same time 😛

          • derFunkenstein
          • 3 years ago

          pick two

    • DragonDaddyBear
    • 3 years ago

    First, great article. By far easier to read and understand than some others that reported on this event.

    Would it be possible for other manufacturers to adopt some of these practices to improve density and efficiency in existing nodes?

    • NoOne ButMe
    • 3 years ago

    [url<]https://newsroom.intel.com/editorials/lets-clear-up-node-naming-mess/[/url<] Intel's density-counting formula and rationale here.

      • chuckula
      • 3 years ago

      That’s a very reasonable metric for transistor density.
      There’s no way in hell any of TSMC/GloFo/Samsung will use that metric, so it’s DOA, but it’s definitely reasonable.

        • NoOne ButMe
        • 3 years ago

        yes, but for shipping products Intel’s density lead seems pointless.

        Or rather, max transistor density isn't useful as a metric if the percentage of max in shipping products varies from foundry to foundry.
        TSMC, Samsung, and GloFo in shipping GPUs (and Zen for GloFo) are in the 60-80% range of their ~30 MTr/mm^2.
        Intel meanwhile doesn't have anything at even half of their max 37.5 MTr/mm^2. The best numbers I can find for Xeon and consumer parts are closer to 14-16 MTr/mm^2.

        Once/If Intel picks up foundry wins we may see it become useful. I hope.

        Until that point, Intel’s density advantage is theoretical at best, and false at worst.
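
        The back-of-the-envelope version of that product-level figure is just a vendor's published transistor count divided by die area. A sketch with an assumed, not measured, die size:

```python
# Shipping-product density estimate: published transistor count / die area.
# The ~4.8B transistor count is the figure discussed elsewhere in this thread;
# the ~200 mm^2 die area is an assumed round number, not a measured one.

def product_density_mtr_per_mm2(transistors_billions, die_area_mm2):
    return transistors_billions * 1000.0 / die_area_mm2  # 1 billion = 1,000 MTr

print(product_density_mtr_per_mm2(4.8, 200.0))  # ~24 MTr/mm^2
```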

          • Redocbew
          • 3 years ago

          Intel comes up with a new metric for transistor density, and hey guess what? They “win” when using it. Yes, we call that marketing.

          The difference here is that pitch measurements really have become a bit murky, and Intel really does have an edge when it comes to manufacturing. Furthermore, there may be good reason to start looking for a new metric with process shrinks presumably becoming less frequent in the future. This is just Intel playing to their strengths, and it further shows how ridiculous the demos AMD held about Ryzen really were. If you want to complain about presentations which don’t reflect reality you should start there.

          • chuckula
          • 3 years ago

          There’s a big disconnect between what a fab process can do if you want to just throw transistors onto a piece of silicon (and nobody really does that, the closest they might get are SRAM cells) and the final density of various products that are made using those processes.

          Then there’s the definition of what a “transistor” actually is, which is why we get RyZen 8-core parts with 4.8 Billion transistors and Broadwell 10-core parts with 3.2 Billion transistors…. it makes you wonder how they are counting sometimes.

            • NoOne ButMe
            • 3 years ago

            4.8B transistors seems valid given how AMD/Nvidia and everyone else who is foundry-less count.

            Intel by all means has a denser process, and for CPUs, trading density for performance, power, and clocks makes sense.

            The more concerning part is that their iGPUs don't appear to be any denser than their CPUs. My knowledge of how well you can mix and match density on a wafer is iffy, so that may be an issue which cannot be fixed easily.

            • NoOne ButMe
            • 3 years ago

            Also forgot to add in my last comment: RyZen is a full SoC.
            Carrizo picked up some 700-800 MTr from Kaveri going to full SoC. And I believe Kaveri is more integrated than Broadwell-E is due to different market segments/goals.

            So calling a ballpark 1B Tr in RyZen extra due to being a full SoC seems reasonable.

    • djayjp
    • 3 years ago

    Page 2: “…its 10-nm process let it achieve a 0.37x reduction in logic area on die compared to its 14-nm process, as well as a 2.7x logic density improvement—well above the 2x scaling that Moore’s Law would lead us to expect.”

    No. Intel’s own slides show that there seems to be exactly a 4 year gap between the two processes, so if following Moore’s law, then there should be a 4x increase in density.

      • UberGerbil
      • 3 years ago

      Moore’s Economic Imperative makes a prediction about transistor density per unit of time, but that unit of time has changed over the years. Moving goalposts make it difficult to assert any given team has scored, or not, in a particular timeframe.

        • djayjp
        • 3 years ago

        I’m using the definition given in the article itself.

      • Froz
      • 3 years ago

      No idea why you are being downvoted. It is true, the quoted sentence doesn’t make sense.

      Moreover, if you take a look at the graph at the top of the third screen, it's quite obvious what Intel is doing there. The 2x per 2 years is simply not happening from 14 nm to 10 nm, according to their own measurement. Instead, they found they actually did more than 2x per 2 years when they switched from 45 to 32 nm. The trend line on the graph is misleading; if they started it at 32 nm it would look different. I have no idea how it would look if they started earlier instead, but I'm guessing they chose 45 nm not without a reason.

      Long story short – talking about Moore's Law here is really just marketing BS.

        • djayjp
        • 3 years ago

        Thanks, Froz, and I completely agree.

    • POLAR
    • 3 years ago

    Oh, so the Zen arch is even better than what we’ve seen so far. Thanks for the intel.

      • Star Brood
      • 3 years ago

      What?

        • POLAR
        • 3 years ago

        A better process is an advantage, for both performance and efficiency. If we believe Intel’s words on their process, then the Zen arch can do more once GF catches up and makes a better process.

          • chuckula
          • 3 years ago

          [quote<]A better process is an advantage, for both performance and efficiency.[/quote<] Yes, you are right. Which is why AMD had better have an explanation for why, given the supposedly better architecture of Zen and the fact that (according to AMD) GloFo's 14nm process is easily superior to Intel's 22nm process, a 2017 14nm Zen can only beat a lower-clocked 2014 22nm Haswell some of the time.

            • POLAR
            • 3 years ago

            Ok, so all the thumbs down I’ve collected so far will enjoy shopping for the “superior” kaby lake. Take it easy. Have a nice day.

            • sreams
            • 3 years ago

            The latest Tom’s Hardware review has Ryzen 7 beating the 6900K (a good comparison, as it has the same core/thread count as Ryzen 7) in nearly every workstation benchmark with much lower power consumption. If Intel’s process is superior in terms of performance and power consumption, what else would explain the difference other than architecture?

            • sreams
            • 3 years ago

            That wasn’t a “yes” or “no” question at the end of my statement. How does a thumbs down answer it? If there is some other explanation, say what it is.

            • chuckula
            • 3 years ago

            Tell ya what: If legitimate posts of mine where I post actual links and hard evidence to back up my points can be downthumbed by people like you for being inconvenient, then you officially put your whiner-card into the shredder when you can’t even be arsed to copy a link into your post that purportedly proves your point.

            Especially when the only link to THG in this thread literally says the opposite of what you want us all to believe is true without any evidence whatsoever.

            • sreams
            • 3 years ago

            Sorry… I thought you could find Tom’s Hardware yourself. 😛

            [url<]http://www.tomshardware.com/reviews/amd-ryzen-7-1700x-review,4987-7.html[/url<] 1800X beats 6900K in 32 of 48 workstation tests. Not sure what else to say.

            • derFunkenstein
            • 3 years ago

            There’s more to it than that, because if it’s [url=http://www.tomshardware.com/reviews/amd-ryzen-7-1700x-review,4987-7.html<]this review[/url<], the 7700K wins almost every single benchmark. So it's probably more to do with single-thread performance than anything, and Baby Lake still takes the Kaby Cake.

            • sreams
            • 3 years ago

            Yes, the 7700K wins most of the low-thread-count tests by a bit, and for obvious reasons. But when a test uses several threads, it is beaten by a very large margin by the 6900K and 1800X. That alone would eliminate the 7700K as a consideration for certain customers.

            • derFunkenstein
            • 3 years ago

            There were very few results in Tom’s review where anything other than the 7700K won out. It’s weird that you cite Tom’s review when TR’s review was FAR more favorable due to using far more multi-threaded applications. If you really want to paint a rosy picture of Zen, why not use the review from this very site?

            • sreams
            • 3 years ago

            The 1800X beats the 7700K in 27 of 48 of Tom’s workstation benchmarks (make sure to click through all of the charts, as they aren’t all immediately visible on the various pages). If it isn’t gaming, Ryzen is still winning more than half of the benchmarks in the TH tests when put up against any single Intel CPU.

            I’m not interested in painting a rosy picture of Zen, so the info on Tom’s Hardware is just as valid as any other site to me. Just want to see all CPUs given a fair shake.

            • sreams
            • 3 years ago

            One could just as easily ask Intel to have an explanation as to why their current $1000+ 8-core/16-thread CPU can only beat a $330-500 Ryzen CPU some of the time.

            The answer is that they are different architectures that excel in different areas due to design differences.

            • Redocbew
            • 3 years ago

            That’s far too reasonable an outcome. There must be one chip to rule them all.

            • chuckula
            • 3 years ago

            Because Intel 1. Launched their chip a whole year earlier; and 2. Don’t see the need to cut the price yet.

            If you were smarter than the average AMD fanboy who thinks that Intel is full of idiots who will never produce a chip faster than Broadwell, point 2 should maybe make you think about keeping the celebration of AMD’s “victory” in check.

            • sreams
            • 3 years ago

            My point was that both companies’ CPUs are strong in some areas and weak in others. A fanboy only chooses one company to nitpick. That’s you.

            • sreams
            • 3 years ago

            Intel fanboy:

            “who thinks that Intel is full of idiots who will never produce a chip faster than Broadwell…”

            non-fanboy:

            “who thinks that both Intel and AMD are full of idiots who will never produce a chip faster than their current ones”

            The CPUs available now are the ones available now. They are what is being compared. Of course neither company will stand still.

            • Klimax
            • 3 years ago

            Skylake-E is coming. And top-end Ryzen is clocked to the edge of its ability, while Intel's HEDT chips tend to be very conservatively clocked. (Just take a 5960x to 4.2GHz as a very modest OC to see what happens to Ryzen…)

            • sreams
            • 3 years ago

            I believe you mean the 10-core/20-thread Skylake X. I’ve seen discussion that it is coming sometime around August. If pricing stays consistent, this should be a $1500 CPU like the 6950X. By August, I wouldn’t be surprised if the rumored 16-core/32-thread Ryzen CPU and X399 chipset is available as well. It would very likely trade blows with Skylake X and probably be less expensive. And, of course, both companies are working on the upcoming generations of their architectures.

            Nobody is standing still.

            [url<]http://www.digitaltrends.com/computing/amd-ryzen-16-core/[/url<]

        • LocalCitizen
        • 3 years ago

        Polar's view is not wrong, but it does require some explanation.

        Intel claims a 3-year lead in process. Back in 2014, Intel introduced enhanced Haswell, the 4790. For an apples-to-apples comparison, we compare the 1800X with the 5960X, introduced Aug 29, 2014: 8 cores/16 threads, 140 watts.

        But the 1800X leads in most of the application benchmarks, both multi-threaded and single-threaded.

        [url<]https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/12[/url<] Games are getting the Ryzen optimization; the new Ashes of the Singularity patch gave Ryzen quite a boost. And there is a Ryzen high-performance platform coming with quad memory channels. The problem for Intel is that people buy chips for performance (and price!), not for its process, and Intel's performance is nowhere near 3 years ahead. Edit: stupid razen... I mean Ryzen name

          • POLAR
          • 3 years ago

          Thank you.

      • Klimax
      • 3 years ago

      No. Compare, say, a 5960x at the same frequency as Ryzen. And you can trivially get a 5960x to the same frequency as Ryzen…

    • Wirko
    • 3 years ago

    14++ = 15.
    Or something.

      • Redocbew
      • 3 years ago

      I guess 10++ will mean they’ve taken it to 11.

    • weaktoss
    • 3 years ago

    [quote<]recent process nodes have become [b<]unmoored[/b<][/quote<] And Jeff claims he's "not that funny"!

      • UberGerbil
      • 3 years ago

      Agreed. I first thought [i<]that's a nice, unexpected word choice[/i<] and then I stopped and realized [i<]oh, that's a [b<]perfectly brilliant[/b<] word choice[/i<]

      • morphine
      • 3 years ago

      Trust us, he’s not 😛

      • cynan
      • 3 years ago

      I appreciated the witty usage also. However:

      If that was intended as a pun of Moore’s law, then while clever, I don’t see how it is all [i<]that[/i<] funny due to a lack of conventional humor devices such as irony, etc. It is a literal pun: Transistor density is no longer increasing as rapidly - which is directly, if not completely, related to process node size. Now, if the pun was also a reference to Intel's recent architecture names featuring lakes - the most literal meaning of "unmoored" being a water borne vessel's release from some sort of anchoring connotating that Intel's own ability to keep their process node size-referent naming scheme inline with prior conventions is compromised and therefore hypocritical in pointing out this lack of nomenclature rigour in others - then [i<]maybe[/i<] it qualifies as humorous as well.

        • titan
        • 3 years ago

        Found the robot!

          • cynan
          • 3 years ago

          Robot?! Pffff. I was obviously going for asperger-Vulcan.

        • Meadows
        • 3 years ago

        You’d probably be a blast at parties.
        If you had any.

    • chuckula
    • 3 years ago

    That’s great, but to be frank I found their logic to be a bit dense.

      • morphine
      • 3 years ago

      Today, you get an internet point. Don’t let that get to your head 🙂

        • derFunkenstein
        • 3 years ago

        with 37MTr/mm^2, it seems Count Chuckula’s head might already be too full.

          • chuckula
          • 3 years ago

          You think I’m on 14nm?
          Sucka!

      • tipoo
      • 3 years ago

      They could stand to work on their pitch, but don’t let that gate your understanding.

    • tipoo
    • 3 years ago

    They're right about this; everyone else's advertised node is still closer to Intel's n-1 node. Others' 10nm is a tad denser than Intel's 14nm, but it's far closer to that than to Intel's actual 10nm.

    [url<]http://m.eet.com/images/eetimes/2017/01/1331136/1-Node-positioning-ICK.png[/url<] Plus, last I checked, Intel was the only one to use their advertised process on both the front end of line and the back end of line; others use a further n-1 on BEOL. One wonders what Ryzen, the Apple A series, GPUs, etc. would be like on Intel's node.

      • BurntMyBacon
      • 3 years ago

      Intel, much to my surprise, actually did a decent job of making their assertions and estimates believable. The estimate on "others' 10nm" may not be correct in the end, but it was a pretty reasonable guess given the information on hand. Also, I doubt very much that they'd play up 14nm++ at the expense of their own 10nm process unless there was at least some truth in it. It would unnecessarily cause doubt with investors about their sub-14nm process prospects.

      That all said, the last chart (on the first page) looks like the typical song and dance. Given the near identical architecture between Kaby Lake and Skylake, one could reasonably assume that transistor level performance and power efficiency improvements would have a significant impact on processor level performance and power efficiency. The chart shows 14nm+ has something like a -38% to power or +12% to performance. I don’t expect the improvement to be 1 to 1, and maybe I’m expecting too much, but I can’t seem to figure out where Intel achieved anywhere near these figures (particularly with respect to power) in a shipping product.

      When I consider that 14nm++ will only buy an additional (est) -14% power or +14% performance, I have to believe that 14nm++ will be a less impressive improvement than 14nm+ used in Kaby Lake. Perhaps Intel will have architectural improvements to rely on, or maybe they’ll bring six cores to their mainstream platform. I don’t think they can just rely on process improvements, though.

        • NoOne ButMe
        • 3 years ago

        New node allows X% voltage reduction at ISO clock to be much lower than Y% clock improvement at ISO power.

        It’s been pretty common for a while.

        And it's about as believable as previous claims. Independently of each other, all true. But good luck making more than 2 of those claims valid at the same time.

        100MTr/mm^2 my ass.

        I guess/expect 35-45 MTr/mm^2 from Intel against 30-55 MTr/mm^2 (10-7nm as named by TSMC, Samsung, GF) for shipping products.

          • NoOne ButMe
          • 3 years ago

          I meant you get MORE reduction of voltage at ISO clock than increase of frequency at ISO voltage.

    • NTMBK
    • 3 years ago

    So… 10nm is going to have worse performance than 14nm?

      • POLAR
      • 3 years ago

      Worse means better, you didn’t know huh?

      • Jeff Kampman
      • 3 years ago

      The performance of individual transistors (which is complex in itself) may be similar to Intel’s original 14-nm process, but there will be many more of them on a given die and they’ll be packed much closer together. There will still be power and possibly clock-speed advances over the last-gen node.

        • JustAnEngineer
        • 3 years ago

        Smaller transistors = bigger profit

        What Intel is showing is how much less it costs them to make that $1000+ Core i7 or $2000+ Xeon CPU than it costs their competitors to make their $500 CPUs.

          • NoOne ButMe
          • 3 years ago

          Not that simple. Again.
          The denser you get, if you want to keep high performance, the more you run into thermal density issues, which leads to decreasing performance.
          And at high density, yields can suffer.

          It is why even at TSMC, GF, and Samsung, companies don't push their products to the maximum transistor density. Stuff would cost more, be slower, and possibly run hotter.

          I think one of the Anandtech articles on the RV770/870 summed it up with a line about chip design being Russian roulette where you don't know the results until 2-4 years after you pull the trigger.
