ASRock preps 21 motherboards for a dip into Kaby Lake

Asus spinoff ASRock has followed its progenitor company in releasing BIOS updates for the upcoming Kaby Lake CPUs. Firmware updates for 21 of ASRock's existing Intel 100-series LGA 1151 motherboards add support for what the company dubs "Next Generation" Intel Core processors.

ASRock's press release conspicuously avoids the terms "Kaby Lake" or "seventh-generation Core," but the BIOS updates doubtless add support for the new CPUs. The products of Intel's "optimization" phase of its 14-nm tri-gate manufacturing technology are expected to hit the desktop market in early 2017.

The firmware updates are not limited to high-end Z170 boards. Boards built on the entry-level H110 and B150 chipsets, along with mid-range H170 models, receive updates as well. The full list of supported motherboards can be found here.

Comments
    • albundy
    • 3 years ago

    So it's the same exact boards with a revision number next to the model? Should provide good competition to get awesome deals during turkey week.

    • stdRaichu
    • 3 years ago

    Can existing boards be updated without having a Skylake chip already in them…?

    As alluded to in another thread, I’m toying with the idea of building a new HTPC to take advantage of the new video decode in KBL.

    There is, however, meant to be a new 200-series chipset incoming for KBL, and one suspects that Optane might only work with that…

    • jts888
    • 3 years ago

    Assuming I already have discrete graphics, what’s the major selling point of Kaby Lake over Skylake, or even Broadwell frankly?

    Maybe I’m missing something, but it feels like Intel is begging me to look into Zen/Summit Ridge, since it’s clear that they don’t want to sell me more than 4 cores for non-absurd (i.e., non-Broadwell-E) prices any time before Spring 2018 with Coffee Lake.

      • DPete27
      • 3 years ago

      Optane

        • jts888
        • 3 years ago

        I’m still waiting on compelling gains from workstation NVMe solutions before moving to the next new platform.

        It just seems like anything besides massively parallel server workloads ends up bottlenecked by CPU processing rather than SSD bandwidth or latency.

      • chuckula
      • 3 years ago

      At a little over $400 there's nothing unreasonable about the Broadwell-E 6800K's price whatsoever: https://www.amazon.com/gp/offer-listing/B01FJLA8NI/ref=dp_olp_all_mbc?ie=UTF8&condition=all

      From the early benchmarks, Kaby Lake is providing another 10-15% boost over Skylake, with additional overclocking knobs that are interesting to people in this segment. That's on top of what Skylake has over higher-clocked parts like the 4790K, and on and on.

      Incidentally, for all the hype about Zen, TR's review of Skylake already showed it to be more than 40% faster than Sandy Bridge (not Piledriver) in real-world workloads back in 2015, so that puts Kaby Lake 50% faster without much of a sweat: https://techreport.com/r.x/skylake/value.gif

      Put another way: let's pretend that, like AMD, Intel had not released any CPU product remotely approaching a new architecture since Sandy Bridge in 2011. Wouldn't you be "excited" for Kaby Lake with its greater-than-50% real-world boost over Sandy Bridge vs. what I'm calling a 30% real-world boost for Zen over a highly clocked Piledriver part?

      That's what most of Zen's launch hype is: psychology, based on AMD spending 5 years copying Sandy Bridge and pretending it's a miracle because they didn't release anything new in the interim.
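
      The 50% figure above is just the two claimed gains compounded. A minimal sketch of that arithmetic, taking the comment's numbers (Skylake +40% over Sandy Bridge, Kaby Lake +10-15% over Skylake) at face value rather than as verified results:

      ```python
      # Compounding the comment's claimed generational gains. These inputs are
      # the commenter's figures, not measured benchmark results.
      skylake_over_sandy = 1.40          # claimed Skylake gain over Sandy Bridge
      kaby_over_skylake = (1.10, 1.15)   # claimed Kaby Lake gain over Skylake

      low, high = (skylake_over_sandy * k for k in kaby_over_skylake)
      print(f"Kaby Lake vs. Sandy Bridge: +{low - 1:.0%} to +{high - 1:.0%}")
      # -> Kaby Lake vs. Sandy Bridge: +54% to +61%
      ```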

        • Waco
        • 3 years ago

        I just hope they’re under-promising, and that Zen will over-deliver.

          • chuckula
          • 3 years ago

          When it comes to AMD you *KNOW* they underpromise and overdeliver! Just look at how modest they were with Bulldozer: http://cdn.wccftech.com/wp-content/uploads/2013/06/AMD-FX-CPU-FMA4-635x356.jpg

            • Waco
            • 3 years ago

            I said I hope, not that I expect. I don’t want AMD to be out of the market entirely. 🙂

        • xeridea
        • 3 years ago

        The 40% IPC gain is compared to Excavator, not Piledriver. Zen is also getting SMT, on a lot smaller transistors, so clocks should be good. Then you get 8 cores instead of 6. Kaby will still have better IPC, but fewer cores, and 6-core Intel CPUs sell for crazy prices. Which one is better depends on what you use it for. DX11 will still be better on Kaby, but with DX12 gaining traction it doesn’t matter that much.

          • chuckula
          • 3 years ago

          That’s a lot of nice marketing stuff from AMD that confuses the *how* of AMD getting that magical 40% IPC gain with AMD’s rosy performance target numbers.

          For example, you say: 40% IPC gain, AND ON TOP OF THAT!! feature feature feature lalala. Wrong. Those features are *how* they are getting the 40% IPC gain, not some magical performance boost above & beyond the 40% (which didn’t just drop out of the sky, BTW).

          As for bringing up the magical transistors, we’ve already seen those transistors inaction* in Polaris. That’s why I’m being generous and assuming AMD can get all 8 cores up to 3.6 GHz vs. an FX-8370 at 4.3 GHz or an FX-9590 at 5 GHz (purportedly). As I said, generous.

          * Not a typo.

            • xeridea
            • 3 years ago

            40% IPC is… IPC. SMT does not affect IPC, because IPC is measured with a single thread. They haven’t given details, but it is always compared to Excavator.

            For transistors, going from 32nm to 14nm should allow you to maintain similar clocks while using a uArch that has higher IPC.

            For Polaris transistors… if you compare the efficiency Polaris gained vs. Pascal, it is clear the 14nm transistors are a lot better than the 16nm ones Nvidia uses. Polaris gained something like 90% efficiency, Pascal more like 55%, IIRC. Of course those are for GPUs, so it’s a different ballgame.
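
            The "gained X% efficiency" framing here is a ratio of performance-per-watt between each vendor's new part and its own previous part. A minimal sketch of that arithmetic; the fps and wattage figures below are illustrative placeholders, not benchmark results:

            ```python
            # Generational efficiency gain as xeridea frames it: perf/W of the
            # new part relative to perf/W of the same vendor's old part.
            def eff_gain(perf_new, watts_new, perf_old, watts_old):
                """Fractional perf-per-watt improvement, e.g. 0.90 for a 90% gain."""
                return (perf_new / watts_new) / (perf_old / watts_old) - 1

            # Hypothetical numbers, chosen only to illustrate the math.
            print(f"vendor A: {eff_gain(60, 160, 45, 230):+.0%}")  # -> +92%
            print(f"vendor B: {eff_gain(80, 150, 60, 175):+.0%}")  # -> +56%
            ```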

            • chuckula
            • 3 years ago

            "If you compare the efficiency Polaris gained vs. Pascal, it is clear the 14nm transistors are a lot better than the 16nm ones Nvidia uses."

            Oh yeah, that's why the RX 480, with 5.6 billion transistors at 1.2 GHz (when not throttling), is leagues and leagues better at performance and power consumption than that joke of a GTX 1070 with about 6 billion transistors at 1.5 GHz (assuming you artificially limit it to its base clock).

            Once again, you're either ignorant or are intentionally making deceptive posts to play games with basic facts that aren't easy to refute. For example, in this latest post you intentionally started out with the rather laughably horrible power consumption figures of ancient Tonga GPUs to "prove" that the 14nm process from GloFo that cost AMD the PCIe certification of the RX 480 is somehow miraculously great.

            Too bad that, when we stop playing the artificial shell game of relative performance numbers using AMD's bad products as baselines to make its new products look like "miracles" on a *relative* basis, the hard numbers aren't so rosy.

            • xeridea
            • 3 years ago

            You can’t compare Polaris vs Pascal directly to determine how good transistors are. They are different architectures. The Pascal uArch is more efficient for games; I am not debating this. I was comparing new vs. old for each vendor. There aren’t drastic uArch changes there, so you can reasonably compare the transistor efficiency gain.

            The PCIe certification issue is because they used a 6-pin connector when they should have used an 8-pin. It isn’t related to transistors. The non-reference designs are all in spec, and the 1070 uses an 8-pin.

            • chuckula
            • 3 years ago

            "You can't compare Polaris vs Pascal directly to determine how good transistors are."

            I sure as hell can, insofar as measuring the frequency headroom and power envelope of M billion transistors on chip X vs. N billion transistors on chip Y is a useful comparison metric for comparing the low-level features of two processes. That's a basic physical measurement of the performance of transistors on two different chips. Period.

            You then bring in "architecture" as a false flag in your continuing "omg, relative performance percentage proves miracle!" shell game. Architecture is important in telling us how well or poorly the transistor budgets on the two chips were used, but it sure as hell doesn't prove that GloFo's process is suddenly amazing.
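
            The metric being proposed here reduces to a simple normalization: measured power divided by (transistor count × clock). A minimal sketch, with illustrative placeholder figures rather than measured values; note that it deliberately ignores voltage, utilization, and clock gating, which is exactly where the replies below push back:

            ```python
            # chuckula's proposed cross-process comparison, reduced to one
            # number: watts per billion transistors per GHz. All figures are
            # hypothetical placeholders, not measurements of any real card.
            def watts_per_btghz(power_w, transistors_b, clock_ghz):
                """Board power normalized per billion transistors per GHz."""
                return power_w / (transistors_b * clock_ghz)

            chip_x = watts_per_btghz(power_w=160, transistors_b=5.7, clock_ghz=1.2)
            chip_y = watts_per_btghz(power_w=150, transistors_b=7.2, clock_ghz=1.6)
            print(f"chip X: {chip_x:.1f}, chip Y: {chip_y:.1f} W/(B*GHz)")
            # -> chip X: 23.4, chip Y: 13.0 W/(B*GHz)
            ```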

            • Waco
            • 3 years ago

            You can’t, actually. Comparing power consumption and frequency headroom between two majorly different architectures tells you essentially nothing. Design plays a much larger part in efficiency than you’re assuming here.

            If it were an AMD design versus another AMD design, sure, you could probably make some assumptions, but between the red and green teams…no chance. There’s nothing useful there except a comparison between the two architectures with a footnote about process technology.

            • chuckula
            • 3 years ago

            I’m just talking physics of transistors * frequency (taking into account voltage needed for that frequency) and the resulting power consumption. That doesn’t really depend upon the higher-level architecture. I’m not talking about the number of points you get on a graphics benchmark using either chip.

            There *might* be an architectural factor in play if there is a large disparity in the number of transistors that are actually active at any one time due to clock gating and other power management features. In that case, the true number of simultaneously active transistors could be much different for both chips. However, I don’t see any evidence of that being the case and if anything that wouldn’t factor in AMD’s favor here.
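
            The physical relation being appealed to in this exchange (though never written out in the thread) is the standard first-order CMOS dynamic-power equation:

            ```latex
            P_{\text{dyn}} \approx \alpha \, C \, V^2 \, f
            ```

            where α is the activity factor (the fraction of transistors actually switching), C the switched capacitance, V the supply voltage, and f the clock frequency. The clock-gating caveat in the paragraph above is the α term: if α differs substantially between two chips, raw power at a given V and f stops being a clean read on the process.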

            • Waco
            • 3 years ago

            Might? There’s a massive difference between a boneheaded implementation and a perfect one, even on the same process/fab. You can’t compare between two architectures and two processes and glean much of anything about either process.

            It’s like comparing two different cars but neglecting to acknowledge that one is driven by a grandmother and the other by Mario Andretti…

            EDIT: To be clear, I think you can’t draw any conclusions about anything except that Nvidia happens to have a better architecture this round. No further conclusions about process technology are valid.

            • chuckula
            • 3 years ago

            There's a massive difference between a theoretical 'boneheaded' and 'perfect' implementation, but I'm not that convinced there's such a huge difference in the *real-world* architectures, and, if anything, the 'architectural' difference should favor AMD.

            Case in point: look at one of those benchmarks that doesn't "take advantage" of all the resources in a GCN card, like, for example, a GPU benchmark that doesn't use everybody's favorite "asynchronous compute". What is the big claim to fame that Nvidia has in these situations? That it is using all the resources on the GPU (or at least a very high portion) while AMD is being handicapped by old software that doesn't actually utilize all the GCN power.

            So... what conclusion would you draw from a *transistor* standpoint? The conclusion in a worst-case "favorable to AMD" standpoint is that the transistors on the Nvidia part are going hog-wild with high utilization, and the part *should* be at the top of its power curve since everything is turned on. The AMD part *should* have a bunch of unused resources that, worst case due to architecture, don't get properly turned off, but at least those transistors are running at lower frequencies, and some power savings from unused transistors should be happening.

            However, real-world benchmarks don't bear that out, and I'm not talking about *graphics performance* per watt (architectural efficiency); I'm talking about *transistors switching on and off at a given frequency* for a given power consumption figure (transistor efficiency).

            The only way around that finding is this conclusion: even though AMD's magical GloFo transistors are light years ahead of TSMC's, Nvidia's architecture is so damn efficient that it can fully utilize all the execution units on the chip *while turning off a massive proportion of the transistors at the same time that the chip is fully utilized*. That's an unrealistically positive assumption to be making about Nvidia there. A more realistic assumption is: when you rail a GTX 1070 or an RX 480, those transistors are running.

            Remember what I'm talking about here: I'm talking about *transistors* that are switching on and off and consuming power at a given frequency. I'm *not* talking about a benchmark-points-per-watt graph that TR might post to compare architectures. This is the *physical layer* that we are dealing with.

            • Waco
            • 3 years ago

            That’s a big wall-o-text that basically says “I don’t know”.

            You have a 4-variable problem with no known quantities. You can make assumptions, like I said, but they’re meaningless IMO.

            I understand transistors. I understand design. I feel like you’re discounting the latter quite a bit here, and trying to come up with conclusions based on that.

            • chuckula
            • 3 years ago

            "You have a 4-variable problem with no known quantities."

            No known quantities? We have:

            1. Transistor counts: known.
            2. Voltages: known.
            3. Frequencies: known.
            4. The result, measured power consumption: known (and current can be derived from V and W).

            You couldn't even type out what the four supposedly "unknown" quantities were. What are the exact "unknowns" here? It's pretty straightforward. I'm not making statements about things that are actually related to the deeper issues of the process, like specific design rules for laying out transistors or the number of metal layers needed to connect them.
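
            The derivation mentioned in point 4 is just Ohm's-law bookkeeping. A minimal sketch, with placeholder numbers:

            ```python
            # "Current can be derived from V and W": for a DC supply,
            # P = V * I, so I = P / V. Both inputs are illustrative
            # placeholders, not measurements.
            power_w = 150.0    # measured power draw, hypothetical
            voltage_v = 1.0    # core supply voltage, hypothetical
            current_a = power_w / voltage_v
            print(f"implied current draw: {current_a:.0f} A")  # -> 150 A
            ```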

            • Waco
            • 3 years ago

            Two processes, two designs. No real way to judge any versus the other.

            • xeridea
            • 3 years ago

            By your reasoning, CPU transistors are all better than GPU transistors because they run at a lot higher frequency for a given transistor count and power, the P4 is better than the Athlon, Bulldozer is better than Sandy Bridge, longer pipelines magically make better transistors, and Titan transistors suck because the Ti’s transistors are better.

            • chuckula
            • 3 years ago

            OK, I’m being nice and giving you the benefit of the doubt that you clearly never took a course on basic transistors here. Literally everything in your post is not only wrong but blatantly disagrees with everything I’ve been saying.

            I’ve simplified the example down so much that clearly you can’t be convinced of anything involving facts.

            • xeridea
            • 3 years ago

            "I sure as hell can, insofar as measuring the frequency headroom and power envelope of M billion transistors on chip X vs. N billion transistors on chip Y is a useful comparison metric for comparing the low-level features of two processes. That's a basic physical measurement of the performance of transistors on two different chips. Period."

            You are basically saying numTransistors * frequency / power usage = transistor efficiency, and nothing else matters, period. I was just showing the flaw in that statement. The most relevant comparison would be, say, a 980 Ti vs. a Titan. By your logic, the Titan has better transistors, and we should ignore that the Titan has a lot better DP performance.

            • jts888
            • 3 years ago

            I think you might be a little out of your depth here, chuck.

            Logic level architecture differences aside, all transistor designs aren’t created equal, even on the same fabber’s same line’s same chip. Foundries like TSMC and GloFo give their customers design rules for recommended gate lengths and spacing, etc., that can be followed to varying degrees depending on what sort of leakage and active power, density, and frequency targets exist for individual slivers of logic.

            A Polaris 10’s 5.7B transistors in 230 mm^2 vs. a GP106’s 4.4B transistors in 200 mm^2 point to very strongly differing priorities in density/power/clocking that go beyond the “base case” density differences between the TSMC 16nm and Samsung/GloFo 14nm node densities.
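
            The density comparison being drawn here works out as follows, using the transistor counts and die areas from the comment itself:

            ```python
            # Transistor density implied by the comment's figures
            # (counts in billions, die areas in mm^2 as stated above).
            polaris_10 = 5.7e9 / 230   # transistors per mm^2
            gp106 = 4.4e9 / 200
            print(f"Polaris 10: {polaris_10 / 1e6:.1f} MTr/mm^2, "
                  f"GP106: {gp106 / 1e6:.1f} MTr/mm^2")
            # -> Polaris 10: 24.8 MTr/mm^2, GP106: 22.0 MTr/mm^2
            ```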

            • Waco
            • 3 years ago

            Yep. He’s reaching, but it’s a “knows enough to be dangerous” kind of situation that leads to incorrect conclusions.

            • chuckula
            • 3 years ago

            I never made any statements about the densities of the transistors… did I? That's a design decision made based on the rules of the process, the intended higher-level design elements, and how the transistors are connected together in a higher-level architecture.

            What I am saying, and this needs to be repeated, is that Xeridea's original blanket assertion that GloFo's supposedly magical 14nm process is going to make the 40% IPC improvement over Piledriver look like a footnote is wrong. The process is provably inferior to TSMC's based on a very simple physical metric that is being overlooked in the name of being "deep" in a way that makes no sense.

            1. Transistors consume power, and we ain't talking about sub-threshold power at idle here; we are talking about loading the entire chip. Once again, all this supposed superiority of GloFo rests on the silly assumption that Nvidia's parts are so far ahead of AMD's that they are simultaneously using every ounce of computing power with near-perfect efficiency while somehow using massively fewer transistors to do it.

            2. Do we remember frequency? A bigger Nvidia chip from TSMC with *more* transistors clocking at a *higher* frequency? Don't we remember how power consumption increases with the square of the voltage required to drive those higher frequencies?

            3. Despite all the hand-waving about "architectural" differences, the *transistor-level* differences between one GPU that's running flat-out and another GPU that's running flat-out come down to the fundamentals: how many transistors are actually running (known); what frequency they are running at (known); what voltage is required to run them (known); and how much power all of this consumes when you ramp them up (known).

            *I literally don't care if one GPU is putting out 500 FPS and the other is putting out 1 FPS; that's not what is being quantified here. A simple watt meter and some voltage sensors, along with the clock speed and the number of transistors on the chip, is all you need.*

            As for the assumption that the transistors in GloFo's 14nm process in Zen are somehow magically better for high-clockspeed operation than the transistors in GloFo's 14nm process in Polaris, that's an assumption without evidence. The density of Zen will clearly be vastly lower than Polaris (duh), but that in no way implies that GloFo has some magically superior transistor technology waiting for Zen.

            4. We *have* seen the exact same chip design on TSMC's process and GloFo's (inherited from Samsung): the A9. And TSMC flat out won when the chip was placed under heavy load. TR even reported on it: https://techreport.com/news/29215/report-iphone-6s-battery-life-isnt-significantly-affected-by-soc-source

            Note that the interesting takeaway from the article wasn't the headline, which is that both chips consume about the same amount of power when you turn 99% of the transistors off. The interesting takeaway was the major difference in power consumption between them when the identical SoC design was actually driven to a high load.

        • jts888
        • 3 years ago

        I don’t think any reasonable person expects Zen to beat Intel’s newest on an absolute basis.

        The real hype is that a compelling product would lead to better perf/$ across the board, since Intel’s kept their prices up for relatively stagnant products, due to the essentially complete lack of competition.

          • xeridea
          • 3 years ago

          Pretty much what I have been saying. They likely won’t beat Intel, at least not on pure IPC, but they should at least be a reasonable alternative again, which benefits everyone.

        • MOSFET
        • 3 years ago

        "At a little over $400 there's nothing unreasonable about the Broadwell-E 6800K's price."

        "Wouldn't you be 'excited' for Kaby Lake with its greater-than-50% real-world boost over Sandy Bridge...?"

        Nicely said, chuckula.

      • Concupiscence
      • 3 years ago

      You get a platform update, but it amounts to a convenience more than a necessity. Even the IGP benefits are dwindling at this point: you can only push transistor and power budgets so hard on parts designed for mobile applications first, and the switch to DDR4 buys some headroom but still leaves graphics parts hungry for bandwidth. Unless one of my systems dies I probably won’t build a Zen rig next year, but if it turns out well I won’t hesitate to recommend it far and wide. The sector needs better competition than it’s had for years.
