Intel brings a Core CPU and Radeon GPU together on one package

Mmm, crow. Intel announced this morning that it's collaborating with AMD to deliver one of its eighth-generation Core processors paired with a semi-custom Radeon graphics chip on one package. The semi-custom Radeon GPU and Intel processor cores will be joined together for the first time by Intel's Embedded Multi-Die Interconnect Bridge, or EMIB, a way of connecting discrete dies across a 2.5D, high-bandwidth on-package interconnect. The combined package will include a stack of HBM2 memory alongside the CPU and GPU cores.

Although details are extremely scarce at this stage, Intel says the combined chips can deliver "incredible performance and graphics for enthusiasts" in a form factor that can power thin-and-light gaming PCs. The company did note that the CPU on board this package will be an H-series chip, or a 35W-to-45W TDP design. Intel offered no details of the architectural generation or graphics-processing resources of the semi-custom GPU it commissioned from AMD.

Intel says the motherboard area it saves by combining the CPU, GPU, and HBM2 with EMIB will let its partners design compact thermal solutions capable of dissipating the heat this high-performance SoC produces. The company also points to vaguely-described "new features," "new board layouts," and larger batteries as potential benefits.

An example of the space savings Intel expects this CPU-and-GPU combo to realize over a traditional motherboard with a discrete GPU and GDDR5. Source: Intel

Although comparisons between AMD's recently-announced Raven Ridge APUs and this chip will naturally spring to mind, the simple fact that the CPU on this product is an H-series part should put to rest any notion that AMD is falling on its sword. Raven Ridge APUs are, for the moment, targeted at 15W to 25W power envelopes and feature no on-package memory. That fact alone will probably put a ceiling on the graphics performance we can expect from Raven Ridge.

Another distinguishing factor between Raven Ridge machines and this new Intel chip will apparently be the window sticker on systems that include it. So far, AMD seems to be positioning its APUs for notebooks priced well under $1000, while the company told PCWorld that the fruits of its collaboration with Intel will likely find homes in systems selling for $1200 to $1400—about the cost of a reasonably-priced gaming notebook with discrete Nvidia graphics inside. If that holds true, Raven Ridge PCs likely won't even occupy the same ballpark as those with Intel's as-yet-unnamed wonder of miniaturization.

Intel says more news about this product and systems based on this multi-chip module will be coming in the first quarter of 2018. That might just be enough time for us to scrape our jaws off the floor.

Comments closed
    • Delta9
    • 2 years ago

    Ladies and Gentlemen, I introduce the proto PS5 and/or next Xbox APU. The 2019 consoles could feature an upgraded version of the current chip’s interposerless packaging. Other generational changes would make 4-8 physical cores w/SMT, in the form of a full-blown, fat-pipelined, 64-bit x86 9th-generation processor from Intel, a very real prospect. As in the current version, the Intel chip could be paired with an AMD GPU, most likely a Navi-based chip featuring higher-clocked HBM2 or some variant thereof. The lack of an interposer brings the cost down. Combined with the maturity of a proven manufacturing process, as demonstrated in this initial generation of upcoming chips, that would lead to cheaper APUs with a faster ramp to the required market volume.

      • Anonymous Coward
      • 2 years ago

      Meh, no way that Intel is going to steal that from AMD. It’s going to be something from AMD, one piece of silicon, at a good price.

        • Delta9
        • 2 years ago

        Good point, that and just about any x86 chip will kill the Jaguar CPU. It was slow compared to the Atom chips when it came out. That and AMD has a competitive mainstream CPU architecture, which stripped of some general functions (Xbox 360 PPC CPU style) could have higher clocks and más corez. I doubt the console makers would go with an ARM-based CPU on steroids, since AMD supposedly killed the K12 a while ago. There is that whole Windows on ARM project that MS is bringing to market, but much like everything I have written, it is pure baseless speculation. It would be cool if AMD could license the interposerless tech for their graphics and custom business. We don’t know the specifics of the deal; maybe AMD got more than just providing Intel with a custom graphics processor. They already have access to some of the design specifics, and it solves the manufacturing cost and constraints of the HBM interposer design. As long as it lives in the GPU and custom SoC zone (and my imagination), it would make sense. Except Intel hired AMD’s former head of the Radeon graphics group and announced they were making a discrete GPU again. So I’m probably 99% wrong about the interposer tech sharing, but not so off with HBM packaged on die for future consoles.

    • Firestarter
    • 2 years ago

    Could EMIB licensing for AMD dedicated GPUs be next? My understanding is that EMIB is both way cheaper and more flexible than an interposer, I could see this being used for mainstream GPUs where whatever advantages an interposer has aren’t as important as cutting cost

    • DavidC1
    • 2 years ago

    “The company did note that the CPU on board this package will be an H-series chip, or a 35W-to-45W TDP design.”

    It may be -H, but higher TDPs will be available.

    Earlier leaks had 65/100W TDP versions. That coincides with the NUC roadmap having 65/100W versions as well. The same leak also said PCIe x8 is used to connect the CPU to the GPU/HBM2 die.

    The performance must be quite high as the 100W has the “VR” name on it.

    Some leaked benchmarks had 3DMark11 P score in the 13-14k range, which is better than 1050 Ti by quite a bit.

    • BerserkBen
    • 2 years ago

    Whoa! This is like reading Moby Dick again but this time Ahab and the whale become friends!

    • dodozoid
    • 2 years ago

    Does it really use EMIB to connect the GPU to the CPU?
    Because the distance between the CPU and GPU is rather large.
    HBM to GPU for sure, but is Intel explicitly stating that EMIB connects everything?

    On unrelated note – products I want to see next:
    Radeon SSG with a large Xpoint memory pool.
    EPYC using EMIB
    Adaptive(Free)Sync everything

      • Kraaketaer
      • 2 years ago

      Doesn’t look like it, no. The Intel video and slides point out EMIB only between the GPU and HBM. There’s probably an embedded PCIe link in the substrate – that should be more than doable with traditional manufacturing methods, and far cheaper. Plus, the distance makes cooling easier, of course.

    • MrJP
    • 2 years ago

    So [url=https://wccftech.com/intel-kaby-lake-g-series-integrated-radeon-gpus-first-benchmarks-specifications/<]WCCFTech[/url<] are reporting that the GPU portion is a 24 CU part at around 1100MHz with 4GB HBM2. Given the architectural improvements with Vega CUs, might this put the performance somewhere in the ball-park of a desktop RX570?

      • Anonymous Coward
      • 2 years ago

      There have to be major thermal constraints here, compared to a dedicated graphics card.

        • MrJP
        • 2 years ago

        Yes, that’s a very fair point. A lot depends on the power budget.

        • DavidC1
        • 2 years ago

        At up to 100W TDP for the highest part, it won’t be as bad as current iGPUs.

        1536 SPs with HBM2 memory having bandwidth of 170-200GB/s. That’s Polaris 10 territory. Performance leaks are indicating just that.
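
        A quick back-of-the-envelope check of those figures (rough numbers; the single 1024-bit HBM2 stack is an assumption based on the leaks and Intel’s render):

            # GCN packs 64 stream processors per CU
            cus = 24
            sps = cus * 64                      # = 1536

            # One HBM2 stack is 1024 bits wide; bandwidth scales with the per-pin data rate
            bus_bits = 1024
            for gbps_per_pin in (1.4, 1.6, 2.0):
                gb_per_s = bus_bits * gbps_per_pin / 8
                print(f"{gbps_per_pin} Gbps/pin -> {gb_per_s:.0f} GB/s")
            # ~179 GB/s at 1.4 Gbps, ~205 GB/s at 1.6 Gbps, 256 GB/s at the HBM2 max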

        One thing is that the leaks put the model number for the graphics at 694. That’s Polaris, not Vega.

      • chuckula
      • 2 years ago

      If the 24 CU part of it is true, then I can make a very rough ballpark die size estimate based on the 484 mm^2 die size of Vega 10 with 64 CUs: approximately 180 mm^2.

      Of course, it may not scale exactly with CU count so my guess is that’s a lower-bound on a reasonable estimate of the die size.

      Edit: Looks like I was right (thanks for the downthumbs AMD bots). Photos of the chip here: [url<]http://www.guru3d.com/news-story/photo-shows-new-intel-mcm-based-cpu-with-amd-gpu.html[/url<] My estimate comes out to 188 mm^2 based on the proportionality of the known CPU size vs. the GPU size.
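
      For anyone who wants to check the arithmetic, the proportional estimate is just this (a rough sketch of the scaling logic, not an official die size; the 24 CU figure is from the leak above):

          # Scale Vega 10's area by CU count. This ignores fixed-size blocks
          # (memory PHY, display, media, command processor) that don't shrink
          # with CU count, so treat the result as a lower bound.
          vega10_area_mm2 = 484.0   # Vega 10, 64 CUs
          vega10_cus = 64
          target_cus = 24

          estimate = vega10_area_mm2 * target_cus / vega10_cus
          print(f"~{estimate:.0f} mm^2")   # ~182 mm^2, in line with the 180-188 mm^2 guesses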

      • AnotherReader
      • 2 years ago

      The low clock rate, if confirmed, would suggest a Polaris derivative rather than a Vega based GPU.

        • chuckula
        • 2 years ago

        It’s most certainly Vega based. I’ve never seen anything that says Vega GPUs can’t be downclocked.

          • Anonymous Coward
          • 2 years ago

          Merely being Vega based on a mobile platform is a really good reason for it to be downclocked.

          • AnotherReader
          • 2 years ago

          You are right about downclocking, but Raven Ridge’s Vega based IGP is clocked at [url=https://techreport.com/review/32743/amd-ryzen-7-2700u-and-ryzen-5-2500u-apus-revealed/3<]1300 MHz in the Ryzen 7 2700U[/url<]. Perhaps that 1100 MHz leak is from an engineering sample.

            • Beahmont
            • 2 years ago

            Yes, but doesn’t RR have significantly fewer CUs? Clock speed is almost assuredly determined by heat dissipation and not µarch in mobile devices these days.

            • AnotherReader
            • 2 years ago

            You are right about clock speed being determined by TDP. However, this GPU alone probably has a higher TDP than the entire Raven Ridge APU.

            • Anonymous Coward
            • 2 years ago

            This Intel-AMD hybrid won’t be beating nVidia at anything mobile if they don’t keep the clocks low. I think it was well established that Vega does [i<]poorly[/i<] at high clocks.

            • AnotherReader
            • 2 years ago

            Of course, clocking it low to take advantage of the voltage frequency curve is a given. I am just saying that for Vega, that would be probably higher than 1100 MHz.

            • Anonymous Coward
            • 2 years ago

            Perhaps the power usage is just higher than they had originally planned on, so they had no other way to stay inside a certain target TDP.

            Or, if this is a sufficiently high margin product, and one where every watt counts, maybe they’ve chosen to go really low on the frequency-voltage curve.

    • HERETIC
    • 2 years ago

    Long time between drinks - the last time we had an Intel CPU and AMD (ATI) onboard graphics
    was the 600-series chipset, somewhere around 2007 I think…

    EDIT
    Some benchmarks guys-
    [url<]https://wccftech.com/intel-kaby-lake-g-series-integrated-radeon-gpus-first-benchmarks-specifications/[/url<]

      • Klimax
      • 2 years ago

      Radeon Xpress 200 (aka RC410). I’ve got such a mainboard in my collection… (also two mainboards with VIA GPUs, similar vintage)

    • DavidC1
    • 2 years ago

    If HardOCP is completely correct, then it’s internal conflict between AMD and RTG that created this. In other words, politics.

    I doubt Intel will stop at this either. Look for increased integration and more products. Eventually they’ll have to choose between their own Gen graphics, and RTG.

      • chuckula
      • 2 years ago

      [quote<]Eventually they'll have to choose between their own Gen graphics, and RTG.[/quote<] Intel has had its integrated graphics coexisting with external graphics solutions since at least 2011 with Sandy Bridge. I don't see them completely dropping their own graphics in the mobile & regular consumer desktop segment any more than I see ARM SoC makers dropping their "little" and "big" ARM cores.

        • Anonymous Coward
        • 2 years ago

        Seems like dropping established specialized computing hardware at this stage (where hardware is getting more specialized) would be a strange move. If anything, Intel should [i<]invest[/i<] in general purpose computation on their GPUs.

        • DavidC1
        • 2 years ago

        Come on, this is different. With external solutions the customer gets to choose whatever GPU he wants. Do you think Intel is so stupid that they’ll can discrete GPU support since they believe most low end can be served by their GPUs?

        In this case Intel’s putting a significant amount of effort into making a part that’s practically a big iGPU.

        On the NUC roadmap, Skull Canyon (Iris Pro 580, GT4e Skylake) gets replaced by Hades Canyon, which uses these parts. So they planned to scale up their own GPUs, but now they are going to use AMD ones. Cannonlake, if it had worked out, would have had a GT4e version scaling up to 104 EUs, and we would have seen Icelake coming any day now. We had reputable sources like PCWatch saying HBM was Intel’s original goal; just the delay on that memory forced Intel to settle for eDRAM. It could have been Icelake GT4e with maybe 160 EUs and HBM2.

        And look at the CanardPC and HardOCP articles. Look at Raja leaving AMD. They say Intel fired a significant number of graphics engineers to make room for RTG products. The big layoff was in the spring of 2016.

        The 3D side has been at a standstill since late 2015 for Intel. Partly that’s due to their 10nm delays, and I bet that’s also the reason they had to give up on a scaled-up version of Gen graphics. But firing critical people lines up well with the lack of advances on the GPU side.

    • DavidC1
    • 2 years ago

    Score one for CanardPC and Kyle from HardOCP.

    EMIB is only for connecting the GPU and HBM2. The CPU and GPU will likely use a PCIe x8 link, per earlier leaks.

    EMIB is cheaper than a silicon interposer, but both are for very high bandwidth. The CPU doesn’t need very high bandwidth to connect to the GPU, so in that case they can use a much cheaper form of MCM.
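
    To put rough numbers on why (a sketch; the x8 link width comes from the earlier leaks, not from anything Intel has confirmed):

        # PCIe 3.0 x8 is tiny next to the ~200 GB/s a single HBM2 stack feeds the GPU,
        # which is why the CPU-GPU link can live in ordinary package traces.
        lanes = 8
        gt_per_s = 8.0                          # PCIe 3.0: 8 GT/s per lane
        encoding = 128 / 130                    # 128b/130b line coding overhead
        pcie_gb_per_s = lanes * gt_per_s * encoding / 8
        print(f"PCIe 3.0 x8: ~{pcie_gb_per_s:.1f} GB/s per direction")   # ~7.9 GB/s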

    Neither the Radeon die nor the HBM2 die is manufactured by Intel. Besides, Intel’s manufacturing isn’t suitable for making memory chips.

      • Beahmont
      • 2 years ago

      Isn’t EMIB supposed to be cheap enough to eventually build a whole CPU out of individual blocks? I was fairly certain that was Intel’s stated goal with this tech.

        • DavidC1
        • 2 years ago

        EMIB requires embedding small bridge dies for communication. The advantage is over silicon-interposer products: when you need very high bandwidth, you need lots of connections and pins, and a silicon interposer or EMIB allows that.

        But no matter how cheap EMIB may be, it can never be as cheap as a traditional package without a silicon interposer, which isn’t meant for high bandwidth. Extra process steps are involved for the bridge die. It is nowhere near as large as a silicon interposer, which is why it lowers costs, but the cost isn’t zero.

        If high bandwidth isn’t needed (I’m talking about 100GB/s or more), then EMIB won’t be needed.

        • the
        • 2 years ago

        That’s like everyone’s goal right now. AMD hinted at this last year before the Polaris launch, Intel announced EMIB a while back too but is pushing it heavily now, and nVidia has released a paper discussing splitting a GPU into multiple dies.

        Products take time to develop, especially those using IP from multiple vendors. We’re on the edge of a scaling revolution.

      • the
      • 2 years ago

      Intel can manufacture memory chips. The eDRAM for Iris Pro is manufactured in-house. [url<]https://www.realworldtech.com/intel-dram/[/url<] Intel still does research into DRAM manufacturing, if only to keep patents. Commodity memory products they’ll let their partner Micron handle. For the more application-specific memory (SRAM, eDRAM, HBM, HMC, etc.) Intel may choose to handle production in-house, as they can tie production volumes directly to the host die rate. Being closely integrated does have the advantage of not being subject to the pricing whims of a commodity market.

        • DavidC1
        • 2 years ago

        Intel’s eDRAM is different from regular DRAM. The eDRAM on Intel chips has far less density than DRAM, because it uses a process that’s close to what’s used in CPUs. Other companies use eDRAM on the same die as a cache.

        eDRAM was Intel’s short term plan until HBM was mature. They wanted HBM earlier but that didn’t pan out so they had to develop eDRAM for high performance graphics products.

    • Bumper
    • 2 years ago

    So is HSA going to finally be a thing ?

    • iBend
    • 2 years ago

    They got no choice.
    Their mobile iGPU can’t compete with Raven Ridge, and nVidia’s 940 needs to go extinct now (and where is the rumored GTX 1040?)

    it’s a win win for everyone (except nVidia)

      • DancinJack
      • 2 years ago

      They didn’t have to and I’m sure Nvidia is doing just fine…

      Nvidia stock only up 138 points over the past year lol. They’re not worried.

    • Mat3
    • 2 years ago

    AMD should be able to build something similar with their own Ryzen cores and a standard interposer. Intel marketing makes EMIB sound a lot simpler and cheaper than an interposer, I wonder if it really is.

    • Bensam123
    • 2 years ago

    Yup, was just waiting for something like this to happen. While it sounds odd at first, it works out for both companies. Limited ventures while not giving out all your cards can be mutually beneficial for both parties.

    [url<]https://techreport.com/news/30600/amd-takes-a-335m-one-time-charge-for-more-sourcing-flexibility?post=999047[/url<] [quote<]Hah, AMD does some really weird stuff sometimes for the sake of bettering their company, I wouldn't put aside the notion of them partnering with Intel. Long term it might not be a great solution, but for right now it doesn't sound like that bad of a idea. Not like AMD can put Intel out of business... Intel doesn't seem too concerned with furthering performance at the moment either.[/quote<]

    • deruberhanyok
    • 2 years ago

    I wonder if this is what is going to power the 8th gen NUC being lined up to replace the 6th gen Skull Canyon with Iris Pro?

    Listings show it with “discrete” graphics, and I guess technically this would count as not an IGP.

    Also could be nice in a Mac Mini but I see that’s already been pointed out by a lot of people. Might also end up powering lower end iMac and other AIO systems.

    • brucethemoose
    • 2 years ago

    Wait, why even connect the CPU to the GPU with EMIB? PCIe already has plenty of bandwidth, and you could do that over a cheap MCM.

    Maybe Intel/AMD (and Apple???) are reviving the old HSA push:

    [url<]https://www.extremetech.com/wp-content/uploads/2013/11/amd-hsa-2.0-explained.jpg[/url<]

      • chuckula
      • 2 years ago

      [quote<]Wait, why even connect the CPU to the GPU with EMIB? PCIe already has plenty of bandwidth, [/quote<] EMIB is a physical interconnect and PCIe is an I/O protocol that only happens to go over wires in a PCB most of the time for convenience. In other words, assuming the connection between the CPU and the GPU is using EMIB then there is no reason that the EMIB connection can't be acting as the physical layer for the regular PCIe connection that you would expect. [Edit: Although in this chip the EMIB might only be between the HBM memory and the GPU, which really does require massive bandwidth. However, EMIB [i<]could[/i<] route PCIe signals]

        • brucethemoose
        • 2 years ago

        Are there any power savings with EMIB? You could be right, but the wording suggests everything is connected with EMIB, and if Intel just needs 8 or 16 PCIe lanes, why would they spend money on a bridge that can support far more traces?

          • Beahmont
          • 2 years ago

          EMIB is supposed to be pretty darn cheap. EMIB is also supposed to support daisy chaining, so the CPU could theoretically have access to the HBM2 as a last-level cache for the iGPU and/or CPU. And EMIB is extremely fast, with high bandwidth and lower latency than traditional interconnects. EMIB is supposed to at some point in the future connect all the disparate blocks of a CPU, from cores to cache to iGPU execution units to I/O block, into one package and do so better than current interconnects in a monolithic CPU die.

          Note a lot of “supposed to’s” in that statement. We don’t yet know if EMIB is performing up to Intel’s expectations. But this product seems like a good first commercial test for EMIB.

            • Kraaketaer
            • 2 years ago

            EMIB is for short distances only, as in side-by-side chips – otherwise, the cost savings of using small embedded interconnects (which is the entire cost savings of EMIB) disappears entirely. As such, I take it pretty much for granted that this is using a traditional PCIe link embedded into the substrate (not EMIB, just plain copper traces), which should be perfectly doable with traditional manufacturing technologies while having perfectly fine bandwidth (heck, it could even be 8x PCIe – that’s plenty even for a 1080Ti). PCIe can be routed for quite long distances through PCBs (motherboards, GPUs, so on), so why shouldn’t they be able to embed it into the substrate PCB of the chip, after all?

    • NeelyCam
    • 2 years ago

    Can I have a PS5 with something like this in it…?

      • muxr
      • 2 years ago

      Why? The Xbox One X is 5-6 TFLOPS… this chip is 3.3 TFLOPS.
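
      Rough math behind those numbers (a back-of-the-envelope sketch using the leaked 24 CU / ~1.1 GHz figures for this chip and the Xbox One X’s published specs):

          # Peak FP32 throughput = shaders * 2 FLOPs per clock (FMA) * clock
          def tflops(shaders, clock_ghz):
              return shaders * 2 * clock_ghz / 1000

          kbl_g  = tflops(24 * 64, 1.1)     # 24 CUs * 64 SPs at ~1.1 GHz -> ~3.4 TFLOPS
          xbox1x = tflops(40 * 64, 1.172)   # Xbox One X: 40 CUs at 1172 MHz -> ~6.0 TFLOPS
          print(f"Kaby Lake-G GPU ~{kbl_g:.1f} TFLOPS vs. Xbox One X ~{xbox1x:.1f} TFLOPS")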

        • ET3D
        • 2 years ago

        Thanks for mentioning this figure. Helped me find the [url=https://www.tweaktown.com/news/59766/intel-amd-radeon-vega-mcm-half-tflops-xbox-one/index.html<]Tweaktown article[/url<], which includes quite a bit of info, and answers quite a few questions.

    • shank15217
    • 2 years ago

    Haha, looks like Raja is laughing all the way to the bank; it’s time to buy some AMD stock. This is a massive win for AMD.

      • Meadows
      • 2 years ago

      You’re a couple months late for that buy.

      • derFunkenstein
      • 2 years ago

      The time to buy was a year ago when it was $6/share. The time to sell was right before Ryzen launched when it was $15 or more.

      • Klimax
      • 2 years ago

      Literally, just saw article on TR about his exit…

      • ET3D
      • 2 years ago

      Laughing all the way to an Intel job, according to rumours.

    • swaaye
    • 2 years ago

    I’m not sure I see how this thing is going to be much of a success. Seems to target a premium, smaller gamer notebook? And perhaps those not-so-popular PC gaming set top-like boxes? It’s compromised being in the 35-45 W range, but it’s expensive too.

      • chuckula
      • 2 years ago

      Raw hardware wise this thing would put the hurt on consoles but as you noted, it’s not going to be cheap and therefore it’s not really a direct competitor.

        • Chrispy_
        • 2 years ago

        I dunno, the XBONE-X is rocking a 220W TDP, an RX ‘590’ (2560-shader Polaris GPU) and rings in at only $499 as a complete system. This is just a chip, and Intel would normally charge more than half that just for the CPU portion, completely ignoring the more expensive GPU+HBM combination that this also has.

        I’d be amazed if Intel can get even close to the GPU in the new XBONE given the 45W limit and the fact that an RX 580 is a lesser GPU at 180W.

          • chuckula
          • 2 years ago

          I don’t know where all this “45 watt limit” stuff is coming from exactly. The only place I’ve seen it mentioned is that the H-series Kaby Lake chips often have 35-45 watt power ranges, but that’s not an upper limit on the entire package, and there’s a good chance the CPU will throttle down when the GPU really wants to ramp up.

          There’s absolutely no doubt that one of these parts would greatly outpace some Kabini parts in CPU performance. Maybe the Xbox One X might have a somewhat stronger GPU.

          The leaked product listings of Kaby Lake-G describe a 65-watt and 100-watt part. The 65 watt part isn’t really far off of a higher end mobile system now, just that both the CPU and GPU are put in a single package.

          [url<]https://tech4gamers.com/intel-kaby-lake-g-cpus-with-amd-graphics-and-hbm2/[/url<]

            • Chrispy_
            • 2 years ago

            Well, Swaaye said 35-45W above you, I was just replying to that. The only mention of 65 and 100W is a Chinese site that also describes the IGP as “GT2” which doesn’t sound right at all.

            Intel’s press release specifically mentions this:
            [quote<]Now, we’re opening the door for thinner, lighter devices across notebooks, 2 in 1s and mini desktops, while delivering incredible performance and graphics for enthusiasts.[/quote<] I'm just speculating but thinner notebooks is probably talking about the 35-45W range and not the 100W range, 2 in 1s are even more constrained and Intel's idea of a 'mini desktop' is a NUC, which are usually ~15W but there is a single 45W model.

            • Kraaketaer
            • 2 years ago

            Given that we currently have thin (<20mm) 14″ and 15.6″ notebooks with H-series CPUs and GTX 1050Ti and 1060 GPUs (i.e. 45W+40-75W+ TDPs) I don’t think these will be particularly low power. The argument Intel is making here is that with reduced board area, you can fit bigger, and thus better cooling solutions in the same chassis. This is an excellent argument. Weight will still be an issue as heatsinks are heavy, but PCB area savings are still a huge plus. Essentially, this means you can add an extra fan and heatsink without increasing the chassis size. For example, if Gigabyte can use this to add a third fan to the Aero 14, it would suddenly be a much more attractive machine (better thermals, less noise, more stable).

            Of course, this would also allow Apple to integrate their rMBP 15″ motherboards even more, making room for bigger batteries, better cooling or a slimmer chassis.

            • Chrispy_
            • 2 years ago

            MBPs already have the largest legally-permissible battery capacity. If they make it larger then the laptop can’t be shipped by air and people can’t take a MBP on a flight, even if it’s in hold luggage.

            So they won’t make bigger batteries, they’ll just make it thinner.

            /sob.

          • smilingcrow
          • 2 years ago

          Consoles are subsidised as they make the money from games so not a useful comparison.

    • Mr Bill
    • 2 years ago

    ‘I’ve been meaning to ask you. In the transference, Intel got part of your wonderful brain. But what did you ever get from Intel?’
    ‘Mmmmmm…’
    ‘Oh, sweet mystery of life, at last I’ve found you!’

    • AnotherReader
    • 2 years ago

    Kyle at Hard/OCP is enjoying this.

    • frogg
    • 2 years ago

    I was just thinking yesterday that nothing really interesting happened in the PC space in the last 3 or 4 years; and today BOOM

      • chuckula
      • 2 years ago

      THINK THAT MORE OFTEN!

      • DancinJack
      • 2 years ago

      wut?

      your rock must be comfortable from the underside.

    • Krogoth
    • 2 years ago

    This is part of the real goal behind Vega and Navi. They were never intended to recapture the high-end gaming market. AMD RTG has conceded that market to Nvidia since Maxwell.

      • AnotherReader
      • 2 years ago

      Umm. Fury X fares better vs the 980 Ti than Vega 64 does versus the 1080 Ti. For the first finfet generation, AMD has certainly conceded the high-end to Nvidia. However, it might change in the future. The HD 2900 XT was followed a little over a year later by the 4870: perhaps the greatest mid-range GPU ever.

    • DPete27
    • 2 years ago

    Sounds like Apple is looking for an iMac update.

    • willmore
    • 2 years ago

    I am curious if the agreement that allows this device to be made restricts AMD from making a competing device of their own? In other words, does this ‘cap’ what AMD can do with their own chips?

    Because it would be hilarious to see AMD ship a one-chip (less the HBM) solution to this that is performance competitive.

      • chuckula
      • 2 years ago

      ?? If AMD wants to take this “semi-custom” GPU part and turn it into a discrete graphics card I’m sure they could. I doubt it would be worth their while though. I’m not sure what else they would do exactly since they probably don’t want to buy Intel CPUs and they don’t have EMIB technology independently of Intel.

        • willmore
        • 2 years ago

        No, I’m asking if this deal limits AMD from making an APU that competes with this device in graphical performance.

          • chuckula
          • 2 years ago

          Probably not. Physics and economics have already done the work for AMD in that regard.

    • Chrispy_
    • 2 years ago

    When a company as big as Intel is sick of its decade-old chips still being perfectly acceptable for daily use, it combines someone else’s dGPU into the product in order to fast-track the built-in obsolescence.

    Coffee Lake CPUs will probably still be okay for use in 2028, but the GPU will be looking dated by 2020, most likely.

    This does bode well for NUCs though, there might soon be a NUC I want to carry around instead of a laptop….

      • NTMBK
      • 2 years ago

      This is meant to replace soldered GPUs which couldn’t be upgraded anyway.

    • DeadOfKnight
    • 2 years ago

    If you can’t beat ’em, join ’em.

      • Gadoran
      • 2 years ago

      They already beat them long ago, dude; are you too young? The real victim here is definitely Nvidia and its mobile GPU stack.

        • DancinJack
        • 2 years ago

        As long as the 940 is dead WHO CARES

          • tipoo
          • 2 years ago

          940MX ONE THOUSAND YEARS!

            • Voldenuit
            • 2 years ago

            Intel releases a Devil’s Canyon edition with Schezuan Sauce as the thermal interface. Still better than intel’s stock TIM.

            • trieste1s
            • 2 years ago

            Sichuan, or Szechuan

          • Kretschmer
          • 2 years ago

          This man speaks the truth.

        • DeadOfKnight
        • 2 years ago

        Yeah I get that, it was a joke. It’s still a real shame they had to collaborate in order to do this when it seems they should have a complete package all their own right around the corner to make this move.

        Unfortunately, they’ve got no mind share in the CPU market and it likely would not have worked so long as Intel stays on top. Better they do it now than wait for Nvidia to offer a deal to Intel like this.

          • Beahmont
          • 2 years ago

          Not really. Unless they go with an interposer, AMD doesn’t have the tech or patents to build something like this. They just don’t have the interconnect technology. EMIB tech is really the thing that makes this possible.

          And it’s been known that when Intel starts down its path to making MCM chips like EPYC and Threadripper, it was going to use EMIB as the interconnect, because it is just that good. It’s supposed to be something like 90% of the benefits, none of the downside, and 10% of the cost of interposers.

          And it seems like Intel is finally ready to start making basic products before they start doing things like making bits and pieces of a CPU on different nodes and slapping them all together to form a single CPU, which is their stated goal for the EMIB tech.

      • tipoo
      • 2 years ago

      If you can’t beat ’em, join em on an EMIB with HBM2.

    • DeadOfKnight
    • 2 years ago

    Wow. AMD not falling on its sword? I don’t know. I expected that AMD would come out with a similar product on its own. Then again, they will be making money from these chips. I guess the press won’t hurt them at all either if these “graphics powered by AMD” products do well.

    It’s just, at a time when they seem to be on the up and up, they do something that a few years ago might be considered a desperate move. Then again, this could be the result of a desperate deal they actually made a few years ago. It just seems really weird to me, but what do I know?

      • DeadOfKnight
      • 2 years ago

      Actually, the more I read into this from an Intel vs AMD perspective, the more I realize that this has nothing to do with Intel vs AMD. This is clearly a move in the battle between AMD and Nvidia, a winning move. Nvidia will no longer be able to compete in this space.

        • Leader952
        • 2 years ago

        Why?

        What prevents Nvidia from doing the same thing.

          • chuckula
          • 2 years ago

          I can see it now: Nvidia CPU technology merged with Intel graphics!

          • smilingcrow
          • 2 years ago

          Having access to the right ‘glue’ and oh yeah, having someone that wants to partner with you that has a CPU.
          So maybe they go with VIA and UHU.

        • Sahrin
        • 2 years ago

        It’s Intel v. nVidia, now, in the HPC space.

    • NoOne ButMe
    • 2 years ago

    I’m guessing this is a 24-32 CU part, at most.
    So it’s safely targeting a market that Raven Ridge cannot,
    and sitting under the market Vega 10 can target.

    If a Vega 11 chip exists that would be sold as a dGPU, then this could steal some sales, but given AMD’s laptop dGPU+CPU market share is probably measured in the <10,000 unit range, I don’t think that’s an issue.

      • smilingcrow
      • 2 years ago

      As this is seemingly aimed primarily at Apple and other lovers of anorexia, the major appeal is just how small it is compared to using a motherboard with a dGPU and graphics RAM.
      Plus it is meant to be power efficient, so it’s in a separate class from a dGPU solution and aimed at space-constrained products; basically anything Apple makes.
      Tim Cook will use one in his home baked mini Apple crumbles no doubt.

    • chuckula
    • 2 years ago

    So apparently this part is officially the long-rumored but never actually seen Kaby Lake-G series.

    Looks like it’s time for AMD & Kaby-G to [url=https://www.youtube.com/watch?v=aYD3gLCXXuU<]regulate.[/url<]

    • Kougar
    • 2 years ago

    So is Intel going to join the fray fabricating HBM2 stacks, or are they still sourced from Samsung/Hynix?

    Given the supply limitations and multiple price hikes on HBM2 modules in the last year, why does Intel think it can release a large number of products with HBM2 unless it was fabbing them in-house?

      • chuckula
      • 2 years ago

      I doubt that Intel is manufacturing the memory. I think they are assuming more favorable HBM supply conditions in 2018.

      Also, if that simplified graphic is accurate then this is a single-stack solution, which will help with reducing the memory cost in the SoC.

        • Kougar
        • 2 years ago

        Conditions should be more favorable, though more products will undoubtedly be using/shipping with it so I think it’s not an easy forecast for Intel to make.

        The kicker is Hynix went on record saying customers were willing to pay up to 2.5x more for HBM2. Then in August, it came out that prices on HBM2 rose another 30% beyond previous levels. This is an elegant solution for thin form-factors, but those are some drastic price increases on HBM2 that I am sure customers will be paying for in products using this solution, probably on top of Intel’s usual premium for mobile chips and extra to cover the AMD GPU.

    • Philldoe
    • 2 years ago

    I told you. You all doubted me. But I told you.

      • chuckula
      • 2 years ago

      YOU’RE RIGHT!

      Bud Light is less filling [b<]AND[/b<] it tastes great!

        • JustAnEngineer
        • 2 years ago

        Wrong beer.
        [url<]https://www.youtube.com/watch?v=argdPEmD9bI[/url<]

          • chuckula
          • 2 years ago

          We wouldn’t have doubted him if it had been about Miller Light!

            • Redocbew
            • 2 years ago

            I still would have. At least I think so. That dude seems so eminently doubt-able anyway. In the end, I guess I’m not sure.

    • Hattig
    • 2 years ago

    Amazing.

    Clearly a high-end mobile product that won’t get in the way of AMD’s own Raven Ridge plans.

    It is supposedly semi-custom, but I would be willing to bet it’s just a smaller Vega (Vega 11, or the Vega 12 that appeared in one leak). It’s possible the EMIB can transparently replace an interposer, so no changes are needed at the die level. The other theory is that AMD gets a license to EMIB for their GPUs, and Vega 11 will ship with HBM2 and no silicon interposer.

    Obviously Apple are a potential major customer here, one of the few customers that might be interested in, say, a six-core mobile CPU and >3 TFLOPS of GPU in as small an area as possible for MacBook Pros and iMacs. Microsoft too might wish to use it in high-end surface type devices.

      • derFunkenstein
      • 2 years ago

      If the number in the Vega name indicates the number of CUs like it does for Vega 56 and Vega 64, Vega 10 would be a pretty small GPU. I would hope this would be approx a “Vega 32” that sucks down considerably less power both because it’s a smaller chip and because it’s probably clocked down a ways. Big Vega is kind of hungry at its default clocks, but clocking it down a little and undervolting it has been [url=https://techreport.com/review/32391/amd-radeon-rx-vega-64-and-rx-vega-56-graphics-cards-reviewed/11<]shown to be effective[/url<] while giving up minimal performance.

        • Hattig
        • 2 years ago

        Yeah, AMD screwed up by having the internal codenames be Vega 10 and Vega 11, and then using numbers for the release SKUs as well. So Vega 11 codename is likely something like Vega 32 marketing name.

    • AnotherReader
    • 2 years ago

    I wonder if this could be the first sighting of Vega 11.

    • NTMBK
    • 2 years ago

    Funny to see Intel changing their tune about gluing processors together.

      • DancinJack
      • 2 years ago

      Well, the EMIB is a pretty superior alternative. That wasn’t how it was done before.

    • WaltC
    • 2 years ago

    It simply means that Intel is tapping AMD for a high-performance GPU in a market segment where Intel cannot compete with its own hardware. I mean, that’s always been the case between Intel and AMD’s integrated CPU/GPU products–the integrated GPUs from AMD have been superior for years. With Intel becoming as diversified as it is–I hear that in the EU Intel no longer even does PR–it’s only natural. AMD is really the only 100%-tech focused company left between the two, imo.

    • Andrew Lauritzen
    • 2 years ago

    Nice, this is public now 🙂 Was one of the last things I worked on a bit while at Intel. Looking forward to it coming out!

      • AnotherReader
      • 2 years ago

      Are you allowed to talk about what semi-custom means in this context?

        • Andrew Lauritzen
        • 2 years ago

        Definitely not, but I don’t blame you for asking.

        Especially since I’m no longer at Intel I pretty much can’t talk about any details unfortunately, just share my excitement 🙂

          • chuckula
          • 2 years ago

          VE HAVE VAYS OF MAKING YOU TALK!

            • Mr Bill
            • 2 years ago

            Tick
            Tick
            Tick
            Tick…

            • MOSFET
            • 2 years ago

            Tick
            Tock
            PAO

      • davidbowser
      • 2 years ago

      I feel you. Being active in TR in discussions, and then NOT talking about stuff that you actually know is tough.

        • meerkt
        • 2 years ago

          What is it that you can’t talk about? 🙂

          • derFunkenstein
          • 2 years ago

          For real. I want to know interesting stuff I can’t talk about. All the stuff I can’t talk about is boring as hell to the average TR reader. Also the exceptional ones.

            • CampinCarl
            • 2 years ago

            Hear, hear!

          • davidbowser
          • 2 years ago

          Ha! LOTS. The worst is that there are so many seemingly innocuous things, that I honestly can’t remember what is public and what isn’t.

          disclaimer – I work for Google. My opinions are my own.

          There was a recent post about Android phones and I started and stopped writing a comment at least 3 times.

          Whenever there are articles about companies I have worked for (VMware, EMC) or partnered with (pretty much every major software company), I have to check myself. A while back, I almost posted a link to something that I thought was a press release and realized it was an internal site. I had to search for the actual press release and customer facing info before posting.

    • ptsant
    • 2 years ago

    Between Raven Ridge and this, expect dGPUs to only keep their place in the $400+ market. It has taken a really, really long time, but we may really start to see gamer-level GPUs integrated into the SoC.

      • AnotherReader
      • 2 years ago

      Hmm, looking at the progression of Iris Pro, I doubt your prognostication. More than 4 years after introduction, Intel processors with eDRAM are a drop in the bucket in terms of market share. However, this should take care of a lot of laptop discrete GPU sales.

        • ptsant
        • 2 years ago

        The general tendency is to migrate almost everything onto the chip. There is always going to be a benefit to having a discrete GPU, but that benefit will be progressively pushed to the higher end.

        Iris Pro was very expensive and was missing the key ingredient, which is AMD drivers and IP. If Iris Pro had been successful, Intel wouldn’t have asked for AMD hardware 😉

    • ludi
    • 2 years ago

    * checks headline *
    * checks date *
    * Not April 1 *

    * Checks headline again *

      • chuckula
      • 2 years ago

      It was the end of Daylight Saving Time over the weekend.

      TIME TRAVEL!

      • Anonymous Coward
      • 2 years ago

      My first reaction was that the image looked way too fake. Also the april 1st thing. “What is going on?”

        • just brew it!
        • 2 years ago

        Given that it is just an announced future product (not a shipping one) at this stage, that picture probably [i<]is[/i<] a 3D render, or possibly a mechanical mock-up without real silicon on it.

          • DavidC1
          • 2 years ago

          The dimensions seem correct.

          The earlier leaks said the package size is 58.5mm x 31mm. The picture matches. Q1 2018 release means they must have the product almost ready to go.

      • freebird
      • 2 years ago

      You forgot line after: * Checks headline again *

      * Sh!!!ts a NUC Br!ck *

    • maxxcool
    • 2 years ago

    This is not shocking. Someone big needed a REAL GPU married to Intel silicon CHEAPLY… and Nvidia was not the right choice.

    • freebird
    • 2 years ago

    There are numerous pluses here for AMD: the FreeSync issue dpaus mentioned, the fact that this should fill a market AMD wasn’t currently breaking into against Nvidia (gaming laptops), and it should help increase production of HBM2 (vendors will do it for Intel), which will help lower the cost and BOM for AMD’s HBM2-equipped GPU devices. I was hoping to see AMD come out with something leveraging an APU with HBM first; I really wanted an APU paired with HBM/HBM2 for an HTPC with next-gen graphics performance.

    • dpaus
    • 2 years ago

    You know, if this chip comes with FreeSync enabled by default, it might be the beginning of the end for G-Sync.

    It could also do a lot for AMD’s HPC efforts.

      • Ryu Connor
      • 2 years ago

      Aye.

      Who would have imagined that it happened in the same year that Linux took over the desktop and unicorns and dragons once again roamed the Earth?

        • DancinJack
        • 2 years ago

        lol

        • the
        • 2 years ago

        Well Microsoft did release SQL Server for Linux this year.

        It is like we’re living in some sort of bizarro world.

      • MrJP
      • 2 years ago

      Do high-end laptop buyers represent a significant proportion of the monitor-buying market? Or are you thinking that if the laptop panel-makers start to make it a cheap feature to include, it’s likely to make it into more monitors as well?

      In that sense, I think it would prove a bigger tipping point if AMD can persuade OEMs to include FreeSync on the majority of Raven Ridge systems.

        • DancinJack
        • 2 years ago

        I think all OEMs need to do to “include” Freesync on Raven Ridge systems is make sure there is a DP port or HDMI, right? I mean, it’s got to be built into that GPU.

        edit: I suppose you mean on laptop display panels. My bad.

        • RAGEPRO
        • 2 years ago

        FreeSync (more accurately, VESA Adaptive Sync, but…) is built into the eDP connection that almost all laptops use for their built-in panels now. There’s no real excuse for every Raven Ridge laptop not to have FreeSync in some measure.

          • Firestarter
          • 2 years ago

          even Nvidia uses it for their ‘G-Sync’ laptops

          • derFunkenstein
          • 2 years ago

          When will Intel start including VRR support? Back at Skylake launch, Intel confirmed to TR that they’d do it, but the iGPU basically hasn’t changed in the two-plus years since then. That’s the point when you’ll expect every laptop display panel to support it, but I wouldn’t expect it before that. It might be in the spec but my guess is that it’s not implemented in every panel yet.

            • chuckula
            • 2 years ago

            I’m not saying that CannonLake actually does support it, but CannonLake does have an updated graphics architecture. So it’s at least possible from 2018 onward.

            • tsk
            • 2 years ago

            My little birds at Intel whisper about Ice Lake and adaptive sync.

    • wingless
    • 2 years ago

    Intel and AMD fanboys can now come together in peace and harmony. Let there be peace! (except for Nvidia….)

    • WhatMeWorry
    • 2 years ago

    To quote Steve Jobs at the keynote where Apple switched to Intel CPUs, “Hell has frozen over.”

    edit: oops, Skywarrior beat me to it. Read all comments before posting…Read all comments before posting…

    • ET3D
    • 2 years ago

    Nice to see this finally confirmed by Intel. I’m sure some web reporters are now patting themselves on the back.

    This indeed limits the appeal of Raven Ridge, and I imagine that’s Intel’s goal in announcing this at this point in time.

    I do wonder what kind of AMD GPU will be in this. Certainly the HBM2 suggests something Vega based, but there’s still the question of how many CU’s and which video processor will be included, Intel’s or AMD’s (I’m guessing Intel’s).

    Edit: according to Tweaktown, it’s 24 CU, 4GB HBM2, Vega architecture.

      • chuckula
      • 2 years ago

      There’s a good chance that the Intel IGP will still be used for lower-power scenarios and I agree that the Intel video-acceleration hardware will likely be used for video playback (it’s actually ahead of AMD’s in high-bit depth HEVC decode support).

        • ET3D
        • 2 years ago

        Including an Intel IGP seems extremely wasteful for this product. That’s a very large chunk of silicon that would go unused.

        Far as HEVC bit depth goes, don’t both Intel and Raven Ridge support 10 bits for both encoding and decoding?

          • chuckula
          • 2 years ago

          [quote<]Including an Intel IGP seems extremely wasteful for this product.[/quote<] At only 122 mm^2 for a quad-core mobile part with the IGP, it's more wasteful to redesign a chip that completely drops the IGP just for these limited-production parts than it is to drop in a mass-produced die that's probably noticeably smaller than the GPU.

            • ET3D
            • 2 years ago

            That again raises the question of the number of CU’s. If they have 16 CU’s or less, which I think is reasonable for a low power target, and if the display block and video block are on the Intel side, then that could end up smaller than 122mm^2 (the RX 460’s die is 123, and that does include these blocks; although Vega has larger CU’s, assuming Vega is used).

            In the end, it depends on scale of production, and on the size of the Intel IGP. If this does go into the next MacBooks, then the scale may be large enough to be worth cutting down the IGP.

            (But that’s all speculation. If it’s a big, low clocked Vega then it might be quite a bit larger than the Intel die.)

            • NTMBK
            • 2 years ago

            The 3D rendering in the press release shows a GPU that is almost twice as large as the Intel die. I think “big and low clock” is the way they’re going here.

            • ET3D
            • 2 years ago

            Tweaktown says 24 CU, so that solves this question, assuming they’re right.

          • Andrew Lauritzen
          • 2 years ago

          In general if you want it in anything connected to a battery, you’re going to want the iGPU enabled 🙂 The reality is that mobile dGPUs have been designed to assume that an iGPU is present to do anything that cares about power use, so they simply haven’t been optimized to handle desktop and media stuff in a particularly power-efficient manner like SoCs/APUs have.

          As chuckula notes as well, the iGPU’s media capabilities exceed the dGPU here by a fair margin (not just format support but stuff like the Netflix ultra HD stuff) and at much lower power.

          In Win10 there’s not much of a disadvantage to running with multiple adapters enabled in any case.

            • Hattig
            • 2 years ago

            If it’s a custom piece of silicon, it may not include uncore GPU features that are already in the CPU die – most specifically video decode/encode.

            However I’m sure that the media capabilities of Raven Ridge and Vega support the same set of formats and resolutions as Intel’s. 10-bit HEVC at 4K decode is a fairly standard feature these days for video decode blocks.

            Yes – “Vega” offers hardware-based decode of HEVC/H.265 main10 profile videos at resolutions up to 3840×2160 at 60Hz, with 10-bit color for HDR content. Dedicated decoding of the H.264 format is also supported at up to 4K and 60Hz. “Vega” can also decode the VP9 format at resolutions up to 3840×2160” – [url<]http://radeon.com/_downloads/vega-whitepaper-11.6.17.pdf[/url<]

            • Andrew Lauritzen
            • 2 years ago

            Has Netflix/AMD enabled UltraHD streaming on their hardware yet? Last I checked it was still only Kaby Lake+ and Geforce 10+. I’m not saying this is because of format capabilities, but it’s obviously pretty relevant to consumers.

            Also note that there’s a wide range of power efficiency in video decode support, and dGPU vendors typically rely more on a mix of shaders/fixed function/CPU to support the newer formats and combinations since power is not quite as important. Unfortunately they don’t tend to advertise in which cases they do this so you pretty much have to know how to measure the power use to figure it out. HEVC is becoming fairly standard to do “fully” in hardware, but VP8 and VP9 are still often implemented with a hybrid scheme on some hardware.

            Anyways the high level point is just don’t make assumptions on this stuff… realize that dGPUs are still designed assuming an iGPU is present and thus do not make power efficiency of desktop and media workloads a priority. Thus unsurprisingly they tend to lag somewhat in that space.

            • Anonymous Coward
            • 2 years ago

            I broadly agree with you, but at the same time, AMD really should have power usage during media decode nailed down. There is more than a little in common between their latest dGPU and their soon to arrive latest iGPU, and also this new Intel-AMD-on-a-stick. I have no inside knowledge, but I think they should have solved the decode problem correctly and in a uniform way.

            • Andrew Lauritzen
            • 2 years ago

            Yes it’s not usually an issue of the tech not being available, it’s just different design points are optimized in different ways. I fully expect their RR parts to be every bit as efficient in desktop/media stuff if not more, but that’s why power/efficiency/media features typically actually lead on those SKUs. The engineers know the target market for the different design points and optimize accordingly.

            • ET3D
            • 2 years ago

            AMD has been offering iGPU for a while, including at 15W and below. So calling that ‘dGPU’ is meaningless. An AMD part of the die can easily do all GPU related stuff, and it can do it with reasonable efficiency. Using the AMD part seems to imply HBM2 use (although I think the design could allow for it to be turned off and data come from main RAM directly), and that could affect power use, but it’s possible that running HBM2 at low clocks doesn’t draw a lot (and it does reduce draw from main memory). If the cores are Vega, the HBM2 would be a cache, i.e., small and therefore low power.

            As for media capabilities, it would be strange if AMD can’t offer PlayReady 3.0 support with its current hardware, but I guess we’d have to wait and see if a driver update solves this (as it did for NVIDIA). Technically, I think that AMD’s hardware block does everything Intel’s does.

            It’s certainly possible that Intel’s stuff will be what ends up being used. It even makes some sense to include both Intel and AMD media capabilities. I just don’t find the explanations against removing Intel stuff totally convincing. It’s certainly simpler to have just one GPU and media block. It’s true that we’re at a point where having two GPU’s doesn’t have major downsides, but single GPU is still a more robust solution.

            • dodozoid
            • 2 years ago

            I guess a huge part of the power penalty for using a dGPU for multimedia would be the out-of-die interconnect. And I am practically sure this uses run-of-the-mill PCIe via copper traces to communicate with the dGPU.

      • dpaus
      • 2 years ago

      These chips are going to be in a price range that doesn’t affect Raven Ridge’s market prospects.

        • ET3D
        • 2 years ago

        People don’t always go by price range. This chip pushes Raven Ridge out of the high end gaming spot for thin and light. People looking for the best gaming solution at that form factor would now have something better to look forward to.

          • derFunkenstein
          • 2 years ago

          Raven Ridge has so far been [url=https://techreport.com/review/32743/amd-ryzen-7-2700u-and-ryzen-5-2500u-apus-revealed/3<]announced as 15W parts[/url<]. The Intel processors being paired up with AMD graphics are HQ parts, which have 45W TDPs.

            • Anonymous Coward
            • 2 years ago

            I wonder why AMD has only talked about low-wattage versions of Raven Ridge. I imagine it would do pretty fine with 45 watts. Also, cost-effective at the cash register. Just my kind of mobile processor, really.

      • ptsant
      • 2 years ago

      I don’t expect it to compete in the same segment as this. I would put RR at max $250, and this looks like a $500+ chip.

    • Phartindust
    • 2 years ago

    The sky is falling! lol totally caught off guard by this. They really played this one close to the vest. Guess those rumors a few months back were true after all.

    • SkyWarrior
    • 2 years ago

    Hell’s frozen now!

    • dpaus
    • 2 years ago

    From a technical point of view, I thought that this (and by ‘this’, I mean EITHER an Nvidia OR an AMD GPU on an Intel chip) was a no-brainer, but assumed that it would never happen due to Intel’s arrogance (and I don’t necessarily mean that in a negative way – sometimes, ‘arrogance’ is simply the visible part of ‘extremely successful performance’).

    I am pleased to be shown wrong in this case, and good on Intel for doing so.

      • tipoo
      • 2 years ago

      I think Nvidia doubling Intel GPUs perf/watt has something to do with it. And old Jen Hsun would probably never work with Intel on this.

        • dpaus
        • 2 years ago

        Well, that and the fact that Jen gleefully pissed on Intel’s cornflakes for years. AMD just sued them – something that Intel totally understands and can deal with.

        • qasdfdsaq
        • 2 years ago

        Nvidia didn’t double Intel’s perf per watt, not even close. Intel doubled Nvidia’s perf per watt with the first Iris Pros (a 15W iGPU equalling the performance of a 25W-30W dGPU). Nvidia took two years to retake the crown with Pascal, but at 80-90% more performance for 50% more power, claiming they have double the perf per watt is total rubbish.

          • tipoo
          • 2 years ago

          The Iris 650 runs at about 15 watts under combined load in the rMBP 13 and similar systems; the MX150, with a TDP-down of 15W, very nearly doubles that performance (your 80-90% is about right). Where do you see 50% higher power?

          The MX150 CAN run with a TDP-up to 25W, but it doesn’t increase performance proportionally, so I’m using the 15W perf/watt basis.

            • chuckula
            • 2 years ago

            [quote<]Where do you see 50% higher power?[/quote<] That's easy and your own numbers show it. CPU + GPU under load: 15 watts. GPU by itself: 15 watts. 50% higher may not be a precise figure but it's probably not that far off.

            • tipoo
            • 2 years ago

            No, the GPU part of the package was using 15 watts; the whole chip with the Intel cores was using 30. Both the 15 and 28 watt ULVs can pretty well stay at ~30 watts of consumption with decent cooling.

            After the boost phase both rMBP 13s settle in at 15 watts to the GPU, 30 for the whole package, most similar systems offer the same split.

            • Andrew Lauritzen
            • 2 years ago

            Are you basing this on the power counters from the chip (Intel XTU or similar)? Note that the CPU/GPU SoC power estimate is very approximate and even at the best of times can’t properly break out power from any shared resources (caches, LLC/eLLC, etc).

            For the sort of comparison you’re trying to make, you unfortunately need probes on the specific pins of the chip and a deep knowledge of the layout and power-sharing capabilities. You can’t just hand-wave TDP numbers beyond saying “they’re roughly in the same order of magnitude” 🙂

            From a consumer point of view, though, what ultimately matters is running a workload with a fixed performance target and seeing how long the battery lasts 🙂 There’s no need to get fancier than that in terms of the impact on the end user.
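            For reference, a minimal sketch of what reading one of those package-level counters looks like on a Linux box with the intel-rapl powercap driver (the sysfs path and the 5-second sampling window here are assumptions, and it carries exactly the shared-resource caveat above):

[code<]
import time

RAPL = "/sys/class/powercap/intel-rapl:0"   # package 0 on most Intel Linux systems

def read_uj(name):
    # Counters are plain integers in microjoules.
    with open(f"{RAPL}/{name}") as f:
        return int(f.read())

start = read_uj("energy_uj")
time.sleep(5.0)                             # run the workload of interest here instead
end = read_uj("energy_uj")

wrap = read_uj("max_energy_range_uj")
delta_uj = (end - start) % wrap             # handle counter wraparound
print(f"Average package power: {delta_uj / 5.0 / 1e6:.2f} W")
[/code<]

            Sampling energy before and after a fixed workload at least keeps the comparison apples-to-apples, even though the counter itself is still an on-die estimate.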

            • tipoo
            • 2 years ago

            Yeah, the power counters. If that ignores shared resources then the Iris could certainly be lower, though whatever wattage we remove from that count only partially makes up for the 80-90% performance lead.

            So double was optimistic on my part. But if I’m Apple and I see MX150 laptops beating rMBP performance by that much in similar form factors, I probably want to set Intel and AMD up on a blind date, right? 😛
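            To put a rough number on it, using the figures thrown around in this thread as assumptions rather than measurements: if both parts really sit at ~15W the MX150’s perf/watt lead is about 1.85x, but if the ~50% higher power estimate above is closer to the truth, it shrinks to roughly 1.2x.

[code<]
# Rough perf/watt check using the estimates from this thread (not measurements).
iris_perf, iris_watts = 1.00, 15.0   # Iris 650 baseline, ~15 W attributed to the GPU
mx150_perf = 1.85                    # ~80-90% faster, per the posts above

for label, mx150_watts in [("equal 15 W", 15.0), ("~50% more power", 22.5)]:
    ratio = (mx150_perf / mx150_watts) / (iris_perf / iris_watts)
    print(f"MX150 perf/W advantage ({label}): {ratio:.2f}x")
# -> about 1.85x if both sit at 15 W, about 1.23x at ~50% more power
[/code<]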

            • Andrew Lauritzen
            • 2 years ago

            Sure, I think it’s fair to say based on consumer benchmark-level information that the two are “roughly in the same ballpark” now. Anything more precise and you really need silicon-level measurements and specific workloads.

    • ronch
    • 2 years ago

    Oh my goodness. Like the USA giving rocket plans to the Russians.

    Edit – to clarify, my point is, giving away one of your crown jewels is like giving the Russians your rocket plans.

      • tipoo
      • 2 years ago

      Who is giving who plans?

      Intel isn’t fabbing the Radeons, are they? Just assembling the end package together? Need more details atm. Fabbing Radeons would be equally massive news if not more, but I don’t think it’s that.

        • dpaus
        • 2 years ago

        I’m sure Intel will be fabbing them onto a single piece of silicon, which would be the only way to get all the potential benefits.

          • tipoo
          • 2 years ago

          Hm? The press image shows a multi chip module, multiple pieces of silicon. In past implementations of this, they don’t need to come from the same fab.

          The point is all the chips are connected with the EMIB.

            • dpaus
            • 2 years ago

            D’Oh!!! Read acronyms CAREFULLY before posting…. :-O

            Also:

            [url<]https://www.pcworld.com/article/3235934/components-processors/intel-and-amd-ship-a-core-chip-with-radeon-graphics.html[/url<]

            • tsk
            • 2 years ago

            No, only the HBM and GPU are connected via EMIB.

      • dpaus
      • 2 years ago

      Except that the Russians were far ahead of the Americans in rocket technology throughout the late 50s and early 60s – but only because they had pursued the technology they’d gotten from the Germans more aggressively than the Americans pursued the technology they’d gotten from the Germans…

        • chuckula
        • 2 years ago

        Russian, American, IS ALL MADE IN TAIWAN!

          • Phartindust
          • 2 years ago

          Favorite scene in that movie.

          • NTMBK
          • 2 years ago

          Funny, I don’t see Taiwan on the list of GlobalFoundries fabs.

          EDIT: Ohhh, it’s a reference. Ignore please 🙂

        • Krogoth
        • 2 years ago

        Actually, the Soviets were only ahead of the USA in rocketry until 1961; the USA quickly caught up and overtook the Soviets in 1964-1965. The Soviets were unable to get their heavy lifter (the N1) to work.

          • w76
          • 2 years ago

          If you’re going to talk about the N1, let’s not forget the Buran, which might’ve proven to completely outclass the Shuttle. They just didn’t get much of a chance to shine.

            • ludi
            • 2 years ago

            For anyone who doesn’t know this bit of history… it’s pretty fascinating. There are even a couple of half-complete units deteriorating away, one of them in a ruined factory in Kazakhstan along with an engineering mock-up:

            [url<]https://news.nationalgeographic.com/2016/04/160412-soviet-union-space-shuttle-buran-cosmonaut-day-gagarin/[/url<] [url<]https://www.urbanghostsmedia.com/2015/12/abandoned-buran-space-shuttles-kazakhstan-baikonur-cosmodrome/[/url<]

            • Krogoth
            • 2 years ago

            The Space Shuttle was a joke because the NSA wanted the shuttle to be able to deploy satellites and possibly “snatch” Soviet units. Buran was built around doing the same thing to the USA during the late Cold War. The USSR fell apart before Buran could partake in its intended mission.

        • psuedonymous
        • 2 years ago

        [quote<]but only because they had pursued the technology they'd gotten from the Germans more aggressively than the Americans pursued the technology they'd gotten from the Germans...[/quote<] Not at all; almost all of Russia's domestic rocket technology came from indigenous development (a lot of it stemming back to GIRD alumni). Russia had no Von Braun, but they instead had Korolev, who was even more of an engineering force than Von Braun when it came to getting rockets from design to production. While Von Braun was mired in inter-agency infighting over rocketry (Von Braun being linked to the US Army's efforts until NASA was formed to consolidate them), the R7 was developed and launched Sputnik; it was a completely separate design from the V2, and massively more advanced. The R7 architecture is still in use today (the Soyuz's characteristic core stage with parallel strap-on liquid boosters), which is a testament to Korolev.

          • Krogoth
          • 2 years ago

          Yep, the R7 is the father of all ICBMs and light lifters.

        • frogg
        • 2 years ago

        Except it’s the Russians who provide the RD-180 engines for ULA’s rockets … [url<]http://spacenews.com/ula-orders-20-more-rd-180-rocket-engines/[/url<]

    • tipoo
    • 2 years ago

    Interesting development for sure…So maybe we could see 13″ ultrabooks with Radeon graphics that challenge the MX150 first?
    I’m down for a 13″ rMBP with the 8th gen ULV quads, and these Radeon graphics.

      • Jeff Kampman
      • 2 years ago

      This is going to be way more powerful and way more expensive than an MX150, I suspect.

        • dpaus
        • 2 years ago

        And waaaay more expensive than a Raven Ridge; AMD gains lots and loses virtually nothing in this deal. (and Intel gains a bit and loses absolutely nothing….)

        In fact, the big loser here looks to be Nvidia…

        • ptsant
        • 2 years ago

        Even if it isn’t much more powerful than the discrete MX150, it enables thinner and lighter form factors, which apparently are the most important feature of a computing device, after the absence of measurable bezels.

          • tipoo
          • 2 years ago

          Consolidating the motherboard space between CPU and GPU too, hopefully freeing it up for more battery.

            • ptsant
            • 2 years ago

            Hopefully. Experience has shown otherwise. Especially with respect to premium devices, that are defined by their thinness.

            • derFunkenstein
            • 2 years ago

            There’s also a practical limit to the size of the battery in a device. The FAA doesn’t allow infinitely expanding batteries. Pretty sure sub-100 watt-hour juice boxes are the limit.

        • mtruchado
        • 2 years ago

        Not to forget, the MX150 uses Optimus tech, which means you still have the crappy Intel IGP for 2D.

      • Kraaketaer
      • 2 years ago

      Raven Ridge is for 13″ ultrabooks with Radeon graphics, at 15-25W. This is not for that market, given that it has a 45W-class H-series CPU and a large dedicated GPU. Leaked benchmarks of pre-production chips peg these at 65W and 100W, which is far more likely given the specs and market. 65W might be possible to cool in a fat 13″ chassis, but not an Ultrabook – and 100W needs a 14-15″ chassis and beefy fans – but that’s still far less than an H-series CPU + a GTX 1060.

    • kvndoom
    • 2 years ago

    Next thing you know we will have cats and dogs living together under the same roof!

      • dpaus
      • 2 years ago

      There you go again, totally ignoring the fact that dogs consume far more energy per day than any cat!

        • PrincipalSkinner
        • 2 years ago

        I dunno man, lions are cats too.

      • xeridea
      • 2 years ago

      I have a cat and a dog…. used to be 3 cats and a dog about 6 years ago. Only issue is that dogs love cat food, so need to keep it off ground level.

        • Mr Bill
        • 2 years ago

        My cat’s food bowl is at the top of his cat tower by the front window. Primarily because the dog would eat it otherwise. He still slobbers in the water dish on the floor, much to the disgust of my cat.

    • derFunkenstein
    • 2 years ago

    I know there’s a lot of talk about Apple in the comments here, and maybe Apple is at least partly responsible for the development (discrete-class graphics in a 13″ MacBook Air, for example), but it seems like Apple would be the ones trumpeting this thing if it were an Apple-only technology, right? At least I’m holding out hope this makes it into other brands of PCs.

      • K-L-Waster
      • 2 years ago

      Apple doesn’t talk about other people’s technology though — they only talk about their own.

      E.g. when was the last time you heard Apple talk about how a newly announced CPU will be in their next gen of… well, anything?

        • chuckula
        • 2 years ago

        The unlaunched iMac Pro is one exception that touts both Xeonized Skylake-X and Vega graphics.

        But that’s an exception more than the rule and it was also done because people were (rightfully) complaining that Apple had effectively abandoned Mac development.

          • the
          • 2 years ago

          Random little tidbit about the Skylake-X chips: some dies support a total of 64 PCIe lanes. Only 48 of them can go toward socket 2066 or socket 3647. The other 16 PCIe lanes are for in-package communication. Intel currently uses the in-package lanes for its Omni-Path controller, but this does open some interesting possibilities for things like the iMac Pro.

        • derFunkenstein
        • 2 years ago

        Looking at it another way, if this was a Mac-only product, why would Intel and AMD make a big fuss?

          • K-L-Waster
          • 2 years ago

          Intel always announces their new chips.

          AMD can use all the news of additional business they can get.

          • techguy
          • 2 years ago

          Because if Intel is an 800-lb gorilla, Apple is King Kong.

          • AnotherReader
          • 2 years ago

          Wasn’t the first Iris Pro pretty much a Mac only product?

            • the
            • 2 years ago

            Mostly. Intel used Iris Pro in a few NUCs, and some business/workstation-class machines from the big four OEMs (Dell/HP/Lenovo/Toshiba) offered it. Intel did release the i7-5775C on the desktop, though many skipped it in favor of the consumer Skylake parts that launched a few months later.

            It would be nice if Intel was more aggressive about releasing parts with that 128 MB of eDRAM. Some desktop workloads loved that extra cache.

            • chuckula
            • 2 years ago

            Now that EMIB is starting to filter down into the consumer market, don’t be shocked if you see future iterations of that eDRAM cache providing high-speed L4 caches using separate chunks of silicon.

      • tipoo
      • 2 years ago

      Apple basically single-handedly asked for Intel’s eDRAM GPUs, starting with the first Iris Pro 5200. They’re still one of the Iris Plus line’s biggest users. Since Nvidia trumped Intel’s performance per watt on graphics, I think it’s very possible Apple pushed for this solution.

      And just because they pushed for it doesn’t mean it’s not available to the mass market, like the Iris Pro as well.

        • tay
        • 2 years ago

        Yeah, good points. What do you mean when you say Nvidia doubled Intel’s perf/W? I thought Intel was doing well in this regard. Anyway, Apple’s choice of OpenCL means that they are stuck with AMD GPUs for the time being, so this could be a way to get the perf/W down.

          • tipoo
          • 2 years ago

          They were doing well until Pascal. With the MX150 and Iris 650 both allowed 15 watts (to the GPU alone for the latter, not the whole package), the MX150 is about 80-90% ahead.

      • MrJP
      • 2 years ago

      [url=https://newsroom.intel.com/editorials/new-intel-core-processor-combine-high-performance-cpu-discrete-graphics-sleek-thin-devices/<]Intel's release[/url<] certainly implies more than one OEM will be producing systems: [quote="Intel"<] Look for more to come in the first quarter of 2018, including systems from major OEMs based on this exciting new technology.[/quote<]

    • Dposcorp
    • 2 years ago

    WOW!

    • Forge
    • 2 years ago

    Waaaaaaaaaaaa??

      • chuckula
      • 2 years ago

      [url=https://www.youtube.com/watch?v=BNyDjkPO8l0<]WHA HAPPEN?!?![/url<]

    • the
    • 2 years ago

    Whoa this is big news.

    So Intel and AMD are indeed teaming up and that’s only the first big message here.

    The second is EMIB being used for HBM. So far all implementations have used an interposer.

    This raises several questions for me. The first is what foundry the GPU is coming from. The usage of EMIB would point toward Intel’s fabs, but that is an assumption at this point. Is this the first of many products like this? Lastly, considering its mobile focus, why didn’t Intel/AMD go with Wide I/O instead of HBM?

      • NTMBK
      • 2 years ago

      Why would the GPU need to be fabbed on Intel to use EMIB? Surely so long as the chip is designed with the EMIB interface in place, it shouldn’t matter where it’s fabbed.

        • the
        • 2 years ago

        Intel would at least have a hand in packaging since EMIB is their invention.

          • NTMBK
          • 2 years ago

          Oh in packaging, sure. But it’s the same idea as a GPU and HBM stack being fabbed in different places, then packaged together.

            • DavidC1
            • 2 years ago

            In fact, that’s what EMIB is planned for: using third-party IP blocks. Not only that, but having chips from multiple different processes on a single package.

            10nm CPU
            32nm chipset
            45nm FPGA
            29nm memory specific process for HBM2

            For example.

            • the
            • 2 years ago

            Again, the shocker is that AMD is leveraging this.

            The potential isn’t just for GPU and memory, but for splicing multiple GPU dies together to permit a massively sized GPU. The era of being limited by die size looks to be over if this proves economical and yields well. The question is whether AMD can leverage Intel (or get a license) for this technology for use in pure AMD GPU products.

            Power and heat are the remaining big limitations.

      • chuckula
      • 2 years ago

      [quote<]The second is EMIB being used for HBM. So far all implementations have used an interposer. [/quote<] That is a big deal since EMIB is intentionally designed to be a lower-cost packaging solution compared to a full-bore interposer.

    • DancinJack
    • 2 years ago

    For as much hate as Intel might get, they deserve props for going to AMD and asking about this. Good on them.

      • psuedonymous
      • 2 years ago

      It’s not as if they really had a choice of vendor: AMD have been surviving off the custom chip market, while Nvidia have shunned it entirely because they have the much higher margin market cornered designing their own chips for sale. It’s possible they could have brought up Imagination, but PowerVR has had effectively no x86 presence since some of the older Atoms, and they have no ready-to-go high performance solution.

        • DancinJack
        • 2 years ago

        It’s not like they HAD to do this to survive. I don’t necessarily disagree with your deduction though.

          • Anonymous Coward
          • 2 years ago

          Yeah, it’s purely optional for Intel; they could easily have pretended AMD doesn’t exist and been fine.

        • Klimax
        • 2 years ago

        PowerVR has much bigger problem for everybody: Drivers…

    • Leader952
    • 2 years ago

    Intel Press Release:

    [url<]https://newsroom.intel.com/editorials/new-intel-core-processor-combine-high-performance-cpu-discrete-graphics-sleek-thin-devices[/url<]

      • chuckula
      • 2 years ago

      AMD did its own release too: [url<]http://www.amd.com/en-us/press-releases/Pages/semi-custom-graphics-chip-2017nov6.aspx[/url<]

      • AnotherReader
      • 2 years ago

      Perhaps you didn’t read the article. The second sentence links to that press release.

        • Leader952
        • 2 years ago

        I did read the article.

        Didn’t see the link just the words.

        What exactly is your position here – the unofficial corrector?

          • AnotherReader
          • 2 years ago

          I just dislike redundant information

    • DancinJack
    • 2 years ago

    My next NUC-style PC is begging for a 45W version of this. C’mon.

      • Wirko
      • 2 years ago

      Apple makes NUCs. Kind of. Basically. Your hope isn’t in vain.

        • DancinJack
        • 2 years ago

        I’d own a Mac Mini if its parts weren’t fifteen years old.

        • tipoo
        • 2 years ago

        Don’t make me start believing Apple will make a Mac Mini with this!

          • K-L-Waster
          • 2 years ago

          For that you would need the ghost of Steve Jobs.

    • nico1982
    • 2 years ago

    Wow! It was true after all! :O

    • weaktoss
    • 2 years ago

    wat.

    • AnotherReader
    • 2 years ago

    Wow, this is… Words fail me. With HBM2, this will probably be much faster than AMD’s own Raven Ridge. What is the benefit to AMD of doing this?

      • chuckula
      • 2 years ago

      They sell graphics silicon in niches [i<]COUGH APPLE NOTEBOOKS AND MAYBE SOME IMACS COUGH[/i<] where Raven Ridge was never going to be sold at all and where using a standard mobile GPU layout would work against thin-n-lightiness.

        • Fursdon
        • 2 years ago

        I think chuck is spot on here. I also wonder how large of a GPU part Apple is putting in here.

        Part of AMD’s problem is I don’t believe they can afford some of the stuff we’d like them to try. As we’ve seen all too frequently with them, running into busts can cripple their divisions/company.

        This way, Apple (I’m assuming) foots the bill.

          • TEAMSWITCHER
          • 2 years ago

          If Apple paid for this … we would not be hearing about it today. Maybe Apple is gearing up an assault on Intel’s “bread-and-butter” products … mobile Core processors. A new line of high-performance, ARM-powered MacBooks could be disruptive technology, especially if it solved the Laptop Gaming Problem (BIG FAT PLASTIC LAPTOPS WITH EXCESSIVE WEIGHT, HEAT, NOISE, AND RIDICULOUSLY SHORT BATTERY LIFE).

          This “Franken-chip” could be an attempt to provide PC makers with something that might be able to compete. ARM has already won the day in smartphones and tablets – I can’t help but think that laptops would be the next territory to conquer.

        • K-L-Waster
        • 2 years ago

        Makes way more sense than the MSM saying it’s a way for both of them to go after NVidia….

          • chuckula
          • 2 years ago

          This is the part of the comic book where Intel and AMD join forces to fight their common nemesis!

            • shank15217
            • 2 years ago

            Nvidickula!!

            • MOSFET
            • 2 years ago

            Yeah, the “dick” part is about right…

          • ludi
          • 2 years ago

          Why not both? Intel doesn’t have a reason to build this product unless a major OEM or ODM is pushing for it, and only Apple has that kind of clout right now; and at the same time, Nvidia is the biggest long-term threat to Intel’s marketspace and this is a great way for Intel to push a fork in Nvidia’s eye. It’s only an olive fork but it sends a message.

          I would also throw in the obligatory “and keep the FTC investigators off our back for another year,” but I don’t think the current administration is much interested in antitrust. Albeit some of these decisions could have been made before the end of 2016, so that consideration may have been in the mix.

        • tipoo
        • 2 years ago

        I can totally see Apple pushing for this. The MX150 nearly doubled the Iris 650’s performance and it was starting to look bad, but they haven’t used Nvidia for a while for various reasons. This may be the push to match it.

          • WaltC
          • 2 years ago

          Apple is certainly a good bet. But Intel presently doesn’t play in the high-end mobile GPU space, so we shall see.

      • Waco
      • 2 years ago

      This. I was thinking Raven Ridge was going to be my next laptop…but if I can get Radeon + HBM2 + Intel cores…ugh.

      Indeed, I don’t understand why AMD would do this. It completely undercuts their own APU launch – fingers crossed they have an HBM2 APU hidden away?

        • chuckula
        • 2 years ago

        It only partially undercuts Raven Ridge because… despite the internal benchmark hype… Raven Ridge is not pursuing the high-end of the mobile market.

        These things [i<]do[/i<] compete against a hypothetical Raven Ridge + discrete AMD GPU product, but I think AMD was willing to take a sure thing with this product.

          • Waco
          • 2 years ago

          The benchmarks will sort that out, but I’d be willing to bet Raven Ridge w/ HBM would be pretty epic. I don’t believe the latencies are so terrible they couldn’t share a pool of HBM as well, assuming Zen is capable of addressing it.

            • chuckula
            • 2 years ago

            [quote<]The benchmarks will sort that out, but I'd be willing to bet Raven Ridge w/ HBM would be pretty epic.[/quote<] Guess who has the packaging technology to implement an HBM graphics chip integrated into a CPU/GPU SoC? Intel (thank you EMIB). That's why you are seeing this product and not a giant silicon-interposed mobile RyZen. Oh and you misspelled Epyc there.

            • AnotherReader
            • 2 years ago

            EMIB is very interesting; it is also interesting that HMC wasn’t used.

            Now, AMD’s divorce from its fabs was sold as a way of allowing AMD to choose among foundry partners. Of course, AMD being AMD, it didn’t turn out that way. Now that Intel is willing to partner with others, would we see a Vega or a future AMD GPU on Intel’s 14 nm? That would be an even better way for Intel to get back at Nvidia.

            • Andrew Lauritzen
            • 2 years ago

            I think there’s approximately a 0% chance that Raven Ridge will use HBM for the CPU…

            • Waco
            • 2 years ago

            Agreed, but I’m ever hopeful.

          • DeadOfKnight
          • 2 years ago

          How do you know they don’t intend to pursue the high-end mobile market?

            • chuckula
            • 2 years ago

            I never said AMD didn’t intend to pursue the high-end mobile market. They just did by integrating their GPUs into Intel’s high-end mobile platform.

            I said [b<]Raven Ridge[/b<] isn't targeting the high-end mobile market. And AMD agrees with me.

            • DeadOfKnight
            • 2 years ago

            Haha, fair enough.

          • mtruchado
          • 2 years ago

          Not even partially. One solution is over $1,400, the other well under $1,000; the target markets for the two products are different. I’m sick of Intel IGPs: the constant screen flickering, driver problems, poor performance, and endless bugs when using multiple screens. And this is nothing that Nvidia can fix with the MX150 solution, because they use Optimus tech for 2D in order to save battery life, which means they are still using the crappy Intel IGP for 2D. I don’t game, but I do want a decent integrated 2D/3D solution, hence Raven Ridge is for me. If you do care about gaming, then this new thing is your new toy, as simple as that.

        • the
        • 2 years ago

        This may be premature. Pricing has yet to be disclosed for both this part and Raven Ridge.

      • NTMBK
      • 2 years ago

      It’s also going to be much more expensive than Raven Ridge- this needs HBM2 on top of the DDR4 that the CPU will require, along with the CPU + GPU adding up to much more die area than Raven Ridge.

      The real question is why AMD didn’t just make something like this themselves… my guess is that without EMIB, it’s tough to get HBM2 cheap enough/thin enough to work in a system like this.

        • the
        • 2 years ago

        I didn’t think that an interposer increased z-height in a package enough to be of much concern.

      • Fursdon
      • 2 years ago

      Since this is 2.5D chiplet design (i.e. the separate memory, CPU, and GPU are manufactured separately, then packaged together), I’d have to imagine both Nvidia and AMD were considered in some fashion, if not both approached and gave back offers.

      From the rumors at HardOCP and others, this sounds like it’s been a few years in the making.

      • WaltC
      • 2 years ago

      It’s going to cost a bundle more than Raven Ridge, too…;) But that’s not really the point. You can believe that AMD will have its own similar products – they’ve probably been working on them for a while.

        • chuckula
        • 2 years ago

        Try reading the article and realize that the only reason this is happening is because of EMIB.

          • NTMBK
          • 2 years ago

          Not the [i<]only[/i<] reason... I think Intel's crap GPU tech has something to do with it too.

      • Theolendras
      • 2 years ago

      Sell a ton of IGPs, get developers’ attention, get mindshare, get market share, but then lose an integrated technological edge they have over Intel. They both gain out of it, I would say, since I doubt AMD could have maintained that edge for very long even if they had clawed back significant market share.

      The developer attention is very interesting to me, though; Nvidia was seemingly pushing more and more middleware, cementing a lead they already had with the hardware. AMD historically couldn’t compete with Nvidia even when they edged ahead of Nvidia in hardware every now and then. Now this platform could offer a performance baseline somewhat similar to much of the console market, and with Intel on board they might very well sell a lot of them, provided pricing is somewhat reasonable.

      • muxr
      • 2 years ago

      This thing is going to cost 3 times as much as a Raven Ridge and will also use about that much more power as well.

      • Kraaketaer
      • 2 years ago

      RR is a 15-25W chip series. These will be 45W+. Intel uses the term “H-series CPU”, which are 35-45W designs, and “dedicated” and “performance” about the GPU, i.e. not iGPU-level. I’d be shocked if the combined TDP was less than 65W. There’ll be no cannibalization between these two.

    • chuckula
    • 2 years ago

    ARM in Macbooks Conf… ah crap.
