Rumor: Intel NUC with integrated AMD graphics spotted in the wild

The big news in PC hardware this week was obviously the unexpected union of an Intel CPU and an AMD graphics chip with HBM2 memory within a single package, seemingly destined for a "new class" of thin-and-light gaming laptops. It looks like that combination may also be coming to desktop gamers in the form of a future NUC. An image leaked to Chinese rumor forum Chiphell gives a first glimpse of what the package could look like in the flesh. A presumption that the pictured SSD is an M.2 2280 unit, combined with a bit of pixel-counting, suggests that the leaked board is very close to the 140×147 mm dimensions of existing NUCs.

Source: Chiphell

The image gives us a chance to guesstimate the size of the individual chips and of the package itself. By using the same methods outlined above, the chip we assume to be the GPU die looks to be about the same size as the Polaris 20 chip found in the Radeon RX 580. The chip most likely to be the HBM2 stack looks large enough to contain a single stack of the wide-bus memory. The entire package appears to occupy a patch of roughly 2.5 in², considerably larger than a CPU package alone.
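
For the curious, the pixel-counting method is simple cross-multiplication against a die of known size. Here's a minimal sketch of the arithmetic; the pixel counts are hypothetical placeholders, and only the 122.4 mm² Kaby Lake reference area is a known figure:

```python
# Back-of-the-envelope die-size estimation from a photo. The pixel
# counts below are hypothetical placeholders; only the Kaby Lake die
# area (122.4 mm^2), used as the scale reference, is a known figure.

KABY_LAKE_AREA_MM2 = 122.4

def estimate_area_mm2(ref_px_w, ref_px_h, target_px_w, target_px_h,
                      ref_area_mm2=KABY_LAKE_AREA_MM2):
    """Scale a target die's pixel area by the reference die's mm^2-per-pixel^2."""
    mm2_per_px2 = ref_area_mm2 / (ref_px_w * ref_px_h)
    return target_px_w * target_px_h * mm2_per_px2

# Hypothetical pixel measurements taken off the leaked photo:
gpu_area = estimate_area_mm2(ref_px_w=90, ref_px_h=74,
                             target_px_w=110, target_px_h=92)
print(f"Estimated GPU die area: {gpu_area:.0f} mm^2")  # ~186 mm^2 with these made-up numbers
```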

The size of the graphics chip could suggest performance in line with a midrange desktop graphics card, and a single stack of HBM2 suggests a capacity of 4 GB or less and about half of the Radeon RX Vega 56's 410 GB/s of memory bandwidth. WCCFTech's digging in the 3DMark database suggests that graphics performance lies between that of Nvidia's GeForce GTX 1050 Ti and the GeForce GTX 1060, which is about where the Radeon RX 570 falls. According to the site, entries in Geekbench's database say the graphics chip reports 24 compute units, a sizeable markdown from the Radeon RX 570's 32 CUs.
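
The bandwidth math is easy to check: a single HBM2 stack talks over a 1024-bit interface, and Vega 56 pairs two such stacks at 1.6 Gbps per pin for its quoted 410 GB/s. A quick sketch of the arithmetic, assuming (since the actual clock is unknown) that the NUC part's lone stack runs at the same per-pin rate:

```python
# Sanity check of the "about half of Vega 56" memory-bandwidth claim.
# Assumes (hypothetically) that the NUC part runs its single stack at
# the same 1.6 Gbps per pin as Vega 56; the actual clock is unknown.

def hbm2_bandwidth_gbs(stacks, gbps_per_pin, bus_bits_per_stack=1024):
    # total bus width (bits) * per-pin rate (Gbps) / 8 bits per byte -> GB/s
    return stacks * bus_bits_per_stack * gbps_per_pin / 8

print(hbm2_bandwidth_gbs(stacks=2, gbps_per_pin=1.6))  # Vega 56: 409.6 GB/s
print(hbm2_bandwidth_gbs(stacks=1, gbps_per_pin=1.6))  # one stack: 204.8 GB/s
```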

The leaked picture shows two SODIMM memory slots along with a single M.2 slot and a pair of SATA connectors for adding bulk storage. The I/O area is fairly cluttered with connectors, but we can't make out their identities from this photo.

The relationship between the clock speeds of the chips in the database entries and the final shipping parts remains to be seen, but the future NUC appears to offer potential graphics performance far in excess of that in Intel's last performance-focused NUC, the 2016-vintage Skull Canyon system. Also to be seen are the thermal management strategies Intel's engineering team will apply to cooling a high-performance CPU, a graphics chip, and a memory stack in such close proximity to one another.

Comments closed
    • Kraaketaer
    • 2 years ago

    Doesn’t seem like anyone here has spotted it, but over on the Techpowerup forums someone noticed that this board has a model number (between the CPU and RAM on the board) that matches some Zotac Zboxes.

    This would also make sense, as Intel has never made a NUC with vertical RAM slots and tends to favor low Z-height (even for Skull Canyon, to its detriment).

    • mcarson09
    • 2 years ago

    I really want to see those I/O ports!!

      • EndlessWaves
      • 2 years ago

      I don’t see any internal power connector, so this looks like it’ll be another 200W external power brick affair.

      • tay
      • 2 years ago

      Hey baby, can I scan your ports?

    • ronch
    • 2 years ago

    I wonder if AMD just killed its APU lineup by giving Intel access to their Radeon graphics.

      • derFunkenstein
      • 2 years ago

      Yeah because this is going to be sub-$200

        • ronch
        • 2 years ago

        Because this may not be the only Intel chip with Radeon graphics.

          • derFunkenstein
          • 2 years ago

          There might be a whole family but you think they’ll be in the same price range as APUs? Don’t be dumb. There’s the whole Intel CPU, the whole AMD GPU, the whole memory chip(s), and this EMIB. The costs involved will dwarf any Raven Ridge APU around.

            • ronch
            • 2 years ago

            Wouldn’t it be possible for Intel to offer a package with fewer CPU cores, no HBM, and a smaller Radeon chip? Intel can certainly do that. That could probably prompt AMD to stop supplying them the graphics part but we don’t know what their agreement is.

            • derFunkenstein
            • 2 years ago

            No that’s still a lot of materials

            • Anonymous Coward
            • 2 years ago

            Sounds like a potentially inferior product to Raven Ridge at a certainly higher price.

            Besides that, the power budget is still a major concern here, and if Intel can go to market with, say, 45W or 65W, then AMD could make a version of Raven Ridge that has the same TDP and do pretty well with that.

            The advantages of this Intel-Radeon part are the HBM and, apparently, a large die clocked conservatively. Expensive stuff.

            • Kraaketaer
            • 2 years ago

            Inferior to Raven Ridge? At 4-8x the TDP, with higher IPC and more than twice the GPU CUs? No, definitely not inferior, just for another market entirely. And undoubtedly really expensive.

            • AnotherReader
            • 2 years ago

            He’s talking about a hypothetical package with a GPU sans local memory and a CPU with access to garden variety DDR4. Such a part wouldn’t be served by a GPU larger than Raven Ridge’s and may even have poorer performance than that if Raven Ridge’s L3 is shared with the GPU.

            • Kraaketaer
            • 2 years ago

            Such a product would make absolutely no sense for Intel to make – its own high-end Iris graphics would probably outperform it (due to eDRAM), and be cheaper to manufacture too (possibly similar assembly costs, but no added costs for buying chips from AMD). I also very, very seriously doubt AMD would sell GPUs to Intel to compete in the ULV space, where RR is bound to gain some serious market share.

      • NoOne ButMe
      • 2 years ago

      Costs are way too high:
      *requires EMIB, which, even if cheaper than an interposer, is much more expensive than staying on-die
      *requires HBM2, which is more expensive

      Unless Intel can EMIB a GDDR5 + Polaris 12-class die (~80mm^2, I believe) plus their 120-140mm^2 quad-core for less than AMD can make a ~200mm^2 APU.

    • NTMBK
    • 2 years ago

    This thing could be fantastic for a living room PC!

    • ermo
    • 2 years ago

    Intel: “Hey, Apple, look what we can do! You don’t need to go shopping for an APU for your next Mac Mini over at AMD anymore now — we’ll even test it for you for free! Keep buying from us, pretty please?”

      • NoOne ButMe
      • 2 years ago

      Even more interesting if Apple has a base Mac Mini with an AMD APU, and this as a CPU+GPU upgrade.

      These are different classes of performance, after all 🙂

        • Anonymous Coward
        • 2 years ago

        How much can the Intel-Radeon hybrid achieve when there is a major thermal limit in place? I expect at 15W it would actually lose to RR. I doubt the additional performance amounts to much of interest until the power budget has climbed rather higher. Even 65W isn't a lot for an RX 570-ish GPU and HBM to justify all those transistors. I am struck by how much money this must cost, compared to how few watts it will be living on.

          • NoOne ButMe
          • 2 years ago

          Hm. I think it would depend on how much extra performance the HBM2 buys you in each use case.

          Provided the dGPU of this was turned on at a 15W TDP, I think it would win. 300MHz is the lowest clock, I believe, and I think at 15W, Raven Ridge would be in the 600-800MHz range.

          So about equal raw performance, but way more bandwidth.

            • Anonymous Coward
            • 2 years ago

            At what point do the fixed costs of simply having so much hardware (and external buses) outweigh the efficiency that can be gained by bottoming out the voltage? I imagine there is a minimum voltage that is reached before 300MHz. 15W has to be way beyond the point where this makes sense.

            • NoOne ButMe
            • 2 years ago

            But this part isn't targeted at 15W, I believe.

            I believe this is probably targeting a minimum TDP of 25-30W, and at max up to around 130-150W if the CPU and GPU are fully loaded.

            • derFunkenstein
            • 2 years ago

            Apple has crammed 45W Sandy Bridge quad-core CPUs into a Mac Mini alongside a Radeon HD 6000-series GPU (the 6630M, I think). That right there is pushing 90W anyway. Seems like if they could do it before, they could find a way to do it again.

            • NoOne ButMe
            • 2 years ago

            That was in the larger Mac Mini, though. Or is it that it's larger now because they integrated the PSU? I forget how it has changed.

            Either way, that would be very nice.

            • derFunkenstein
            • 2 years ago

            It is the same size as the current Mac Mini. We’ve got one driving two projectors (both showing the same thing) and a monitor in the sound booth.

      • Thresher
      • 2 years ago

      Except that Apple has been using low power chips for the last few iterations.

      It would be fantastic if they moved to something like this. I’d buy one.

        • tay
        • 2 years ago

        Apple uses -HQ 45W chips in a bunch of places. Seems like just enough power for this combo.

    • tipoo
    • 2 years ago

    This’ll be an interesting one. Shared system memory space + HBM2 all on EMIB. Might be a GPGPU champ if the work needs to be touched by the CPU.

      • Goty
      • 2 years ago

      EMIB is likely only used between the HBM2 stack and the GPU. The CPU/GPU link is likely PCI-Express over traces in the package.

        • mesyn191
        • 2 years ago

        Latency-wise, HBM2 is also not that good vs. system RAM.

        Putting it behind a PCIe bus would add even more latency, which CPUs care a lot about, even if it's on-package.

        It'll help heaps for GPU performance vs. a typical iGPU, but I'd expect no benefit at all for CPU performance.

        • ikjadoon
        • 2 years ago

        This makes no sense. Why would AMD need or even prefer a proprietary Intel technology to connect an AMD GPU to an AMD-bought HBM2 stack?

          • Goty
          • 2 years ago

          Because the other option is a silicon interposer?

      • Ninjitsu
      • 2 years ago

      There’s system RAM on the board too, though.

    • DPete27
    • 2 years ago

    That’s one hell of a VRM setup.

      • mcarson09
      • 2 years ago

      That means lots of heat!! I’d love to see load temps on this thing. The case will be day-glo orange….

        • mesyn191
        • 2 years ago

        Nope.

        A complex VRM can be very efficient and put out very little heat if the total power delivery requirements are low. Given this seems targeted at the NUC form factor, total power required for the package is probably under 65W and definitely under 90W.

        Total heat output from that sort of VRM under those kinds of power loads is almost certainly under 10W, and probably closer to 5W, which is a heat load the SMDs can dissipate passively on their own. No HSF would be needed.

        Here is a video from Buildzoid talking about the VRM for a different motherboard, but one that is similarly complex, to give you an idea of what to expect: https://www.youtube.com/watch?v=_ic5d5qAhBU

        tl;dw:
        @ 480A power delivery load (which will flat out never happen), that VRM will put out 105W of heat.
        @ 100A (a typical overclocked power delivery load for a ~4GHz 8-core Ryzen 7, according to him), it’ll put out ~13W of heat.
        @ 240A power delivery load (which would require over 1.8V, which is insane even for LN2 suicide overclocking on Ryzen 7), it’ll put out 35W of heat.

        All numbers assume you’re keeping the VRM at a 125C operating temp, which is a reasonably achievable temp to maintain under load with either a HSF or just passive cooling, which is why it’s used as a reference. Changing the operating temp will also affect the power efficiency of the VRM.
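
        A rough way to see why a NUC-class load means little VRM heat: converter loss is output power times (1 - efficiency)/efficiency. A minimal sketch, assuming hypothetical 90-95% efficiencies rather than anything measured from this board:

```python
# Heat dissipated in a VRM for a given output power and efficiency.
# The efficiency figures are assumptions typical of modern multi-phase
# buck converters at moderate load, not measurements of this board.

def vrm_loss_watts(p_out_watts, efficiency):
    return p_out_watts * (1 - efficiency) / efficiency

for eff in (0.90, 0.93, 0.95):
    print(f"65 W load @ {eff:.0%} efficiency -> {vrm_loss_watts(65, eff):.1f} W of VRM heat")
# ~7.2 W, ~4.9 W, and ~3.4 W: consistent with the 5-10 W estimate above.
```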

      • mesyn191
      • 2 years ago

      I bet AMD had a hand in designing it. They tend to heavily over-engineer the VRM specs for all their products.

      Intel tends to focus more on cost-cutting and tries to do the bare minimum.

        • Beahmont
        • 2 years ago

        Well, it is the VRM for both the GPU/HBM and the CPU, because of the integrated nature of the MCM.

        I’d be more shocked if there wasn’t a robust VRM setup for all of that. Though the amount of juice going through the socket is going to be insane…

          • mesyn191
          • 2 years ago

          It’s probably 2 separate VRMs side by side in an L shape around the package instead of 1 big VRM. AMD already does that with AM4 mobos, for instance.

          If you look at the cheaper or smaller-form-factor Intel mobos, you’ll see how they typically optimize their VRMs for cost vs. AMD’s cheaper mobos. It’s harder to tell the difference on the mid- and high-end mobos because both of them tend to be somewhat overbuilt in that price range. To be fair, though, Intel’s CPUs are usually more tolerant of less-than-the-best power delivery, so it’s not a problem per se unless you’re overclocking.

          NUC platforms are very heat-limited. 90W TDP for the entire package is probably the max, short of blowing out the size of the case so much that it’d no longer be a NUC and slapping a big tower HSF on it to make it work.

          If it were a typical ATX form factor mobo, sure, they could allow it to have a 200W TDP if they wanted, but they don’t seem to be trying to go that route.

            • DavidC1
            • 2 years ago

            They have 100W TDP NUCs coming. That’s for the CPU+GPU.

            • mesyn191
            • 2 years ago

            96W apparently. Didn’t know that. Hades Canyon is the code name.

            https://www.extremetech.com/gaming/258837-details-leak-intels-upcoming-radeon-powered-hades-canyon-nuc

            They’re either going to have to make the NUC case bigger or deal with “dust buster” fans to make it work well, I guess.

            • DavidC1
            • 2 years ago

            100W, actually. The author has put out a very hasty article. If you look at the image, it says 100W.

            He also says three new ones, when the pic shows two. Again, he’s not paying attention. The third is actually the Skylake-based Skull Canyon.

            If they use a 35W cTDP-down H chip, that leaves 65W for the GPU and memory. HBM2 probably takes only 5W.

      • NoOne ButMe
      • 2 years ago

      6 phases for an RX 470 Light
      1 phase for HBM2
      6 phases for a 4C CPU
      1 phase for system RAM.

      It probably could be 3-4+1 for the GPU and 3+1 for the CPU. Not that much extra.

        • TwistedKestrel
        • 2 years ago

        I can’t remember which one it was, but there’s a MacBook with a Kaby Lake CPU + a discrete AMD GPU, and I think it had like 5 phases total. I don’t think a mobile CPU needs 5 phases.

          • NoOne ButMe
          • 2 years ago

          That MacBook at peak is probably near 90W; sustained is probably 75-80W.

          Think of this as a combination of a 65W CPU + 65W GPU.

          Each one by itself can pull near that wattage, so each needs power delivery that can handle it, even if the package in total never pulls over 100W (this is the rumor for the 100W part, right?).

        • psuedonymous
        • 2 years ago

        > 6 phases for an RX 470 Light
        > 1 phase for HBM2
        > 6 phases for a 4C CPU
        > 1 phase for system RAM.

        1 phase for the Dark Lord on his dark throne
        In the Land of Mordor where the Shadows lie.

      • Mr Bill
      • 2 years ago

      Maybe they just need a lot of VRMs to supply a large number of discrete voltages and loads, without adding up to a large total load? Ah, NoOneButMe said it already.

    • not@home
    • 2 years ago

    If this has an HDMI output, I would be very tempted to buy it to replace my aging HTPC. If it also has a DVI output, I will buy it, so that it goes with my 2004-era IPS monitors that I am still rocking.

      • Chrispy_
      • 2 years ago

      Willing to spend probably around $750-1000 on the latest example of technology integration in the PC space, yet holding onto a 1600×1200 yellowing CCFL-driven IPS panel with 25ms pixel response and an outdated connector?

      Some things will never make sense to me.

        • Bauxite
        • 2 years ago

        Yeah, really. I’m only holding onto my U3011 for secondary computers because it’s big, 16:10, has DisplayPort, and still looks great for desktop use. The monster-truck bezel and the heat it puts out, not so great. Work literally dumpster-recycled at least 100 old-school 5:4 and 4:3 LCDs this year; about time.

          • JustAnEngineer
          • 2 years ago

          There’s a secondary market for 4:3 displays. Many old applications require that form factor. Also, if you can get your hands on a 20″ 1200×1600 monitor, it would pair perfectly beside your 30″ 2560×1600 monitor. Until my recent upgrade to a high refresh monitor, I was using an UltraSharp 3007WFP and 2001FP in that arrangement.
          http://www.ubergizmo.com/reviews/dell-3007wfp-30-inch-lcd-monitor-review/

          P.S.: For resolutions above 1920x1200, you need a dual-link DVI connection, so watch which active adapter you purchase.
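
          The dual-link cutoff comes down to pixel clock: single-link DVI tops out at 165 MHz. Here's a minimal sketch of the arithmetic, using a rough reduced-blanking overhead rather than exact CVT timings:

```python
# Why 2560x1600 needs dual-link DVI: the mode's pixel clock exceeds the
# 165 MHz single-link limit. The 12% blanking overhead is a rough
# reduced-blanking approximation, not an exact timing calculation.

SINGLE_LINK_MAX_MHZ = 165.0

def approx_pixel_clock_mhz(h_active, v_active, refresh_hz, blanking=0.12):
    # active pixels per frame * refresh rate, padded for blanking intervals
    return h_active * v_active * refresh_hz * (1 + blanking) / 1e6

for w, h in ((1920, 1200), (2560, 1600)):
    clk = approx_pixel_clock_mhz(w, h, refresh_hz=60)
    verdict = "single-link OK" if clk <= SINGLE_LINK_MAX_MHZ else "needs dual-link"
    print(f"{w}x{h}@60: ~{clk:.0f} MHz -> {verdict}")
# 1920x1200@60: ~155 MHz -> single-link OK
# 2560x1600@60: ~275 MHz -> needs dual-link
```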

      • tsk
      • 2 years ago

      It will have DP 1.2 and HDMI 2.0 outputs.

      • smilingcrow
      • 2 years ago

      It definitely won’t have DVI, but since you can use an HDMI-to-DVI cable, that is irrelevant.

      • JustAnEngineer
      • 2 years ago

      DVI isn’t a problem:
      https://www.amazon.com/gp/product/B00HJZQGVQ/

        • MOSFET
        • 2 years ago

        Then combine this (https://www.amazon.com/gp/product/B072JK37LV/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1) with two of those!

    • brucethemoose
    • 2 years ago

    Looks like the CPU and GPU have a lot of their own personal space.

    Why not sit them closer together and make the package smaller? Heat density?

      • Chrispy_
      • 2 years ago

      If I understood Intel’s slides on the matter, they are as close as they can possibly be. Although the exposed dies on the surface of the organic package do not touch, their traces in the many subsurface layers take up far more space.

      Think of a CPU package as a pyramid with the PCB connectors/pins/BGA contacts at the base and the exposed die at the top. The further down the pyramid you go, the wider it gets.

        • MOSFET
        • 2 years ago

        This could be a heat monster in the regular boxy NUC SKUs. The 15W parts are fine; the 28W 2C/4T parts already get some complaints. OTOH, I have no idea how well-cooled the Skull Canyons are, and so far that’s where the -HQ models have shown up.

        This might be the first NUC capable of gaming, IMHO. Even the eDRAM i5s just… don’t cut it, even though they benchmark better. I think one needs (at least) ~GTX 960 levels of performance for the “gaming” moniker.

      • mesyn191
      • 2 years ago

      Yeah, heat density and trace space for the on-package PCIe bus.

      Making it relatively large also leaves room for potential future upgrades (i.e., more HBM packages).

      Realistically, it’s still much smaller than a typical CPU + dGPU, even one that has been integrated into the mobo, so it’s still a win in terms of mobo real estate.

      I just hope the price isn’t too high.

        • ikjadoon
        • 2 years ago
          • Ummagumma
          • 2 years ago

          A laptop with this kind of heat-generation potential might end up giving buyers another Samsung “flaming phone” experience.

            • NoOne ButMe
            • 2 years ago

            Well, if the power management works fine, this would be 45W or so of total output. That is easily manageable if they put it in notebooks with decent fans/thermal design.

            Even one small fan running loud can keep 80-90W of heat down.

            • Kraaketaer
            • 2 years ago

            45W? Intel’s H-series CPUs with iGPUs are 45W. Are you saying a 24 CU GPU uses no more power than an Intel iGPU? Or are you thinking they’re going to clock it so low it doesn’t actually add any performance?

            If the leaks around this are accurate, this will come in 65W and 100W SKUs, and I bet the 65W will be significantly power limited at that.

            • NoOne ButMe
            • 2 years ago

            I imagine a 45W limit for thinner laptops, 65W for “regular” laptops, and 100W for the larger ones typically known these days as “gaming” and performance laptops.

            • Kraaketaer
            • 2 years ago

            100W for a gaming laptop is actually rather low. An H-series (45W) CPU and a GTX 1050 (laptop) are around that power draw. Anything with a GTX 1060 or above significantly exceeds this. As an example, Notebookcheck measured the GTX 1060 Razer Blade Pro at 163W max (full CPU+GPU load, though only 110W while playing The Witcher 3 at Ultra settings). The MSI GF62VR (also GTX 1060) was very similar at max, but drew ~120W while gaming. The MSI GE63VR with a GTX 1070 was 224/185W under the same loads. GTX 1070 Max-Q is around GTX 1060 levels (yeah, that’s crazy impressive). An ASUS ROG with a 1050 was 110/95W.

            There’s a reason the size difference between an ultrabook and a gaming laptop is what it is. What this chip (or family of chips) does is shrink the board size, so that there’s more room for cooling in a small chassis. The Razer Blade (GTX 1060, non-Pro, non-Stealth) limits its power draw to around ~110W even under power-virus loads due to limited cooling capacity (again, see the Notebookcheck review). With this chip on board, they could possibly squeeze in another fan, or just increase heatsink sizes significantly in that slim 14″ chassis. Or fit the same cooling and power draw into a smaller chassis, or shrink the cooling to fit the 65W version of this for an even smaller chassis. The point is: this opens up a lot of opportunities.

    • chuckula
    • 2 years ago

    “The image gives us a chance to guesstimate the size of the individual chips and of the package itself. By using the same methods outlined above, the chip we assume to be the GPU die looks to be about the same size as the Polaris 20 chip found in the Radeon RX 580.”

    I estimated about 188mm^2 (https://techreport.com/forums/viewtopic.php?f=2&t=120272) based on the known 122.4mm^2 die size of the Kaby Lake chip that you see on the board.

      • DavidC1
      • 2 years ago

      Calculate it again based on the package dimensions of 58.5mm x 31mm.

      • NoOne ButMe
      • 2 years ago

      Expected result from Polaris 10:
      minus 4 GDDR5 controllers at 6.5mm^2 each: 232 - 26 = 206
      minus 8 CUs at <4.25mm^2 each: 206 - 34 = 172
      add a 1024-bit HBM2 PHY: 172 + 5 = 177mm^2

      (Technically it’s 8 32-bit GDDR5 controllers at around 3.25mm^2 each.)

      So about 180-190mm^2 seems a reasonable range, with the extra CUs there for yield, or I’m just overestimating the CU size for Polaris. Or maybe it’s a Polaris-Vega fusion.
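
      Restating that arithmetic as a quick sketch (all per-block areas are the commenter's estimates above, not official figures):

```python
# Die-area estimate derived from Polaris 10, using the per-block size
# estimates from the comment above (not official AMD figures).

polaris_10_mm2 = 232
gddr5_ctrl_mm2 = 4 * 6.5    # four GDDR5 controller blocks removed (26 mm^2)
eight_cus_mm2 = 8 * 4.25    # eight CUs removed, upper-bound size (34 mm^2)
hbm2_phy_mm2 = 5            # one 1024-bit HBM2 PHY added

estimate = polaris_10_mm2 - gddr5_ctrl_mm2 - eight_cus_mm2 + hbm2_phy_mm2
print(f"~{estimate:.0f} mm^2")  # ~177 mm^2, so 180-190 with some margin
```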
