Intel’s eighth-gen Core processors with Radeon RX Vega M graphics revealed

Cheese and crackers. Coffee and cream. Intel and AMD wouldn’t normally fit onto lists of two great tastes that taste great together, but the eighth-generation Intel G-series processors and their blend of blue-team CPU cores and red-team graphics might force a rethink as they launch this evening.

The way Intel sees it, the G-series CPU and GPU combo bridges an implementation divide that’s long existed between its 15 W U-series CPUs for thin-and-light notebooks and its 45 W H-series CPUs for high-performance notebooks. The obvious benefit of U-series CPUs in 15 W TDPs is that buyers can get the highest possible Intel CPU performance in ultra-portable machines, while H-series CPUs in 35 W and 45 W thermal envelopes provide the necessary muscle to chew through demanding computing tasks and feed high-end discrete graphics processors. Notebook buyers who want a machine that can do it all have generally had to accept higher noise, lower battery life, thicker chassis, and lots of heat production thanks to the power demands of high-performance discrete components.

The complete eighth-gen Core G-series package. Source: Intel

Intel’s vision for its G-series CPUs, then, is to give OEMs a way to build a thin-and-light notebook that’s closer in performance to bulkier systems with discrete graphics cards that typify the homes of current H-series CPUs. To do that, Intel G-series CPU packages join a high-performance, eighth-generation H-series CPU die with a semi-custom AMD Radeon graphics processor built for Intel called Radeon RX Vega M. Vega M is remarkable not only because it’s integrated on-package with an Intel CPU, but also because it’s the first midrange graphics implementation of the Vega architecture that we’ve seen so far.

GPU | GPU base clock (MHz) | GPU boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size | Peak power draw
GTX 1060 3 GB | 1506 | 1708 | 48 | 72 | 1152 | 192 | 192 GB/s | 3 GB | 120 W
GTX 1060 6 GB | 1506 | 1708 | 48 | 80 | 1280 | 192 | 192 GB/s | 6 GB | 120 W
RX Vega M GH | 1063 | 1190 | 64 | 96 | 1536 | 1024 | 205 GB/s | 4 GB | n/a
GTX 1050 | 1354 | 1455 | 32 | 40 | 640 | 128 | 112 GB/s | 2 GB | 75 W
GTX 1050 Ti | 1290 | 1392 | 32 | 48 | 768 | 128 | 112 GB/s | 4 GB | 75 W
RX Vega M GL | 931 | 1011 | 32 | 80 | 1280 | 1024 | 179 GB/s | 4 GB | n/a
RX 560 (1024 SP) | 1175 | 1275 | 16 | 64 | 1024 | 128 | 112 GB/s | 2 GB/4 GB | 80 W
RX 570 | 1168 | 1244 | 32 | 128 | 2048 | 256 | 224 GB/s | 4 GB | 150 W

Intel will ship Vega M graphics processors in two forms. The entry-level member of the family, Radeon Vega M GL, will accompany the trio of 65 W G-series processors launching today. This chip features 20 Radeon Vega compute units, for a total of 1280 stream processors. Alongside that shader engine, the Vega M GL offers 80 texturing units and four geometry engines, and its ROPs can output 32 pixels per clock. The Vega M GL will run at a base clock of 931 MHz and a boost clock of 1011 MHz.

The higher-performance Radeon Vega M GH will accompany the duo of 100 W Core i7 CPUs in the G series. This potent GPU offers 1536 stream processors across 24 Vega compute units, along with 96 texture units and four geometry engines, and its ROPs can output 64 pixels per clock. Intel specifies this part for a base clock of 1063 MHz and a boost clock of 1190 MHz.
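
As a quick sanity check on those shader counts (and on the peak-rate table later in this article), the arithmetic is simple: each Vega compute unit carries 64 stream processors, and each stream processor can retire two FP32 operations per clock via a fused multiply-add. Here's a minimal sketch of that math in Python, using only the CU counts and boost clocks Intel quotes:

```python
# Peak FP32 throughput from compute-unit count and boost clock.
# Each Vega CU holds 64 stream processors; each SP does 2 FP32 ops/clock (FMA).

def vega_peak_tflops(compute_units: int, boost_mhz: int) -> float:
    stream_processors = compute_units * 64
    flops_per_clock = stream_processors * 2           # an FMA counts as two ops
    return flops_per_clock * boost_mhz * 1e6 / 1e12   # ops/s -> TFLOPS

print(vega_peak_tflops(20, 1011))  # Vega M GL: ~2.6 TFLOPS
print(vega_peak_tflops(24, 1190))  # Vega M GH: ~3.7 TFLOPS
```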

Both Radeon Vega M chips are joined with a single stack of HBM2 RAM with a 1024-bit-wide memory bus. Until recently, joining HBM2 with another chip has involved placing those components together on a silicon interposer—a design requirement that’s more expensive, more complex, and more restrictive than joining components together on the same PCB.

The Intel G-series CPU package takes a different approach for joining HBM2 with the GPU. It’s the first Intel consumer product to use the company’s Embedded Multi-Die Interconnect Bridge (EMIB) technology, a data-only connection that allows two chips to be placed extremely close together on package with high signal integrity and high bandwidth.

An EMIB is, in short, a small piece of silicon that’s embedded into the underlying package. This sliver of silicon has a number of routing layers inside that can be used to connect two discrete components over very short distances and with very high interconnect density. In this case, the HBM2 RAM uses a single EMIB to communicate with the Vega M GPU. On top of its less-complex implementation, using an EMIB reduces the overall height of the processor package—a critical point in the never-ending race toward thinner and lighter notebooks. Intel says building this same chip on an interposer would result in a significantly thicker package.

Intel clocks the 4 GB of HBM2 RAM paired with the Vega M GL chip at an effective 1.4 Gbps for peak aggregate bandwidth of 179 GB/s. The Vega M GH chip gets a higher-speed 4-GB slice of HBM2 running at 1.6 Gbps for aggregate bandwidth of 205 GB/s. Unlike the custom memory controller of AMD’s Raven Ridge APUs, Vega M retains the Vega architecture’s high-bandwidth cache controller for communication with the HBM2 stack.
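
Those bandwidth figures fall straight out of the bus width and per-pin transfer rate. A one-line check, assuming nothing beyond the numbers above:

```python
# Aggregate bandwidth = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte
def hbm2_bandwidth_gbs(bus_width_bits: int, rate_gbps: float) -> float:
    return bus_width_bits * rate_gbps / 8

print(hbm2_bandwidth_gbs(1024, 1.4))  # Vega M GL: ~179 GB/s
print(hbm2_bandwidth_gbs(1024, 1.6))  # Vega M GH: ~205 GB/s
```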

The RX Vega M GPU, in turn, connects to the CPU over eight of that chip's PCIe 3.0 lanes. Platform engineers can then allocate the CPU's remaining eight lanes to platform devices like M.2 slots or Thunderbolt controllers.

 

Smarter power leads to higher efficiency

With the basic building blocks we just outlined, Intel will offer five versions of these eighth-generation G-series CPUs, in BGA packages only for now. Three of those chips will have 65 W power envelopes in Core i5 and Core i7 flavors, and two more will slot into 100 W envelopes in Core i7 form only. Just bringing the Intel and AMD pieces of the puzzle together on-package isn't enough to ensure the best performance from this idea, though. To let each component perform at its best within those power envelopes, Intel says it did a lot of work on the software layer of the G-series platform to dynamically allocate just the right amount of power to each part of the package under a given workload.

That dynamic power management is critical to getting the most out of G-series CPUs' thermal envelopes. Whereas manufacturers pairing a CPU and a discrete GPU in a notebook have had to design around a combined scenario design power (SDP) that often misrepresents what those components actually draw in typical use, Intel says its "Dynamic Tuning" software control allocates power in real time based on actual workloads. Intel claims this dynamic power-sharing arrangement can result in a lower real-world TDP for the whole package while letting designers extract similar, if not higher, levels of performance than the same components would deliver under a static SDP assumption.

For example, the company shared figures showing the kinds of efficiency gains that can arise from Dynamic Tuning: the same chip posts a higher "frames per watt" figure with the feature enabled than without it. Those gains ultimately let manufacturers design thinner notebooks that deliver the same performance as a thicker system built around a higher SDP.
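
Intel hasn't published the algorithm behind Dynamic Tuning, but the general idea of sharing a single package power budget between the CPU and GPU can be illustrated with a toy control loop. The sketch below is purely illustrative: the budget, the floors, and the proportional split are our assumptions, not Intel's implementation.

```python
# Toy illustration of splitting one package power budget between CPU and GPU.
# All numbers and the proportional policy are assumptions, not Intel's algorithm.

PACKAGE_BUDGET_W = 65.0   # hypothetical package power envelope
CPU_FLOOR_W = 10.0        # minimum power reserved for the CPU
GPU_FLOOR_W = 15.0        # minimum power reserved for the GPU

def split_budget(cpu_utilization: float, gpu_utilization: float) -> tuple:
    """Hand out the remaining headroom in proportion to recent utilization."""
    headroom = PACKAGE_BUDGET_W - CPU_FLOOR_W - GPU_FLOOR_W
    total = (cpu_utilization + gpu_utilization) or 1.0
    cpu_w = CPU_FLOOR_W + headroom * cpu_utilization / total
    gpu_w = GPU_FLOOR_W + headroom * gpu_utilization / total
    return cpu_w, gpu_w

# In a GPU-bound game, most of the headroom flows to the Radeon side.
print(split_budget(cpu_utilization=0.3, gpu_utilization=0.9))  # (~20 W, ~45 W)
```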

Not only is the G-series package smarter about power usage than discrete components without such dynamic power-monitoring capabilities, it’s also just plain smaller than a CPU and a discrete GPU with GDDR RAM that have to be mounted on the same motherboard. Intel touts a savings of 1900 mm² (or three square inches) of board area with a G-series package compared to today’s discrete layouts. Intel says system designers can use that precious space to include larger batteries or larger fans, or simply to build a smaller chassis overall.

Model | Base clock (GHz) | Boost clock (GHz) | Cores/threads | L3 cache (MB) | Memory type | Discrete graphics | IGP | vPro | Package power | Unlocked
i7-8809G | 3.1 | 4.2 | 4/8 | 8 | Dual-channel DDR4-2400 | RX Vega M GH | UHD 630 | No | 100 W | Yes
i7-8709G | 3.1 | 4.1 | 4/8 | 8 | Dual-channel DDR4-2400 | RX Vega M GH | UHD 630 | No | 100 W | No
i7-8706G | 3.1 | 4.1 | 4/8 | 8 | Dual-channel DDR4-2400 | RX Vega M GL | UHD 630 | Yes | 65 W | No
i7-8705G | 3.1 | 4.1 | 4/8 | 8 | Dual-channel DDR4-2400 | RX Vega M GL | UHD 630 | No | 65 W | No
i5-8305G | 2.8 | 3.8 | 4/8 | 6 | Dual-channel DDR4-2400 | RX Vega M GL | UHD 630 | No | 65 W | No

Despite the eighth-generation naming scheme, the CPU side of the G-series package is essentially Kaby Lake-H carried over. The same Skylake core we've known since August 2015 is playing another set here, and the five G-series parts launching today share more in common than not on the CPU side. Like Kaby Lake Refresh parts, all of these chips offer four cores and eight threads of processing resources, and they all include Intel's HD Graphics 630 IGP if users want to take advantage of capabilities like Quick Sync or the extra display outputs that manufacturers can tap from it. Within their thermal envelopes, the chips' differences come down to clocks, L3 cache, and enterprise-management capabilities.

The two Core i7 parts with 100 W TDPs will likely be of most interest to enthusiasts. Both come with 8 MB of cache, RX Vega M GH graphics with 4 GB of HBM2, and 3.1 GHz base speeds. The Core i7-8809G is the top-end member of the entire G-series family. It’ll be the only such part with fully-unlocked CPU, GPU, and HBM2 RAM clocks, and it has the highest stock boost clock speed of the bunch, at 4.2 GHz. The Core i7-8709G loses the unlocked multipliers and drops 100 MHz of boost-clock speed. It’s otherwise identical to the i7-8809G. Neither part offers vPro support.

The 65 W family of G-series processors offers three chips to choose from. The Core i7-8706G has vPro support, while the i7-8705G does not. Both chips have 8 MB of L3 cache and share 3.1 GHz base and 4.1 GHz boost clocks. The Core i5-8305G gets relegated to the bottom of the 65 W stack with a 3.8 GHz boost clock and a 2.8 GHz base clock. All three chips share the same Radeon RX Vega M GL graphics processor and 4 GB HBM2 RAM capacity.

 

A marriage of convenience

Some might wonder why Intel tapped a competitor’s GPU technology to make G-series processors a reality when it already has a fine internal GPU team of its own. The way Intel sees it, the on-package integration it’s performing is no different, philosophically, from what notebook makers themselves are doing in the enthusiast graphics space when they hook up an Nvidia or AMD GPU to an Intel H-series CPU in an enthusiast notebook. The move to bring Radeon graphics on package, then, seems at once a stopgap and a warning shot.

Intel’s most powerful integrated graphics processor so far, the Iris Pro 580, very broadly tops out at about the point where entry-level discrete graphics chips from Nvidia and AMD start to take the stage. Intel can likely scale its Gen architecture (or another new microarchitecture) into a discrete chip to fulfill its ambitions to compete in discrete graphics again, but if it does so, that work could take a fair amount of time—time that Nvidia will continue capitalizing on with its dominant lineup of mobile enthusiast graphics processors. By bringing a Vega GPU on package today, Intel offers notebook makers a ready alternative that literally leaves no room for an Nvidia GPU in a system.

GPU | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic rate (tflops)
GTX 1060 3 GB | 82 | 123/123 | 3.4 | 3.9
GTX 1060 6 GB | 82 | 137/137 | 3.4 | 4.4
RX Vega M GH | 76 | 114/57 | 4.8 | 3.7
GTX 1050 | 47 | 58/58 | 2.9 | 1.9
GTX 1050 Ti | 45 | 67/67 | 2.8 | 2.1
RX Vega M GL | 32 | 81/40 | 4.0 | 2.6
RX 560 | 20 | 82/41 | 2.6 | 2.6
RX 570 | 40 | 159/80 | 5.0 | 5.1
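
The peak rates in this table come straight from the unit counts and boost clocks in the earlier spec table: pixel fill is ROPs × clock, texel rate is texture units × clock, and FP32 throughput is shader count × 2 × clock. A short sketch that reproduces the two Vega M rows (the other cards follow the same arithmetic):

```python
# Derive peak-rate figures from unit counts and a boost clock in MHz.
def peak_rates(rops: int, tmus: int, shaders: int, boost_mhz: int) -> dict:
    ghz = boost_mhz / 1000
    return {
        "pixel_fill_gpix_s": rops * ghz,
        "texel_rate_gtex_s": tmus * ghz,
        "fp32_tflops": shaders * 2 * ghz / 1000,
    }

print(peak_rates(rops=64, tmus=96, shaders=1536, boost_mhz=1190))  # RX Vega M GH
print(peak_rates(rops=32, tmus=80, shaders=1280, boost_mhz=1011))  # RX Vega M GL
```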

If Intel is developing graphics processors of its own for use in integrated packages like these G-series parts, the expected performance of the Radeon RX Vega M GL and GH offers a hint of where Intel's chips might begin drawing battle lines against the green team.

Intel claims that a 65 W implementation of the Core i7-8705G and its Vega M GL generally ekes out wins against a 15 W Core i7-8550U paired with a GeForce GTX 1050 4 GB in the three titles it chose to share data for, all tested at 1920×1080 and high settings. Running Hitman and Deus Ex: Mankind Divided in their DX12 modes might disadvantage the GTX 1050 more than DX11 would, and we'd still want to see 99th-percentile frame times for these titles to get a real sense of whether either chip delivers playable performance. Still, head-to-head results like these seem promising for the RX Vega M GL.

Intel also matched up a 100 W implementation of the i7-8709G with a configuration typical of many gaming laptops nowadays: a Core i7-7700HQ paired with a GeForce GTX 1060 Max-Q graphics chip. (Max-Q essentially denotes a lower-TDP variant of a given Nvidia mobile graphics chip.) With the same caveats I raised for the RX Vega M GL-versus-GTX 1050 comparison, the i7-8709G seems poised to deliver 1920×1080 gaming performance similar to that of the Max-Q notebook from a similar combined power budget (the Max-Q GTX 1060 slots into a 60 W or 70 W envelope, while the Core i7-7700HQ in Intel's test rig was configured at 45 W).

We don’t know how expensive notebooks with Intel G-series processors will be yet, but these early performance numbers would seem to promise that sparks will fly once we do get our hands on those systems and start testing. However those real-world numbers shake out, make no mistake: Intel is gunning for a slice of Nvidia’s lucrative mobile-graphics pie, even if it has to enlist another ostensible foe to get there.

 

The most muscular NUC yet

We can’t talk about the full details of Intel’s notebook design wins with eighth-gen G-series processors just yet, but we can say that systems from Dell and HP are coming down the line. That lack of detail is fine for the moment, because Intel is revealing an exciting product of its own this evening: the next iteration of its high-performance NUC, called the NUC 8 Enthusiast family.

Code-named Hades Canyon, the beefiest next-gen NUC will serve as a showcase for the 100 W Core i7-8809G. Despite its tiny 1.2 L volume, this NUC8i7HVK version of the NUC 8 Enthusiast will still sport fully-unlocked multipliers on its CPU cores, GPU, and HBM2 RAM, though it remains to be seen just how far tweakers will be able to push the chip in practice.

With all that processing power inside, Intel thinks there's little that Hades Canyon can't do. The company says the NUC 8 Enthusiast is ready for premium VR, high-resolution gaming, and demanding productivity applications across as many as six monitors. The NUC 8 will sport copious peripheral I/O, including two Thunderbolt 3 ports, dual M.2 slots, front and rear HDMI 2.0b outputs, two Mini DisplayPort 1.3 connections, two Gigabit Ethernet jacks, seven USB ports, an SD card reader, and TOSLINK digital audio output.

For folks looking for plenty of processing power from a small-form-factor system at a more affordable price point, Intel will also offer a NUC8i7HNK version of the NUC 8 with a Core i7-8705G CPU inside. The cherry on top of both systems is, of course, an RGB-LED-illuminated skull logo emblazoned on a removable lid that enterprising builders can customize to their liking with a 3D-printed replacement.

Intel’s NUC 8 Enthusiast system will begin shipping to enthusiasts in late March. The company says systems from its partners will arrive sometime in the spring of this year. We’re eagerly awaiting an opportunity to put these systems through their paces.

Comments closed
    • Mr Bill
    • 2 years ago

    Wonder if we will eventually see something like this as compute units to be slotted into racks to compete against NVIDIA’s version.

    • DoomGuy64
    • 2 years ago

    Once upon a Time, Chrysler made a economy car with a Hemi V-8 MDS. This car directly competed with other economy cars that were 4 cylinder, and performed pretty well against them.

    This car was widely praised among car reviewers, especially for being as fuel efficient as other 4 cylinder cars, while still having a V-8.

    Consumer Reports later does an investigative article on the engine, which finds out that the engine is missing parts, and doesn’t actually use 8 cylinders. It’s a fake V-8, and Chrysler is falsely advertising it.

    Years later, the same car reviewers are still calling this engine a V-8.

    Should car enthusiasts still accept this as a V-8, or trust those reviewers after reading Consumer Reports?

    No? Then stop pretending the 1060 is a 48 ROP card, because it can only rasterize 32 pixels/clock. Bare minimum put an asterisk next to the 48 ROP specs. Somebody here already mentioned amazement that this can outpace a 1060. I’m not surprised though, because I know the 1060 doesn’t utilize those ROPs, and it’s performance is more dependent on raw clockspeed. This is especially more noticeable in mobile, where it isn’t as easy to brute force the clocks.

    • One Sick Puppy
    • 2 years ago

    Well, these sure look impressive. Hopefully the price makes sense and they are quiet. As much as I like small, I enjoy silence and value much more.

    • carlot
    • 2 years ago

    Hi Jeff,
    do you think it’s technically possible to use EMIB to build a “Radeon SSG pro” SOC integrating OPTANE memory with Vega and HBM2 ?
    Intel could sell professional card with this SOC, exactly like they sell this hybrid processor.

    • synthtel2
    • 2 years ago

    1536 SPs and 64 ROPs? That’s a bit weird.

    • Wirko
    • 2 years ago

    This processor looked so huge in all the pics … as it turns out, it’s just 31 x 58.5 mm, or 29% bigger than a socket 115x CPU by surface area.

    • RdVi
    • 2 years ago

    A correction from page 2:

    [quote<]For an example, the company provided average-FPS numbers for three games that were actually higher from a 45 W chip with Dynamic Tuning enabled than that same chip could provide in a 62.5 W SDP with no Dynamic Tuning capability.[/quote<]

    Sorry to nitpick, but the chart states "Efficiency (Frames/Watt)". So taking the best result there which roughly looks to be about an 18% Efficiency improvement, that still means lower performance overall. Eg. if The 62.5W system measuring 1 on the Frames/watt scale would translate to 62.5fps, then a 45w system measuring 1.18 on the Frames/watt scale would result in 53.1fps.

      • DavidC1
      • 2 years ago

      Some reviews state that the 45W enabled with Dynamic Tuning is able to get same frame as a 62.5W SDP part but with 18% higher efficiency.

      The possibility is that they do indeed get similar FPS, but the 62.5W SDP uses 53.1W equivalent of power, which makes it 18% above 45W.

      Also they said its up to 18%. The two other games are in the 10-13% range.

    • deruberhanyok
    • 2 years ago

    I liked the Skull Canyon NUC, Hades Canyon looks to be a worthy successor. The idea of a small box with that much available power is VERY appealing.

    At those prices, though ($800 for the 65W one with Vega GL and $1000 for the 100W Vega GH version), it’s even more niche than Skull Canyon was at $600. I wound up getting an open box deal on one of those for around $500 and still felt it was a bit much.

    I’d love to have one to replace the Shuttle XPC in my entertainment center, but that’d be a lot of money to pay for something only marginally more capable than an i5-4590s and a GeForce 1050ti. The reduction in footprint is nice but not that nice.

    Also curious to see if the 65W one is quieter than Skull Canyon. Looks like there’s a much larger cooling solution for Hades Canyon, so hoping that’s the case.

    • wingless
    • 2 years ago

    Vega is looking less and less like a turd now. I can’t wait to get some Vega in my life!

      • shank15217
      • 2 years ago

      Vega is very competitive in lower TDPs, its the higher TDP variants that really cant keep up with Nvidia

        • stefem
        • 2 years ago

        Looking at the TechReport review results normalized for battery and screen differences performance per watt of Vega doesn’t look that competitive

        • Chrispy_
        • 2 years ago

        RX480 and RX580 can run at around 120W just like the 1060 quite comfortably. 1200MHz at 1.0V is all you need for that and I’ve yet to encounter a card that couldn’t achieve that….

        Vega 56’s optimum speed appears to be a 1070-performing 165W, and that’s before you mess with undervolting (just change the power limit to -20%) I have barely undervolted mine at all and at 1432MHz with a -25% power limit I’d be surprised if it wasn’t under 150W…

          • stefem
          • 2 years ago

          You can undervolt Pascal GPUs too, I didn’t test the GTX 1060 since I don have it but for a GTX 1080 I lowered the power consumption by almost 30%. Undervolting is achieve by doing what you describe for Vega, you lower the power limit and by this you trick Boost to use the lowest possible voltage of the curve, AMD’s boost algorithm may do the same, I don’t know.

      • tipoo
      • 2 years ago

      The desktop ones were really unfortunately ruined in image by them pushing them well past the optimal perf/watt point, for fear of Nvidia performance most likely. A few performance percentage points down knocks off a /lot/ of wattage, and Polaris Radeon pro 560 was pretty good per watt already.

      Then again if the desktop parts launched at a good perf/watt but even lower performance than Nvidia, they’d probably lose more sales that way than to the number of people who get to the power consumption pages. Well, or they would have outside of this mining economy I guess.

      • DavidC1
      • 2 years ago

      I don’t think its due to Vega running on lower clocks alone.

      I think the result is also because its on the same package, meaning they can do fancier power management like the Dynamic Tuning they described. They state up to 18% improvement in efficiency.

      • Krogoth
      • 2 years ago

      Vega 64 is quite efficient. You are willing to play around with undervolting and tune down the clockspeed a bit.

      I’ve managed to get my unit to be around 1080-levels of performance and power efficiency. It seems like AMD has to overvolt the crap of their stuff since GCN 1.2.

        • stefem
        • 2 years ago

        You can undervolt NVIDIA’s GPUs too, yielding around a 30% reduction in power consumption on a GTX 1080 FE

    • jarder
    • 2 years ago

    NUCs are cool I suppose, but I thought they were supposed to be for sub-mini-ITX form factors, these NUCs are much bigger. How about getting these MCMs in a mini-ITX form factor? Is anybody thinking about doing this? I for one would be interested.

      • Andrew Lauritzen
      • 2 years ago

      Uhh, this is still waaaay smaller than mini-ITX. From the pictures it looks to be roughly like a taller Skull Canyon, which would still make it smaller than most dual slot discrete GPUs.

      To give some intuition, this looks to be roughly the size of an 8 port desktop switch.

        • jarder
        • 2 years ago

        My mistake. I was assuming that this case was like a skull canyon case but thicker, but looking into it, it’s just over 1.2l in volume, so still quite small, especially for a computer with a decent spec.

        My point about the mini-ITX version still stands though, a small mini-ITX case can be as small as 3l or so, e.g.:
        [url<]https://www.overclockers.co.uk/in-win-bq696-chopin-mini-itx-case-silver-ca-060-iw.html[/url<]

        I'm sure a lot of people wouldn't mind a larger case in return for the extra flexibility of having a standard form-factor.

          • DancinJack
          • 2 years ago

          Most ITX cases aren’t that small, and I think we all know that. I don’t really see the advantage in a mini-ITX case that is that small though. The only CPU choices you would have are regular desktop LGA stuff, which means almost all Intel stuff (which means a “weak” GPU) or an AMD APU. In both cases you’re trading off some aspect of performance from these NUCs, while still being quite a bit bigger. The only advantage I can think of is being able to choose your motherboard.

          • Andrew Lauritzen
          • 2 years ago

          > I’m sure a lot of people wouldn’t mind a larger case in return for the extra flexibility of having a standard form-factor.

          Well, as you’ve pointed out those options are already available for people 🙂 But the NUC is ~3x smaller so if you want something that size with as much performance as possible, I can’t think of anything else even in the same league at the moment.

    • psuedonymous
    • 2 years ago

    Remember, manufacturer-provided benchmarks aren’t worth the paper they’re printed on, especially for mobile chips (that will more often than not be limited by their thermal design). Wait to see what they can actually achieve when crammed into the devices that turn up on shelves.

    • maroon1
    • 2 years ago

    Those 3 games they tested are known to favor amd gpu. This will be best case senario for vega m

      • NoOne ButMe
      • 2 years ago

      still comes away with a stomp against everything under the 1060 (for the top model) and a 1050 contender for the lower model.

      Which is still pretty impressive.

        • stefem
        • 2 years ago

        But the real question is: at which cost (power efficiency)?

          • Chrispy_
          • 2 years ago

          Well, at 65W for a 1050 competitor and 100W for a 1060 competitor it seems to match Nvidia if not beat them.

          Mobile GTX 1060 is 80W and the 100W Vega GH is going to have to share at least 15W of that with the CPU, probably more.

          Mobile GTX 1050 is 53W and the 65W Vega GL is going to have to share those at least 15W of that with the CPU too.

          So, under load, the Vega GH is probably working with about 75W of GPU budget, and Vega GL is probably working with 45W of GPU budget. I say probably, because nobody knows but Intel has stated that these Kaby-esque CPU cores will outperform their 15W Kaby-U equivalents.

            • stefem
            • 2 years ago

            The laptop maker could personalize TDP to its needs, so it’s hard to judge by those numbers.
            Historically Intel's advertised performance comparisons didn't hold true when checked in the real world, especially when they were trying to "steal the stage" from NVIDIA in the professional market.
            We have just to wait and see, hoping someone do a proper analysis 🙂

    • NoOne ButMe
    • 2 years ago

    Well, looks like my original comments were right… The full fledged model slaughters the GTX 1050 and 1050ti laptops. I expect as you add more games, it equals at best the 1060 mobile, probably falls 10-20% behind the top end ones for games which are purely GPU bound. (clarification, the Radeon graphics is 10-20% slower than max 1060 probably).

    If, and only if, Intel can get laptops onto the market in the $800-900, maybe $1000 dollar range will it be competitive.

    And “Ampere” will probably drop that target price by $100-150. and I’m already dubious it will be hit.

      • NovusBogus
      • 2 years ago

      I’d like to know how the driver situation shakes out. NV Optimus can be flaky and AMD’s equivalent is notoriously bad, but maybe having it all in one OEM package will be more seamless. Also, Intel historically likes Linux so it might actually be possible to get the hybrid IGP+GPU benefits without a cobbled-together third party solution that’s been dormant for five years.

        • Chrispy_
        • 2 years ago

        That only applies to Windows 7.

        Optimus doesn’t exist on Windows 10, which is likely the only OS that Intel, AMD, and Nvidia support with their latest hardware.

          • Andrew Lauritzen
          • 2 years ago

          Why the downvotes? Chrispy is absolutely right as I’ve explained in previous comments. On Windows 10 the GPU selection stuff is effectively handled by the OS; there are no differences there between AMD/NVIDIA anymore since the Microsoft killed the disgusting driver shimming nonsense the IHVs were doing, thank god!

            • K-L-Waster
            • 2 years ago

            The downvotes are because certain people are emotionally attached to any and all ways to talk smack against NVidia.

            That and because other people are attached to Windows 7 being “better” than Windows 10.

            • Arbiter Odie
            • 2 years ago

            Windows 7 is better than Windows 10.

            …buuuut in this case 10 rocks. Give yourself a cookie, and stop making me sad.

            • Chrispy_
            • 2 years ago

            The best OS is a non-existent hybrid of Windows 7’s control over your PC and ****-free interface but with all the under-the-hood improvements that have been made with Windows 8 and 10.

            Sadly, 7 gets harder to love with each new piece of hardware that doesn’t support it. I’d rather the anti-10 haters stopped whining and put their efforts into decrapifying 10 instead.

            For the record, I hate 10. Also, Nvidia and AMD are simultaneously both asshats and amazeballs. Just like Microsoft they succeed in some areas, fail at others, have dubious ethics and morals in some areas, and then give to the community in others. If people stopped taking sides and looked at everything objectively in isolation, perhaps fanboyism would finally die off 😉

            • Redocbew
            • 2 years ago

            I fully decrapified the Win10 install I use for gaming, and it was fine until the first not-a-service-pack update was pushed that turned all the crap back on again. I admit I haven’t spent much time attempting to decrapify it again since then.

            • Chrispy_
            • 2 years ago

            That behaviour right there is my single largest dislike of Windows 10.

            You think you have control over your machine and BAM, Windows update essentially revokes EVERYTHING.

            Motherfrickin’ Candy Crush came back after the first major version update too. “Massively unimpressed” is putting it mildly.

            • MOSFET
            • 2 years ago

            Sometimes, they aren’t as pushy on domain-joined machines, but only sometimes. And these days, the privacy settings (the big blue idiot radio buttons for Email, Tasks, Sync, etc) tend to stick more often than not. But the app-store-auto-install stuff does make me pull hair.

            And just to be sure, we [i<]are[/i<] talking about 65W and 100W [i<]ultrabook[/i<] APUs?

            • Chrispy_
            • 2 years ago

            Ultrabook might be pushing it. Thin-and-light laptops and AIOs is where these are aimed. The sub 1″ thick 13.3-15.6″ laptops that used to come with 35W HQ-series processors and a 65W GPU like the old GTX960 or 1050Ti.

            As for Windows 10, I’m seriously considering rolling out the LTSB edition to the main work fleet (would total around 850 machines).

            There’s no Windows Store client, no Microsoft Edge, no Cortana, no preinstalled universal apps, such as Mail/Calendar, OneNote, Photos, Groove Music, the MSN Weather/News/Sports/Money apps, and the Camera and Alarms & Clock apps. It’ll also never get gimmicky features designed for Microsoft Surface tablets, “mobile-first” software support, or any of the junk like the “People” button in the system tray, trying to do the job of your preferred communication and notification apps.

            It’s basically Windows 10: OS Edtion, rather than the standard versions which are more like Windows 10: Unsolicited Mail Edition.

      • psuedonymous
      • 2 years ago

      [quote<]The full fledged model slaughters the GTX 1050 and 1050ti laptops[/quote<]

      Careful of those Intel 1050 benchmark figures: they're comparing to a -U CPU (15W TDP) using games that are often CPU bound.

        • NoOne ButMe
        • 2 years ago

        Intel’s comparison is the cut Vega GPU against the 1050.

        The full Vega GPU in this lineup is approx. 30-40% faster overall [compared to the slower one, which is compared against the 1050].

        • DavidC1
        • 2 years ago

        Check against Notebookcheck results. The lower end Vega M GL is still 10-30% faster than the 7700HQ + GTX 1050. That makes it in the GTX 1050 Ti territory.

        The higher end Vega M GH performs within 5% of the GTX 1060 Max-Q.

        Intel did such comparisons because Vega M GL is a 65W part. 8550U is at 15W TDP while GTX 1050 is said to be at 50W. Combine the two, and voila, you get 65W. While pairing a 7700HQ would make it faster, it also exceeds the 65W TDP of the Vega M GL.

        Intel did comparisons against the GTX 1060 Max-Q part, becauses again, the TDP combined with 7700HQ is similar to the 100W for the Vega M GH.

      • stefem
      • 2 years ago

      It’s not hard to beat a laptops’s 1050Ti on pure performance, even with a “old gen” GPU. The point is that laptops run on battery so the power efficiency is a key metric when evaluting this kind of solutions

        • JustAnEngineer
        • 2 years ago

        Efficiency matters, but not because anyone seriously games for more than 20 minutes while running off the battery. You’re going to plug in for 3D gaming. Efficiency is very important because laptop cooling is so constrained. There just is not enough ventilation in a thin-and-light laptop to cool a 200 watt GPU.

    • venfare
    • 2 years ago

    Gtx 1060 3 GB have an impressive amount of vram it seems. Also it consume significant amount of power relative to 6gb version.

    • Firestarter
    • 2 years ago

    I’m mostly excited about what implications this has for future dedicated GPU designs. If Intel licenses EMIB to AMD, I guess that will make it cheaper and easier for AMD to produce HBM2 designs or even multi-chip GPUs. It’d be awesome if that allows AMD to grab back the single card performance crown with some crazy design without crossfire trickery

    • gerryg
    • 2 years ago

    Hmm, guess this explains Koduri leaving AMD for Intel. He rubbed elbows w/Intel while they worked on this (over past year? 2 years?), and they talked him into jumping ship.

      • the
      • 2 years ago

      First inkling of this came form [url=https://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility<]Kyle at HardOCP nearly 20 months ago.[/url<] Since then, Raja has indeed left AMD for Intel and is apparently developing a discrete GPU for them. Things are indeed very interesting.

    • thecoldanddarkone
    • 2 years ago

    Did anyone else notice that they didn’t specify models. They also had a single channel memory in the gtx 1060 q max system. Anyways, it still looks to be nice. I guess some benches are on 3dmark.

      • nanoflower
      • 2 years ago

      I’ve seen various tests that show that for gaming having a single slot of memory doesn’t hurt performance enough to be noticeable. It may not be the same for every use but for gaming (which is what a dedicated GPU like the 1060 is often used for) it doesn’t seem to matter.

        • Anonymous Coward
        • 2 years ago

        I guess if the higher end models of Raven Ridge can do OK in games (GF-1030 level), it must mean that the bandwidth demands on the CPU side are fairly modest.

    • sumang007
    • 2 years ago

    Wait am I reading this right… this single chip solution can outpace a 1060?!

    That is pretty damn impressive… would love this in my next thin and light lappy…

      • dragontamer5788
      • 2 years ago

        [quote<]single chip solution[/quote<]

        Its a module, not a chip. Its probably 6 chips put together on one module. (4 HBM, 1 GPU, 1 CPU). I mean, its still impressive. Just not quite "one chip".

        • DancinJack
        • 2 years ago

        FWIW it’s 3 chips. Intel CPU + AMD GPU + 1 4GB HBM2.

          • NoOne ButMe
          • 2 years ago

          And a GTX 1060 is 7 chips, right? NVidia GPU + 1GB GDDR5 + 1GB GDDR5…

          actually I guess that the HBM2 would be 5 than, so it would be 7 (Intel+Radeon) against 8 (1060+Intel)

          • NTMBK
          • 2 years ago

          HBM2 is a whole bunch of chips stacked on top of each other.

          • dragontamer5788
          • 2 years ago

          The HBM2 stack is typically 4 chips high. Each HBM2 chip gives 256-bit bus, so it’d take 4 HBM2 chips (stacked on top of each other) to create the 1024-bit bus that AMD is specifying here.

          Although, I forgot about the [url=https://images.anandtech.com/doci/11690/hbm_678_678x452.png<]control chip[/url<]. So its really 5-chips for the HBM2 memory. For 7-chips in total (5xHBM2 chips, 1 GPU, 1 CPU chip)

        • the
        • 2 years ago

        In the future, single chips solutions will be a thing of a past as designs utilize techniques like EMIB and interposers to scale designs. Case in point, nVidia is currently [url=http://research.nvidia.com/publication/2017-06_MCM-GPU%3A-Multi-Chip-Module-GPUs<]exploring a MCM GPU themselves.[/url<]

        • bitcat70
        • 2 years ago

        Isn’t it a single chip with multiple dies though? As those pieces of silicon are not “chips” but rather “dies”?

      • Chrispy_
      • 2 years ago

      I doubt it will. You have to look at real-world results with existing products; The i7-8809G with its RX Vega M GH may have more shaders than a 1060 but it’s clocked 30% slower.

      In reality, the 2304 shaders @1266MHz of the RX480 is a very close match for the 1280 shaders at 1708MHz of the 1060. Essentially, 3 Polaris/Vega shaders have the IPC of 2 Pascal shaders. That rule works across the entire 400, 500, RX series and the entire 10-series.

      I think the RX Vega M GH will beat a 1050Ti, perhaps quite comfortably, but the gulf between the 1050Ti and the 1060 is vast. AMD and Intel will need some serious black magic to make a low-clock, 1536-shader Vega part reach the level of a 1060’s performance.

        • maroon1
        • 2 years ago

        I just noticed that Vega M GH has 64 rops (which similar to big Vega and twice as much as polaris and cut-down Vega M GL). I wonder of those extra rops help it

          • Chrispy_
          • 2 years ago

          Doubt it. Polaris with 32 ROPs isn’t even remotely ROP limited.

          I’m wondering if the GH’s 64 rops is a slip of the pen, or if it’s a conscious design choice knowing that the higher-end variant will likely go into a laptop with a 4K screen and it’ll have to perform a lot of downscaling on the GPU in games. It certainly isn’t going to be a 4K option for any AAA games of this decade 😉

            • Goty
            • 2 years ago

            [quote<]Polaris with 32 ROPs isn't even remotely ROP limited.[/quote<]

            I'm not so sure about that. I don't know enough of the detailed workings of GPUs to refute that statement, but AMD didn't touch the ROP count when cutting the RX 480 down into the 470 and performance dropped less than the reduction in shader count would imply for those two products. That would indicate at least some sort of ROP bottleneck.

            • Chrispy_
            • 2 years ago

            What do you mean, “performance dropped less than the reduction in shader count would imply”?

            TR’s own review showed an overall average framerate drop [url=https://techreport.com/r.x/rx470review/fpsperdollarbest.png<]from 76 to 67[/url<] which is a 11.8% decrease. Techpowerup results in a different selection of games concur with TR's results. 2304 to 2048 is an 11.2% decrease in shader count. So we're practically spot on with the scaling. Clock speeds were practically identical (1266 vs 1256) but actually I expected the 470 to do better than it did, because the reference model 480 that was tested couldn't maintain boost clocks all the way through, whilst the high power draw of that XFX 470 that TR tested (only 7W lower than the 480) means that it was likely boosting at 1256 without a single dip.

      • jihadjoe
      • 2 years ago

      Bear in mind that Intel/AMD slides compare it to the 1060 Max-Q, not the proper 1060.

      Max-Q has much lower clocks, ranging from 1.2-1.4GHz boost, whereas the 1060 has guaranteed 1.7GHz boost. 8809G is good, but it won’t beat a proper 1060.

    • tipoo
    • 2 years ago

    I’m pretty confident we’ll see this in the 15″ rMBP. If you remember Iris Pro which Apple nearly single handedly asked Intel for, I can see them being the ones to push for this as well.

    Wonder if that means a redesign, if not even just more batteries in the same space would be nice, or more room for cooling. And, cough, SD card. Most likely none of that, but the move up from the meager 80GB/s feeding the current GPUs would go a long way, even aside from the extra execution resources.

      • DancinJack
      • 2 years ago

      I’d rather have a better keyboard than a SD slot tbh. Less touch bar, more SD, more keyboard. The new keyboard is….atrocious.

        • tipoo
        • 2 years ago

        The jamming from motes of dust has gotten a lot of attention. Wonder if we’ll see a butterfly v3.

      • LocalCitizen
      • 2 years ago

      what? you want thinner and lighter? ok! we hear you loud and clear.

        • tipoo
        • 2 years ago

        Probably more likely lol.

          • willmore
          • 2 years ago

          If they don’t there’ll be the one guy “they made a thinner processor and Apple made a thicker laptop out of it, LOL.”

        • Arbiter Odie
        • 2 years ago

        I want to upvote and downvote this at the same time.

      • the
      • 2 years ago

      I’d also wager that this chip will find its way into the next iteration of the Mac Mini and even the low end iMac.

      Intel does have an extra 16 PCIe lanes dedicate for on package technologies on their Xeon chips. As weird as it would be Intel could use these GPUs from AMD on an even higher end part if they felt like it and not radically change their existing platforms.

        • Beahmont
        • 2 years ago

        [quote<] Intel could use these GPUs from AMD on an even higher end part if they felt like it and not radically change their existing platforms [/quote<] And that right there is the whole point of Intel making these chips in the first place. It's a warning shot to play nice or Big Daddy Intel can just integrate your product right on package and cut you out of the loop with vendors.

        • Anonymous Coward
        • 2 years ago

        A product like this at an attractive price [i<]might be[/i<] Intel's best tool to beat AMD in the Mini. Otherwise the Mini is really screaming for one of AMD's APUs. I can't see a drawback to AMD except whatever arm-twisting or deal-sweetening Intel can manage. Apple has historically been very into Radeon, the price is right, the wattage is right, the cores are solid, the volume should be high. I wonder about the wattage though, isn't 65W a lot for a mini? And taking this device down to say 45W starts to sound like an expensive option. Hmm.

    • Andrew Lauritzen
    • 2 years ago

    That NUC looks great! I already love my skull canyon (HTPC, steam streaming box, pocketable work/LAN machine and surprisingly beefy server) and this is clearly a step up. Being able to carry around a box as small as this that can drive a VR HMD is actually pretty awesome… it means I can literally just pack it in my Oculus box and bring it to a friend’s place to demo.

    I’m impressed that they fit so much hardware in a box that looks a bit taller, but otherwise the same width/length. Plus the connectivity on it looks great! More USB, lots more DP/HDMI and dual LAN! The latter means I might even press this into service for some network routing duties…

      • DancinJack
      • 2 years ago

      IKR?

      I’m gonna have to get one. I might even be able to use it as a main PC as I really game less and less these days. DO WANT.

    • RtFusion
    • 2 years ago

    The new NUC will be perfect for multi-media playback and doesn’t take a lot of space.

      • DancinJack
      • 2 years ago

      While I agree its capabilities would be great for a HTPC it seems like you could build something for way way way way cheaper than the minimum 800 those NUCs are going to cost. Mind you, that’s without RAM/HD etc.

      edit: unnecessary comma

        • derFunkenstein
        • 2 years ago

        yeah those things are gonna run $1100 or more once they’re configured, especially with today’s RAM prices. 16GB will probably bump it close to $1000 by itself, and then you need an M.2 SSD.

          • Andrew Lauritzen
          • 2 years ago

          Yeah I doubt it’ll be the cheapest option, but that’s not the point in such a box. The point is to maximize the performance per unit volume and power, which is something I’m personally happy to pay more for 🙂 There’s already tons of options if you’re more price sensitive than space sensitive.

            • derFunkenstein
            • 2 years ago

            Well yeah that’s always been the point if small boxes like this, going back to the first Shuttle XPCs. Get the most out of a space.

            • mudcore
            • 2 years ago

            What if I’m “I don’t want my GPU and CPU upgrades to be tied together when the space I have available for a stationary system isn’t so cramped it mandates something this much smaller than an compact mITX build” sensitive?

            At the same time I find that new NUC incredibly appealing. It is very small, quite powerful, and that I/O selection is something few high end motherboards can even compete with. On the other… isn’t the whole point of buying into this NUC for the GTX1060-like (supposedly) GPU performance? And won’t the need to upgrade that GPU component arrive much quicker than the CPU going by historic trends?

            IDK I guess that does ultimately mean I’m price sensitive but it just seems to be wasteful despite the implied “efficiency” of being ultra compact. Integration has a cost and here its how long the overall NUC package would remain usable for its intended use before requiring the whole thing be replaced, rather than an individual component.

            I find the chip itself much more interesting in what it may enable laptop makers to do versus the NUC form factor.

            • Andrew Lauritzen
            • 2 years ago

            It has somewhat better upgradability than a laptop, but the reality is if you really want something as powerful as it can be in a small package, it has to be heavily integrated. User-replaceable separate components take space (and power). As you noted, folks for whom this is an interesting product are generally happy to upgrade it in its entirety when a faster version comes out (case in point, me :)).

            That’s fine, this isn’t the product for everyone, but it has very little competition right now for the niche that it fills, so I’m happy to see more upgrades!

      • juzz86
      • 2 years ago

      I’ll take two!

      • HERETIC
      • 2 years ago

      GREAT product but TOTALLY OVERBOARD for a HTPC unless you want to game as well.
      You’d have to have more money than sense-“think sheep”
      A Ryzen 3-2200G ($100) on a $60 MB would be perfect for a HTPC.

        • Andrew Lauritzen
        • 2 years ago

        For sure, the regular 15W NUCs (or equivalents) are totally fine for HTPC duties, and can even steam stream from a more powerful gaming PC elsewhere. That’s definitely the route I recommend to most folks as it’s far cheaper, somewhat quieter and the only thing you sacrifice is some local performance on the box, which is not typically super relevant for HTPC duties.

      • DragonDaddyBear
      • 2 years ago

      Will it be able to play Blu-Ray or Netflix 4K? If not, it’s a no-go for a “full featured” HTPC.

        • Andrew Lauritzen
        • 2 years ago

        One can assume so considering Kaby-lake+ mobile is able to and this is basically one of those, right?

          • DragonDaddyBear
          • 2 years ago

          Depends on how they work the dual GPU situation. I honestly don’t know. Maybe someone more knowledgeable in this area can chime in.

    • derFunkenstein
    • 2 years ago

    So much for the predicted performance of Vega-on-a-Chip = Polaris 11 mobile (from some folks in these parts). This thing is gonna kick Polaris’s mobile ass up one side of the street and down the other.

      • willmore
      • 2 years ago

      Yay team purple? (blue + red)

      • NoOne ButMe
      • 2 years ago

      People really predicted that? Sigh. I mean, my comment got downvoted (hopefully for my “remember polaris mobile TPD?” not for the performance estimate I gave).

      Despite the product having about 2x the power (100W model) for the GPU core compared to the Polaris 11 mobile solutions.

      math:
      100-45 = 55 – 5 = 50W GPU core power
      35-10= 25W GPU core power

      • chuckula
      • 2 years ago

      Yeah, especially the people who saw the HBM stack and then insisted that it can’t be a Vega GPU in there… which would have meant that AMD went out of its way to design a Polaris + HBM solution only for Intel instead of simply cutting down Big Vega to make these parts.

        • NoOne ButMe
        • 2 years ago

        to be fair, some sites which were early to call that the partnership was happening besides hardOCP stood by “Polaris”

        So it wasn’t completely random. More likely it’s a Polaris-Vega hybrid kind of like PS4 Pro?

          • derFunkenstein
          • 2 years ago

          So if by hybrid you mean “Vega not Polaris” then yes. LOL

    • DancinJack
    • 2 years ago

    Jeff – Do you happen to know if Intel is fabbing the Vega chips or are they sourcing them from GF? Same with the HBM. I don’t think Intel has any HBM fab space as far as I know.

      • Jeff Kampman
      • 2 years ago

      I don’t know the HBM2 vendor but Intel is definitely not fabricating the Vega chips. They’re being produced by AMD and delivered to Intel for integration.

        • DancinJack
        • 2 years ago

        Cool, that’s what I figured but honestly was hoping Intel was fabbing them.

          • tipoo
          • 2 years ago

          That would have been even bigger news than the teamup frankly. If they went through the effort of porting over to the Intel fab, it probably wouldn’t have been for a one of product like this…

          But yeah, MCMs don’t need to be on the same fab, see even something like the Wii U.

            • DancinJack
            • 2 years ago

            Oh yeah I agree with all of the above, but it was just a pipedream I thought I’d at least ask about. Would be really cool.

      • flip-mode
      • 2 years ago

      This is actually the question about this whole affair that interests me the most.

      • IGTrading
      • 2 years ago

      On the more technical side, TechReport seems to have failed to notice that the standard height of this product class is the same as the height of AMD’s own Vega Mobile, which is set at 1.7mm.

      Intel clearly states its z-height is 1.7mm so where’s the advantage ?!

      Therefore, it appears that Intel’s EMIB talk is just talk (in this current implementation) and saves no “height” , as correctly pointed out by SemiAccurate.com :

      “note that the Z-height, a critical factor in modern notebooks, is the exact same 1.7mm as a Vega-M discrete GPU. Why is this important? It looks like EMIB saves ~0mm in Z-height versus a much simpler to manufacture interposer. Interesting, no?”

      Source : [url<]https://semiaccurate.com/2018/01/07/intel-kaby-g-not-amd-radeon-vega-m-graphics-fleshed/[/url<] and [url<]https://hexus.net/media/uploaded/2018/1/81819abc-2b8a-40ec-9d54-787bbbff5587.PNG[/url<]

      Later Edit : Funny how you can get 3 downvotes in a minute for asking a well documented question. 😉

        • Jeff Kampman
        • 2 years ago

        It’s a good question, but I wrote up this article well before the announcement of Vega Mobile (which is, for the moment, just an announcement.) I look forward to learning more about how AMD cut down the height of the package when Vega Mobile gets closer to launch.

          • chuckula
          • 2 years ago

          Anand’s twitter has the following explanation from Lisa Su: The newer mobile Vega parts push the entire interposer down into the PCB to reduce the total height.

          [url<]https://twitter.com/IanCutress/status/950395035463200769[/url<]

          So Intel was fully correct in making the statements that it did about an interposer on top of the PCB since that's the traditional way interposers are used. Furthermore, pushing the entire interposer into the PCB actually appears to be taking an idea from EMIB and applying it to a larger interposer.

          • IGTrading
          • 2 years ago

          Thanks Jeff, maybe popping a quick question to Intel and AMD would yield an interesting edit to the article.

          Edit : Good point cuckula. Thanks.

    • DancinJack
    • 2 years ago

    I’ll take an 8709G please. Stick it in a Mac Mini and a NUC and I’ll buy both.

    Really though. These look awesome. The package photo just looks so good. I just want to hold one.

      • DancinJack
      • 2 years ago

      BTW Anandtech says ~800 and ~1K for the barebones NUCs.

      Intel plans to price the NUC8i7HVK and NUC8i7HNK around $999 and $799 respectively. Fully configured systems will likely be $300 to $400 more, depending on the configuration. The products will be available for purchase in Q2 2018 (tallying with the leaked roadmap from September 2017). The NUC8i7HNK will be available first with the VR-ready NUC8i7HVK following a few weeks later.

      • derFunkenstein
      • 2 years ago

      Mac Mini, NUC, MacBook Pro…I’ll take any of them, but I really still have my eyes on a notebook.

      • adisor19
      • 2 years ago

      I swear, if Apple doesn’t shove these in a new mini, they’ve completely lost it.

      Adi

        • rnalsation
        • 2 years ago

        Now coming, the “Mac Mini PRO”

          • willmore
          • 2 years ago

          Only $5K.

            • Chrispy_
            • 2 years ago

            (adapters and connectors sold separately)

            • NovusBogus
            • 2 years ago

            The little Apple logo guarantees there would be a line out the door to buy them.

          • adisor19
          • 2 years ago

          I don’t care what they call it. What’s important is that this happens one way or the other.

          Adi

      • NovusBogus
      • 2 years ago

      I want it on a glorious corporate laptop, myself. As [url=http://www.sarna.net/wiki/Atlas_(BattleMech)#Quote<]the great General Kerensky might say[/url<], I want my notebook to be as powerful as possible, as heavy and durable as possible, and as ugly as conceivable so that fear itself shows up in the performance benchmarks. And it should be able to run both BT and MW5 in a Windows VM on CentOS host, because reasons. Of course, the question is whether such a thing will happen. With the 8xxx series Intel is all but forcing its OEMs to provide more cores and better graphics for a less embarrassing x86 experience, but that's still no guarantee that they will actually do it.
