Intel’s Core i5-8250U CPU reviewed

The notebook PC is undoubtedly the meat and potatoes for PC makers these days, but the basic processing resources that laptops have offered for the better part of a decade have been just as stagnant as they have been on the desktop. Ever since Intel’s Sandy Bridge microarchitecture made its debut, getting a thin-and-light Intel machine has meant getting a CPU with a TDP in the range of 15W. That chip has had two cores, as many as four threads, and a serviceable integrated graphics processor as the cherry on top.

Sure, power consumption has fallen as process sizes have shrunken, allowing for thinner and lighter laptops and longer battery life, among other benefits. But for folks who want more processing power from their mobile systems, period, it’s been necessary to step up from thin-and-light machines to beefier systems with bigger batteries, thicker chassis, and above all, more pounds on the scale.

| Model | Base clock speed | Maximum Turbo speed | Cores/threads | Cache size | Memory channels | Memory type | Onboard graphics processor | Max graphics frequency | TDP |
|---|---|---|---|---|---|---|---|---|---|
| i7-8650U | 1.9 GHz | 4.2 GHz | 4/8 | 8MB | 2 | Up to DDR4-2400 or LPDDR3-2133 | Intel UHD Graphics 620 | Up to 1150 MHz | 15W |
| i7-8550U | 1.8 GHz | 4.0 GHz | 4/8 | 8MB | 2 | Up to DDR4-2400 or LPDDR3-2133 | Intel UHD Graphics 620 | Up to 1150 MHz | 15W |
| i5-8350U | 1.7 GHz | 3.6 GHz | 4/8 | 6MB | 2 | Up to DDR4-2400 or LPDDR3-2133 | Intel UHD Graphics 620 | Up to 1100 MHz | 15W |
| i5-8250U | 1.6 GHz | 3.4 GHz | 4/8 | 6MB | 2 | Up to DDR4-2400 or LPDDR3-2133 | Intel UHD Graphics 620 | Up to 1100 MHz | 15W |

2017 has been the hottest year for CPU performance increases in quite some time, though, so it’s only fitting that Intel would finally shake things up with its first eighth-generation Core mobile processors, code-named Kaby Lake Refresh (or Kaby Lake-R). The first wave of those CPUs, announced back in August, encompassed two Core i5s and two Core i7s. Intel’s latest 15W chips are quad-core parts with Hyper-Threading enabled, for a total of eight threads.

The Kaby Lake Refresh die. Source: Intel

Until now, four cores and eight threads in an Intel mobile CPU has been the exclusive domain of the company’s H-series chips. Those parts generally carry 45W TDPs that require big honking cooling systems and thick chassis to operate at peak performance, and it’s been rare to find one of those chips in a Windows machine outside of gaming laptops with dedicated graphics cards on board. (Apple’s MacBook Pros are one notable exception.)

A block diagram of the Skylake client core. Source: Intel

The basic CPU core for Kaby Lake Refresh is the same Skylake microarchitecture we’ve known since 2015. Intel’s 14-nm process has now gone through two cycles of improvement in that time, but Kaby Lake Refresh doesn’t rely on 14-nm++ to deliver its extra cores. Instead, these chips are still fabricated on the good old 14-nm+ process that underpins Kaby Lake parts.

A typical Kaby Lake Refresh CPU package. Source: Intel

Although its continuing encores are a bit of a let-down for chip nerds, Skylake has aged remarkably well as CPUs go. AMD’s Ryzen CPUs might not trail Skylake by that much on a clock-for-clock basis, but Intel can usually clock its chips much higher than AMD can. Skylake’s pair of wider SIMD units gives it a further edge compared to the Zen architecture in some tasks, too.

Future generations of Ryzen CPUs may require Intel to dig more single-core performance out of its labs somehow, but for now, Skylake is still the best thing going in x86 CPUs. More of those cores in a given CPU is nothing to get disappointed about.

AMD’s recently-announced family of Ryzen Mobile APUs poses another threat to Intel’s performance crown, though: the potential of integrated Vega graphics processors. Intel hasn’t changed the basic graphics architecture on board Skylake CPUs since 2015, either, so Kaby Lake Refresh uses what we might classify as Gen9.5 graphics execution units in a GT2 configuration. Gen9.5 made its debut with Kaby Lake, and it includes improved hardware support for encode and decode of 10-bit 4K HEVC content plus hardware decode support for 4K VP9 cat videos.

Ryzen Mobile APUs, on the other hand, boast much beefier IGPs with eight or 10 Radeon Vega compute units on board. It’s hard to tell how those integrated graphics processors might stack up, since we don’t have full specs for those parts just yet and their shared-memory architecture makes bandwidth calculations difficult. Still, the theoretical performance numbers we can tease out of AMD’s specs for its Vega integrated graphics processors so far put Vega IGPs leagues ahead of an Intel GT2 IGP.

The hope for AMD, then, is that the combo of competitive per-core performance from Zen CPU cores and the potentially class-leading performance of Vega processor graphics is enough to sway buyers to its corner. The company already boasts three design wins with Ryzen Mobile chips, and those systems generally have appealing thin-and-light designs and reasonable spec sheets to go with their APUs.

Source: Nvidia

Intel’s OEMs have a possible answer to Vega processor graphics, though, courtesy of Nvidia’s GeForce MX150 graphics chip. Including this Pascal GPU in a system may not be as elegant as the all-in-one Ryzen APU, but the GeForce’s 384 stream processors and 2GB of dedicated GDDR5 memory seem poised to deliver an experience similar to, if not better than, AMD’s parts. The discrete graphics chip has some advantages, too, namely that it’s not contending with the processor for memory bandwidth in one socket.

For entry-level gaming, the MX150 deserves to be taken seriously. At its 1532 MHz boost clock, the MX150 offers 1.18 TFLOPS of single-precision number-crunching performance, 23.5 Gpixels/sec of pixel fill rate, and 35.3 Gtexels/sec of texture-filtering capability. Those numbers more than make up for the shortcomings of the Intel GT2 IGP, and the MX150 does it in just 30W of board power. The performance-per-watt potency of the Pascal architecture cannot be denied. Of course, AMD’s Ryzen APUs deliver the sum of their performance from a 15W TDP, but one should game on AC power for the best experience to begin with. The extra power draw of the MX150-and-i5-8250U combo isn’t that much of a liability in that light.
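For the curious, that 1.18-TFLOPS figure falls straight out of the shader count and boost clock, assuming the usual two FP32 operations per shader per clock from a fused multiply-add. Here's a back-of-the-envelope sketch, not an official Nvidia formula:

```python
# Back-of-the-envelope FP32 throughput for the GeForce MX150.
# Assumes 2 FLOPs per shader per clock (one fused multiply-add).
shaders = 384            # CUDA cores on the MX150
boost_clock_hz = 1532e6  # 1532 MHz boost clock

tflops = shaders * 2 * boost_clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # → 1.18 TFLOPS
```

The fill-rate figures come from the same kind of multiplication, using the chip's ROP and texture-unit counts instead of its shader count.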

The challenge for AMD’s partners, then, is to deliver Ryzen Mobile systems at the right prices. The HP Envy X360 15z Touch, the first Ryzen Mobile system to go on sale, offers a Ryzen 5 2500U APU with Vega 8 processor graphics. That SoC pairs up with a 1TB mechanical hard drive, 8GB of dual-channel DDR4-2400 RAM, and a 15.6″ IPS touch screen with Windows and a convertible hinge. The whole package runs $749.99 at Best Buy right now, although HP’s Black Friday deals have driven that starting price down to as little as $575 lately. Considering the ephemeral nature of those discounts, we’ll use the $750 figure as a solid starting point.

 

Breathing life into Kaby Lake Refresh

So what does $700 to $800 buy you if you’re shopping Core i5-8250Us? Intel sent me three of Acer’s 14″ Swift 3 systems to provide as even a playing field as possible for discerning the advances offered by its eighth-generation Core i5s.

If someone asked me to describe the typical laptop computer in 2017, I’d probably sketch out something like a Swift 3. For about a $650 e-tail price, the base Swift 3 I’m testing offers 8GB of DDR4-2400 memory and a 256GB Intel 600p NVMe SSD to go with its Core i5-8250U CPU. It also boasts a 14″ 1920×1080 IPS display, a backlit keyboard, and a Microsoft Precision Touchpad.  

Intel also sent over a GeForce MX150-equipped Swift 3 with slightly different specs. Aside from the upgraded graphics card, that system comes with 8GB of LPDDR3-1866 RAM instead of DDR4-2400. Considering the reduced pressure on the integrated graphics processor to perform, that lower memory spec makes sense to me. LPDDR3 likely lets Acer both include the discrete graphics card and keep prices down all at once. The MX150-equipped Swift 3 runs about $730 on Amazon right now.

The final system Intel sent us to test with is a slightly older Swift 3 with a Core i5-7200U CPU inside. That chip comes paired with 8GB of DDR4-2133 RAM and the same 256GB SSD as its more recent brethren. Since this is an older machine, it’s available at a discount from some retailers. Walmart has this golden wonder for $580 right now, for example.

While this review is primarily concerned with the performance of the CPUs inside the Swift 3s, I should take a moment to talk about the eighth-generation Swift 3 hardware itself. These machines are, in a word, solid. Their non-touch displays are neither the brightest nor contrastiest IPS panels I’ve ever laid eyes on, but their color accuracy seems reasonable out of the box.

The keyboards on the latest Swift 3s offer decent feedback with shallow travel, and there’s no unwanted flex or slop in their action. The Swift 3s’ all-metal chassis didn’t offer a hint of flex, either. The Microsoft Precision Touchpads in these systems are responsive and accurate, and they handled multi-finger scrolling and gestures without a hitch. The only downside to the trackpad on all of these machines is a bit of a hollow sound when they’re tapped, but that’s a minor complaint.

In short, the basic Swift 3 design does everything you’d want from an ultrabook, and it does it all well. For their prices, these systems are thin, light, well-built, responsive, and a general pleasure to work with. They’re pretty nice to look at, too. If this is the state of general-purpose computing devices in 2017, we’re living in good times.

 

Our testing methods

As always, we did our best to deliver clean benchmark numbers. We ran each benchmark test at least three times and took the median of the results.

Here are the specifications of our test systems:

|  | Acer Swift 3 (i5-8250U) | Acer Swift 3 (Nvidia MX150) | Acer Swift 3 (i5-7200U) | Alienware 13 R3 |
|---|---|---|---|---|
| CPU | Intel Core i5-8250U | Intel Core i5-8250U | Intel Core i5-7200U | Intel Core i7-7700HQ |
| CPU TDP | 15W | 15W | 15W | 35W |
| Memory | 8GB (2x4GB) DDR4-2400 | 8GB (2x4GB) LPDDR3-1866 | 8GB (2x4GB) DDR4-2133 | 16GB (2x8GB) DDR4-2666 |
| GPU | Intel UHD Graphics 620 | Nvidia GeForce MX150 | Intel HD Graphics 620 | Nvidia GeForce GTX 1060 6GB |
| Graphics memory | N/A | 2GB GDDR5, 6 GT/s effective | N/A | 6GB GDDR5, 8 GT/s effective |
| Storage | Intel SSD 600p 256GB NVMe SSD | Intel SSD 600p 256GB NVMe SSD | Intel SSD 600p 256GB NVMe SSD | Samsung PM961 512GB NVMe SSD |
| Battery | 3220 mAh Li-ion | 3220 mAh Li-ion | 3220 mAh Li-ion | 76 Wh Li-ion |

Our thanks to Intel for providing the three Acer Swift 3 systems for our testing. The Alienware 13 R3 playing host to our Core i7-7700HQ is my personal system and was not provided by a manufacturer for evaluation.

Some additional notes regarding our testing methods:

  • All systems were configured to use Windows’ default “Balanced” power plan over the course of our testing. 
  • Unless otherwise noted, all tests were conducted with display resolutions of 1920×1080 and refresh rates of 60 Hz. Vsync was disabled using the graphics-driver control panel, where possible.
  • All drivers and system firmwares were updated to the most recent versions publicly available before testing.

Our testing methods are generally publicly available and reproducible. If you have questions regarding our methods, feel free to leave a comment on this article or join us in our forums.

 

Memory subsystem performance

Let’s kick off our testing with a quick look at the main memory performance from these systems using the built-in benchmarks from the AIDA64 utility.

Given the architectural similarities between these three chips, it’s no surprise that the differences between them mostly come down to memory speeds (and a higher TDP and boost clock, in the case of the i7-7700HQ). The i5-8250U has slightly higher memory speeds and unsurprisingly outpaces the i5-7200U in these bandwidth-focused benchmarks. 

Although we initially encountered a bit of weirdness when testing memory latency on the i5-8250U, further retests put this chip in the lead here by a few nanoseconds. The i5-8250U’s memory latency hasn’t suffered for its extra cores.

Some quick synthetic math tests

AIDA64 also includes some useful micro-benchmarks that we can use to sketch out broad differences among CPUs on our bench. The PhotoWorxx test uses AVX2 instructions on all of these chips. The CPU Hash integer benchmark uses AVX, while the single-precision FPU Julia and double-precision Mandel tests use AVX2 with FMA.

PhotoWorxx doesn’t offer a dramatic start for the i5-8250U and its extra cores versus the i5-7200U. The less-thermally-constrained i7-7700HQ can boost its clocks higher than the 15W chips in this test and holds those speeds under load, so it opens a wide lead.

CPU Hash lets the i5-8250U stretch its legs a bit. Here, the refreshed Kaby quad nearly doubles the i5-7200U’s performance.

In these AVX-intensive floating-point tests, the i5-8250U doesn’t quite double its predecessor’s numbers as we might expect it to given its extra cores and threads. For that kind of increase, you need to step up to the 35W version of the i7-7700HQ in my Alienware 13 R3. Still, these tests show how four cores and eight threads running at relatively low clock speeds can still offer a major performance boost over two Hyper-Threaded cores.

 

Dota 2 (Fastest)

Let’s kick off our gaming test with one of the most popular games on the planet today. Dota 2 has a reputation for running on everything, even Intel integrated graphics, thanks to a wide and forgiving range of graphics options. More importantly, Dota 2 has a powerful built-in replay system that allows us to precisely replicate the most intense moments in a match from a given player’s point of view, letting us reliably repeat the game’s chaotic battles for benchmarking.

We’ll begin our Dota 2 trek with the Fastest preset at 1920×1080, which cranks down the resolution scaling setting to 52% or so and disables virtually all of the game’s eye candy. It ain’t much to look at with these settings, but budding e-sports pros can at least get in on the fun with Fastest.


At Dota 2‘s least-demanding quality preset, both Intel IGPs deliver playable-enough experiences. Despite reasonable average frame rates, though, the IGPs turn in high 99th-percentile frame times that could be indicative of a rougher experience than one might expect from 50-FPS or 60-FPS averages alone. To figure out just how bumpy an experience we’re talking about, we need to look at our next set of charts.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame, while a constant stream of frames at 8.3-ms intervals would correspond to 120 FPS.
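As a sketch of how such a metric can be computed, here's a minimal version that sums the portion of each frame time that spills past a given threshold. Our exact tooling differs, and the frame times below are made up for illustration:

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Sum the portion of each frame time that exceeds the threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical frame times (in ms) from a short capture.
frames = [12.0, 16.0, 40.0, 55.0, 14.0]
for threshold in (50.0, 33.3, 16.7):
    print(f"{threshold} ms: {time_beyond(frames, threshold):.1f} ms beyond")
```

A card with a high average frame rate can still rack up time past 33.3 ms or 50 ms if a handful of frames take far too long, which is exactly the "badness" these charts are meant to expose.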

At the 50-ms mark, both Intel IGPs put some numbers on the board. Surprisingly, the UHD Graphics 620 turns in a significantly rougher experience than its predecessor by this measure. We’d expect the more recent IGP to outpace its predecessor, thanks to its DDR4-2400 memory and significantly larger pool of L3 cache. That time spent past 50 ms could result in perceptible, unpleasant hitching during gameplay.

At the 33.3-ms mark, which corresponds to time spent under 30 FPS—a widely-accepted definition of “playable” for this class of product—neither Intel IGP puts significant numbers of milliseconds in its bucket. The UHD Graphics 620 still performs worse than the HD Graphics 620 of the i5-7200U, though.

The UHD Graphics 620 finally trades places with its forebear at the 16.7-ms mark, where increasing time past that threshold would mean time spent below 60 FPS. Here, the UHD Graphics 620 spends five fewer seconds on tough frames than does the i5-7200U and its HD Graphics 620 processor.

Meanwhile, our GeForce MX150-equipped machine spends hardly any time past 33.3 ms, only a second of our one-minute test run past 16.7 ms, and less than half the time past 8.3 ms of even the UHD Graphics 620 part. If high frame rates at low Dota 2 settings are a must, so then is an MX150.

 

Dota 2 (Best Looking)

You can’t have fast without slow, and Dota 2 offers a preset called Best Looking that turns up all of the game’s eye candy and returns resolution scaling to 100%. Before I dive into these numbers, I should be clear that we’re not trying to slag any of these chips by going with this setup. We’re not expecting playable frame rates. Rather, the point of this testing is to figure out the point past which these graphics processors run out of gas, and how hard they stumble when they do.


Best Looking really puts the hurt on all of these chips. Maybe a notch off “Fastest” would have been fairer. Both Intel IGPs deliver frame rates that simply aren’t conducive to playing Dota 2. The MX150 loses half of its average frame rate compared to Fastest, and its 99th-percentile frame times climb to potentially worrisome levels. Dota 2 can have teeth when it wants to.


I’ll focus on the MX150’s results here, since both Intel IGPs basically need to ride the sag wagon. The MX150 spends enough time past 50 ms and 33.3 ms that you’ll notice it, but the hitches and bumps that show up there probably aren’t numerous enough to prove ruinous to the gameplay experience. Once we get to the 16.7-ms mark, though, it’s clear that a maxed-out Dota 2 is still a bit too much for a consistently fluid experience on the MX150 at 1920×1080. If you simply can’t abide lower graphics settings than all-out, though, the MX150 copes OK.

 

Rocket League (1280×720)

Rocket League is another wildly popular title with a runs-anywhere reputation. We dialed in some fairly high settings at a low resolution to see how it played on this trio. Like Dota 2, Rocket League has a wonderful replay interface that lets us precisely play back a given match in-engine and from a player’s viewpoint.


In the integrated-graphics-only bracket, the UHD Graphics 620 actually delivers a big boost over the plain HD Graphics IGP. A 42-FPS average moves the i5-8250U’s IGP from “plays the game” to “playable game,” and the eighth-gen IGP’s 99th-percentile frame time suggests it can keep the experience playable most of the time.


Indeed, only the HD Graphics 620 chip posts any time spent past 50 ms in our reckoning. The UHD Graphics 620 only spends a heartening third-of-a-second past 33.3 ms over our one-minute test run, too, meaning its 99th-percentile frame time is closer to an exception than a rule.

The 16.7-ms threshold shows that the eighth-gen IGP isn’t quite up to delivering a consistently fluid experience, though. While it shaves a whole 10 seconds off the result of its predecessor, the UHD Graphics 620 still spends nearly a third of our test run under 60 FPS. Compare that to the MX150 and its mere 48 ms spent past 16.7 ms. Still, if you want to play Rocket League and don’t mind low-res visuals, the UHD Graphics 620 can certainly do it at playable frame rates.

 

Rocket League (1920×1080)

Our second round of e-sports stress testing cranks up the resolution to 1920×1080 while leaving all of our previous graphics settings in place.


Try as it might, the UHD Graphics 620 can’t quite clear the 30-FPS average that defines “playable” for many folks at these settings. Its 24.7-ms 99th-percentile frame time hints that the experience could turn downright cinematic if the chip spends much time past that figure. Clearly, the Intel IGPs could benefit from a pruning of the eye candy at this resolution.

The GeForce MX150 loses plenty of performance when the resolution rises, as well, but an average frame rate in the 50s is still nothing to complain about. Its 23.7-ms 99th-percentile frame time isn’t quite as low as we’d like, but that still translates to a 42.1-FPS instantaneous rate 99% of the time. Not bad at all, if resolution is important to you.


Our “time-spent-beyond-X” charts actually paint a happier picture for the UHD Graphics 620. Where the HD Graphics 620 puts enough time up past 50 ms to prove less than enjoyable, the UHD Graphics IGP only spends 50 ms past 50 ms on those tough frames. You still might notice a couple lengthy hitches with it, but Intel’s latest GT2 IGP keeps out of the 50-ms danger zone much better than its predecessor.

At 33.3 ms, the UHD Graphics 620 keeps up the good work. It spends over 70% less time on tough frames that might result in drops below 30 FPS than the plain HD Graphics IGP does. At 16.7 ms, though, both Intel IGPs are pretty much winded. Still, if you want to push a 1920×1080 laptop display with Rocket League, the UHD Graphics 620 might perform pretty darn well with less eye candy, considering.

The MX150, on the other hand, only has a vanishing few frames that need more than 50 ms or 33.3 ms to complete, and it only spends six-and-a-half seconds on frames that take more than 16.7 ms to render. It remains a major step up in performance.

 

Tomb Raider (1366×768)

In our final all-hands test, I dug the 2013 reboot of Tomb Raider out of the catacombs of my Steam library to serve as a general-purpose example of how an older AAA title might play on these systems. I started my expedition at 1366×768 using the game’s High preset.


Compared to plain ol’ HD Graphics, the UHD Graphics 620 takes Tomb Raider from “unplayable” to “exceeds expectations.” Given that this is a full-fat AAA game with graphics settings to match, clearing a 32-FPS average on integrated graphics isn’t anything to sneeze at. That 99th-percentile frame time might be cause for alarm, but we can see just how much it affects our test run before passing judgment using our time-spent-beyond-X graphs.


As that 99th-percentile frame time suggests, you’re going to have a bumpy ride in Tomb Raider with the UHD Graphics 620, even if it can outperform its predecessor. Still, the i5-8250U’s IGP cuts more than a second off the HD Graphics 620’s time spent beyond 50 ms, and it spends less than half the time past 33.3 ms that its integrated cousin does. We’re not talking awesome gaming experiences from either chip by any stretch, but I’d certainly burn up some time in an airport terminal with Tomb Raider on the i5-8250U.

Of course, one could also step up to the GeForce MX150 and enjoy a truly fluid and responsive gameplay experience at this resolution. Tomb Raider doesn’t look bad at all at 1366×768, and if its visuals aren’t badly hurt by running at borderline-HD resolutions, well, no complaints.

 

Tomb Raider (1920×1080)

One more round of torture testing. As we did with Rocket League, I bumped Tomb Raider up to 1920×1080 while leaving the graphics settings from our 1366×768 run unchanged. Let’s see how these graphics processors handle it.


In the case of both Intel IGPs, it’s back to the sag wagon. Tomb Raider is jerky and unresponsive at these frame rates, and it’s difficult to even point Lara Croft in the right direction with this much lag between frames. 1920×1080 is simply too much to ask of these IGPs, which is where the MX150 steps in. With a 48-FPS average and a 29.2-ms 99th-percentile frame time, Nvidia’s entry-level discrete chip certainly clears the “playable” bar by a wide enough margin here.


If you prioritize resolution over fluidity, the MX150 is happy to oblige with Tomb Raider. The chip posts next to no time past 33.3 ms, and its 50-ms chart is squeaky-clean. I’ll take a consistent 30 FPS at 1920×1080 over no gaming at all.

 

The Witcher 3 (1600×900)

After I finished testing the first three games we’ve discussed, I felt the need to find a game that the Intel IGPs couldn’t run well, period. The Witcher 3 did the job. Even at 1280×720 and the lowest possible settings, neither Intel IGP could produce playable frame rates with CD Projekt Red’s fantasy RPG. I then turned to the GeForce MX150 and found that it did surprisingly well with this title at a blend of medium and low settings at 1600×900, so I pitted it against the mobile GTX 1060 6GB in the Alienware 13 R3 that plays host to the Core i7-7700HQ in our testing for comparison.

For as pleasing as the MX150’s performance is next to integrated graphics, the GTX 1060 6GB in my Alienware 13 puts its performance in perspective a bit with nearly four times the frames per second, on average. Of course, the desktop GTX 1060 6GB is almost four times as expensive as the MX150’s desktop cousin, the GT 1030, and the mobile version I’m showing here is wrapped up in a $1900 laptop. Not exactly fair play.

At 1600×900, the MX150 falls just short of delivering 99% of its frames under the 33.3-ms mark that would resemble a “console-like” gaming experience. I still enjoyed playing The Witcher 3 on a machine barely half an inch thick, so let’s see how long the MX150 spent on tough frames.


Not much at all, it turns out. At the 33.3-ms mark, the MX150 spends just about two-tenths of a second on frames that take longer than that to render. My gut impression of a playable Witcher 3 experience on this tiny chip wasn’t just fantasy.

With that, we can conclude our gaming tests on these systems. Although Intel’s UHD Graphics 620 IGP certainly provides more performance than its predecessor in the titles we tested, that performance usually goes toward crossing the line from “unplayable” to “tolerable” instead of fulfilling some greater ambition. For truly satisfying entry-level gaming performance, you really need a discrete graphics card like the MX150 to get there.

Another good reason to consider an ultrabook with Nvidia graphics inside goes beyond performance. I was able to download and install the latest Nvidia Game Ready drivers for the GeForce MX150 from Nvidia’s site without any intermediary interference, while attempting to download and install the latest Intel graphics drivers from the horse’s mouth led to a frustrating message to the effect of “this driver is not certified for your device.” 

I have to wonder whether some of the performance issues I experienced with the titles I tested weren’t related to out-of-date graphics drivers. I turned to Acer’s official support site in hopes of finding an approved version of that driver for the two machines whose integrated graphics processors I was testing, but I came up empty-handed. Acer’s certified graphics drivers for its Swift 3 machines are months old in the case of the Kaby Lake-R Swift 3s and about a year old in the case of the i5-7200U-equipped system. If there are gaming performance improvements waiting in those driver updates, they’re locked behind Acer’s apparently glacial evaluation-and-approval process.

That tardiness in approving graphics drivers isn’t just of concern for gamers; in fact, it may not even be primarily of concern to gamers. The latest Intel drivers offer support for Netflix HDR and YouTube HDR content, as well as Windows Mixed Reality, the headlining feature of the Windows 10 Fall Creators Update. Given that Acer is now two versions behind in its driver-approval cycle, there’s no telling when Swift 3 owners whose machines only have Intel integrated graphics will be able to take advantage of those features.  

Now, let’s roll up our sleeves and see if these machines work as well as they play.

 

PCMark 10

We don’t usually include Futuremark’s wide-ranging PCMark benchmark in our test suite, but I’m making an exception for these general-purpose laptops. PCMark 10 tests some things that we don’t have tools for, like video conferencing, app launch times, and general productivity performance with common office apps like LibreOffice Writer and Calc. Considering that PCMark is a one-click test, it was hardly a burden at all to rotate it in for this review. Thanks to Futuremark for providing us with access to a PCMark 10 Professional license for these tests.

Along with the regular eighth-gen Swift 3, I’ve included the MX150-equipped model so that it can register results in the gaming portion of the PCMark test, as well as any OpenCL-enabled portions of the four benchmarks comprising the extended version of this test.

Let’s start off with these systems’ overall scores, including gaming results. These standings should come as no surprise, given the Alienware 13 R3’s potent processor and graphics card combo. The MX150-equipped Swift 3 pulls ahead of the system without a discrete GPU, and the i5-7200U ends up at the back of the pack.

The essentials benchmark tests web-browsing performance, video conferencing, and a range of app start-up times. The Speed Shift-enabled eighth-gen laptops might have an edge in this apparently latency-sensitive test. What’s surprising is that the most powerful PC of this bunch falls to the back of the pack.

The productivity test checks word-processing acuity and spreadsheet performance with both lightweight and heavy-duty number-crunching. Some portions of the test are even OpenCL-enabled, although it’s a bit odd that the i7-7700HQ-and-GTX-1060 combo doesn’t perform better as a result.

The digital-content-creation portion of this test covers three phases: photo editing tasks with large raw files from a range of DSLRs (some parts of which are OpenCL-enabled), video editing across both CPU and OpenCL code paths, and rendering and visualization of 3D models. Given those workloads, it’s no shock that the Alienware 13 R3 comes out on top again, but I would have expected more of a gap between the MX150-equipped Swift 3 and its IGP-only counterpart. C’est la vie.

Although we already know the story the PCMark gaming test tells, it’s nice to see it confirmed in yet another benchmark.

Overall, the PCMark suite paints a favorable picture for the performance of our i5-8250U-equipped machines in a range of common workloads. Gaming is, again, the only task where these chips struggle. Let’s see if our usual benchmark suite supports this broad impression of the performance of these systems.

 

Javascript

The usefulness of Javascript benchmarks for comparing browser performance may be on the wane, but these collections of tests are still a fine way of demonstrating the real-world single-threaded performance differences among CPUs.

These short, latency-sensitive benches show an interesting quirk of two of our test systems. You see, neither Kaby Lake system has Speed Shift enabled, while the i5-8250U does. Speed Shift lets the processor ramp up to its peak clocks faster than chips without it can, and that pedal-to-the-metal approach lets the i5-8250U outpace the i5-7200U and shadow the i7-7700HQ in single-threaded performance and responsiveness.

In fact, I’m a bit annoyed that Dell didn’t support Speed Shift on the Alienware 13 R3, given its high price and purported performance focus. Since manufacturers don’t disclose power-management minutiae like Speed Shift support on their spec sheets, though, it’s hard to figure this kind of thing out without a given machine in hand.

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
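The thread-matching step is simple enough to sketch. Qtbench itself isn't reproduced here, so the build command below is a hypothetical stand-in for what the script dispatches:

```python
import os

# Match the number of parallel compile jobs to the hardware thread count,
# as our qtbench configuration does: 8 jobs on the i5-8250U, 4 on the
# i5-7200U, and so on.
jobs = os.cpu_count() or 1

# Hypothetical invocation of a Qt build with that job count.
build_cmd = ["make", f"-j{jobs}"]
print(" ".join(build_cmd))
```

Matching the job count to the thread count keeps every logical core busy without oversubscribing the CPU, which is what makes this test a good proxy for multithreaded throughput.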

Two more cores and four more threads give the i5-8250U a major boost in this multithreaded workload. It doesn’t quite halve the time the i5-7200U needs in this test, but a 35% reduction is still nothing to sneeze at.

File compression with 7-zip

Impressive. For file compression, the 15W i5-8250U is just about as competent as its 35W stablemate, and it leaves the dual-core i5-7200U completely in the dust.

Disk encryption with Veracrypt

Veracrypt, a continuation of the TrueCrypt project, offers a built-in benchmark that tests both the AES algorithm (which many of today’s CPUs can accelerate) and a variety of other algorithms that require good old CPU elbow grease to crunch. In the accelerated AES portion of the benchmark, the i5-8250U surprisingly comes out ahead of the i7-7700HQ.

The i5-8250U’s Twofish performance is no less impressive. It more than doubles the throughput of the i5-7200U and hangs right with the i7-7700HQ.

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

The Speed Shift-equipped i5-8250U has a 3.4 GHz maximum Turbo speed, compared to the 3.8 GHz the i7-7700HQ can claim. Without Speed Shift, and even with its 45W TDP, however, the i7-7700HQ only ekes out a small advantage over the 15W part.

Cinebench is really about its multithreaded portion, though. Here, the higher TDP and beefy Alienware cooling system of the i7-7700HQ become an asset, as they allow the chip to boost higher than the i5-8250U can and for longer periods. The i5-8250U delivers a nice boost over its 15W cousin, but it can’t keep up with the i7-7700HQ.

Blender rendering

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Blender makes liberal use of AVX instructions, and that fact plays to the i7-7700HQ’s advantage again. As with Cinebench, the 7700HQ’s better cooling and greater thermal headroom allow it to run way ahead of the more constrained i5-8250U.

Corona rendering

Here’s a new benchmark for our test suite. Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it was a no-brainer to give it a spin on these CPUs. The benchmark reports both a rays-per-second and time-to-completion figure, and we’re reporting the time-to-completion result.

Sorry to repeat myself, but the i7-7700HQ can’t help but run through the Corona benchmark much faster than the i5-8250U. Still, the 15W Kaby Lake-R part handily outpaces the i5-7200U.

Handbrake transcoding

Handbrake is a popular video-transcoding app that recently hit version 1.0.7. To see how it performs on these chips, we’re switching things up from some of our past reviews. Here, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

The i5-8250U makes waiting for a transcode to finish much less onerous than it is with the i5-7200U.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics, and it tends to be very memory-bandwidth intensive. We configured Euler3D to use every thread available from each of our CPUs.

This multithreaded, bandwidth-hungry benchmark harnesses both the faster memory and the extra threads of the i5-8250U to take a lead over the i5-7200U. The DDR4-2666 RAM and higher TDP of the i7-7700HQ still give it the overall win in Euler3D, though.

 

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU at a 24-bit depth and a 96 kHz sampling rate, and at two ASIO buffer depths: a punishing 64 samples and a slightly-less-punishing 128. In response to popular demand, we’re also testing the same buffer depths at a sampling rate of 48 kHz. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
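Those buffer depths map directly onto latency, which is why the smaller ones are so punishing: the CPU must render each buffer in less time than the buffer takes to play back. A quick back-of-the-envelope sketch of the settings above (illustrative arithmetic, not part of DAWBench):

```python
# One-way buffer latency: a buffer of N samples at sample rate R plays
# back in N / R seconds, and the next buffer must be ready within that
# window or the listener hears pops and crackles.
def buffer_latency_ms(samples: int, rate_hz: int) -> float:
    """Playback time of one ASIO buffer, in milliseconds."""
    return samples / rate_hz * 1000

for samples in (64, 128):
    for rate in (96000, 48000):
        print(f"{samples} samples @ {rate} Hz: "
              f"{buffer_latency_ms(samples, rate):.2f} ms")
```

The 64-sample buffer at 96 kHz leaves the CPU only about two-thirds of a millisecond per buffer, which goes some way toward explaining why even the i7-7700HQ falls over in the hardest VI test.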

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.


The DAWBench VI test at 96 kHz and 64 samples is the most punishing of this bunch, and none of our mobile CPUs can handle it. Even the i7-7700HQ delivered crackling and popping with no voices of polyphony in play. Relax the buffer size, and the i5-8250U more than doubles the performance of the i5-7200U.


The DAWBench DSP test is less about agility and more about pure multithreaded grunt. The i5-8250U triples the i5-7200U’s VST instances at 96 kHz and 64 samples. Relaxing the buffer size narrows the i5-8250U’s lead, but it still enjoys a full doubling of performance versus the 7200U.


With our sampling rate duly reduced, we return to the VI portion of DAWBench. At 64 samples, the i5-8250U once again doubles the i5-7200U’s performance. At 128 samples, the i5-8250U’s advantage over the i5-7200U shrinks, but a 50% increase in performance still isn’t bad.


Returning to the more straightforward DSP test at a 48 kHz sampling rate, the 8250U holds its wide lead over its 15W predecessor at both buffer depths. Open and shut.

Overall, the Core i5-8250U represents a major leap forward for mobile digital audio workstation performance with 15W CPUs. A thin-and-light laptop with one of these chips inside won’t replace a beefy mobile workstation, but it opens up greater possibilities for DAW work in the ultrabook form factor than ever before.

 

Battery life

To get a quick idea of how well the i5-8250U handles when it’s away from an outlet, we fired up the trusty TR Browserbench. Browserbench refreshes an older, Flash-heavy version of our website at one-minute intervals until the host machine shuts down. We’re working on a new version of Browserbench that more closely resembles modern browsing habits, but the old standby will have to do for now.

For consistency, each machine was charged to 100% before our test began. Display brightness was normalized relative to the 50% brightness setting of the eighth-generation Swift 3’s screen. Special power plans (like Windows 10’s battery saver mode) were disabled, and each machine was allowed to run the test until it hit 5% battery, at which point Windows automatically shut down. We then plugged the machine back in, fired it up, and recorded the final elapsed time that Browserbench indicated.

For our two Swift 3s with IGPs alone, battery life with Browserbench is practically identical—within 2%, generation-to-generation. That’s encouraging performance, given the i5-8250U’s two extra cores versus the i5-7200U. The MX150-equipped Swift 3 didn’t keep up with either of those machines in the battery life department, though. It conked out an hour earlier than the IGP-equipped notebooks. Even with Nvidia’s power-saving Optimus technology, the discrete graphics chip in the MX150-equipped Swift 3 still has a power cost when the laptop is away from a wall and in light use.

 

Conclusions

Watt for watt, the Core i5-8250U thoroughly outclasses its Kaby Lake predecessor in every test we can throw at it. Heck, Intel’s own 40% claim for generation-on-generation improvements from its 15W CPUs seems modest in light of our test results. The i5-8250U delivers a whopping 59% mean performance improvement over its predecessor when I take all of my tests into account. Intel can more than credibly claim a generational leap in performance from these chips, even if we’re “just” getting more Skylake cores in the same thermal envelope.

Heavy multithreaded workloads, and especially heavy AVX workloads, will drive the Core i5-8250U to its base clock faster than you can say “SIMD,” but even with its low 1.6-GHz base speed, Kaby Lake-R offers a major boost in performance over the i5-7200U in those tasks. For as much skepticism as that base-clock figure has gotten in some corners of the ‘net, I don’t think it’s worth worrying about. Folks who depend on heavy-duty applications like Blender and Handbrake to get their work done quickly shouldn’t throw away their mobile workstations just yet, but typical PC users will enjoy a nice helping of extra speed in heavy-duty work from ultrabooks with these chips inside.

Somewhat serious gamers are the only crowd I would expect to be unimpressed by the i5-8250U. By themselves, eighth-generation Core CPUs don’t do much to move the integrated-graphics bar. Low graphics settings and resolutions will be the order of the day with popular e-sports titles like Dota 2 and Rocket League on eighth-gen notebooks with integrated graphics alone.

If you want more graphics horsepower from an eighth-gen ultrabook, though, you can get one with a GeForce MX150 graphics chip and 2GB of dedicated GDDR5 RAM inside for not much more money than models without. If gaming performance matters, the Nvidia chip offers solid enough frame rates and frame times in some of today’s most popular games, sometimes even at 1920×1080 with some eye candy on. Lowering resolutions or relaxing the eye candy can result in downright smooth performance across the board.

Acer’s Swift 3 family of notebooks puts a good face on Kaby Lake Refresh chips in general. Although I’m spoiled by the premium laptops I have at hand in the TR labs, it’s hard not to be impressed by the plastic-free construction, rock-solid chassis, and responsive touchpads on these systems. If every $700-ish ultrabook is as finely made, we’re truly living in a golden age for thin-and-light mobile computing.

All told, Intel’s eighth-generation mobile processors are a resounding win. We get a lot of the same world-class responsiveness from the Core i5-8250U that we do from Kaby Lake and Coffee Lake desktop chips, and we get more multithreaded power than ever before from a 15W Intel chip. Systems with these processors inside only seem to demand modest premiums over seventh-gen Core laptops, too.

Like our experience with the Core i7-8700K on the desktop, the i5-8250U’s improvements come with virtually no downsides. It’s a no-brainer to seek out an ultrabook with an eighth-generation Core chip inside if you’re on the hunt for a new thin-and-light PC.

Comments closed
    • NovusBogus
    • 2 years ago

    Impressive results; I’m starting to really anticipate seeing this silicon in next year’s pro series notebooks. It’d be great if that MX150 shows up there too, but probably wishful thinking. Personally, I don’t care about the thin-and-light foolishness but most non-workstations only use U-series chips these days and I outright refuse to downgrade from 4 cores in 2012 to 2 cores in 2017.

    • tootercomputer
    • 2 years ago

    Thanks for the review. This helped a lot with a Christmas purchase decision, and I just purchased a dell version of this, inspiron 7000 13.3, for my wife, for Christmas. It’s driving me nuts sitting there in the box, I want to open it up and play. I, for the first time, braved the various hazards of pre-Black Friday sales and went to Best Buy this evening where I made the purchase.

    The one thing I find odd about these chips is Intel’s labeling system. This chip is an i5, with 4 cores and 8 threads. When you look at that table describing the features, the only difference between i5 and i7 is the amount of cache (plus, of course, cpu speed, though that varies by model rather than iCore type). My Kaby Lake mobile core i7 7500U has 2 cores and 4 threads. I guess maybe there are additional underlying architectural differences between the core i5 and i7 chips.

      • UberGerbil
      • 2 years ago

      The whole i3/i5/i7 distinction is a marketing exercise, designed to capture all three types of shoppers (“I always buy the cheapest!” / “I always buy the best!” / “I always split the difference!”) — ie the underlying difference is primarily human psychology, not technical architecture. There are differences between the model lines, of course, but there’s nothing technical that [i<]defines[/i<] an i5 vs an i7; Intel Marketing is free to change the definitions on a whim, or as the market evolves.

    • Scott_G
    • 2 years ago

    Just wanted to comment on the benchmark results of which I’ve gotten better results. I’ve had the Swift 3 (SF314-52G) with 8250U & MX150 for about a month now.
    Of course I’ve done a clean install of Windows 10 and with the latest Intel & Nvidia drivers. Power plan is set to balanced, there’s no overclocking and with no other programs running.
    Just released BIOS has been updated to 1.07.
    Cinebench R15 Multi-Thread CPU test nets a result of 564 (vs 538 in this review)
    Corona renderer netted a result of 387 seconds (vs 432 seconds in this review) 45 seconds is pretty substantial.

    Corona 1.3 Benchmark Finished
    BTR Scene 16 passes
    Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
    Real CPU Frequency [GHz]: 2.5
    Render Time: 0:06:27, Rays/sec: 1,252,970

    May do a few more benchmarks included in review to compare results. Otherwise, great review and the most in depth so far for sure, from what I’ve seen.

      • Chrispy_
      • 2 years ago

      Two identical models – in this case the i5-8250U can still have different default voltages based on how they’ve been binned. Each batch of silicon that comes out of the foundry will have slightly different properties – that’s just the nature of how these chips are made.

      The performance of these laptops is 100% dependent on their TDP restrictions, if you get a chip with a lower vCore, then you’re going to be running at higher frequencies within the same 15W TDP limit.

      • Jeff Kampman
      • 2 years ago

      Some of these differences are easy to explain as a consequence of thermal conditions. It’s easy to flip open a laptop that’s been off for a while and immediately run a benchmark to get a high result, since it’ll turbo higher and for longer when the chassis and heatsink are stone-cold than it might in more representative conditions.

      Our testing occurs under much less favorable conditions. Generally, I try to get a machine nice and warm from use before we test it so that the results are typical of a system that’s been on for a good part of the work day. We also don’t give our test rigs too much opportunity to cool down between any of several runs. If the average user doesn’t push their system as hard, they won’t see this near-worst-case performance (and they’ll marvel at the “extra” performance they’re getting versus our results, as you have).

      If you run Cinebench five times on your machine from a cold start, for just one example, I can almost guarantee it will drop into the 530s by the end of the final run.

        • Scott_G
        • 2 years ago

        Totally understand what you’re saying about the “normal average user use”. No funny business, I do run benchmarks while the unit is warm; not cool, not hot. I played WoW for 45min then immediately ran Cinebench R15 and got a score of 414 (Multi-threaded). Afterwards I left the laptop on and idle for 30min and then got a score of 567. If I run it 5 times in a row each subsequent score will lower, agreed.

        Tried Blender bmw27_cpu benchmark and got 888 seconds.

        Bottom line is the 8250U/MX150 combo has been impressive.
        Good battery life for work while unplugged and can decently casual game on the same unit, while affordable.

        Good job Acer, EXCEPT no backlighting timeout control :S

    • HERETIC
    • 2 years ago

    So Kaby Lake refresh i5-4 cores brings “race to sleep” kicking and screaming out
    of the anemic pit they were in-wonderfull………………………….

    Big question Jeff-
    When we going to get a review on the red headed stepchild-8600K??????????

    • hasseb64
    • 2 years ago

    Good news! I will buy a U processor next time and save money and weight

    • Anonymous Coward
    • 2 years ago

    Good review, and good hardware.

    So how many watts is that MX150 allowed to consume in this implementation? I saw 30W mentioned, but thats a lot for a thin laptop. Adding a GPU with double the power draw of the CPU is more than just a small change. Also, if so much cooling is at hand, why use a 15W CPU at all? I’d like to see Raven Ridge with 15+30 = 45W.

    On the subject of the CPU, I’m left wondering in how many applications it can sustain peak single-core turbo. 3.4ghz in this review is not so demanding I suppose, but what about the 8550U and 8650U with 4.0ghz and 4.2ghz turbo? Thats significant.

      • Andrew Lauritzen
      • 2 years ago

      Yeah I’m be curious about this as well. Clearly the combined TDP here is somewhere between doubling and tripling and while it’s easier to cool two chips than one (iso combined TDP), the heat still has to get out of the chassis somewhere.

      So it does indeed bring up the question: if the chassis is capable of cooling ~double/triple the TDP of the base model, why not a) make a separate, smaller chassis for the U chip or b) use a full 45W chip instead. I imagine the answers here are both that this is easier to SKU/manufacture for Asus as this isn’t a premium offering, but I’d be curious none-the-less if there are any technical issues or if this is (likely) just a cost saving measure.

        • dodozoid
        • 2 years ago

        Completely unrelated question – where did you move after leaving intel? Are you still in the silicon business?

      • IGTrading
      • 2 years ago

      The 45W AMD Ryzen Mobile will probably only happen on the desktop although I would really like a 17″ notebook with such a Ryzen 5 implementation.

      • tipoo
      • 2 years ago

      iirc Notebookcheck and others found 15 watt ULV chips can draw 24 watts sustained, post throttling peak, if the cooling is ok, that was specifically the rMBP 13 nTB. So it’s still adding a lot yeah, but it’s 24 watts – igp power + dGPU power now (unless the CPU would just fill the TDP void?). So not adding double the power of the main chip itself.

      [url<]https://www.notebookcheck.net/Apple-MacBook-Pro-13-Mid-2017-i5-without-Touch-Bar-Review.234282.0.html[/url<]

        • Anonymous Coward
        • 2 years ago

        Interesting. Still though, it looks like it should be doubling the power & heat .

        • Andrew Lauritzen
        • 2 years ago

        That’s a config option for the OEM/firmware generally. Apply tends to configure the PL1/2 turbo modes so that they can use extra power for a long period of time since they typically have very good cooling. Microsoft tends to do it based on a chassis temperature sensor on recent Surface Pros (since the chip cooling is often less limiting than the device comfort itself). Unfortunately we can’t really guess at how Acer has configured this machine… Jeff or someone would have to observe the behavior in each of the separate SKUs to see what’s going on.

        “24 watts – igp power + dGPU”
        It’s still ~15-25W for the SoC, even when the iGPU is not being used. Any game running full tilt will always be power limited on these form factors, so part of the performance advantage here is due to the CPU getting a larger chunk of the power budget as well.

        So as I said, depending on device config you’re likely looking at ~2-3x the total power for the chip parts of the dGPU config.

          • tipoo
          • 2 years ago

          It’s interesting that Apply (heh) of all companies unlocks the TDP and just lets these run to the tjunction, eh? That makes me even more excited for a 13″ refresh with a mobile quad, as they’d probably continue to uncap it and come closer to the big boys.

          Out of curiosity, would there be anything higher certified in their solder process to let the chips run up to the tjunction max so much, or is it purely the cooling? I’m sure after 2011 soldergate they must check these extensively before deciding to uncap the chips to a higher degree than most laptop manufacturers.

            • Andrew Lauritzen
            • 2 years ago

            There’s nothing particularly unsafe about raising the PL2 limit (or similar), it’s just that if your chassis can’t maintain the heat dissipation you can actually sabotage yourself with thermal runaway which will eventually result in *heavy* throttling, putting you in a situation worse than if you had just stuck to a more stable TDP.

            You can even mess with this yourself to some extent in Intel’s XTU utility in Windows. Of course the firmware has even more controls but suffice it to say with ULV processors these days it’s all about the specific implementation… as reviews show even the same exact CPU will have very different performance in different laptops/configs; that’s just the reality of the power/heat limitations unfortunately.

            • thecoldanddarkone
            • 2 years ago

            It’s actually not unusual to see 15w tdp processors configured at 25. Both my Lenovo’s are configured with the higher configurable tdp.

    • Shobai
    • 2 years ago

    The memory latencies thing is interesting, because at first blush DDR4 2133 @ CL15 should give 14.07 ns true latency and DDR4 2400 @ CL17 should give 14.17 ns true latency – there’s not much difference there.
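    For anyone who wants to check that arithmetic, a quick sketch (true latency here is CAS cycles divided by the memory clock, which is half the DDR transfer rate; tiny differences from the figures above come down to how you round the clock):

```python
# True latency = CAS cycles / memory clock. DDR signals on both clock
# edges, so DDR4-2133 (2133 MT/s) runs a ~1066 MHz clock.
def true_latency_ns(cas_cycles: int, transfer_rate_mts: float) -> float:
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000  # cycles per MHz -> nanoseconds

print(f"DDR4-2133 CL15: {true_latency_ns(15, 2133):.2f} ns")
print(f"DDR4-2400 CL17: {true_latency_ns(17, 2400):.2f} ns")
```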

      • Jeff Kampman
      • 2 years ago

      This was an odd result, but I went back and retested it again and now the figures are in line with expectations. I’ve updated the article accordingly.

        • Shobai
        • 2 years ago

        Glad to hear it! Do you need to produce another graph for that section?

    • tipoo
    • 2 years ago

    Any idea when these ULV quads will be combined with Iris Plus, which is presumably what Apple would be waiting on?

      • TEAMSWITCHER
      • 2 years ago

      I am very curious about this too. I haven’t seen a recent Intel road map for the mobile parts, official or leaked. With Cannon Lake delayed well into next year, and the Intel/AMD announcement a couple weeks ago, I’m left wondering .. what’s coming? And what about the ultra-low voltage Y series parts? My 13″ 2015 MacBook Pro is due for an upgrade, but I’m not gonna waste it on a mere Kaby Lake + Touch Bar upgrade. Doubling the core count would truly … seal the deal.

        • DancinJack
        • 2 years ago

        My 13″ 2015 rMBP is staying in my possession until I can’t stand it anymore. I’m not willing to live the dongle life yet.

        • tipoo
        • 2 years ago

        I think that RajaChip is definitely a shoe-in for the 2018 15″ rMBP update. Its the Iris Plus for the 13″ with ULV quads I’ve seen about no mention of anywhere. The 13″ with a quad and TB3 would be a pretty great combo for me and may talk me down from my 2015 15″.

        Or the Rajachip 15″ could be so tempting I keep clinging to 15″ers for another few years…

        • tsk
        • 2 years ago

        I can tell you that you should expect Cannon lake Y series parts [i<]very[/i<] soon™, it's one of the top priorities at Intel to get the chip to market at the moment. The initial problem with Cannon Lake-Y is that performance was lower than Kaby-Y.

    • tipoo
    • 2 years ago

    The MX150 probably won’t get praise from core gamers, but it is pretty remarkable in its wattage. That’s essentially XBO power (edit: actually closer to PS4 power with Nvidia vs AMD flops + fill rate/texture rate!) now in 6-700 dollar budget laptops, enabling the viable budget tier to drop down a few notches, where with the 940MX it wasn’t quite recommendable.

    I’m very interested to see how mobile Vega and Vega APUs will stack up, though the bandwidth contention will be the hurdle for the latter.

      • Jeff Kampman
      • 2 years ago

      I’m really impressed with the MX150, considering that it creates no size or weight penalty with these ultrabooks and only seems to hurt battery life a bit. It’s surprisingly capable for entry-level gaming.

        • Ninjitsu
        • 2 years ago

        Considering I managed to struggle along while gaming on an i5 2400U and GT740M (DDR3 version) for a whole year, I can see the effectiveness of this combination.

        • mczak
        • 2 years ago

        I agree, it’s quite a capable little chip. Much better than the near pointless, but ubiquitous 940MX – well the ddr3 version of it was near pointless (because not really faster than HD 520 or 620 when the IGP got dual channel memory), the gddr5 version definitely is faster but you always had to wonder if it’s really worth the extra cost over the IGP (initially it was nearly always ddr3, luckily more recently it’s usually gddr5 at least).
        I’m definitely looking forward to comparisons with Raven Ridge Notebooks with Vega 8/10 graphics. Of course, I really really hope the NB manufacturers get the point and equip it with dual-channel ddr4-2400 (faster would be better of course, but probably too much to ask for).
        Also, some notebooks designs with increased APU TDP (25W instead of 15W) with RR would be nice. The MX150 has a huge advantage there, with its own ~25W budget (aside from the 15W the cpu gets). In other words, Raven Ridge IGP doesn’t really have much chance, but with the right conditions it could be reasonably close.

          • tipoo
          • 2 years ago

          Nothing got me more than some wal mart special that threw in a dedicated GPU that was weaker than the IGP, lol.

      • NTMBK
      • 2 years ago

      It’s nowhere near the PS4 on memory bandwidth, in all fairness; only got a tiny 64-bit memory bus. Still damn impressive though.

        • tipoo
        • 2 years ago

        That’s true enough, it’s not fully there in every regard. Though it’s nearer than the paper figures, we’ve seen Pascals super efficient memory compression punching way above the bandwidth figure, and the PS4 not only shares with the CPU, but sharing with the CPU reduces the total memory bandwidth figure for reasons.

        [url<]http://www.eurogamer.net/articles/digitalfoundry-2015-vs-texture-filtering-on-console[/url<] With a shared load the bandwidth in total can reduce to 110, the CPU takes 10, and Pascal is significantly better at compression than the GCN 1.1ish part in there. So it might be more like 70GB/s equivalent with compression vs 100, rather than 50 vs 170. Then again the unified memory also has advantages.

    • mczak
    • 2 years ago

    As far as I know it should be easily possible to install intel graphics drivers – you just have to uninstall the pointless acer repackaged old one first. (And afterwards you never have to bother with that crap again.)

      • Jeff Kampman
      • 2 years ago

      I did not think to try this, but it works. I’ll have to update my impressions accordingly.

    • trackerben
    • 2 years ago

    MX130?

      • DancinJack
      • 2 years ago

      You’re referring to the MX130 reference on page 1 I guess? It’s not that easy to tell what you mean by “MX130?”

      • Jeff Kampman
      • 2 years ago

      The MX130 is a thing, but not in this review. Presuming you’re referring to my error on page one (as DancinJack pointed out), it’s fixed now, thanks!

        • Chrispy_
        • 2 years ago

        MX130?

        I am aware of the MX150 being a rename of the desktop GT 1030, and thought that this was a good “lowest dGPU” product on the market. If Nvidia think that there’s a market for something weaker than an MX150, then this is a sad day.

        I was hoping that the MX150 would be the death of junk like the 940MX and Kepler rebrands like the GT 920. You’ve already shown that the MX150 is good in comparison to an Intel IGP, but it’s already compromised (you can’t have 1080p or nice details, and even at 720p you’re going to be turning things down to get 60fps). There doesn’t seem to be much margin/profit/scope/market for a worse dGPU than the MX150, which seems to hit the sweet spot in terms of acceptable performance at a low cost and low power use.

          • HERETIC
          • 2 years ago

          ” If Nvidia think that there’s a market for something weaker than an MX150, then this is a sad day.”

          [url<]https://wccftech.com/nvidia-geforce-mx130-mx110-maxwell-notebook-gpus/[/url<] Have to get rid of old broken dies somewhere.......................

            • Anonymous Coward
            • 2 years ago

            940MX?

            • NTMBK
            • 2 years ago

            Looks like Maxwell rebrands have replaced Kepler rebrands…

    • willmore
    • 2 years ago

    So, the advantage of this generation is that they’re calling the i7 an i5? That’s nice. Too bad these are laptop chips, so we can’t really see how much the price has changed. If they’re still charging i7 prices for them, then this is just a bit of a naming shell game.

      • DancinJack
      • 2 years ago

      If only there was somewhere we could look up prices that Intel charges people…

      [url<]https://ark.intel.com/compare/124967,124969,97537,97466,97541,97540,95451[/url<]

        • willmore
        • 2 years ago

        And we’ve known for years that these prices are before any kind of bundling, loyalty, etc. bonuses that Intel gives to their customers. The ‘list price’ doesn’t mean much.

          • Ninjitsu
          • 2 years ago

          These prices are bulk prices, i.e. pretty close to what their big customers pay (tbh they might even pay less, if the volume is high enough).

            • Bauxite
            • 2 years ago

            If you believe they are anywhere near reality, I have a bridge to sell you. Those of us watching the little details for years know those are basically BS, mobile even more so. There are also things like rebates and bundling with chipset/pch/etc to consider.

            One key hint is certain cpus with same or very close “tray price” have vastly different availability. Iris pro HQ basically don’t exist for example.

        • Beahmont
        • 2 years ago

        Okay so I looked at that, and the thing that shocked me the most was the performance differences on the watt curve.

        An 8350U gets only 800 mhz base at 10W. It gets 1.7 base ghz at 15W. But at 25W it only gets 1.9ghz base.

        Like Holy Frequency Scaling Batman!

          My best guess is that the actual CPU cores eat up very little of the watts used at 10W, but the extra 5W from 10W to 15W all goes to the CPU cores. And by the time we are talking about 25W, the cores are all on the sharp end of the frequency/watt curve.

          • thecoldanddarkone
          • 2 years ago

          I use a p51 that has an h kaby lake. If you power limit the cpu, it performs very similar to these processors.

      • mczak
      • 2 years ago

      Err no, the difference is you get 4 cores with the 15W chips. The previous 15W i7 were not really any faster (well they were, but just barely) than the i5, both the new i5/i7 15W ones blow the old 15W i7 out of the water in multithreaded benchmarks due to having more cores.
      It actually looks like efficiency improved somewhat, otherwise the graphics benchmarks wouldn’t really show much of a difference.

        • willmore
        • 2 years ago

        That’s not what I asked.

      • Beahmont
      • 2 years ago

      No. I'm led to believe from sources I trust that these are actually new chips made with different masks, not simply high-quality old chips given a rename.

      I'm also told that the new masks have much better yields and produce chips with better performance, even though the process itself is the same for Kaby Lake and Kaby Lake-R.

        • willmore
        • 2 years ago

        That's not what I asked at all. I don't doubt that these are new chips; what I'm asking is whether they're now calling a part "i5" that they would have called an "i7" in previous generations, while keeping the i7 price.

        If they have really sold us what used to be an i7 for an i5 price, then that’s great. But, since the part is built into a laptop, we can’t know the actual per-part cost that the manufacturer pays.

          • NTMBK
          • 2 years ago

          The ULV i7 used to be a dual core. This is a quad core. You couldn’t get a part like this before.

    • dpaus
    • 2 years ago

    I just ordered three 17″ 'convertibles' for our sales team with this chip's big brother, the i7-8550U, plus MX150s, 16GB of RAM, and 2TB hard drives. I can hardly wait to see how they perform with our software. Same 15W envelope, but the 1.8/4.0 GHz clocks give it a slight performance advantage over this.

    I’m also looking forward to benchmarking them against some Raven Ridge systems we’ll be getting after New Years. Should be interesting.

      • DancinJack
      • 2 years ago

      I guess a 17″ behemoth could use tent mode and stand mode with a touchscreen, but that all just seems rather odd to me. I'm not sure why you'd need a 17″ computer to be "convertible." No way would I want to use a 17-incher in "tablet" mode.

        • dpaus
        • 2 years ago

        They’re sales reps; it’s a hell of a lot easier to give a demo to 3 or 4 people using a 17″ HD-res quasi-tablet than a 10″ true tablet with HD resolution.

        • dpaus
        • 2 years ago

        Also, generally speaking, it’s well-suited to any situation where more than one other person needs to see what you’re doing on the tablet. Small field teams in construction, emergency response, etc., etc….

          • DancinJack
          • 2 years ago

          Well, I hope they've been hitting the gym!

          Really though, thanks for the explanation; I wasn't necessarily expecting one. I'm still not sure a 17″ "convertible" is my game, though. I'm on the side of the 13-, 14-, and 15-inchers.

            • dpaus
            • 2 years ago

            We don’t hire no sissy sales reps!!

        • Chrispy_
        • 2 years ago

        These 17″ models weigh less than 13″ models we balanced on our laps six or seven years ago.

          • DancinJack
          • 2 years ago

          If we and our activities hadn't evolved, that would be a good point.

            • Chrispy_
            • 2 years ago

            I still sit on the sofa and use a laptop the same way I did 20 years ago. All that has changed is that I’m typing into a web form instead of writing my thesis in Word ’97.

            Oh, and I’m fatter and slower than I was then.

            • Anonymous Coward
            • 2 years ago

            I don’t include myself in the “we” and “our” you’re throwing around there.

            • DancinJack
            • 2 years ago

            Congratulations?

            • Chrispy_
            • 2 years ago

            I guess a more pertinent question would be, "what have you and your activities evolved into that rules out a 17″ tablet yet allows smaller ones?"

            The only reason people used to avoid 17″ laptops was that they were too bulky and heavy to be practically portable. These days that's no longer true: slimmer bezels and lower weights mean they fit in a briefcase or typical rucksack, you can use them in tablet mode in an economy airline seat (arguably the most awkward environment I can think of), and they can still be operated while standing, with the same sort of grip and position you'd need for any tablet larger than a 10″ model anyway.

            • Anonymous Coward
            • 2 years ago

            I haven't handled one of the 17″ Dell models, but my aged 17″ G4 is essentially identical in two of three dimensions to a ThinkPad W530 (which has a battery hanging partway out the back), and even the width isn't hugely different. But when you open it and see that screen, it's clear there is a difference in *that* department.

          • UberGerbil
          • 2 years ago

          No kidding. I had a 9 lb Dell with a Pentium 166 (with MMX!) that I schlepped through airports all over the country back in the late 90s. Probably a 12″ screen (but a massive 1024x768 at 8bpp!) and three spindles (floppy, HDD, optical), or you could swap extra batteries into the non-HDD bays. With both batteries in, I think it was over 10 lbs.

      • Anonymous Coward
      • 2 years ago

      Are these the Dell models? I just about had to buy one a couple months ago, but I resisted. I would really like to know how they work out!

      • IGTrading
      • 2 years ago

      What 17″ convertible did you order ?

      May we have a link ?

      Later Edit : What bozos would give 4 down-votes for asking a question ?! 🙂

      • dpaus
      • 2 years ago

      Here’s the link to what we ordered:

      http://www.dell.com/en-ca/shop/dell-laptops-netbooks-and-tablets/new-inspiron-17-7000/spd/inspiron-17-7773-2-in-1-laptop/ni177773_bt_h6114e

      Haven't received them yet, but will post impressions and results from any tests we run when we get them.

        • Anonymous Coward
        • 2 years ago

        Yeah, the same one I had to work to avoid buying. I would be very interested to hear a little about how they are in person, and under the whip. Do they throttle? Do they cook flesh?

        One of my favorite laptops is my old 17″ Apple G4 which I suspect has a lot in common when held in the hand. Huge screen, thin chassis. Its only now that such a form factor might offer good performance.

        • thecoldanddarkone
        • 2 years ago

        While it's not something I have a use for, they look awesome, even with how much they weigh. I could see a use case based on what you're saying. I have some design concerns, but without having one in hand, I don't know if they're justified.

        It's heavier than my P51.

          • Anonymous Coward
          • 2 years ago

          I think people are overly fascinated with laptop weight. Everyone at my office seems happy to have tiny 2-cored models with whiny fans and crippled screens, as if they even carry them long distances. Irrational, IMO, like paper-thin phones. Someone travelling a lot via plane, OK I could sympathize, even though I would still go 15-17″ every time.

            • thecoldanddarkone
            • 2 years ago

            I can understand where you're coming from. In fact, I just recommended this to a friend of mine:

            https://www.newegg.com/Product/Product.aspx?Item=9SIA6ZP5D75347

            It was $1k on Newegg till they ran out of stock. His needs are different than mine; I prefer to stay in the 15″ category and below 6 lbs. That said, if someone said "hey, this is your new work laptop," I would shrug and use it. I'd take it over my 3 lb ThinkPad 13 every day.

        • IGTrading
        • 2 years ago

        Thanks man !

        Never thought I'd see this one: a serious 17-incher that converts into a tablet. 🙂

        I may be weird, but I like it a lot.

        It's strange that they don't include an interchangeable optical drive and a second HDD/SSD in such a large chassis… I would really love to see those added.

        Surprisingly, the productivity performance seems to be quite limited.

        NotebookCheck did a review of it, and the Cinebench result places it almost 50% behind AMD's Ryzen Mobile 2500U.

        https://www.notebookcheck.net/Dell-Inspiron-17-7778-Convertible-Review.174322.0.html

    • tsk
    • 2 years ago

    Here is some more detailed power draw and thermal data from the i7-8550U in the Zenbook 3: https://images.anandtech.com/doci/12062/aidax264.png

    Max power draw was 29W; the average over 25 minutes was 12.9W.

      • DancinJack
      • 2 years ago

      As far as I can tell the only difference between 8th gen i7s and i5s is a few MHz here and there (on CPU and GPU) and 2MB extra cache. Am I missing anything?

      edit: referring specifically to the mobile “U” chips.

        • dodozoid
        • 2 years ago

        Hasn't that always been the case?

          • DancinJack
          • 2 years ago

          Didn't some i7s have four cores previously? I could be off my rocker. I think it was just the HQ models, though.

            • DPete27
            • 2 years ago

            Correct, the 4c/8t i7s of yore carried the HQ suffix. They were also 35-45W TDP parts.

      • IGTrading
      • 2 years ago

      For 650 USD, Acer has put together a good config.

      Including that SSD helps the system a lot.

      I would still prefer the HP Envy for the more powerful AMD Ryzen Mobile CPU and a GPU which trounces the GeForce 940MX (2016 version), but the battery life will be 25% shorter.

      I can live with that, since I always have two battery bricks with me (one in the car, 23,000 mAh, and one in the suitcase, 20,000 mAh). Talk about overkill 🙂

        • maroon1
        • 2 years ago

        Trounces the 940MX?! In gaming or 3DMark? Do you have any proof?

        The gap between the MX150 and the Ryzen APU is massive in gaming: 70% faster in Far Cry 2 and 80% faster in GRID Autosport.

        https://hothardware.com/reviews/ryzen-mobile-benchmarks-and-performance-analysis?page=2

        Also, I don't care about 3DMark scores. If you noticed, the gap in gaming between the MX150 and the Ryzen 2500U is much bigger than in 3DMark. So the APU might beat the 940MX in 3DMark, but we don't know if it can do that in actual games.

        Not to mention that the min fps on the AMD APU (and Intel iGPU) was horrible. The MX150 has MORE than twice the min fps of the 2500U in GRID Autosport.

          • thx1138r
          • 2 years ago

          That review you point to is for the Ryzen 5 2500U, but the top-end Ryzen has a considerably more powerful GPU. The 2700U has 25% more GPU cores and runs up to 30% faster, so I think it's going to be a close-run thing between the 2700U and the 940MX. I reckon it's going to come down to the particular game and benchmark, with some wins and losses for each side.

          And, minor gripe, I wouldn't call the 2500U's GRID Autosport min-fps score "horrible"; it's over the 30 fps mark, which I'd call passable. That was at the "high" settings, and judging from the Middle-earth: Shadow of War benchmark, the "high" settings put a lot of pressure on APUs.

            • NTMBK
            • 2 years ago

            It’s still crippled by the same shared DDR4 2400.

            • thx1138r
            • 2 years ago

            If this were a desktop card I'd agree with you without hesitation, but it's not; it's a super low-end laptop GPU. And that "crippling" dual-channel DDR4-2400, as you put it, has almost 2.5x the bandwidth the 940MX has available to it. In case you didn't know, the 940MX uses a single channel of DDR3 at 1001/2002 MHz (yes, there was also a GDDR5 variant, but that seems to have been less popular).

            So sure, APUs share their bandwidth, but in this comparison that's not a bad thing: when the APU's GPU needs data, it has much more bandwidth available to it. The fact that the bandwidth is shared will matter in some games/benchmarks more than others, so again, whether the APU is at a disadvantage or not will very much depend on the individual application/game/benchmark.
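            For the record, the napkin math behind that "almost 2.5x" goes like this (assuming 64-bit channels and the 2002 MT/s DDR3 figure above; this is my own arithmetic, not a measured number):

            ```python
            # Peak theoretical memory bandwidth: channels * bus width (bytes) * transfer rate.
            def peak_gbs(channels, bus_bits, mega_transfers):
                return channels * (bus_bits / 8) * mega_transfers / 1000  # GB/s

            apu_bw = peak_gbs(2, 64, 2400)  # dual-channel DDR4-2400, shared by CPU and GPU
            mx_bw = peak_gbs(1, 64, 2002)   # 940MX, single-channel 64-bit DDR3 at 2002 MT/s
            print(f"{apu_bw:.1f} GB/s vs {mx_bw:.1f} GB/s -> {apu_bw / mx_bw:.2f}x")
            ```

            That's 38.4 GB/s against about 16 GB/s, a factor of roughly 2.4, so the claim checks out, shared or not.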

    • NTMBK
    • 2 years ago

    Came for the i5 review, stayed for the MX150 benchmarks. That little GPU packs a decent punch!

      • shank15217
      • 2 years ago

      I want to see a Raven Ridge 2700U pitted against these.

        • NTMBK
        • 2 years ago

        Judging by Jeff’s Twitter, it sounds like he’s reviewing the HP with Ryzen:

        "Any system shipping with a hard drive as its primary storage device in 2017 is an abomination."

        https://twitter.com/jkampman_tr/status/933035323319377921

          • derFunkenstein
          • 2 years ago

          He's absolutely right. Few people these days benefit more from a 1TB hard drive than they would from a 256GB SSD.

            • Ninjitsu
            • 2 years ago

            Not *really*. 128GB or 256GB is only adequate if you have either an external HDD or an existing desktop/laptop for bulk storage.

            That said, 512GB SSDs should probably be standard at this point, they’re not that much more expensive than a 256GB drive.

            • bthylafh
            • 2 years ago

            You may be overestimating the average person’s needs. Unless they’ve got a large Steam library or a lot of pictures or movies or iPad backups, for a lot of people 256GB is pretty adequate.

            • DancinJack
            • 2 years ago

            …by quite a bit. Even I, and I'd self-categorize as a power user, can get away with 256GB on my laptop quite easily. I do have other computers and storage devices, but I could definitely get away with it for normal usage.

            • Ninjitsu
            • 2 years ago

            People generally have pictures/movies/music. Wasn’t even thinking about the gaming crowd.

            I myself have had 128GB on my laptop for years now, but I have a lot of stuff on external drives or PCs.

            There’s also a consideration that people tend not to delete stuff until they’re running out of space on C: …

            • thecoldanddarkone
            • 2 years ago

            I don't know very many people who keep movies on their PC. While I've known a few, the majority of people I deal with are more than happy with a 240-256GB SSD. I worked on a PC this week that had a 1TB hard drive, fourteen times more space than the owner was actually using. That person would have had a much better experience with an SSD.

            • End User
            • 2 years ago

            My iPad has 512GB of storage.

            Phones can record in 4K now. The average person is kidding themselves if they think 256GB is pretty adequate.

            • derFunkenstein
            • 2 years ago

            The appropriate price comparison to make is to the hard drive, though. 500GB of flash, even AHCI M.2 cards, is still a pricey upgrade on entry level machines. They’re doing Ryzen no favors pairing it with spinning rust. People will say “this isn’t any faster than my old crappy laptop” and for most things, they’d be right.

        • thx1138r
        • 2 years ago

        HotHardware have a review of a laptop with a Ryzen 5 2500U:
        https://hothardware.com/reviews/ryzen-mobile-benchmarks-and-performance-analysis?page=2

        The GPU performance seems to fall halfway between Intel's UHD 620 and the MX150. No doubt the 2700U will perform a bit better, but it will still be well shy of the MX150.

          • Pancake
          • 2 years ago

          Performance looks really good. More than enough to interest me. However, to quote HotHardware on battery life: "There's no other way to slice it — in a word, 'weak.'"

          Not so good. We’ll see if other designs can get thin, light, small and with good battery life.

            • thx1138r
            • 2 years ago

            "We'll see if other designs can get thin, light, small and with good battery life."

            Reading the HotHardware review, that seems unlikely, as the problem they encountered appears to come down to drivers. In particular, while the Ryzen machine could idle at the same 9W as the new Intel laptops, CPU usage was relatively high when playing back video with VLC, indicating the video was being decoded on the CPU rather than on a hardware decode unit, which has a big effect on battery life. Yes, there were two other small issues with that particular machine: a 7200-RPM HDD and a screen brightness set to 100% didn't help the battery life either, but those wouldn't be as significant as the main problem. Until AMD addresses it, Intel will continue to lead in video playback benchmarks; luckily for them, their newest chips have barely changed the GPU hardware.

            • Pancake
            • 2 years ago

            I don't even care about watching video on a laptop. I use it purely for coding in Eclipse and GIS work. Bring on some useful benchmarks!

            • thx1138r
            • 2 years ago

            No problem, NotebookCheck has some preliminary results from their upcoming review:

            https://www.notebookcheck.net/Our-first-Ryzen-5-2500U-benchmarks-are-in-and-Intel-has-every-reason-to-worry.266618.0.html

            Idle and load power levels look pretty decent and would be more relevant to your use case.

      • tipoo
      • 2 years ago

      Samesies. ULV quads and the MX150 are a great combo on the cheap.

      • Marnox
      • 2 years ago

      It seems to perform a little better than the similar GT 1030, so it might be interesting if someone put out a discrete desktop card based on this chip (to satisfy my fetish for maximum gaming performance in a cheap, passively cooled, half-height form factor).
      Someone else would figure out it has the best Ethereum MHash/watt performance, though, and ruin everything…

        • tipoo
        • 2 years ago

        “Someone else would figure out it has the best Ethereum MHash/watt performance though, and ruin everything..”

        Exactly what I was thinking a third of the way into your comment. It would be a low hashrate, but so efficient that the hash per electricity dollar would probably hang with the best, if not beat them all, driving up the price for gamers.

        Edit: Looks like it's actually a bit of a dud for compute: 100-200 hashes/s on Monero while using 100% of the GPU. 24/7 for a month would be a few dollars.

        https://medium.com/@samnco/using-the-nvidia-gt-1030-for-cuda-workloads-on-ubuntu-16-04-4eee72d56791

        • Anonymous Coward
        • 2 years ago

        You don't think the GT 1030 is similar enough? I haven't seen a side-by-side comparison, but this is the same silicon, right?

          • tipoo
          • 2 years ago

          https://www.techpowerup.com/gpudb/2959/geforce-mx150
          https://www.techpowerup.com/gpudb/2954/geforce-gt-1030

          Looks like fewer TMUs and ROPs. I don't think that would change mining speed, but it would be weaker for gamers. I thought I remembered them being the same silicon like you did, but this says no.

            • Anonymous Coward
            • 2 years ago

            Well, they're both GP108 according to that, so we can say it's the same silicon. Same memory and core clocks too, at least officially. Neat to see that they've essentially made a mobile part available on the desktop. Is there really not even a clock-speed edge on the desktop?

            Hadn't realized they chopped parts off the GT 1030, though. Sounds like they need a GT 1040!

            • tipoo
            • 2 years ago

            I guess I should say it appears the desktop part is the same silicon, perhaps die-harvested from partially defective mobile parts; otherwise, taking the world's smallest axe to the ROPs/TMUs seems senseless. It wasn't going to compete with the bigger cards regardless.

            • Anonymous Coward
            • 2 years ago

            Hah, apparently they also chop the PCIe link from x16 to x4. Crazy bastards.

            • NTMBK
            • 2 years ago

            Now we just need a “mining special” that puts 4 of them on a single card, with each GPU getting 4 lanes.

            • tipoo
            • 2 years ago

            Laptops: “What a great chip!”

            Desktops: “What a weird chip!”
