Intel’s ultrabook-bound Core i5-3427U processor

For months now, we’ve been hearing about how great Ivy Bridge ultrabooks are going to be. We’ve heard they will reach lower price points, use enclosures made of either metal or composite materials, and feature a slew of other goodies, both standard and optional, like touch-screen displays. Intel hasn’t been shy about adding to the hype, predicting price tags as low as $599 and touting 2012 as the year ultrabooks will go mainstream.

Well, we’re about to see if reality lives up to the hype, because today Intel unwraps its first ultrabook-bound Ivy Bridge processors: 22-nm chips with 17W power envelopes tailored for uber-slim, ultra-light systems.

Unlike the quad-core mobile Ivy Bridge CPUs that arrived earlier this year (which we reviewed not long ago), these newcomers are based on a new version of Ivy Bridge with half the cores and half as much last-level cache. Normally at this point, we’d tell you the basic specs of the dual-core Ivy Bridge chip, but strangely, Intel PR rep Thomas Kaminski refused to answer our questions about the chip’s transistor count and die size. Intel virtually always divulges such information up front, and its reticence here may be an indicator that the firm is salvaging problematic quad-core chips by disabling half of their cores and cache. We’ll have to crack open our review laptop in order to confirm, and we haven’t had time to do so yet. We do expect Intel to produce a natively dual-core variant of Ivy Bridge eventually, though.

Ivy Bridge DC, as Intel calls it, retains some of the amenities of its quad-core sibling. Intel has outfitted the processor with the full-featured version of its HD Graphics 4000 integrated graphics processor, or IGP. (Graphics clock speeds are substantially lower on the 17W parts, though.) Features like hardware virtualization, AES-NI, TXT, and vPro all remain fully enabled. Ultrabook-bound Ivy Bridge processors feature Turbo Boost 2.0, so clock speeds will scale dynamically based on available thermal headroom, and Hyper-Threading, so you’ll see four graphs instead of two in the Windows Task Manager. Also, of course, these puppies benefit from all of the other architectural improvements of Intel’s Ivy Bridge microarchitecture. For more details on those, be sure to check out Scott’s review of the Core i7-3770K.

Here’s a full list of the first 17W, dual-core Ivy Bridge models:

For reference, the quickest previous-generation 17W chip is the Core i7-2677M, which has a 1.8GHz base clock speed, a 2.9GHz peak Turbo speed, 4MB of L3 cache, and a $317 price tag in thousand-unit quantities. The Ivy Bridge-based Core i5-3427U is largely similar, but it has a substantially lower price tag: just $225. The i5-3427U does have a lower peak Turbo speed and less L3 cache, as well, but remember that Ivy Bridge is faster clock-for-clock than Sandy Bridge. We’d expect the two chips to perform similarly, or perhaps for the Ivy model to have a slight edge.

You don’t have to take our word for it, though. Intel sent us a prototype ultrabook with a Core i5-3427U inside, and we’ve run it through our revamped mobile benchmark suite alongside a first-gen ultrabook based on the Core i7-2677M. You’ll find the results in the next several pages.

But before we do that, let’s briefly introduce our guinea pig.

Introducing Intel’s Ivy Bridge ultrabook prototype

Intel was adamant that this prototype isn’t a production machine and shouldn’t be treated as such. But… well, we couldn’t resist snapping a couple of pictures anyway. After all, this is the first Ivy Bridge ultrabook we’ve gotten to play with, and it’s a looker:

This bad boy has a 13.3″ display (with a 1600×900 resolution), measures about 0.79″ at its thickest point, and tips the scales at a scant 3.22 lbs. Don’t let the shiny gray palm rest fool you: there isn’t a single metallic surface to be found anywhere on the outside. Still, the system feels surprisingly tough and rigid. Intel didn’t get into specifics, but we figure this might be one of those ultrabooks with composite enclosures we’ve heard so much about.

Inside that chassis dwells 4GB of DDR3 memory, a 240GB Intel 520 Series solid-state drive, and a 49.4 watt-hour battery. Connectivity includes mini HDMI, analog headphone, and dual USB 3.0 ports. SuperSpeed USB is, of course, standard on all 7-series chipsets, including this system’s UM77 Express.

According to Intel, systems similar to this one will retail for $1,000-1,100 when they hit stores. “When will that be?” you ask. Intel tells us the Ivy ultrabook launch is scheduled for June 5, but a “couple of” systems will already be out by then. I guess you can consider this the pre-launch… or something.

In any case, we hope production systems are more polished. This one had a few issues, including a buggy touchpad, odd noises coming from under the palm rest when the system was running, and a bit of play between the display bezel and LCD panel. We’ll have to forgive those oversights, obviously, since this isn’t even a pre-production system from a major vendor. It’s plastered with Intel logos and has “Ultrabook” etched in large letters on the lid.

Our testing methods

We’d like to thank Asus for sending us a Sandy Bridge-based UX31E ultrabook to compare with the Intel system. We actually reviewed (and benchmarked) the UX31E back in October of last year, but we’ve since refreshed our benchmark suite, and so we couldn’t use the old data. This time, Asus sent us a faster model featuring the Core i7-2677M, which happens to be the most comparable chip to the Core i5-3427U in the Intel reference machine.

We ran every test at least three times and reported the median of the scores produced. The test systems were configured like so:

System | AMD A8-3500M test system | AMD A10-4600M test system | Asus N56VM | Asus N53S | Asus UX31E | Intel Core i5-3427U test system
Processor | AMD A8-3500M APU 1.5GHz | AMD A10-4600M 2.3GHz | Intel Core i7-3720QM 2.3GHz | Intel Core i7-2670QM 2.2GHz | Intel Core i7-2677M 1.8GHz | Intel Core i5-3427U 1.8GHz
North bridge | AMD A70M FCH | AMD A70M FCH | Intel HM76 Express | Intel HM65 Express | Intel QS67 Express | Intel UM77 Express
Memory size | 4GB (2 DIMMs) | 4GB (2 DIMMs) | 8GB (2 DIMMs) | 8GB (2 DIMMs) | 4GB (2 DIMMs) | 4GB (2 DIMMs)
Memory type | DDR3 SDRAM at 1333MHz | DDR3 SDRAM at 1600MHz | DDR3 SDRAM at 1600MHz | DDR3 SDRAM at 1333MHz | DDR3 SDRAM at 1333MHz | DDR3 SDRAM at 1600MHz
Memory timings | 9-9-9-24 | 11-11-12-28 | 11-11-11-28 | 9-9-9-24 | 9-9-9-24 | 11-11-11-28
Audio | IDT codec | IDT codec with 6.10.0.6277 drivers | Realtek codec with 6.0.1.6537 drivers | Realtek codec with 6.0.1.6463 drivers | Realtek codec with 6.0.1.5677 drivers | Realtek codec with 6.0.1.6612 drivers
Graphics | AMD Radeon HD 6620G + AMD Radeon HD 6630M with Catalyst 12.4 drivers | AMD Radeon HD 7660G with Catalyst 8.945 RC2 drivers | Intel HD Graphics 4000 with 8.15.10.2696 drivers + GeForce GT 630M with 296.54 drivers | Intel HD Graphics 3000 with 8.15.10.2462 drivers + GeForce GT 630M with 296.54 drivers | Intel HD Graphics 3000 with 8.15.10.2559 drivers | Intel HD Graphics 4000 with 8.15.10.2725 drivers
Hard drive | Hitachi Travelstar 7K500 250GB 7,200 RPM | WD Scorpio Black 500GB 7,200 RPM | Seagate Momentus 750GB 7,200 RPM | Seagate Momentus 750GB 7,200 RPM | SanDisk U100 256GB SSD | Intel 520 Series 240GB SSD
Operating system | Windows 7 Ultimate x64 | Windows 7 Ultimate x64 | Windows 7 Professional x64 | Windows 7 Home Premium x64 | Windows 7 Home Premium x64 | Windows 7 Home Premium x64

Thanks to Asus for volunteering a quad-core Sandy Bridge laptop, as well, and thanks to AMD and Intel for providing the other machines.

We used the following versions of our test applications:

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

Per our tradition, we’re going to start off by comparing the memory subsystems of our CPUs in a few synthetic tests.

Please note that the A10-4600M, Core i5-3427U, and Core i7-3720QM have higher-clocked memory than the other two offerings. Because of the discrepancy, the results below won’t paint a clear, unadulterated picture of memory controller efficiency. But they will show us something else. You see, the A10-4600M, Core i5-3427U, and Core i7-3720QM all support faster RAM than their predecessors. (They can accommodate DDR3-1600 memory, while the A8-3500M, i7-2677M, and i7-2670QM are limited to DDR3-1333.) So we’re going to be able to see what dividends the faster memory support pays from one generation to the next.
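
As a rough reference point, here's a back-of-the-envelope calculation of the theoretical peak each memory configuration offers, assuming the usual 64-bit channels in a dual-channel setup. This is our own illustration, not something Stream or Sandra reports directly:

```python
# Theoretical peak bandwidth for dual-channel DDR3:
# effective transfer rate (MT/s) x 8 bytes per 64-bit channel x 2 channels.
def peak_bandwidth_gbs(transfer_rate_mts, channels=2, bytes_per_transfer=8):
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

for name, mts in (("DDR3-1333", 1333), ("DDR3-1600", 1600)):
    print(f"{name}: {peak_bandwidth_gbs(mts):.1f} GB/s theoretical peak")
# DDR3-1333: 21.3 GB/s  |  DDR3-1600: 25.6 GB/s
```

Real-world Stream numbers land well below those ceilings, of course, but the ratio between the two configurations is what interests us here.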

The Core i5-3427U has slightly less memory bandwidth than its higher-clocked, quad-core sibling, the 45W Core i7-3720QM. Nevertheless, the jump from DDR3-1333 to DDR3-1600 gives both Ivy Bridge CPUs an edge over their Sandy Bridge counterparts.

Next up: SiSoft Sandra’s more elaborate memory and cache bandwidth test. This test is multithreaded, so it captures the bandwidth of all caches on all cores concurrently. The different test block sizes step us down from the L1 and L2 caches into L3 and main memory.
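
Sandra's test itself is proprietary, but the idea behind a block-size sweep is simple enough to sketch. The toy example below is our own illustration (it assumes NumPy is installed, and interpreter overhead blunts the numbers at the smallest block sizes); it copies buffers of increasing size and reports the effective bandwidth, which falls off as the working set spills out of each cache level:

```python
import time
import numpy as np

def copy_bandwidth_gbs(block_bytes, repeats=100):
    """Copy a block_bytes buffer repeatedly and report effective bandwidth in GB/s."""
    src = np.ones(block_bytes, dtype=np.uint8)
    dst = np.empty_like(src)
    start = time.perf_counter()
    for _ in range(repeats):
        np.copyto(dst, src)              # reads and writes block_bytes each iteration
    elapsed = time.perf_counter() - start
    return 2 * block_bytes * repeats / elapsed / 1e9

# Step from L1/L2-sized blocks down into L3 and main memory.
for kb in (16, 64, 256, 1024, 4096, 65536):
    print(f"{kb:>6} KB: {copy_bandwidth_gbs(kb * 1024):6.1f} GB/s")
```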

The 17W Ivy and Sandy CPUs are closely matched in this test, which kind of flies in the face of what we saw in Stream. The Ivy Bridge chip does have higher bandwidth at block sizes under 128KB, presumably thanks to quicker L1 cache, but that’s the starkest difference, and it isn’t very large. We see a much greater leap from the quad-core Sandy Bridge CPU, the Core i7-2670QM, to its quad-core, Ivy Bridge-based successor. In that case, the quad-core Ivy has a ~500MHz clock speed advantage over its Sandy Bridge-based counterpart.

Sandra also includes a new latency testing tool. SiSoft has a nice write-up on it, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. We’ve also taken to reporting the results in terms of CPU cycles, which is how this tool returns them. The problem with translating these results into nanoseconds, as we’ve done in the past with latency measurements, is that we don’t always know the clock speed of the CPU, which can vary depending on Turbo responses.
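
For readers who want nanoseconds anyway, the conversion itself is trivial once you pin down (or assume) a clock speed; the sketch below just illustrates why that assumption matters:

```python
def cycles_to_ns(latency_cycles, clock_ghz):
    """Convert a latency measured in CPU cycles to nanoseconds at a given clock speed."""
    return latency_cycles / clock_ghz

# The same hypothetical 4-cycle cache hit looks different in nanoseconds
# depending on whether Turbo is engaged. (1.8GHz is the i5-3427U's base clock;
# 2.8GHz is its peak Turbo speed.)
for ghz in (1.8, 2.8):
    print(f"4 cycles at {ghz} GHz = {cycles_to_ns(4, ghz):.2f} ns")
```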

Here, the 17W Ivy chip has slightly higher cache and memory latencies than its predecessor, in terms of absolute clock cycles. Interesting.

Productivity

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so AES encryption, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.
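
If you're curious whether a given machine exposes AES-NI at all, a quick look at the CPU's feature flags will tell you. The sketch below is Linux-only and purely illustrative; on our Windows 7 test systems, a utility like CPU-Z reports the same information:

```python
# Illustrative, Linux-only check for the AES-NI feature flag in /proc/cpuinfo.
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split()
    return False

print("AES-NI supported:", has_aes_ni())
```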

7-Zip file compression and decompression

SunSpider JavaScript performance

The Core i5-3427U is quicker than the Core i7-2677M in these tests, though only by a slim margin—one that may or may not be palpable without a stopwatch on hand. Remember that the i5-3427U has a 100MHz lower peak Turbo speed than the i7-2677M, though, and that it’s $92 cheaper. Intel seems to be offering slightly more performance for less, and that’s a good thing.

It’s also worth pointing out how close both of these 17W CPUs are to AMD’s fastest 35W Trinity offering, the A10-4600M. We’ll look at graphics in a little bit, but for CPU cores, Intel looks to have a serious performance-per-watt advantage so far.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. We asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.
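
For the curious, a two-pass encode looks roughly like this from the command line. This is a hedged sketch using x264's standard flags and placeholder file names; the benchmark's own scripts handle the source clip and settings for you:

```python
import subprocess

source, stats = "input.y4m", "x264_stats.log"   # placeholder file names

# Pass 1 analyzes the clip and writes a stats file; the encoded output is thrown away.
# ("NUL" is the Windows null device; on Linux you'd use /dev/null instead.)
subprocess.run(["x264", "--pass", "1", "--bitrate", "4000", "--stats", stats,
                "-o", "NUL", source], check=True)

# Pass 2 reuses those stats to distribute bits more intelligently across the clip.
subprocess.run(["x264", "--pass", "2", "--bitrate", "4000", "--stats", stats,
                "-o", "output.264", source], check=True)
```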

The Core i5-3427U pulls farther ahead of its predecessor in our x264 video encoding test. And, again, both 17W Intel processors outrun AMD’s top 35W Trinity chip.

Accelerated applications

Trinity, Llano, Sandy Bridge, and Ivy Bridge all dedicate a substantial chunk of their die area to graphics. With the exception of Llano, they all have special-purpose video transcoding logic, as well. We sought to unleash all of those extra transistors in a few general-purpose applications, to see if the competitive picture would change at all.

LuxMark OpenCL rendering

We’ve deployed LuxMark in several recent reviews to test GPU performance. Since it uses OpenCL, we can also use it to test CPU performance—and even to compare performance across different processor types. And since OpenCL code is by nature parallelized and relies on a just-in-time compiler, it should adapt well to new instructions. For instance, Intel and AMD offer installable client drivers (ICDs) for OpenCL on x86 processors, and they both claim to support AVX. The AMD APP ICD even supports Bulldozer’s distinctive instructions, FMA4 and XOP.

A note about those missing bars in the graph. Sandy Bridge’s HD 3000 integrated graphics lack OpenCL support, so we couldn’t run LuxMark on the IGPs of the Core i7-2677M and Core i7-2670QM. Also, the AMD processors don’t support Intel’s ICD driver, so we were only able to run LuxMark on their integrated Radeon HD graphics and on their CPU cores using the AMD APP ICD. Ivy Bridge is the only processor that supports both AMD and Intel ICDs and has the ability to execute OpenCL code using its integrated graphics.
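
For reference, enumerating OpenCL platforms and devices makes the ICD situation on any given system easy to see. Here's a minimal sketch using the pyopencl bindings (an assumption on our part; LuxMark presents an equivalent device list in its own UI):

```python
import pyopencl as cl

# Each installed ICD shows up as a separate platform; each platform exposes
# whatever devices its driver supports (CPU cores, integrated graphics, or both).
for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices():
        kind = "GPU" if device.type & cl.device_type.GPU else "CPU"
        print(f"  {kind}: {device.name}")
```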

By combining its CPU cores with its OpenCL-supporting IGP, the Core i5-3427U can achieve substantially higher performance than the Core i7-2677M—87%, to be exact. The newcomer is even quicker than AMD’s A10-4600M.

WinZip 16.5

The latest version of WinZip features a parallel processing pipeline with OpenCL support. The pipeline allows multiple files to be opened, read, compressed, and encrypted simultaneously, all with hardware acceleration. Right now, though, WinZip’s OpenCL capabilities seem to be off-limits to Intel processors—again, regardless of what ICD is installed. The OpenCL switch in the WinZip settings would only appear on our AMD systems.

We tested WinZip by compressing, then decompressing, a 1.17GB directory containing about 150 small text and image files, a couple dozen medium-sized PDF files, and 14 large Photoshop PSD files. We timed each operation with a stopwatch.

The results of our WinZip compression test are roughly in line with what we’ve seen so far: Ivy beats Sandy, Intel quad-core beats Intel dual-core by a big margin, and the A10-4600M doesn’t really distance itself from the 17W Intel CPUs—not even with its IGP chipping in via OpenCL.

Things are a little more puzzling in the decompression test. There, the Sandy ultrabook is oddly slow, while the Ivy ultrabook nips at the heels of its quad-core big brother. One would suspect a storage bottleneck, but both ultrabooks have solid-state drives, while our other machines have 7,200-RPM mechanical hard drives. Strange.

CyberLink MediaEspresso

This user-friendly video transcoder supports AMD’s VCE and Intel’s QuickSync hardware transcoding blocks. Those are effectively black boxes without much programmability, so their output isn’t necessarily comparable—and neither is their performance, strictly speaking. From a practical standpoint, though, it’s helpful to see which solution will transcode videos the quickest. So that’s what we’re going to do.

For our test, we fed MediaEspresso a 1080p version of the Iron Man 2 trailer, and we asked it to convert the clip to a format suitable for the iPhone 4. We tested with full hardware acceleration as well as in software mode. Where the setting was available, we selected encoding speed over quality. The A8-3500M was only run in software mode, since it lacks hardware H.264 encoding.

With QuickSync, the Core i5-3427U is nearly as fast as its 45W quad-core sibling. Pretty impressive. Encoding times balloon up in software mode, but the i5-3427U still beats its forebear, the i7-2677M, handily.

Note that the different encoding methods didn’t yield identical results. We didn’t see much of a difference in output image quality between VCE and QuickSync, but the output files had drastically different sizes. QuickSync spat out a 69MB video, while VCE got the trailer down to 38MB. (Our source file was 189MB.) Using QuickSync in high-quality mode extended encoding times slightly, but the resulting file was even larger—around 100MB. The output of the software encoder, for reference, weighed in at 171MB.

The Elder Scrolls V: Skyrim

Our Skyrim test involved running around the town of Whiterun, starting from the city gates, all the way up to Dragonsreach, and then back down again.

We tested at 1366×768 using the “medium” detail preset.

Now, we should preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

For example, imagine one hypothetical second of gameplay. Almost all frames in that second are rendered in 16.7 ms, but the game briefly hangs, taking a disproportionate 100 ms to produce one frame and then catching up by cranking out the next frame in 5 ms—not an uncommon scenario. You’re going to feel the game hitch, but the FPS counter will only report a dip from 60 to 56 FPS, which would suggest a negligible, imperceptible change. Looking inside the second helps us detect such skips, as well as other issues that conventional frame rate data measured in FPS tends to obscure.
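
The arithmetic behind that example is easy to verify; here's a quick sketch (our own illustration) that builds one such second of frame times and shows how little the FPS average budges despite the very obvious spike:

```python
# One hypothetical second: ~54 smooth frames, one 100 ms hitch, one 5 ms catch-up frame.
frame_times_ms = [16.7] * 54 + [100.0, 5.0]

seconds = sum(frame_times_ms) / 1000            # about one second of gameplay
fps = len(frame_times_ms) / seconds             # ~56 FPS -- barely below 60
worst = max(frame_times_ms)                     # 100 ms -- the hitch you actually feel

print(f"{fps:.0f} FPS average, worst frame {worst:.0f} ms")
```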

We’re going to start by charting frame times over the totality of a representative run for each system—though we conducted five runs per system to make sure our results are solid. These plots should give us an at-a-glance impression of overall playability, warts and all. (Note that, since we’re looking at frame latencies, plots sitting lower on the Y axis indicate quicker solutions.)

Frame time (ms) | FPS rate
8.3 | 120
16.7 | 60
20 | 50
25 | 40
33.3 | 30
50 | 20

We can slice and dice our raw frame-time data in other ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. Though this metric doesn’t account for irregularities in frame latencies, it does give us some sense of typical performance.

Next, we can demarcate the threshold below which 99% of frames are rendered. The lower the threshold, the more fluid the game. This metric offers a sense of overall frame latency, but it filters out fringe cases.

Of course, the 99th percentile result only shows a single point along the latency curve. We can show you that whole curve, as well. With integrated graphics or single-GPU configs, the right-hand side of the graph—and especially the last 10% or so—is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

Finally, we can rank solutions based on how long they spent working on frames that took longer than 50 ms to render. The results should ideally be “0” across the board, because the illusion of motion becomes hard to maintain once frame latencies rise above 50 ms or so. (50-ms frame times are equivalent to a 20 FPS average.) Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario.
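
To make those metrics concrete, here's roughly how each one falls out of a list of per-frame render times. This is a simplified sketch of our post-processing (it assumes NumPy is available, and the real scripts also juggle multiple runs per system), reusing the hypothetical second from earlier:

```python
import numpy as np

def summarize(frame_times_ms):
    t = np.asarray(frame_times_ms, dtype=float)
    avg_fps = 1000.0 * len(t) / t.sum()            # conventional FPS average
    p99 = np.percentile(t, 99)                     # 99% of frames render faster than this
    beyond_50 = np.maximum(t - 50.0, 0.0).sum()    # total ms spent beyond the 50 ms threshold
    return avg_fps, p99, beyond_50

avg_fps, p99, beyond_50 = summarize([16.7] * 54 + [100.0, 5.0])
print(f"{avg_fps:.0f} FPS | 99th percentile: {p99:.1f} ms | time beyond 50 ms: {beyond_50:.0f} ms")
```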

The new, ultrabook-bound Ivy chip has a quicker IGP than its predecessor, but that doesn’t count for much in Skyrim at these settings. Both solutions trudge along with high average frame times and quite a bit of variance. From a seat-of-the-pants perspective, the two systems feel choppy, stuttery, and not really playable. Only the A10-4600M, with its Radeon HD 7660G integrated graphics, delivers a reasonably smooth, playable experience in this scenario.

Batman: Arkham City

We grappled and glided our way around Gotham, occasionally touching down to mingle with the inhabitants.

Arkham City was tested at 1366×768 using medium detail and medium FXAA, with v-sync disabled.

Looking at our frame time plots for Arkham City, we can see that the Core i5-3427U falls surprisingly close to the A10-4600M, and that its frame latencies exhibit almost as little variance.

Analyzing the data more thoroughly confirms that the Core i5-3427U’s IGP performs much better here than it did in Skyrim, but it also shows the 35W AMD chips are faster overall by a decent margin. The 17W Ivy processor does spend less time above 50 ms than the 35W A8-3500M, but as we saw in our Trinity review, the A8’s score was skewed by a couple of big latency spikes.

Subjectively speaking, the Core i5-3427U seems to sit on the threshold of playability in Arkham City—the game is almost fluid enough, but maybe not quite. At least it’s an improvement over the previous-gen Core i7-2677M, which suffers from a larger number of latency spikes and higher average frame times.

Battlefield 3

We tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

BF3 wasn’t really playable at anything but the lowest detail preset using these IGPs—so that’s what we used.

Even at the lowest detail preset with no antialiasing, the Ivy ultrabook struggles with Battlefield 3. Frame latencies aren’t terribly inconsistent—not compared to the A10-4600M, at least—but they’re high across the board, which makes the game feel sluggish and laggy. The game does render accurately, though, which is more than we can say for the Core i7-2677M. With that processor, BF3 performs even worse and exhibits ugly visual artifacts.

Battery run times

We tested battery run times twice: once running TR Browserbench 1.0, a web browsing simulator of our own design, and again looping a 720p Game of Thrones episode in Windows Media Player. (In case you’re curious, TR Browserbench is a static version of TR’s old home page rigged to refresh every 45 seconds. It cycles through various permutations of text content, images, and Flash ads, with some cache-busting code to keep things realistic.)

Before testing, we conditioned the batteries by fully discharging and then recharging each system twice in a row. We also used our colorimeter to equalize the display luminosity at around 100 cd/m². That meant brightness levels of 70% for the Llano machine, 45% for the N53S, 40% for the Trinity system, 25% for the N56VM, 25% for the Ivy ultrabook, and 20% for the UX31E. The N53S and N56VM had larger panels than the other machines, though, which might have affected power consumption.

We should note one other caveat: these machines didn’t all have the same battery capacities. The batteries in the two quad-core Intel notebooks both had 56 Wh ratings. The Llano laptop had a 58 Wh battery, and the Trinity system’s battery was rated for 54 Wh. As for our ultrabooks, the Ivy system was rated for 49.4 Wh, and the Asus UX31E had a 50 Wh battery rating.

Our Ivy ultrabook managed to stay up 22 minutes longer than the Sandy Bridge ultrabook in our web-surfing test, but it shut down one minute earlier in our video playback test. Overall, this looks like a win for Ivy Bridge: even with slightly higher CPU performance and substantially better integrated graphics performance, battery life is just as good as or better than with Sandy Bridge.

Conclusions

So, there you have it. In its ultrabook-friendly incarnation, Ivy Bridge gives you higher performance and longer battery run times for less money than Sandy Bridge.

Not exactly a tough sell, is it?

That said, I’m a little disappointed Intel didn’t give the 17W Ivy Bridge variant a little more graphics oomph. The quad-core model’s higher IGP clock speeds seem to do wonders in the games we tested, but alas, the dual-core Ivy is quite a bit slower across the board—and that means games generally aren’t playable at the same settings. It’s a shame, really, and it rules out Ivy Bridge ultrabooks as compelling systems for on-the-road gamers—unless manufacturers sneak in some discrete GPUs.

Our numbers have also given us a rough preview of how Intel’s 17W Ivy Bridge CPUs might compare to AMD’s upcoming 17W Trinity APUs. The AMD parts may have better graphics performance (though their GPUs will be clocked lower, too), but it’s looking like AMD has little chance of catching up on the CPU front—not when 17W Ivy Bridge matches or outruns the fastest 35W Trinity more often than not. AMD could end up having to offer its 17W parts at a substantial discount in order to be competitive. We’ll look at those when they come out.

For now, Ivy Bridge ultrabooks are the obvious choice for folks seeking solid performance in ultra-slim laptops. Users running previous-gen, Sandy Bridge ultrabooks probably don’t need to worry about upgrading, but I expect everyone else will at least want to take a look at the Ivy-based ultrabooks that come out over the next few weeks and months.

Comments closed
    • ShadowEyez
    • 7 years ago

    Part of the reason Macbooks are so popular is the “all-in-one” marketing and approach – software, hardware, “eco-system” that resonates with people. Not Giga-flops/watts/$ ratios.

    As intel is trying to compete here with the Ultrabook, make 2-3 REALLY good models, (not zillions of CPU skus with tons of OEM partners, and stressing tech numbers) and stress what one can do with them. And lose the plastic shells.

    If you can’t beat them, become them 🙂

    Or not.

    Make an ultrabook with wireless power (not a pad like those iPhone chargers a few years back, wireless like wifi) and people will come.

      • kamikaziechameleon
      • 7 years ago

      The PC market is run by businesses that don’t appear to really know anything about sales/marketing or the mainstream consumer. Seems all they do is cater to the frugal/cheap/poor consumer or the high-end nerd and not the balanced person who represents the heart of the market. Microsoft is slowly losing tons of market share to competitors who have an honest-to-goodness philosophy informing their business strategies.

    • jdaven
    • 7 years ago

    The gem in the whole Trinity mobile product line is the 25w part. This part has all SP and CPU modules with just a modest decrease in clocks. There were no rumors on this part (just the 17 and 35w parts). It seems that this SKU might have been designed specially for someone.

    • maroon1
    • 7 years ago

    17w trinity has horrible specs. It has only 1 module (with lower clocks than A10-4600m), 256 shaders (compared to 384 shaders of A10) and lower GPU clocks
    [url<]http://cdn.overclock.net/a/a8/582x283px-LL-a8587a27_TRINITY-3.jpeg[/url<]

    • My Johnson
    • 7 years ago

    Hmmm. 17W is all well and good, but I desire minimal discrete graphics as well, or AMD’s IGP. So, to me, it’s not really a 17W device if it needs to be optioned with an extra component.

    • indeego
    • 7 years ago

    The Luxmark OpenCL rendering graph isn’t very clear at all.

    • NeelyCam
    • 7 years ago

    So, can we finally put to rest those silly claims that Trinity is somehow more power efficient than IvyBridge?

      • OneArmedScissor
      • 7 years ago

      If efficiency is a measure of work done, then it would be difficult for Trinity to beat Ivy Bridge. It still might in certain scenarios, so that just depends on what you’re doing.

      However, that ignores what you’re doing with a laptop to begin with. Battery life is not the same as efficiency. I think most people are more concerned with how long their laptop will last when the CPU isn’t really doing work.

      There should be a roughly equal floor there, which both Trinity and Ivy Bridge can potentially reach. Even Core 2 did it, without power gating or high-k gates for the northbridge.

    • barich
    • 7 years ago

    Why do so many Ultrabooks come with only 4 GB of RAM? RAM is cheap, and they’re usually not upgradable.

    • OneArmedScissor
    • 7 years ago

    Why does nobody test the 25w mode? That’s the most interesting part of ULV Ivy Bridge.

    You can look up how much the base clock speed changes, but I’m more curious about how that affects turbo boost. It could make “ultrabooks” roughly equal to full power laptops. The ULV parts are only a few hundred MHz slower than the full power parts at the stated peak for 17w.

    For example, the i7 3520M is 3.6 GHz peak, and the i7 3667U is 3.2GHz peak. But the 3667U goes from 2 GHz base to 2.5 GHz base in 25w mode!

      • shank15217
      • 7 years ago

      Yea I like to see the 25w Trinity take on the 25w Ivy as well.

    • link626
    • 7 years ago

    now we have a clearer picture about the HD4000 vs llano and trinity

    it will give way lower performance paired with core i3/i5.

    • chuckula
    • 7 years ago

    Is this the ultimate gaming machine?

    Heck no! No ultra[book|thin] ever will be the ultimate gaming machine simply because a bigger device can always have a discrete GPU crammed in.

    But at the same time the HD 4000 graphics are putting in a very strong show for such a small power envelope. If the rumors about Haswell are even half true then Intel’s mobile GPU is going to reach the “good enough” level for even a large fraction of gamers by next year. As this review shows, they are already at “good enough” levels for casual gaming now.

      • grantmeaname
      • 7 years ago

      Tell that to the Zenbook Prime UX32VD, which had a GT 620M. Yeah, that’s somewhat low-end for a discrete mobile GPU, but it should be competent enough… and way better than the HD 4000 graphics at 350MHz.

    • DavidC1
    • 7 years ago

    Not creating flamebait, just putting it out there.

    Normalized battery life(Browsing/Video playback):

    Intel Core i5-3427U test system – 6.86W/8.98W
    Asus UX31E – 7.25W/9.09W
    AMD A10-4600M test system – 8.44W/12.6W
    AMD A8-3500M test system – 13.2W/14.5W
    Asus N56VM – 12.7W/15.6W
    Asus N53S – 14.7W/16W

      • deathBOB
      • 7 years ago

      TR, can you normalize the battery tests as this poster has done? Also, can you test the laptops both with and without displays (i.e. turn them off or use an external monitor)? No one seems to provide good hardware only battery performance which is probably the most useful stat considering the abundance of notebook sizes and options

        • Sunburn74
        • 7 years ago

        Agreed. At the very least, when discussing battery life, post the size of the battery next to the laptop in the chart. It’s very misleading otherwise.

      • derFunkenstein
      • 7 years ago

      Spectacular. Some sort of normalization is what I came here to suggest as well.

        • NeelyCam
        • 7 years ago

        The strange thing is that TR usually shows normalized values..

        And I’m still waiting for task energy… another one of their usual plots.

      • mattthemuppet
      • 7 years ago

      great job! Although the SB>IB improvement is there and both are clearly the winners, I’m more impressed by the changes from the A8 to A10.

    • HisDivineOrder
    • 7 years ago

    Looks like Haswell might finally get Ultrabooks up to gaming snuff with their improved GPU for modern games. That’ll truly be something to see.

    Of course, later next year when talk starts of the next gen consoles, there is the potential for the integrated GPU to once again become less than is required for the modern gaming experience, but there’s always the chance that MS and Sony are about to pull a Nintendo and try to make modern gaming mostly like it is today except at 1080p instead of sub-720p.

    Either way, I’d love to see the day when Intel integrated GPU’s (and equivalent AMD too) are enough to play modern games at 720p on your ultrathin, ultrabook, thin ‘n light, etc. Then again, personally I’d love it if nVidia and AMD would release a local game streamer so I could stream from my desktop/server powerhouse to my ultrathin and it be pushed about as hard as when it’s running a youtube video.

    Perhaps Steam could do it.

    I like to imagine a day when you could choose between playing a game on low or medium settings on your ultrathin or super high settings streamed from your Intel Extreme hexa-core, quad-SLI, 32+ GB RAM quad-channel behemoth over a Wireless AC connection. I imagine I’d sit outside with my matte screen, prop my feet up, listening to the birds and the distant sound of lawns being mowed as I mowed down a new group of Russian ultranationalists in a Call of Duty game. Pausing for a moment, I’ll glance around and wonder, “How ever did I live when I could only game indoors like a pale-skinned shut in?”

    Of course, the obsession with the THIN part of the ultrathin/ultrabook doesn’t do much for improving the battery life we should expect.

    • elmopuddy
    • 7 years ago

    Great CPU’s, but confusing naming again.. I wish they would have stuck with i3=2/4 cores. i5=4 and i7=4/8 , now there are i5 and i7 dual cores, only difference being speed and cache size. Marketing staff win once again. Sigh.

      • LoneWolf15
      • 7 years ago

      Mobile i5 was 2/4 for SB even previously, Mobile i7 could be 2/4 as well.

    • novv
    • 7 years ago

    Nice review, but one important point is missing: the battery life test is not a fair comparison between Intel and AMD. The AMD system was using a 7,200-RPM hard drive and the Intel one was using an SSD. Other tests where SSD performance is crucial are in Intel’s favour by a much larger gap than normal. When AMD sent the A10-4600M-powered laptop, they sent it with an SSD (this is what Techreport is saying), but for a fair comparison in the review it was fitted with a 7,200-RPM hard drive. But when Intel is sending a new platform, I guess “fair” is out of the question!

      • UberGerbil
      • 7 years ago

      When it comes to power consumption there’s actually not as much difference between 2.5″ mobile HDs (even 7200rpm) and SSDs as many people think.

      • brucethemoose
      • 7 years ago

      The power consumption test doesn’t tell us much anyway. At idle, the power consumption of the LCD is significant, and a huge unknown.

      • DavidC1
      • 7 years ago

      You do have a point about SSDs and power consumption, but not all SSDs are created equal. From Anand’s test with the A10-4600M: [url<]http://www.anandtech.com/show/5831/amd-trinity-review-a10-4600m-a-new-hope/8[/url<] "One set of tests we alluded to earlier: the charts show Trinity with a Samsung 830 SSD, but we also ran tests with an Intel 520 SSD. Idle battery life dropped to 476 minutes (an 8% decrease), Internet battery life checked in at 371 minutes (down 8% again), and H.264 battery life stayed nearly the same at 217 minutes (down less than 3%). If battery life is one of your primary concerns, remember: all SSDs are not created equal!" Sandforce controller based SSDs aren't known for good battery life, and in mobile, 5400/7200RPM drives might be pretty efficient. You do gain with some SSDs, like the Intel and Samsung controller SSDs, but the Intel SSD 520 isn't an Intel controller based one, its Sandforce. The SSD 520 is there for performance, not battery life.

    • Alchemist07
    • 7 years ago

    You know you can normalise the battery life tests by dividing the rating of the battery by the number of hours (or vice versa)?

    i.e. 56 Wh battery lasts 8 hours. 56/8 = 7 Wh consumption per hour (lower is better)

      • Hattig
      • 7 years ago

      Probably should use an external monitor and disable the laptop’s screen too if you’re doing a CPU assessment rather than a system assessment. That’ll get rid of screen differences (larger/brighter screens use more power, etc).

        • Arag0n
        • 7 years ago

        techreport has a long way to go reviewing laptops and mobiles. Their area of expertise is CPU and GPU mostly for desktops. They should be learning soon those tricks.

          • grantmeaname
          • 7 years ago

          I think Cyril understands division.

        • grantmeaname
        • 7 years ago

        Yeah. And switch in all of the same brand RAM at the same settings, and the same storage in every computer. And so on.

        That wouldn’t add very much to the review for the ton of work that it would be. Note that Cyril only draws qualitative conclusions from those graphs anyways, noting that the i5-3000 series has slightly better power consumption than the i7-2000 series it replaces and going no further.

    • Visigoth
    • 7 years ago

    Now I’d like to see a comparison of Asus’ Zenbook Prime vs Lenovo’s X1 Carbon, the flagship notebooks of these fierce competitors! These benchmarks (build quality, performance, battery life etc.) should be far more interesting, as they’re built for an audience with high expectations.

    • Hattig
    • 7 years ago

    Interesting that a 35W Trinity on web browsing gets nearly the same battery life as a 17W IB. Then again it’s hardly testing the system.

    But I’m not impressed with the battery life for video playback on the Trinity, given that a 720p stream isn’t exactly taxing on a modern video decoder, and should leave the majority of the CPU and GPU idle.

    Where’s the gaming battery life test?

    In the end if Trinity 17W shows up in $600-$700 “Ultrabooks” and IB 17W is only in $900+ Ultrabooks, then the Trinity systems will be very attractive, offering more than enough CPU performance for most uses, decent battery life (should compare very favourably with this IB, given that the 35W Trinity compares pretty well), and good graphics performance.

    Why do I need good graphics performance in a laptop? Well, I’ve just become a father, so I need to spend a lot of time when I’m at home in the room with the baby, rather than being in the room with the PC!

      • bitcat70
      • 7 years ago

      Congratulations!

        • Hattig
        • 7 years ago

        Thanks 🙂
        Life’s very busy nowadays, by the time I get some free time I’ll probably be buying 14nm ARM based MacBooks or something! :s

        • crabjokeman
        • 7 years ago

        I was going to offer my deepest sympathies…
        Must be a matter of perspective.

      • shank15217
      • 7 years ago

      Why was he downrated?

        • barich
        • 7 years ago

        Perhaps because the tested Trinity system has a bigger battery than the Ultrabook?

          • sweatshopking
          • 7 years ago

          Who cares? If it performs similar, what’s the problem.

            • travbrad
            • 7 years ago

            Because there will be a a lot more than 1 laptop available using this CPU? This was above all else a CPU review, not a laptop review.

            • Sunburn74
            • 7 years ago

            Anandtech normalizes the results to battery size (an time/battery size ratio). The ivy bridge chip when battery size is normalized overwhelmingly gives better battery life.

      • demani
      • 7 years ago

      Heh-you don’t yet realize that being in the room with the baby won’t mean “have time to game on my laptop” 😉 You will have plenty of other non-digital things to do (including emptying a trashcan that is going to stink to high heaven).

        • Hattig
        • 7 years ago

        I’ve purchased a Nappy Disposal System, that so far has been keeping the smells to a minimum and being quite convenient. It didn’t cost that much either. Maybe it’s the AMD of Nappy Disposal Systems, but it’s good enough for now.

    • Arag0n
    • 7 years ago

    17W AMD APU should have around 60% +/-5% of the performance of the top-tier APU model, so both IGPs may end up at similar performance points. That would be a WIN for AMD, since they only need to compensate for the lack of CPU performance with price, but it puts the next APU’s IGP performance crown, or even parity, at risk in the 17W segment. Sure, Intel is working at 22nm and AMD at 32nm, so the problem is the manufacturing process; AMD could have more cores or higher frequencies, but the fact is they don’t have 22nm manufacturing, and it will take time till it comes. On the other hand, Intel won’t go to 14nm next year, so that gives AMD a break of sorts. Sure, we are reaching the physical limit for silicon transistors, but I’m not sure AMD can survive those 3-4 years of advantage that Intel has in the manufacturing process. AMD’s new APUs are using technology that Intel was using in their second-generation Core iX series, but they are fighting third- and fourth-generation Core iX. Compared to second generation, the APUs are pretty good, so I’m pretty sure that architecture-wise AMD is not in a bad position, but they need the manufacturing process to catch up eventually or, at least, not widen the difference, to keep competitive.

      • chuckula
      • 7 years ago

      [quote<]17W AMD APU should have around 60% +/-5% performance of top tier APU model, so both IGP may end at similar performance points. That would be a WIN for AMD[/quote<] Oh rly? Moving the goalposts a bit in anticipation of the low-power Trinity parts? It used to be that 17 watt Trinity would have a GPU that annihilated anything that Intel will make this decade... now we've fallen back to the "at least it's cheaper!" arguments of old....

        • Peldor
        • 7 years ago

        Yeah, I’m not quite sure how AMD gets an all-caps WIN from “similar” IGP performance. It might be a win for consumers who get to buy cheaper chips, but it isn’t going to help AMD all that much.

        • Arag0n
        • 7 years ago

          Most people here, after seeing the HD4000, started to claim that AMD was going to lose the crown in the 17W segment….. So, this review proves they didn’t, but the margin is pretty narrow. I would still bet on AMD for better image quality and better drivers even at the same FPS. Average FPS is an important point, but not the only one; as long as AMD is in the same range as or above Intel, they still have a winner. But if AMD had 85% of the performance, would drivers and better image quality compensate for it? Doubtful.

          • chuckula
          • 7 years ago

          Nice revisionist history there. I’ve been called all sorts of names here for saying exactly what you said in your first post.. that 17 watt Trinity & IB will have competitive GPU performance. The difference is that I said it 6 months ago before it was “cool”. I can’t think of anyone who has consistently said that IB at 17 watts would massively outperform Trinity at 17 watts. Instead, it’s been the exact opposite.

            • NeelyCam
            • 7 years ago

            You’ve said it would outperform Trinity at 17W. I’ve said that it will lose to Trinity at 17W (and get killed by Trinity at 35W).

            • chuckula
            • 7 years ago

            I’ve said it will be [i<]competitive[/i<] with Trinity at 17W over and over again and I stand by that now too just like I did last year when everyone on this site said it was physically impossible for Intel to improve its IGP. Especially considering that 17 watt Trinity parts will only have one module activated and running at lower clock speeds, many apps that could theoretically take advantage of Trinity's GPU may be CPU bound anyway. At the end of the day, performance is performance and tradeoffs are tradeoffs.

            • Arag0n
            • 7 years ago

            I remember the hype of people saying that the desktop HD4000 shown months ago was the same HD4000 that was going to be in the 17W CPUs and that it would be awesome; no one ever worried about reduced clocks or anything then. Trinity will kill the desktop variant, will have a comfortable lead over 35W and 25W, and will be competitive at 17W. Some months ago, not even years, it was hard to believe AMD being competitive at low-power CPU ranges.

            • cosminmcm
            • 7 years ago

            But it won’t be competitive at 17W. Ivy 17W beats Trinity 35W in processing power. With Intel 4000 graphics being at least equal at 17W, if not better, Trinity 17W will lose badly in everything else.

            • NeelyCam
            • 7 years ago

            [quote<]With Intel 4000 graphics being at least equal at 17W, if not better[/quote<] See, that's what we were discussing - not the CPU performance (it's not even a contest). I claimed Trinity will beat IB in graphics at 17W. Chuckula said IB would beat Trinity (even though now he claims he said it'd be 'competitive') Why do you think Intel HD4000 can beat Trinity at 17W? Did you not see the 99percentile plots?

            • cosminmcm
            • 7 years ago

            I think the average frame rate is more important. That’s what matters to me.
            At 1366×768 resolution the CPU will have a very important role, even with a weak graphics part. With only one core (+ the equivalent of HT) and lower clocks than Ivy, there is no way that the little Trinity can do well in games.
            At 17W Ivy delivers about 70% of the gaming performance of its 45W brother
            [url<]http://www.anandtech.com/show/5878/mobile-ivy-bridge-hd-4000-investigation-realtime-igpu-clocks-on-ulv-vs-quadcore[/url<] But I don’t think that Trinity at 17W can offer more than even 50% of its 35W variant (and I am optimistic when I think that), not only because of the lower clocks and fewer shaders, but also because of the very poor processor that it has. A single module will just not be enough.

            • Peldor
            • 7 years ago

            That’s an interesting piece over at Anandtech. I’m somewhat surprised the GPU is able to clock as high as it is at 17W.

            • cosminmcm
            • 7 years ago

            Let’s see how the 17W Trinity does percentage wise compared to the big A10.

            The big Trinity is X.
            At base clocks the little Trinity will be (X – 33% shaders) – approx. 33% frequency.
            In absolute terms it will be X * 0.66 * 0.66 = 0.44X.

            At turbo clocks the little Trinity will be (X – 33% shaders) – approx. 38% frequency.
            In absolute terms it will be X * 0.66 * 0.62 = 0.41X.

            That is only judging by the graphics numbers. But the CPU will be slower too, and with only one module. And also there is the chance that the higher wattage part will turbo more often and to a greater percentage than the 17W part.
            Of course it doesn’t mean that by reducing the number of shaders there will be a proportionate drop in frame rate, but the little Trinity will really be a much, much poorer performer than its 35W brother. Or at least that is what the numbers say.
            And because of the small resolution used (gaming at more than 1366×768 is not reasonable on those machines) the CPU will have an important role, and there is no contest at that.

    • dragosmp
    • 7 years ago

    Great review and thumbs up for getting it out before anyone else!

    At the 17W level it really seems Intel did a good bit of progress compared to the previous generation, I only hope they’ll be available soon. Afaik IB quads are still quite rare, not every notebook vendor refreshed their inventory and A10s are practically nonexistent.

      • NeelyCam
      • 7 years ago

      [quote<]A10s are practically nonexistent.[/quote<] Paper launch... again?

    • MadManOriginal
    • 7 years ago

    Man, first there’s the Trinity pre/review non-production notebook everyone got, now there’s this notebook…paper launches or ‘pre-launch’ reviews weeks before a product is out. Darnit, just send stuff out when it’s actually available!

    • Johannesburg
    • 7 years ago

    Hmmm… If the power gating tech is supposed to work as advertised, would it even matter if the cores were present but disabled? Maybe it would have an impact on per-unit cost, but even a half-dead core should halve the core wattage (IGP, memory controller, and whatever else aside).

      • dragosmp
      • 7 years ago

      The power consumption isn’t changed whether you physically have 2 cores or 2 out of 4 cores enabled. One could imagine some parasitic circuits closing at the very border of the disabled cores, but they should be so close to zero that they wouldn’t be visible at the W rating level (probably not even at the mW level).

      Otherwise, from a TDP point of view you’re not quite right, because the IGP, ring, and IMC are about half the 4-core die, so by disabling 2 cores and the corresponding caches you only remove (at iso-W/surface) about 25% of the TDP, taking you from 45W down to about 33W. The rest of the way to 17W comes from underclocking, undervolting, die harvesting, and more aggressive power saving.

    • pedro
    • 7 years ago

    Second post?

      • LoneWolf15
      • 7 years ago

      Second verse, dumb as the first.

    • pedro
    • 7 years ago

    Yeh baby!

      • flip-mode
      • 7 years ago

      Best 1st post ever. Can we break -100?

        • NeelyCam
        • 7 years ago

        Let’s do it!

          • superjawes
          • 7 years ago

          -20 now.

          Well on our way!

            • cygnus1
            • 7 years ago

            Vote No for Pedro!

            • Duck
            • 7 years ago

            Looks like it’s not even going to break -50.

            • pedro
            • 7 years ago

            Yeh baby 2: The Return!

            • NeelyCam
            • 7 years ago

            Yeah – I would’ve been at -70 by now

        • pedro
        • 7 years ago

        I’ve gotta say, I’m rather proud of it!

          • crabjokeman
          • 7 years ago

          So is the guy that comes in 5th in a Special Olympics event (though his pride is far more justified than yours).
