Samsung’s Galaxy Note 4 with the Exynos 5433 processor

After our look at the iPhone 6 Plus, the next logical step was to spend some time with its toughest competitor, Samsung’s Galaxy Note 4. Happily, we’ve managed to do so—but that’s not the whole story. The version of the Note 4 that we’ve been poking at in Damage Labs is something special.

Most of the world gets a variant of the Note 4 based on the familiar Qualcomm Snapdragon 805 system-on-a-chip (SoC), a fine chip that’s getting a little long in the tooth. In Samsung’s home country of Korea, though, the firm ships a different variant of the Note 4 based on the Exynos 5433 SoC from Samsung’s System LSI division. This brand-spanking-new SoC is manufactured using the latest 20-nm HKMG fabrication process from Samsung’s chip-making operation, and it’s a showcase for the newest CPU and graphics technology from ARM. With eight 64-bit CPU cores and a 64-bit Mali-T760 GPU, the Exynos 5433 could make this version the fastest and most capable Note 4—and it gives us some quality time with the Cortex-A53 and A57 CPU cores that will likely dominate the Android market in 2015.

Wrapped around this intriguing SoC is an astounding, premium phone-tablet hybrid that goes toe to toe with the iPhone 6 Plus. It includes one of the best displays we’ve ever seen—anywhere. Naturally, we’ve spent some time looking at most aspects of the Note 4.

Samsung Galaxy Note 4 (Model SM-N910K)
SoC: Samsung Exynos 5433
Display size & resolution: 5.7″ 2560×1440
System RAM: 3GB LPDDR3
Flash storage capacity: 32GB
Primary camera resolution: 16 megapixels (5312×2988)
Cellular: 2G GSM, 3G UMTS, 4G LTE
Wi-Fi: 802.11a/b/g/n/ac, 2.4GHz + 5GHz
Other connectivity: Bluetooth 4.1, NFC, MHL, ANT+
Battery: 3220 mAh, replaceable
Operating system: Android 4.4.4 with TouchWiz UI

That said, you can’t buy this model of the Note 4 on these shores, so I won’t claim this is a complete product review. For instance, I couldn’t muddle through the menus for all of the features, some of which seem to rely on software supplied by Korean carrier Olleh. They don’t switch to English text when the rest of the phone does. All indications are that this version of the Note 4 is essentially the same hardware as the Qualcomm-based version wrapped around a different motherboard. Still, some features won’t carry over. For example, I don’t believe this version of the phone supports quick-charge battery tech, which is a Qualcomm exclusive.

We’ll look at this version of the Note 4 closely, but keep in mind that you may not get the same experience out of the variant of the Note 4 sold in North America.

Design and build quality

Samsung has built the Note 4 with premium everything, from the Corning Gorilla Glass on the device’s front to the metal chassis and high-grade textured plastic on its back. The in-hand feel is excellent, and all of the buttons are well-placed, with solid, clicky feedback when pressed.


Yeah, I need lotion

Samsung Galaxy Note 4
Height x Width x Depth: 153.5 x 78.6 x 8.5 mm
Weight: 6.21 oz / 176 g
I/O ports: microUSB connector, 3.5mm headphone jack
Expansion: microSD slot, battery bay, Micro-SIM slot
Other: Fingerprint sensor, stylus
Available colors: Charcoal black, frost white, bronze gold, blossom pink

This is, of course, a Really Large Phone. Heck, it’s four grams heavier and a couple of millimeters thicker than the ridiculously large iPhone 6 Plus, with a slightly larger 5.7″ display. The one dimension in which the Note 4 is smaller is important: vertically, where the 6 Plus is a handful of millimeters taller. Neither phone is gonna fit easily into every front pocket, though.

The Note 4 knows one trick that will confound the competition. Slide a fingernail into a small slit on the device’s back cover, run it around the perimeter of the device, and the entire plastic back of the phone pops off with relative ease. Doing so reveals a swappable battery, the Note 4’s Micro-SIM slot, and a microSD slot that can host up to 128GB of additional flash storage. This sort of flexibility is appealing, but I’d rather not have to pull off the back to swap batteries on a daily basis. The process is a little too tedious, and the plastic rear panel feels a little too fragile for everyday access.

The Note 4’s home button houses a fingerprint reader for biometric user authentication. Unlike Apple’s Touch ID sensor, though, this one requires a swipe of your digit across the surface of the button. I set it up with my thumbprint and found out that I’m evidently a clumsy swiper. I’d usually have to swipe several times in order to log in successfully. I can’t imagine using the fingerprint reader in place of a numeric PIN or Android’s trace-a-pattern unlock mechanism in everyday use. Those other methods just seem quicker.

Combined with the Note 4’s NFC capability, the fingerprint reader should be capable of supporting payment systems like Google Wallet. I think you’d want to practice your swipe technique before uncorking a payment attempt in line at Target, though.

Notice the stylus peeking out of its cradle there, as well. Pen-based input has always been one of the Note 4’s calling cards. I’ll admit I didn’t spend much time with pen input on this version of the Note, in part because I couldn’t read some of the Korean-language software prompts. I’m intrigued by pen input, though, and wish I had something similar to use for taking notes at trade shows.

The Exynos 5433 SoC

We know the major components of the Exynos 5433 SoC thanks to the Note 4’s public specifications, but quite a few of the details are still mysterious. When asked, Samsung’s System LSI confirmed that the chip is manufactured on a 20-nm fabrication process, but the company declined to answer our other questions about this SoC’s specifics. At least this TechInsights teardown gives us a look at the Note 4’s motherboard. Happily, ARM has been fairly forthcoming about some of the major components used in this chip, as well.

Samsung Galaxy Note 4
SoC: Samsung Exynos 5433
Manufacturing process: Samsung 20 nm
Die size: 113 mm²
CPU cores: 4 Cortex-A57 + 4 Cortex-A53
A53 quad die area: 4.6 mm²
A57 quad die area: 15.1 mm²
Max core frequency: 1.9GHz (A57) / 1.3GHz (A53)
System memory: 3GB LPDDR3
Memory config: 2 x 32-bit channels at 825 MHz

The Exynos 5433 hosts a pair of quad-core CPU clusters, one composed of Cortex-A53 cores and the other of Cortex-A57s. These CPU cores are compatible with the 64-bit ARMv8 instruction set architecture. Samsung has paired them with a 32-bit version of Android, so the Note 4 can’t reap all of the benefits of ARM’s new instruction set (which can add up to a ~6% performance improvement independent of other factors). Still, the Note 4 does make use of ARM’s updates to the 32-bit AArch32 instructions, so it shares in some of the improvements, including AES encryption acceleration.

The presence of eight CPU cores may seem like overkill for a phone—and it probably is—but Samsung’s engineers have made the SoC more efficient by implementing ARM’s big.LITTLE scheme for power-efficient performance. To understand how big.LITTLE works, we first need to understand the differences between the CPU cores in question.

The Cortex-A53 is the latest iteration of ARM’s small, ultra-efficient CPU core for application processors, the successor to the Cortex-A5 and A7. This core has a pretty small footprint. Four A53s situated together in a quad occupy about the same die area as a single Cortex-A57.

The A53’s microarchitecture borrows heavily from the Cortex-A7 before it. The A53 can issue two instructions per clock cycle, and instructions execute in program order. The main execution pipeline is just five stages long, while the floating-point/SIMD side has seven stages. ARM thinks the A53 has taken this simple structure about as far as possible in terms of instruction throughput and power efficiency. Thanks to a host of tweaks—including better branch prediction, higher internal throughput, and power reductions that can be converted back into performance—the A53 is over 40% faster than the Cortex-A7, according to ARM’s own estimates. (In fact, ARM tells us the A53 is roughly 15% faster than the mid-sized Cortex-A9 rev4.) Crucially, the Cortex-A53 is fully ARMv8 and 64-bit compliant.


A block diagram of the Cortex-A57 CPU. Source: ARM.

The Cortex-A57, meanwhile, is ARM’s largest core. Derived from the Cortex-A15 used in a number of today’s phones and tablets, the A57 adds ARMv8 support and incorporates a number of changes meant to increase instruction throughput. ARM intends to see the A57 used in servers, not just mobile devices, so it’s pretty beefy. This core can fetch, decode, and dispatch three instructions per clock cycle. The engine gets wider after that, as illustrated in the block diagram above, and it executes instructions out of program order to improve throughput. The A57 is quad-issue into the integer execution units, dual-issue to the floating-point/SIMD units, and dual-issue to the load/store unit. ARM estimates the A57 outperforms the A15 by 25% or better.

A single A57 cluster can host up to four CPU cores, and those cores use a single, shared L2 cache up to 2MB in size.

The idea behind ARM’s big.LITTLE is to extend the dynamic operating range of a chip’s CPU cores beyond what’s possible with a single CPU architecture. big.LITTLE operates in conjunction with traditional SoC power-saving measures. The CPU cores still operate at a range of clock speeds and voltages, depending on how busy they are. The CPU cores can still gate off clock signals to inactive units. Idle CPU cores or clusters can still be powered down temporarily when they’re not needed. The difference with big.LITTLE is that threads can also be shifted from a large core to a small one, or vice versa, depending on which type of core provides the best operating point for the thread’s current demands.

For instance, a simple thread that polls a phone’s GPS sensor periodically might never need anything more than a Cortex-A53 in order to do its thing. Running that thread on a small core might be the most energy-efficient arrangement. Meanwhile, a big, branchy thread for rendering a webpage might fare best when shifted to a Cortex-A57 for quick completion. Since both of these core types support the full ARMv8 instruction set, transitions between them should be seamless.
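For the curious, here’s a rough sketch of that idea in Python. Under global task scheduling, the kernel migrates threads between clusters on its own; the explicit affinity calls below only approximate that behavior from userspace. The core numbering (A53s as CPUs 0-3, A57s as CPUs 4-7) is an assumption on my part, not something Samsung has documented.

```python
import os

# Hypothetical core numbering for an Exynos 5433-style big.LITTLE SoC:
# CPUs 0-3 are Cortex-A53 (little), CPUs 4-7 are Cortex-A57 (big).
LITTLE_CORES = {0, 1, 2, 3}
BIG_CORES = {4, 5, 6, 7}

def place_thread(pid: int, demanding: bool) -> None:
    """Pin a thread to the big or little cluster via its CPU affinity mask.

    Under global task scheduling, the kernel makes this decision itself,
    migrating threads between clusters as their load changes; an affinity
    mask like this one only mimics that behavior from userspace.
    """
    mask = BIG_CORES if demanding else LITTLE_CORES
    os.sched_setaffinity(pid, mask)

# A thread that occasionally polls the GPS can live on the A53s...
place_thread(0, demanding=False)   # pid 0 means "the calling thread"
# ...while a big, branchy page-rendering thread gets the A57s.
place_thread(0, demanding=True)
```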


The three big.LITTLE schemes illustrated. Source: ARM.

Earlier SoCs have deployed big.LITTLE in relatively simple fashion, swapping threads between a pair of quad-core clusters or migrating directly between big and little cores as needed. More recently, ARM and its partners have moved toward an arrangement known as global task scheduling in order to extract the most efficiency out of big.LITTLE operation. Global task scheduling is a form of asymmetrical multiprocessing in which all cores are active. The OS scheduler—in this case, a modified version of the Android kernel—chooses where to place threads. Newer Exynos SoCs, including the 5433, have been widely reported to use global task scheduling.


Core residency in different power states during workloads. Source: ARM.

In theory, the most efficient hardware configuration for a mobile device with big.LITTLE would likely involve two big cores and four small ones, for reasons illustrated above. Even relatively intensive workloads like games don’t spend much time executing on the large cores—and the amount of time spent in the highest power state on the big cores is vanishingly small. Two big cores should be more than sufficient to keep performance from dropping in cases where an especially difficult code sequence must be executed. Either the Exynos 5433 was originally conceived with eight cores for use with CPU migration or, perhaps more likely, the octa-core config was chosen for marketing rather than power-performance reasons.

“Eight cores” does have a nice ring to it, I suppose.

One thing to keep in mind as we look at the benchmarks below is that the Note 4’s measured CPU performance should largely be defined by its Cortex-A57s. When you’re running benchmarks that really push the limits, the big cores will be the ones doing the lion’s share of the work. The A53s might chip in a little during multithreaded tests in a global task scheduling scheme, which is an interesting prospect, but they’re not going to be the main attraction.

The Exynos 5433 has another major piece of ARM tech inside: the CoreLink CCI-400 north bridge, which glues together the CPU clusters, the Mali graphics processor, and everything else. ARM’s north bridges support the proper interfaces and provide hardware cache coherency, so they should work seamlessly with big.LITTLE thread migration.

CPU performance

We have several devices of, er, note to compare against the Galaxy Note 4 with the Exynos 5433 SoC. The iPhone 6 Plus only has two cores, but those are relatively fast CPU cores of Apple’s own custom design. Aside from the Note 4, only the iPhone 5S, 6, and 6 Plus bring ARMv8-compatible cores to this comparison. The LG G3 and OnePlus One are both based on the Qualcomm Snapdragon 801 with quad Krait cores. The version of the Note 4 based on the Snapdragon 805 should be up to 20% faster than these devices. Finally, one of the more intriguing comparisons is Nvidia’s Shield Tablet, which packs four Cortex-A15s at up to 2.2GHz. The Cortex-A15 is the direct architectural predecessor to the Cortex-A57, but aboard the Shield Tablet, those cores are operating within the larger power envelope of an 8″ device.

Also on hand is the Asus Memo Pad ME176C, a low-cost tablet based on Intel’s Atom “Bay Trail” Z3745 SoC. This SoC features quad “Silvermont” cores and Intel HD Graphics.

Memory bandwidth

The Note 4 with Exynos isn’t off to a terribly impressive start in this synthetic test of memory bandwidth. The Exynos 5433 has been reported to have dual-channel 32-bit memory running at 825 MHz, or 1650 MT/s, which would yield 13.2 GB/s of peak bandwidth. The Note 4 doesn’t come close to reaching that mark in Stream’s copy test (and it’s no faster in scale, add, or triad, which I’ve not reported above).
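For reference, here’s the arithmetic behind that 13.2 GB/s figure, based on the reported (not confirmed) memory configuration:

```python
# Peak theoretical bandwidth from the reported Exynos 5433 memory config:
# two 32-bit LPDDR3 channels at 825 MHz, double data rate (1650 MT/s).
channels = 2
bus_width_bits = 32
transfer_rate = 1650e6                                # transfers/second per channel

bytes_per_transfer = channels * bus_width_bits // 8   # 8 bytes across both channels
peak_bandwidth = transfer_rate * bytes_per_transfer   # bytes/second

print(peak_bandwidth / 1e9)                           # 13.2 (GB/s)
```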

The Note 4’s relatively weak showing could be the result of bumping up against a power management limit—directed tests sometimes do that—or it could be something else. Most modern CPUs max out their memory bandwidth by pairing a relatively large cache with a predictive pre-fetch mechanism that analyzes access patterns and pulls data from memory before it’s needed. The Cortex-A57 cluster in the Note 4 may not be tuned as aggressively as its competition for whatever reason.

Geekbench

Geekbench runs natively on both iOS and Android, and it offers us a look at both single- and multi-threaded performance. You can click on the buttons below to toggle between the two sets of results.

The Note 4 performs well in the single-threaded tests, turning in the best score overall among the Android-based devices. Only Apple’s ARMv8 custom cores are faster—and that depends on the test. In integer math, the Note 4 outperforms even the Cyclone core in the iPhone 5S. Only the iPhone 6 and 6 Plus are faster.

Interestingly, in these same integer tests, the quad Cortex-A57s in the Exynos 5433 substantially outperform the four Cortex-A15s in the Shield Tablet’s Tegra K1 SoC, despite spotting them a 300MHz clock speed advantage. The gap closes almost entirely in the floating-point math tests, though.

Switch over to the multithreaded results, and the Exynos 5433 looks even stronger. The Note 4 takes the top spot in Geekbench overall and outright dominates the multi-core integer scores. With only two cores on tap, the 6-series iPhones fall well behind.

The AES encryption test illustrates the impact of tailored acceleration instructions built into the ARMv8 instruction set. The Exynos 5433 benefits from these instructions, as do Apple’s newer custom cores.

Since the Exynos 5433 uses the global task scheduling version of big.LITTLE, we might be able to find some evidence of its four A53 cores assisting the four A57s. One indicator would be cases where the Note 4’s multithreaded performance is more than four times its single-threaded performance.

                          Single-threaded    Multi-threaded    Difference
Geekbench overall         1280               4218              3.3x
Geekbench integer         1585               6101              3.8x
Geekbench floating point  1019               3841              3.8x
Geekbench AES encryption  818                4069              5.0x
Geekbench raytrace        1452               5585              3.8x

If the Note 4 is using more than four cores, the effect is fairly subtle. Only in certain sub-tests, like AES encryption, does the Note 4 achieve a speed-up of more than 4x with multiple threads. Hmm.
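Here’s how those “Difference” figures are derived, in case you’d like to check my math. Anything north of 4.0x is the potential fingerprint of the A53s pitching in:

```python
# Speedup = multithreaded score / single-threaded score, per the table above.
scores = {
    "Geekbench overall":        (1280, 4218),
    "Geekbench integer":        (1585, 6101),
    "Geekbench floating point": (1019, 3841),
    "Geekbench AES encryption": (818,  4069),
    "Geekbench raytrace":       (1452, 5585),
}

for test, (single, multi) in scores.items():
    ratio = multi / single
    note = "  <- beyond 4x" if ratio > 4.0 else ""
    print(f"{test}: {ratio:.1f}x{note}")
```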

We might get some more insight by comparing the Note 4’s scaling to that of the quad-core Shield Tablet. Here are the overall results, along with some hand-picked sub-tests that show the best scaling from one to many threads.

                               Single-threaded    Multi-threaded    Difference
Shield Tablet integer overall  1197               4303              3.6x
Note 4 integer overall         1573               5935              3.8x
Shield Tablet AES encryption   67                 305               4.6x
Note 4 AES encryption          818                4069              5.0x
Shield Tablet SHA1 encryption  2108               8259              3.9x
Note 4 SHA1 encryption         4393               21792             5.0x
Shield Tablet JPEG compress    1322               5384              4.1x
Note 4 JPEG compress           1383               5866              4.2x

The Note 4 does scale slightly better than the Shield Tablet in Geekbench’s overall index, but both are less than four times as fast. In certain sub-tests, the Note’s multithreaded performance improves by more than 4x, but the Shield Tablet’s does, too. Some other factor, like good locality in the tests’ access patterns keeping the caches warm, could account for scaling beyond 4x.

If the Cortex-A53s in the Exynos 5433 are contributing to higher overall performance, it’s awfully hard to tell by looking at these benchmark scores. The reality may be that the Exynos 5433 and its quad A57 cores are too power-constrained to allow those Cortex-A53s any meaningful thermal headroom. Devoting all of the juice to the A57s instead may be the best use of the SoC’s power budget, anyhow.

Browser benchmarks

I used Google Chrome as the browser for all of the web-based benchmarks on the Android devices. That decision may have cost the Note 4 some performance in SunSpider and Kraken, if the Samsung browser has more extensive JavaScript optimization in it. Even so, the Note 4 performs relatively well.

Notice that the two top spots in SunSpider and Kraken are occupied by desktop CPUs. I sneaked those in just for fun. They have the benefit of much larger power envelopes.

BaseMark OS II

WebXprt

The rest of the CPU benchmark results are strangely inconclusive. The Note 4 easily outperforms the Snapdragon 801-based LG G3 in Basemark, just as it does in Geekbench, but the balance changes in WebXprt. There’s no question the Exynos 5433 is a solid performer overall, but its position relative to the competition depends quite a bit on the workload in question.

That said, the contest with the iPhone 6 Plus looks to be pretty clear-cut. The iPhone is at or near the top of the stack in nearly every case. Apple’s custom CPU core still gives it an edge, even though the Cortex-A57 helps to close the gap.

Graphics

The Mali-T760 GPU in the Exynos 5433 is a full-featured graphics processor based on ARM’s Midgard architecture. The Note 4 has given us our first chance to spend significant time with a Mali-equipped device, and our impressions are generally quite positive.

Samsung Galaxy Note 4
SoC: Samsung Exynos 5433
GPU: ARM Mali-T760 MP6
GPU die area: 30.9 mm²
Est. clock speed: 700 MHz
fp32 flops/clock: 204
Texture filtering: 6 texels/clock
Pixel fill: 6 pixels/clock
System memory: 3GB LPDDR3
Display resolution: 2560×1440

Midgard is a bit of an unconventional GPU architecture in terms of its execution model, but it has a robust feature set, especially for the mobile space. The Mali-T760 supports 64-bit addressing and can handle IEEE-compliant floating-point datatypes, including 64-bit double precision. In fact, the Mali-T760 has a nearly desktop-class feature set, with support for DirectX 11.1 (feature level 11_1, or the real thing), OpenGL ES 3.1, and OpenCL 1.2.

Thanks to this combination of high mathematical precision and standards compliance, the T760 is perhaps better suited for GPU computing than most of its competition in the mobile GPU market. ARM was one of the first companies to join the HSA consortium, along with AMD, and has publicly supported that effort.


A partial functional block diagram of a Mali-T760 graphics processor

I’m tempted to call the Mali-T760 a “tiler,” but I expect there are folks in the PowerVR camp who would take umbrage at that wording. ARM’s Midgard architecture doesn’t use fully deferred tile-based rendering like Imagination Tech’s PowerVR GPUs. Midgard uses early Z detection, like conventional immediate-mode renderers, in order to avoid drawing some pixels that would be occluded by other polygons in the final scene, but it doesn’t reorder the graphics pipeline in order to eliminate overdraw entirely. Instead, Midgard renders all pixels into on-chip buffers representing 16×16-pixel tiles, so blending and overdraw happen on the chip. Midgard can conserve bandwidth and save energy by using a tile buffer in this fashion, since DRAM transactions tend to burn a lot of power.
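To put some rough numbers on that, here’s what a 16×16 binning grid implies at the Note 4’s display resolution. The tile size comes from ARM; the 32-bit color format is just an assumption for illustration:

```python
# Tiles needed to cover the Note 4's 2560x1440 screen with 16x16-pixel bins.
width, height, tile = 2560, 1440, 16

tiles_x = (width + tile - 1) // tile    # 160
tiles_y = (height + tile - 1) // tile   # 90
print(tiles_x * tiles_y)                # 14400 tiles per frame

# Working set that stays on chip while one tile is shaded, assuming RGBA8:
bytes_per_pixel = 4
print(tile * tile * bytes_per_pixel)    # 1024 bytes of color data per tile
```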

Divining the structure of Midgard’s “shader cores” can be a little confusing. ARM’s public documentation appears to divvy things up according to the number of flops the hardware can process rather than its likely underlying organization. Each Midgard “shader core” has two arithmetic pipelines. My sense is that those pipelines break down into two stages: the first stage includes a 128-bit-wide vector unit plus a scalar ALU. The second stage has a special function unit capable of a four-wide vector dot product, and there’s a scalar ALU in this stage, too.

Midgard’s shader pipelines are unusually flexible in their support for different datatypes. That 128-bit-wide vector unit can be subdivided in various ways. A single vector ALU can process two 64-bit operations, four 32-bit ops, or eight 16-bit operations in a clock cycle.

If you’re counting flops at home, the most relevant tally involves 32-bit operations. That big vector unit in the first stage can process four multiply + add operations (eight flops) and the scalar ALU can contribute a multiply (one flop). Stage two’s SFU then contributes seven more flops, and the scalar ALU adds another, for a total of 17 from the pipeline in each clock cycle. Thanks to the dual pipes, that’s a total of 34 flops per cycle from each “shader core.”
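Here’s that tally in compact form, under my assumed breakdown of the pipeline stages:

```python
# fp32 flops per Midgard arithmetic pipeline, per the breakdown above.
stage1 = 4 * 2 + 1   # four vector MADs (8 flops) + one scalar multiply
stage2 = 7 + 1       # four-wide dot product in the SFU (7 flops) + one scalar op
per_pipeline = stage1 + stage2        # 17 flops per clock
per_shader_core = 2 * per_pipeline    # 34 flops per clock across the dual pipes
print(per_pipeline, per_shader_core)  # 17 34
```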

ARM offers versions of the T760 that scale up to 16 shader cores, as depicted in the diagram above. The Exynos 5433 hosts a six-cluster version of the GPU known as the Mali-T760 MP6. Reports have suggested this GPU runs at 700MHz in the Note 4.

If that clock speed is correct, then on paper, the Note 4’s key graphics rates should look awfully similar to those of the PowerVR GX6450 in the iPhone 6 Plus. The two would have roughly similar pixel fill (~4 gigapixels/second) and bilinear texture filtering (~4 gigatexels/second) rates, and both devices’ fp32 arithmetic throughput should peak at about 140 gigaflops. I hesitate to put those numbers into a table, since they’re both theoretical and speculative in nature, but the story here should be rough parity.
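If you’d like to see the speculative math spelled out, it looks like this (again assuming the reported 700MHz clock):

```python
# Speculative peak rates for the Mali-T760 MP6 at the reported clock speed.
clock_hz = 700e6
shader_cores = 6

flops_per_clock = 34 * shader_cores       # 204, matching the spec table above
print(flops_per_clock * clock_hz / 1e9)   # ~142.8 Gflops fp32 (the "~140" above)
print(6 * clock_hz / 1e9)                 # ~4.2 Gpixels/s fill (6 pixels/clock)
print(6 * clock_hz / 1e9)                 # ~4.2 Gtexels/s filtering (6 texels/clock)
```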

Of course, theoretical peaks are one thing, and delivered performance is another. We know the PowerVR GPUs tend to be very efficient in real applications with their given resources. The Midgard architecture also contains some provisions meant to boost efficiency, including a nifty transaction elimination feature that only updates the contents of framebuffer pixel blocks when they have changed from the prior frame.

Directed tests

The story of rough parity we outlined above plays out as expected in the fill rate test, but the Note 4 trails the 6 Plus by quite a bit in GFXBench’s ALU test. I’m not sure exactly what this directed ALU test does or whether it takes advantage of the dot-product unit in the Midgard shader core. I wouldn’t read too much into these results, though.

The Note 4 also trails the 6 Plus in this graphics-specific test of alpha blending capacity. Then again, so does the Tegra K1 chip in the Shield Tablet, and we know it’s one of the faster mobile GPUs on the planet.

As I understand it, this benchmark attempts to measure driver overhead by issuing a draw call, changing state, and doing it again, over and over. Performance in this test may end up being gated by CPU throughput as much as anything else.

Off-screen gaming

All three of these tests are rendered off-screen at a common resolution, so they’re our best bet for cross-device GPU comparisons. They’re also more complete benchmarks than the directed tests above, since they involve rendering a real scene that could plausibly come from a mobile 3D game. The older iPhones can’t run GFXBench’s “Manhattan” test, because it requires OpenGL ES 3.0 compliance.

The iPhone 6 Plus leads all of the phones in the two GFXBench benchmark scenes, while the Note 4 trades blows with the LG G3 and the OnePlus One, both of which are based on Qualcomm’s Snapdragon 801 SoC with Adreno graphics. The Note 4 with Exynos comes roaring back to take the lead in 3DMark’s Ice Storm test. Only the Shield Tablet, which has a larger power envelope, is faster.

Native device resolution gaming

The Note 4 has a higher display resolution than most of its competitors, including the iPhone 6 Plus, so it has to work harder to paint all of those pixels. As a result, the Note 4 ends up shadowing the LG G3 in these native-resolution gaming tests. Like the Note 4, the G3 has a 2560×1440 display.

The iOS version of Basemark X runs on-screen and off-screen tests and then spits out a single, composite score. I wish we could break out the component tests, especially since this benchmark walks through a nice-looking scene rendered using the popular Unity game engine. As you can see, the Note 4’s Mali GPU fares well in this benchmark.

Image quality

One other feature of Basemark X is an intriguing quantitative test of graphics image quality.

Real-time graphics is strange because there’s not always one “right” answer about the color of a rendered pixel. Filtering methods and degrees of mathematical precision vary, since GPU makers take different shortcuts to trade off image quality against performance.

Basemark X attempts to quantify the fidelity of a GPU’s output by comparing it to some ideal—in this case, I believe the reference renderer is a desktop-class graphics chip. That’s a fair standard, since desktop chips these days produce something close to ideal imagery. The higher the signal-to-noise ratio reported, the closer these mobile GPUs come to matching the reference image. (Frustratingly, a couple of the devices refused to run the quality test with “out of memory” errors.)

The Mali-T760 fares well in these tests, scoring better than the rest of the mobile GPUs with the exception of the desktop-derived Tegra K1. This solid showing is likely a consequence of ARM’s decision to build in support for IEEE standard datatypes. It’s also an indicator that ARM hasn’t cut too many corners on things like texture filtering techniques in order to boost performance.

To give you a further sense of the Mali-T760’s image quality, I’ve embedded screenshots below from the Note 4 with Exynos and the Nexus 7, which has a Qualcomm Adreno 320 GPU. Move your mouse over each image to see a close-up version of it, or click to open the full-resolution screenshot.

Note 4/Mali-T760:

Nexus 7/Adreno 320:

There are more pixels in the Note 4’s image thanks to the device’s higher display resolution, but otherwise, I think the output produced by these two GPUs is fairly comparable.

Especially in the brightest areas of the image near the upper middle of the screen, the Note 4’s imagery tends to look softer. I believe the Mali GPU is applying a post-process bloom filter that the Adreno GPU in the Nexus 7 isn’t using—likely because this application knows the Mali GPU is more capable.

In terms of both image quality and performance, the T760 appears to be a credible alternative to the competition from Qualcomm and Imagination Tech.

In several ways, the Mali-T760 MP6 built into the Exynos 5433 isn’t as efficient as the PowerVR GPU in the iPhones. Although the Note 4 and 6 Plus would seem to be equals on paper, the iPhone generally outperforms the Note 4 in offscreen tests at the same resolution. Meanwhile, the PowerVR GPU on Apple’s A8 SoC occupies about two-thirds of the die space that the Mali-T760 MP6 does in the Exynos 5433. (We’ll consider the question of power efficiency momentarily in our battery life tests.) The counterbalance here is that ARM’s Midgard architecture offers full IEEE compliance. Midgard produces measurably higher fidelity images and is better suited for use in GPU computing applications.

Storage

This version of the Note 4 ships with 32GB of onboard MLC flash memory. About 25GB of that space is available after Samsung installs the operating system and its own suite of software.

The Note 4 writes data into its flash storage array nearly as quickly as it reads it, according to these simple tests. As a result, the Note 4’s storage subsystem is one of the fastest ones we’ve tested overall. Still, its 204MB/s average read speed isn’t anything to write home about.

Battery life

We tested battery life in four different scenarios. In each case, the phones’ display brightness was set to 180 cd/m² before starting, and display auto-brightness features were disabled. Our workload for the web surfing tests was TR Browserbench. The video test involved looped playback of a 1080p video recorded on one of the phones, and our gaming workload was the Unreal Engine-based Epic Citadel demo.

I’d say the iPhone 6 Plus has a slight edge in our battery life results, overall, versus the Note 4. Both of these “phablets” have exceptionally long run times, though.

Keep in mind that the 6 Plus has a 2915-mAh battery, while the Note 4’s battery capacity is a tad higher, at 3220 mAh. Then again, the Note 4 has to drive a slightly larger display with substantially more pixels. The fact that these two devices are so closely matched in our run time tests speaks well of the Exynos 5433 SoC.

The fact that the Note 4 reaches nearly six hours of run time in our gaming test, with Epic Citadel, is an indicator that its Mali-T760 GPU is reasonably power-efficient, too.

Display

The display on the Note 4 will change your view of what’s possible. That’s probably the most succinct way I can sum it up. Samsung’s AMOLED technology is heart-stoppingly bright and crisp, with color and clarity that you may not find anywhere else. Periodically throughout my time with the Note 4, I’d just spend a moment staring at the display in stupefied wonder. I’m constantly surrounded by outstanding desktop displays with IPS panels, but the Note 4 triggers a response in me like almost nothing else.

Samsung Galaxy Note 4
Display size: 5.7″
Display type: Super AMOLED
Resolution: 2560×1440
PPI: 515
Other: S Pen support

I say “almost” only because the Note 4’s top competitor, the iPhone 6 Plus, also has a display that’s edge-of-the-bell-curve excellent. The two are natural rivals, although they’re strong in different ways.


A magnified look at the subpixel arrangement on the Note 4’s AMOLED display. Source: Samsung.

Like many of Samsung’s past OLED panels, the Note 4’s display does not use a conventional RGB subpixel arrangement. Instead, the OLED panel has twice as many green subpixels as it does red and blue. When discussing Samsung’s older OLEDs that use PenTile subpixel layouts, I might have complained about the fact that the display doesn’t really offer its “true” advertised resolution. I might have worried about the difficulty of making vertical or horizontal edges look perfectly straight. Those are criticisms one could offer about this panel, too, and they might not be wrong.

But those objections will melt away as soon as your rods and cones feast on the reality of this display. The effective resolution is razor-sharp, clearly higher than the iPhone 6 Plus’s. I’ve taken some close-up screenshots of the two displays side by side that you can look at below. These shots frankly show more detail than you’re likely to notice with the naked eye, even if you’re just a handful of inches from the screen.

You can move your mouse over the thumbnail to see a pop-up window with a close-up photograph. Mobile users, just tap the thumbnail to load the full image and pinch-zoom to your heart’s content.

iPhone 6 Plus:

Galaxy Note 4 (Exynos):

According to DisplayMate’s informative analysis, the Note 4 has several user-selectable modes for display calibration and color gamut targets. By default, the Note 4 ships in Adaptive mode, which offers a wide color gamut. Images look vibrant and colors heavily saturated in this mode, a trait that has kind of become a hallmark of Samsung’s consumer devices. Virtually no content is authored to take advantage of this color mode, so it’s not exactly true to life. The Note also offers a Photo mode calibrated to the Adobe RGB standard and a Basic mode meant to match the widely used sRGB standard. To my eyes, the Basic mode’s colors look most correct—and therefore pleasing—most of the time.

We tested the Note 4 in both Adaptive and Basic modes to generate the results below.

I’d like to show you contrast ratios, but the pixels of the Note 4’s AMOLED display only glow when they need to, so black images on the Note 4 are perfectly black. That leads to a contrast ratio of DIVIDE BY ZERO ERROR. Whoops.
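The offending arithmetic, with a hypothetical white level standing in for our measurement:

```python
# Contrast ratio is the white luminance divided by the black luminance.
white_nits = 180.0   # hypothetical measured white level (cd/m^2)
black_nits = 0.0     # an OLED pixel that's switched off emits nothing

print(white_nits / black_nits)   # raises ZeroDivisionError, hence the "whoops"
```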

Still, one can see that the Note 4’s combination of white and black levels offers exceptional contrast and “perfect” blacks. None of the competing displays based on IPS LCD technology can replicate that latter trait. That said, the iPhone 6 and 6 Plus offer nearly comparable perceived contrast by combining reasonably dark black levels with substantially brighter white levels than the Note 4.


In Adaptive mode, the Note 4’s OLED is capable of displaying a broader spectrum of colors than anything else we’ve measured. When content authoring standards catch up, OLEDs like this one should allow higher-fidelity color reproduction all around. Switch the Note 4 into Basic mode, and it constrains itself almost perfectly to the sRGB color gamut.


The Note 4’s excellence continues in our color tests. In Basic mode, the display tracks closely with the 6500K white point standard, and its overall color accuracy rivals that of the two new iPhones. The Adaptive mode’s gamut-stretching hyper-saturation is reflected in higher levels of delta-E, especially in the primary and secondary colors. Even that outcome is the result of an intentional choice by Samsung. Still, I wish Adaptive mode wasn’t the default setting for the device.


The iPhone 6 Plus (left) and Note 4 (right) from multiple angles

The Note 4 would have a clear lead over the iPhone 6 Plus in the display department were it not for a single obvious weakness: color shift at indirect viewing angles. I first noticed this quirk of the Note 4’s display while reading a book on the Kindle app at the dinner table. The Note 4 was sitting flat on the table next to me, and the white background had taken on a blue-green cast. I think I’ve captured the shift in the images above. The top image shows that the 6 Plus and Note 4 are comparable when viewed straight on. As you can see in the image on the lower left, the angle doesn’t have to be terribly extreme for the Note 4’s colors to begin to shift. The display on the 6 Plus exhibits almost no color shift at these angles thanks to the dual-domain pixels on its IPS panel.

The camera

This version of the Note 4 isn’t sold here, and the operation of its camera may be substantially different than what you’d find in the U.S. version. As a result, our coverage of the camera won’t be terribly extensive. Still, the results we were able to get when shooting with this device are certainly worth sharing.

Samsung Galaxy Note 4
Primary camera resolution: 16 megapixels (5312×2988)
Front-facing camera resolution: 3.7 megapixels (2560×1440)
Max video resolution: UHD 4K (3840×2160) at 30 FPS

The Note 4’s primary camera has double the pixel count of the cameras on the latest iPhones. Chasing more megapixels isn’t always the best choice, but the Note 4 can sometimes capture more detail than the iPhones, as we’ll see in at least one sample shot. More notably, this version of the Note 4 has optical image stabilization, which should make it easier to capture sharp shots in low light.

Also, several of Samsung’s phone cameras have included an advanced autofocus technique known as phase detection. Imported from the world of DSLR cameras, this mechanism can greatly enhance the camera’s reflexes, cutting the time required to bring a subject into focus. Apple adopted this method in the new iPhones to good effect. Samsung Korea’s spec sheets aren’t terribly helpful on this front, but I believe this Note 4 may be blessed with phase-detection autofocus, too.

What I can tell you is that the camera in the Note 4 with Exynos rivals the one in the iPhone 6 Plus as one of the best we’ve seen in any mobile device. Focus times are exceptionally quick, as is the overall process of capturing images. The dual image signal processors in the Exynos 5433 avoid injecting any tangible delays into normal shooting.

The sample shots

I took a ton of pictures for the sake of this review, including some attempts at ISO reference patterns and home-brewed “test scenes” with a quasi-intentional jumble of objects. I discovered that getting the proper shot in some of those cases with a smartphone is prohibitively difficult. Also, many of those pictures didn’t do a very good job of illustrating the differences between the cameras in question. In the end, I wound up choosing a series of sample shots, one in outdoor light and several indoors, in order to demonstrate the sort of results you can expect. The JavaScript zoom tool below will show you every pixel as you mouse over the thumbnails (or you can click/tap on each image to load the original in a browser tab).

I’m reasonably satisfied that the shots below do most of the work we need, but this setup does have some limitations. For one, I tried to select the best exposures from each camera for the samples below, and I used burst mode to shoot them when possible. As a result, you can’t know from looking at the examples how much easier it was to get clear, sharply focused shots on the phones with optical IS. One can usually achieve similar results on the iPhone 6 without optical stabilization, but not nearly as consistently.

Outdoor scene

To get a sense of what the Note 4’s 16-megapixel sensor buys you in terms of clarity, have a look at the satellite dish on the left of the scene, attached to my neighbors’ house. You can’t read the text in the DirecTV logo in the shot from the iPhone 6 Plus, but on the Note 4, it’s clear as day.

That said, the Note 4’s image does have its weaknesses. I was surprised to notice that some colors look a little washed out, most obviously the blue of the sky and the putty-colored paint on the house. Also, Samsung’s default sharpening routines are very aggressive. The heavy sharpening produces images that really pop on the Note 4’s amazing display, but when you zoom in and look at the pictures closely, the resulting artifacts aren’t hard to spot. Check out how extensively the individual blades of grass and the tree branches have been sharpened on the Note 4. Personally, I prefer the less intrusive approach Apple has taken on this front.

iPhone 5:

iPhone 6:

iPhone 6 Plus:

Note 4:

OnePlus One:

Sample scene: Santa

This is my favorite sample scene, since it’s a challenging low-light setting with a complex subject. In this case, even with twice as many megapixels and heavier post-process sharpening, I don’t think the Note 4’s results look as clear as those from the iPhone 6 Plus. The image from the 6 Plus does seem to have a little more of the noise that comes with shooting in high-ISO modes, though.

Be aware that I’m splitting hairs. These two premium phablets both take fantastic pictures, and they seriously outclass cheaper contenders like the OnePlus One.

iPhone 5:

iPhone 6:

iPhone 6 Plus:

Note 4:

OnePlus One:

Sample scene: Edinburgh

One other thing you’ll notice if you click on the sample shots: the Note 4’s pictures aren’t the conventional shape. Samsung has chosen a wider 16:9 aspect ratio for still photos on this product, just as with video. I suppose that’s heresy, but I kind of like it.

iPhone 5:

iPhone 6:

iPhone 6 Plus:

Note 4:

OnePlus One:

Video

The Note 4 will shoot video at resolutions up to 4K (3840×2160) at 30 FPS, and the Exynos 5433 supports H.265 compression to facilitate 4K capture. That resolution is easily a step above the 6 Plus, which tops out at 1080p. Unfortunately, I didn’t find any slow-motion video modes in the Note 4 to rival the 240 FPS capture capability Apple has implemented.

Although we didn’t shoot in 4K, we do have some examples of recorded video from each phone below. YouTube’s compression has done some violence to the quality of the video produced by each device, but these examples should serve to illustrate a couple of things.

iPhone 5:

iPhone 6:

iPhone 6 Plus:

Note 4:

OnePlus One:

Even through all of the compression, I think the individual frames of video look sharper on the Note 4 than on the 6 Plus. However, for this particular scenario where the phone is handheld and I’m walking along, Apple’s algorithmic video stabilization is more effective than lens-based optical image stabilization alone.

Conclusions

Even though this version of the Note 4 isn’t available here, the total package is good enough to make it worth our time. The Exynos 5433 chip is particularly intriguing—especially now that Qualcomm has lost a major design win for the Snapdragon 810. (It’s widely expected that Samsung is the customer Qualcomm lost.) One way or another, Cortex-A57 and A53 pairs running in a big.LITTLE configuration will be common in 2015’s premium Android devices.

With its quad Cortex-A57 cores doing the heavy lifting, the Exynos 5433 posted the top score of any mobile SoC we tested in Geekbench’s multithreaded performance index. Most phone applications are limited by the performance of one or two threads, and in those cases, the A57 doesn’t quite keep pace with Apple’s beefy custom CPU core. The Cortex-A57 does close the gap with Apple substantially, though, while adding 64-bit compatibility.

Our first look at Mali graphics left us with a positive impression, too. The Mali-T760 MP6 GPU in the Exynos 5433 offers decent performance and better-than-average image quality for the mobile market. Thanks to its robust support of high-precision datatypes, the Mali Midgard architecture has the potential to encourage wider use of GPU computing in mobile devices, as well. As we’ve noted, however, this version of Mali doesn’t appear to be as efficient as the PowerVR GPU in the iPhone 6 Plus. The T760 doesn’t extract as much delivered performance from its theoretical peak specs or from its die area.

Speaking of efficiency, our battery life results suggest that both the Cortex A57/A53 big.LITTLE CPU config and the Mali-T760 MP6 graphics processor in the Exynos 5433 are reasonably power efficient when paired with Samsung’s 20-nm fab process. At the very least, Samsung has taken the appropriate measures to ensure nice, long battery run times from the Note 4 in the scenarios we tested.

Aside from the whole “you can’t buy it here” thing and some Korean-only menus, my main beef with this version of the Note 4 is, perhaps predictably, the clumsy TouchWiz skin that Samsung drapes over the lithe body of the stock Android user interface. TouchWiz and I are sworn enemies going way back to the Gingerbread days. I was able to replace some of the TouchWiz elements by installing Google alternatives from the Play Store, but there are definite limits to that approach.

You know what’s impressive about the Note 4? The hardware is good enough to tempt me to relax my stance on the TouchWiz issue. I’m not talking just about the Exynos SoC but the whole shebang—the display, the enclosure, the display, the camera… and did I mention the display? The Note 4 is fast enough that it doesn’t feel particularly slow in absolute terms, even when saddled with TouchWiz. But it does feel slower than it would with stock KitKat, given its hardware spec and measured performance.

Don’t tell anyone, but I would probably do terribly questionable things in order to get a version of the Note 4 with stock Android Lollipop pre-installed.

TouchWiz fanboys can vent their rage at me on Twitter.

Comments closed
    • strangerguy
    • 5 years ago

    I have an N9005 Note 3 that runs fantastically on leaked 5.0 Lollipop. I really wanted to like it since day one, except for that godawful battery life in all versions of KitKat for over a year, all that preinstalled bloatware, and the middle finger to homebrew Android from the warranty-voiding Knox trip crap that was shoved down our throats. It makes me exceedingly wary of buying another Samsung phone, much less their premium model.

    I seriously had far fewer issues with a $150 Xiaomi Note 4G (and better battery life, and far less expensive accessories) than with my $600++ Note 3. Samsung isn’t doing itself any favors to retain buyers amidst some of the most intense competition in any consumer market ever, where even low-end phones are more than capable in day-to-day use.

    • Cranx
    • 5 years ago

    I couldn’t agree more about having this phone w/ stock Android 5.0. That would be a great package.

    • jangove
    • 5 years ago

    FWIW, I have the S5, which I got after cleverly leaving my Note 3 atop my car when pulling out of my parking space and driving onto Lake Shore Drive (in Chicago). You get used to the fingerprint reader pretty quickly, in my experience, although I found it helped to make sure that, when I set up the scanner, I was holding the phone the way I actually hold it when unlocking it, and that I took advantage of the three allowed fingerprints to give myself some wiggle room. It isn’t perfect, but I do prefer it to the iPhone system, which I could never figure out how to use one-handed.

    • Melanine
    • 5 years ago

    Yet another “Apple is the best” review in the disguise of a Note 4 review. Are you jealous of Anand, Scott? You seem awfully intent on following suit.

    Nothing matters unless Apple does it. Apple not having certain features is never a minus. That changes once Apple adopts a new feature, and everyone has to do it exactly like Apple does, or else you are a fail.

    I mean, not even a courtesy explanation for the stylus, for the Note 4 review?

    Of all the improvements to AMOLED in recent days, the one thing you thought worth pointing out is color shift at an angle? (Not that I have seen it elsewhere, so I am guessing it is your unit, your eyes, or your camera)

    “Unfortunately, I didn’t find any slow-motion video modes in the Note 4 to rival the 240 FPS capture capability Apple has implemented.” Really... why should other phones do a gimmick like 240 FPS? Just because Apple did it? I would have thought 4K is more pertinent to where the tech is going these days.

    And of course you only care about single-core benchmarks, because that is where Apple excels. Did you have the same attitude when quad-cores were introduced on the desktop? Or, have you taken a look at how many cores are accessed on a phone these days? Or, as you say, if “most phone applications are limited by the performance of one or two threads,” did you try doing the same thing on the Note 4 and the iPhone 6+ to see which one gets things done faster? Why not do a real-world test instead of synthetic benches and tell us which one is faster?

    Complete omission of the Note 4’s refined ergonomics compared to the horrendous ergonomics of the iPhone 6+. Yah, sure, you can buy a case. But I thought that was not the point, and that is why all non-Apple phones were criticized for their “cheap looks.”

    This is not a review of an Android phone. It is another Apple choir dressed up as an Android phone review. I am sorry to discredit you this way, but there is no other way to put it, because I have seen the same stuff many times elsewhere, and I turned out to be right.

    Hope you get a thank you call from Apple for your effort. That would at least be a beginning. Good luck.

      • Beelzebubba9
      • 5 years ago

      *yawn*

      It’s funny when fanboys crawl out from where they live to start pretending that up is down, the world is flat and reading comprehension is for the lame. Scott was pretty clear that:
      [quote="Scott Wasson"]I won't claim this is a complete product review[/quote]

      Then he goes into detail on multiple occasions about how the KDM nature of the phone prevented him from going in depth on features like the stylus:

      [quote="Scott Wasson"]I'll admit I didn't spend much time with pen input on this version of the Note, in part because I couldn't read some of the Korean-language software prompts.[/quote]

      Is the concept that the review was more for the SoC than the device really that hard to understand? Also, where did you get the idea that Scott was underselling the Note 4's AMOLED display? Did you not read the review at all? Or did you just skip over the whole page he dedicated to how amazing it is, only noting at the very end that there's color shift?

      [quote="Scott Wasson"]I'm constantly surrounded by outstanding desktop displays with IPS panels, but the Note 4 triggers a response in me like almost nothing else.[/quote]

      There's a whole paragraph of equally effusive praise where that came from, too. And charts.

      Lastly, single-threaded CPU performance is still more important overall than multi-threaded performance. This has pretty much always been true of single-user PC workloads, and it's even more applicable on a smartphone, where you generally aren't doing heavy transcoding or the type of engineering work that can take advantage of 3+ threads. The reason the A8 dominates its Android competition in most benchmarks is similar to the reason Intel had no problem dispatching any of the Bulldozer variants: single-threaded performance is king.

      I know how hard it is when people don't love a device as much as you do, but Apple generated more revenue from the iPhone 6 than Intel, Microsoft, and Google earned in the same period combined. At some point, it's useful to just admit that Apple sets the standard by which other smartphones are judged, and that has been broadly true for years. Crying 'bias' because Scott treats the iPhone 6+ the way its position in the market dictates is just lame and boring.

      • chuckula
      • 5 years ago

      I don’t even like Apple and that was too much.

        • Melanine
        • 5 years ago

        I don’t even like Samsung and I thought this “review” was grossly unfair to the Note 4. (Heck, I have not even used any of the Samsung smartphones – flip phones, maybe)

        This is not a Note 4 review. It is a “why the iPhone 6+ is better than the Note 4” editorial. Scott may be unconsciously biased for now, because the iPhone 6+ is what he likes and uses, but sooner or later you will see deliberate bias disguised as objectivity.

        To be blunt, I am guessing he already knows. Like, “because I do not read Korean” is a laughable excuse not to write about the S Pen. Was he able to read “Adaptive mode” or “Basic mode” in Korean? I suppose those are written in Korean. He surely read many reviews of the Note 4 prior to writing his own. It does not take a genius to look at the icons and take a guess at their functions. Even I (and I have not used the Note products) know what the first and last options of the S Pen pop-up menu do, just by looking at the icons.

        He just did not want to cover it. Plain and simple. He did not want to cover it because it is not in iPhones, and thus it is not something he has used or will use in the future. It is just not interesting to him. (Forget the potential consumers who might be reading this review hoping to learn about it.) And since it is what the iPhone lacks, it doesn’t look good for Apple by default. So there. But worry not. I am fairly confident that he will have in-depth coverage of a stylus if and when Apple introduces one.

        I am no fan of any of the hardware manufacturers. I am also no fan of double standards and deceptions.

      • tipoo
      • 5 years ago

      Condescending tone aside, I’d be down for some real-world tests apart from benchmarks: a marathon of sorts, to see who opens 10 apps and then multitasks backward through them the fastest.

    • RdVi
    • 5 years ago

    Typo:
    [quote]Keep in mind that the 6 Plus has a [b]2195[/b]-mAh battery, while the Note 4’s battery capacity is a tad higher, at 3220 mAh.[/quote]
    Should be 2915-mAh.

      • Damage
      • 5 years ago

      Doh, fixed.

        • chuckula
        • 5 years ago

        That changes this paragraph from one of intentional sarcasm to one of actually meaning a “tad higher”….

    • anotherengineer
    • 5 years ago

    [url]http://cdn.memegenerator.net/instances/40268574.jpg[/url]

    • kukreknecmi
    • 5 years ago

    Practically, it is almost impossible for the T760 to achieve 17 ops. There was a thread on the ARM dev forums, I think. They didn’t want to disclose assembly debug output, yet they gave a code example in which 17 ops seems too hard to achieve. Those 7 dot-product flops are tied to the normal vector units and mostly seem impractical to achieve. So mainly the T760 is 10 ops max (4 MADs, or 5 MADs if you include the scalar unit).

    5 ops / tripipe
    10 ops / core

    10 ops → 20 flops for MP1,
    so for 700MHz with MP6: 20 flops x 6 x 700MHz = 84 Gflops

    Probably that’s why the ALU scores are a bit low. ARM wanted to put the T760 in balance from an ALU/texture performance perspective.

    • blastdoor
    • 5 years ago

    Very interesting review — thanks! Great to get some real performance measures on an A57 based SOC.

    It looks like Apple’s A8 will only lose the smartphone performance crown when the A9 comes out. The A57 looks to be totally inferior to Cyclone.

      • chuckula
      • 5 years ago

      Who called it back before the phones even launched?
      THIS GUI!

        • blastdoor
        • 5 years ago

        Did anyone disagree?

          • chuckula
          • 5 years ago

          SSK.

    • September
    • 5 years ago

    [quote]Yeah, I need lotion[/quote]
    I would recommend some Biotin (B7) vitamins in the winter too...

    • SamBennett
    • 5 years ago

    If you’re talking about the SM-N910H (which appears to have the same specs), I can confirm that the rapid charging works (awesomely), even in the US. We bought a couple off the gray market and “accidentally” ended up with this version instead of the US version. Really fantastic phone, but I’ve noticed some odd performance issues. When playing Minecraft with my son, for instance, I’ve noticed some jerkiness in the motion. I haven’t experienced this with any other phones, including the Xiaomi Redmi Note 4G.

    The biggest bummer using this phone in the US, however, is the lack of LTE; HSPA+ is the best I’ve been able to muster.

    • NeoForever
    • 5 years ago

    [quote]..in order to get a version of the Note 4 with stock Android Lollipop pre-installed[/quote]
    I’ve upgraded to Lollipop on my (Google’s OWN!) Nexus 10. All fancy animations and eye candy at the expense of performance, plus buggy operation. It’s the first OS update that I don’t consider an improvement in any practical way. Has anyone used Lollipop and liked it better than KitKat?

      • ferdinandh
      • 5 years ago

      Yes. I can multitask slightly better and adjust the screen brightness on the fly.

      • jessterman21
      • 5 years ago

      I wouldn’t say it’s slower – in fact I get a little snappier performance on my 2012 Nexus 7. Just different…

      Now what I really like is the Lollipopped Google Now Launcher for my phone 🙂

      • tuxroller
      • 5 years ago

      I haven’t. It multitasks slightly worse, IMHO.

      I’d love to see some benchmarks that validate the move to ART.

      • End User
      • 5 years ago

      The Nexus 10 is over two years old. It was not exactly a powerhouse when it came out.

      • BabelHuber
      • 5 years ago

      I do prefer Lollipop:

      I have installed it on my Samsung Galaxy S4 (GPE ROM) and on my Sony Xperia Z2 Tablet (Official CM12).

      Both devices run very smoothly and perform well.

      Perhaps you should try a factory reset. Or head over to xda-developers.com and check the forums; you can't be the only one with this problem.

      • Ninjitsu
      • 5 years ago

      I do like Lollipop better; there's a slight hit to battery life, but it still lasts me 1 day and 12 hours on average.

      Some apps are ever so slightly buggy (WhatsApp, for example), but it's mostly okay. It is slightly slower, but I like the notification drawer.

      I’m using the 2nd Gen Moto G.

      • Kretschmer
      • 5 years ago

      My Nexus 4 became quite buggy after switching to the Lolly.

      This OnePlus One *screams*, though!

    • southrncomfortjm
    • 5 years ago

    TouchWiz is just a non-starter. Too many years on stock Android, though I recently switched to a OnePlus One, which is kinda-sorta stock, but still blazingly fast. It's too bad; I'd really like to buy the successor to the Galaxy Tab S 10.5, but I don't want to deal with the bloat. Guess the successor to the Shield Tablet will have to do.

      • ferdinandh
      • 5 years ago

      The Shield Tablet is awesome for its price. I hope they can improve battery life, storage performance/size, the buttons, the (already good) speakers, and add tap-to-wake and, please, wireless charging.

        • southrncomfortjm
        • 5 years ago

        I’d like to see a 9-inch version of the Shield Tablet with 32GB of storage standard. Stop messing around with 16GB.

        What I’d really like to see, too, is for Nvidia to basically copy the concept of the WikiPad and attach the controller elements to the sides of the tablet. Makes so much more sense than a separate Xbox 360-esque controller.

    • sweatshopking
    • 5 years ago

    OH, IT’S ANDROID? YAWN.

      • Ninjitsu
      • 5 years ago

      We need special “Trolling” points for you. Sorry, I mean “TROLLING” points. :p

    • USAFTW
    • 5 years ago

    Although the new Exynos SoC is probably not as fast as the A8 and A8X, I'd argue that it's more than capable for the workloads you would typically expect to run on a phablet of this tier.
    There are definitely ways of fixing the consumer's reluctance to upgrade to the latest and greatest. The Lollipop update for my Nexus 7 definitely made it slower, even though I tried every trick to skip it. Even if I go as far as nuking everything and doing a clean install of KitKat, it will download and install, and there's no way to choose not to install it. I wonder how it works with iOS updates…

      • tipoo
      • 5 years ago

      In what way do you mean it's a better match for phablet workloads? I'm guessing split-screen multitasking? I still feel a dual-core with much higher per-core performance would see better utility there than the cluster of A57s. Sure, this can theoretically throw in the A53s if the governor finds the workload worthy, but within SoC TDP constraints it's probably best to dedicate the power draw to the larger cores in high-load scenarios.

      I'm really not sure what would come out on top, then. I think three cores might just be the sweet spot; I'll be curious to see if the iPhone 6S has three like the iPad Air 2 does.

        • blastdoor
        • 5 years ago

        I agree — two beefy cores is way better for a smartphone.

        Many things that could be parallelized to take advantage of 4+ cores could be more efficiently executed on a GPU and even more efficiently executed in fixed function hardware (like video encoders/decoders).

        Come to think of it, I’d rather have two beefy cores in a PC than 4 wimpy cores, too (sorry AMD fans, but it’s true!)

    • GrimDanfango
    • 5 years ago

    No interest in the phone, but I can’t wait to see this screen in the Oculus Rift 🙂

    • spiked_mistborn
    • 5 years ago

    Great article! I would love to see a comparison of this SOC running 64-bit code to see if it has any improvement.

    Also, are the storage scores for the Shield tablet and Nexus 7 transposed?

      • Damage
      • 5 years ago

      Nope! Those are the scores. Can’t vouch for PassMark’s quality, but that’s what they scored.

        • willmore
        • 5 years ago

        Nexus 7 2013 edition is NAS Ready!

    • Eaglehugo
    • 5 years ago

    Once again… some fogging is in the air…
    I have a Note 4 910C and I get:
    1280 in single-core
    4120 in multi-core…

    Would be nice to have really independent reviews…

      • nico1982
      • 5 years ago

      Are you really complaining about a 5% difference in Geekbench, or are you just being sarcastic? :O

      Such a small delta is easily attributable to something as minor as a different room temperature on a thermally constrained device like a smartphone, let alone whatever was run on the phone right before the test or which services were running in the background.

    • tsk
    • 5 years ago

    I too am a sworn enemy of TouchWiz and similar skins from others.
    It's like adding Rainmeter to Windows 7: just terrible.

    • albundy
    • 5 years ago

    Just curious: was this phone tested with a clean stock ROM, or with the TouchWiz bloatware from Samsung?

      • The Dark One
      • 5 years ago

      There’s no Google Play Edition of the Note, so there has never been a stock image for it. Any ROM would be a third-party effort.

    • BoBzeBuilder
    • 5 years ago

    The Note 4 is one seriously sweet phone. I’ve been using it for months now and there’s nothing I can complain about.

    • fredsnotdead
    • 5 years ago

    I wonder if we'll ever see big.LITTLE in x86, using, e.g., Atom and Core together in a notebook processor. Maybe it wouldn't be worth the extra die space. How much does using big.LITTLE depend on the OS?

      • Ninjitsu
      • 5 years ago

      I doubt it; Intel would rather use Hyper-Threading. ARM has to do this because they started hitting an efficiency wall with the A15.

      • MadManOriginal
      • 5 years ago

      More likely fixed hardware for certain things. We already see this for video decode, for example. Race to idle is a better method for the chips Intel makes.

      • blastdoor
      • 5 years ago

      big.LITTLE is a hack. Intel and Apple don’t have to resort to such silliness. They can design a single core that can effectively scale performance and power.

        • tuxroller
        • 5 years ago

        It's not a hack. Not really. You can absolutely use DVFS and get similar results; that's what Qualcomm has been doing (edit: in their leaked roadmap, Qualcomm looks to be moving to big.LITTLE with their next custom arch). The issue with DVFS is the penalty of moving from the various S-states to P-states. When to move into which state is EXTREMELY far from trivial. On Linux, Intel even broke out of the standard DVFS framework to build their own P-state driver (with arguable results). The point being, it's not a solved problem.
        ARM's solution provides a massive dynamic scaling window AND shorter latencies than moving a CPU from an S-state to a P-state. This makes sense given big.LITTLE's very specific target of tiny-TDP packages. On servers it's less of an advantage.
        To be clear, determining which processes to migrate between cores, and when, is a very difficult problem, so we'll see how this all shakes out.
        Also, the ARM cores can, and obviously do, make use of DVFS. It's just that race-to-idle isn't their only option for keeping battery life long.
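
        To illustrate how non-trivial the migration decision is, here is a deliberately naive Python sketch of a big.LITTLE policy; the thresholds and the hysteresis margin are invented for illustration and have nothing to do with ARM's actual global task scheduling heuristics:

            # Naive policy: move a task to the big cluster when the LITTLE
            # core saturates, move it back when the big core is mostly idle,
            # and use a margin so borderline loads don't ping-pong.
            UP_THRESHOLD = 0.85    # LITTLE-core utilization seen as saturated
            DOWN_THRESHOLD = 0.30  # big-core utilization seen as wasteful
            MARGIN = 0.05          # stand-in for the cost of migrating state

            def next_cluster(utilization, on_big):
                if not on_big and utilization > UP_THRESHOLD + MARGIN:
                    return "big"     # can't keep up; worth paying the move cost
                if on_big and utilization < DOWN_THRESHOLD - MARGIN:
                    return "LITTLE"  # light load; save power on the small core
                return "big" if on_big else "LITTLE"  # otherwise stay put

            print(next_cluster(0.95, False))  # -> "big"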

    • RdVi
    • 5 years ago

    I think the most disappointing thing about the Exynos Note 4 is the hobbled memory bandwidth. They've clearly done this to tie the performance down to Snapdragon 805 levels, but it means we just can't take too much away from this review, performance-wise, with regard to future chips. I would expect that on the graphics side especially, the Mali-T760 would likely see a huge boost from higher bandwidth.

    Very glad you guys got hold of this phone for review regardless. I had a look at an S805 one in a store recently, and the screen is simply amazing. It's a shame about the colour shift, though, and I hope they can improve on that in the near term.

      • willmore
      • 5 years ago

      Hobbled implies that they intentionally made it lower than it could be. Do you have any evidence of this?

        • RdVi
        • 5 years ago

        http://www.phonearena.com/news/Samsung-Galaxy-Note-4-benchmarks-Snapdragon-805-vs-Exynos-7-Octa_id63111

        They employed a dual 64-bit (128-bit) bus at 800MHz on the Snapdragon Note 4; what other reason could they have to halve that on the Exynos version than keeping performance between the two somewhat on par? Somehow Basemark OS II gives the Exynos a faster 'memory' score, but honestly I think this is down to cache, since we know mobile benchmarks aren't the most accurate. There is such a gulf between the two on paper that the benchmark simply cannot be accurate. I really expect the Exynos version to be a poor choice for gaming and heavy media use because of their memory choice.
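
        The peak-bandwidth arithmetic behind that halving claim is simple to sketch; this assumes LPDDR3 at 1600MT/s on both variants, with the Exynos bus width inferred from the claim above rather than from a datasheet:

            # Peak theoretical bandwidth = bus width in bytes x transfer rate.
            def peak_gb_s(bus_bits, mt_per_s):
                return bus_bits // 8 * mt_per_s / 1000

            print(peak_gb_s(128, 1600))  # dual 64-bit bus: 25.6 GB/s (S805)
            print(peak_gb_s(64, 1600))   # half the width: 12.8 GB/s (Exynos, if halved)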

          • tuxroller
          • 5 years ago

          Qualcomm is using a custom design while Samsung is using the ARM design.
          It's as simple as that.
          Blame ARM for "hobbling" their memory interface, if you wish.

    • MadManOriginal
    • 5 years ago

    3GB of RAM…I bet only 2.625GB of it is fast.

    Kappa

      • chuckula
      • 5 years ago

      Nah, it’s on a smartphone. I’m willing to bet it’s ALL SLOW.
      But it’s UNIFORMLY slow, so it’s not cheating.

    • Sam125
    • 5 years ago

    Thanks for the Saturday review Scott. I like the Note 4 and the only reason I’d be interested in it is for the stylus. I think the display and camera are an added bonus but not significant reasons for wanting a smartphone with a stylus.

    • hubick
    • 5 years ago

    These are too small. I want a premium phone with a 7″ screen. I may settle for the 6.44″ on the Xperia Z4 Ultra though.

      • shank15217
      • 5 years ago

      There is one; it's called the Nexus 7 LTE.

        • hubick
        • 5 years ago

        As far as I know, it can't make regular phone calls over the cellular network, only data.

        I need something with AWS 1700MHz support for Wind in Canada (like T-Mobile uses). I know the Xperia phones support that and pretty much everything else.

      • Sam125
      • 5 years ago

      http://www.bluproducts.com/index.php/studio-7-0
      http://www.amazon.com/BLU-Studio-Unlocked-1-3Ghz-Android/dp/B00PKM1QBA

      • MadManOriginal
      • 5 years ago

      http://www.engadget.com/2014/06/01/samsung-7-inch-galaxy-w/

        • hubick
        • 5 years ago

        Yeah, it doesn't support the AWS 1700MHz I need 🙁

        Plus, what's with Android 4.3 instead of 5?!

          • MadManOriginal
          • 5 years ago

          It’s Samsung…you get ONE android version upgrade! One free upgrade is all anyone could want, right?!?

            • willmore
            • 5 years ago

            Is that one upgrade called Cyanogenmod? 😉

            • derFunkenstein
            • 5 years ago

            Nah, usually about 6-8 months after Google releases an update, your phone will get it, provided it's within the first 2 years since the device's release AND it's a flagship family like Galaxy S or Galaxy Note.

            • RdVi
            • 5 years ago

            I'm not sure how it is in the US, but here my S2 went from 2.3 to 4.1 (2 upgrades). My step-dad's S3 went from 4.0 to 4.3 (3 upgrades). And my girlfriend's S4 has come from 4.2 to 4.4, with 5 to come shortly. Seems fine to me. The timeframe, on the other hand, has always been slow, especially if you buy from a local carrier and don't import. Importing is what I opted to do this time with my S5, mainly for price reasons, but hopefully for faster updates also.

      • CaptTomato
      • 5 years ago

      Get a 4G/cellular tablet….?

      • ferdinandh
      • 5 years ago

      Why not 14″?

    • tipoo
    • 5 years ago

    It's certainly interesting to see what Exynos chips have been up to, as the Galaxy S6 is said to have dumped the Snapdragon 810 for its supposed power draw and heat issues. So we may soon see an Exynos-based smartphone shipping in volume in North America.

    • brucethemoose
    • 5 years ago

    Those JavaScript benchmarks… mobile SoCs are within spitting distance of big desktop CPUs now. Heck, if Apple makes a new core for the A9, our PHONES could be on par with slower Kaveri cores in 2015.

    I, for one, welcome our new ARM overlords.

      • jjj
      • 5 years ago

      Denver is rather big; in Geekbench it does some 2k points single-core vs. some 1.2k for this one, 1.6k for the iPhones, and 1.8k for the iPad. Geekbench is more appropriate for testing the CPU than JavaScript benchmarks.
      Denver in single-core almost catches up with Nehalem (an i7-920) at current clocks, and a quad Denver should be doable on 16nm FF+ at slightly higher clocks.

        • Ninjitsu
        • 5 years ago

        I remember Denver being fairly inconsistent in performance.

          • blastdoor
          • 5 years ago

          Indeed. And does Denver have more than a single design win?

      • MadManOriginal
      • 5 years ago

      Javascript has a lot to do with the web browser too…it is really more of a web browser optimization benchmark honestly.

      • auxy
      • 5 years ago

      What? Not even close.

      My 4790K scores over 50K on Octane. That's FIFTY THOUSAND. The iPhone 6 scores a measly 7613.

      My 4790K scores 59.1 seconds (edit: milliseconds) on Sunspider using IE11. Much faster than the result in the article.

        • Ninjitsu
        • 5 years ago

        59.1 ms, I’m sure? 😉

          • auxy
          • 5 years ago

          Aha! (*´ω`*)

            • Ninjitsu
            • 5 years ago

            Is that a pig’s face or a human’s face? :/

            • auxy
            • 5 years ago

            neko da~ (it's a cat~)

        • deathBOB
        • 5 years ago

          50,000 / 88W = 568.18

          7,613 / 2(?)W = 3,806 = god phone confirmed

        Seriously though, the rise in mobile chip performance is crazy. It’s just too bad the desktop market isn’t as competitive.

          • auxy
          • 5 years ago

          >implying IE11’s javascript engine is multithreaded
          >implying it fully loads even one core
          >implying race to idle doesn’t mean I actually used less total power
          (・∀・)

        • ferdinandh
        • 5 years ago

        IE11 doesn't actually run a lot of the SunSpider sub-benchmarks. Don't use SunSpider as a benchmark for CPUs, and don't use IE11 with SunSpider; it's worse than guessing at the performance.
        You could have known that, because SpiderMonkey and V8 are both much faster than Chakra. So anytime Chakra is faster, it means they are cheating.

          • auxy
          • 5 years ago

          Hmm? SunSpider has test validation to check for such cheating: https://www.webkit.org/blog/2364/announcing-sunspider-1-0/

            • ferdinandh
            • 5 years ago

            IE11: 84.7ms
            Chrome 40: 207.9ms
            Firefox 38: 142.8ms
            Yeah, looks fine to me. No cheating going on here!

            • auxy
            • 5 years ago

            Sure. What’s the problem? (。´・ω・)?

            Is it just because IE is beating out Chrome and Firefox? So sorry that the evil empire has the best product. I’ll try not to stomp on any more of your unjustified prejudices. (*‘∀‘)

        • derFunkenstein
        • 5 years ago

        weird, your 4790K is 3x as fast as the 4790K in the article. Must be a browser difference? My 3570K at 4.5GHz can only get it down to around 135ms.

        nvm I see you’re using IE11. When I run it in IE11 I get around 60ms.

      • ferdinandh
      • 5 years ago

      Ah, spitting distance is a 300% deficit. I will tell AMD that.

        • Beelzebubba9
        • 5 years ago

        A 300% performance difference isn’t bad when put in context of a ~2,000% power consumption delta.

          • Pettytheft
          • 5 years ago

          Yes, but these leaps and bounds will soon hit a wall, much like desktop CPUs/GPUs have.

            • Beelzebubba9
            • 5 years ago

            Oh sure, they're all playing by the same laws of physics. I was just pointing out that a 300% performance difference seems pretty good compared to the power consumption.

            • tuxroller
            • 5 years ago

            What law of physics states that efficiency has to fall as power goes up?
            This seems more an observation from very few samples, especially where architectures are concerned.

            • Beelzebubba9
            • 5 years ago

            Oh sorry, that's not what I meant. I was referring to ARM and AMD64 CPUs all being bound by the same constraints at the chip level, so boosting ARM designs to the performance level of Intel's Core CPUs would likely erase whatever power:perf advantage they have on the low end.

            It wasn't an interesting point, really. 🙂

            • tuxroller
            • 5 years ago

            Ah, that makes a bit more sense. So much for contextual reading. :)
            I'd probably still be a bit wary about such statements, since architecture and implementation are the deciding factors, and ARM is fairly different from x86 (implementation-wise, SoCs benefit from using the same process node, AIUI), but probably not enough to overcome the nonlinear costs of going increasingly OoO and, obviously, frequency scaling (which IS a law of physics).

            • Beelzebubba9
            • 5 years ago

            Yeah, that's exactly what I was trying to get at: the performance:power of the A8/Cyclone Enhanced SoC is seriously impressive, but it'd likely lose much of its efficiency if Apple tried to scale it to Core levels of performance.

            I really would like to see better benchmarks of the Core M vs. Denver vs. A8X relative to actual power consumption. I'd guess Core M would mop the floor due to its significant process advantage, but the gap might not be as extreme as I'd expect.

    • jjj
    • 5 years ago

    A lot to be said here:
    - I assume the die area for the core clusters includes the cache; the cores themselves should be a lot smaller, but can you make that clear? Edit: are those numbers really for A57/A53 on 20nm? A15+A7 quad clusters with cache were at about that size on 28nm, so either the new cores are far bigger than expected or the cache scales very poorly from 28 to 20nm, and even poor cache scaling doesn't seem enough to explain those sizes.
    - The multicore scaling in Geekbench is a bit odd, and the SD810 scales poorly too. On the other hand, the octa-core Mediatek A53 at 1.7GHz gets to about 4k points (800+ single-core), so it scales a lot better, though I assume that's more a TDP matter than anything else. The Mediatek is also just single-channel and supports only up to 800MHz DDR3, which cripples its score a bit. The SD615 scales poorly too, to the point that it's uncertain whether it ever uses more than 4 cores.
    - I am curious about both perf and battery life when browsing encrypted sites vs. older devices, since A57/A53 got a huge boost in encryption, but I'm not sure how much of a difference it makes in browsing.
    - That GPU seems rather big; it is on 20nm, after all, and others have smaller GPUs on 28nm, so I'm not sure its perf is all that great.
    - I do wish you had the Nexus 9 and some 1.5GHz-or-higher A53 SoC in the charts.
    - The soon-to-arrive 7420 is quite a bit faster: http://browser.primatelabs.com/geekbench3/1780313

      • tuxroller
      • 5 years ago

      Do you have a link to the SD810 results?

        • jjj
        • 5 years ago

          Go on Geekbench and search for msm8994 (did it for you: http://browser.primatelabs.com/geekbench3/search?utf8=%E2%9C%93&q=msm8994 ). The CPU perf is in line with this one, although Qualcomm seems to have some problems with encryption perf, but that might be software and on its way to being fixed. There also seems to be a new revision of the SD810, but the early tests don't show better perf. SD810 scores also seem to lack consistency; no idea if it's early software or throttling. The clocks are just low on 20nm for any huge gains; the 7420 on 14nm is far more exciting. It also showed up in GFXBench, and you can compare it there with other devices. Look at offscreen results, since the tested device has a 4K screen: https://gfxbench.com/device.jsp?os=Android&api=gl&D=Qualcomm+MSM8994+for+arm64+%28Adreno+430%2C+development+board%29&benchmark=gfx30 There should also be some AnTuTu scores for the Xiaomi Note Pro, which uses the SD810, but I can't take AnTuTu seriously; that's for noobs.

          • tuxroller
          • 5 years ago

            Thank you so much! I've been googling for benchmarks of this chip for a while and not managed to find anything.
            About the GB scores for the Exynos 7, did you notice the clocks were listed at 1.5GHz?

            • jjj
            • 5 years ago

            The clocks listed are for the A53 cores; that's the case for the other SoCs too.
            Given the 25% gain in perf, I assume the 7420 is at 2.4-2.5GHz, plus maybe DDR4, but we don't really have details yet. We should see it in the G6 soon. I expect the G6 to hit stores in April or so (and, if it matters, I'm 95% certain it has an area fingerprint sensor from Synaptics).
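
            That 2.4-2.5GHz guess follows from assuming Geekbench scales roughly linearly with clock speed and that the 5433's A57 cluster runs at 1.9GHz; both are assumptions, and the reply below arrives at a lower estimate:

                # Implied 7420 A57 clock if Geekbench scales ~linearly
                # with frequency from the 5433's assumed 1.9GHz.
                base_clock_ghz = 1.9
                perf_gain = 1.25  # the "25% gain" cited above
                print(base_clock_ghz * perf_gain)  # 2.375, i.e. ~2.4GHz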

            • tuxroller
            • 5 years ago

            I realized that about the listed clocks, which made me a bit concerned about the results GB returned. It might be as simple as GB not having the capability to correctly report aSMP systems.
            The gain in performance is what ARM told us to expect. IIRC (vs. an A15, unknown revision): 50% faster at integer workloads with 64-bit instructions (around 30% in 32-bit mode), a 35% improvement for floating point, and around 20% for the memory-intensive STREAM.
            So clocks are probably around 2.1GHz (the same as what Qualcomm has claimed the 810 would use for the A57s).
            I don't think we'll see DDR4 in Exynos since, IIRC, ARM's designs are still using DDR3.

            • jjj
            • 5 years ago

            The SD810 is 2GHz (well, just under, but listed as 2GHz), and DDR4 support is there for the SD810 and likely the 7420.
            The 7420 is clearly clocked well higher, no doubt; we just don't know for sure how high they can go on 14nm.

            • tuxroller
            • 5 years ago

            Mea culpa. You're absolutely right about the SD810.
            I'd be extremely surprised if Samsung had already moved to 14nm mass production, since they only released their first 20nm chip back in September, IIRC. That would be an insanely fast cadence, especially for what Intel has found to be a difficult node (besides, isn't 14nm where Samsung moves to FinFET?).

      • Ryszard
      • 5 years ago

      The areas for the CPUs include the caches, and they're correct. The A53 in particular is a lot bigger than the A7.

        • jjj
        • 5 years ago

        I find that hugely unexpected. On 28nm, the two clusters together were just under 20mm²; for just the cores (without cache), the available numbers put the A7 at 0.45mm² and the A15 at 2.7mm² on 28nm TSMC.
        I thought the new cores were not a lot bigger on the same process, while this is a full shrink. Guess it explains why the A57 can't clock higher, but then the perf is rather disappointing.
        The A53 at least brings nice perf gains, and I wonder how high it could be clocked in the same TDP as an A57.

    • revparadigm
    • 5 years ago

    I've already influenced 3 people I know to get Note 4s after they simply handled mine first-hand and had a little chance to navigate around in it. The phone is simply amazing, from the display to the S Pen.

    I think the Galaxy S6 is going to be just fine without Qualcomm inside, too.

    • juampa_valve_rde
    • 5 years ago

    In December I had another variant for a couple of days, the Note 4 Duos (Snapdragon 805, dual-SIM), and it was really nice. It was a pity that it was based on the old platform, but it felt surprisingly snappy for such a big-ass resolution. The terrible thing was the Chinese/Korean interface and TouchWiz; it was bad enough that this phone with CyanogenMod or AOSP KitKat would have been much better rounded and more practical.

    • ssidbroadcast
    • 5 years ago

    I’d like to hear the commentary about this on the next podcast. In the Display section of the article, the pictures look good but the light-box product photos make the screen look like it has a blue-shift. Is that just from adjusting the Levels in post?

      • Damage
      • 5 years ago

      The white balance in my photos is geared toward the lighting and backdrop, not the display’s temperature. Rest assured the display is as good as the colorimeter readings suggest.

    • ultima_trev
    • 5 years ago

    My LG G3 now weeps in its technical obsolescence. 🙁

    Excellent review by the way, it makes me anticipate the Qualcomm implementation.
