AMD’s A4-5000 ‘Kabini’ APU reviewed

AMD has been surrounded by a fair amount of gloom for the past couple of years, but the firm’s low-cost and low-power Brazos platform has been a consistent bright spot in spite of everything. The E-series APUs based on Brazos have saturated the low end of the laptop market, helping to send traditional, functionally hobbled netbooks to their doom. AMD’s new leadership has repeatedly spoken about the virtues of Brazos as a business. They like it because it’s high-volume—you tend to move a lot of chips when you sell ’em cheap—and because Intel hasn’t competed vigorously against Brazos, apparently for fear of eating into its low-end Core i3 business.

The follow-up to Brazos is a single chip, known by the twin code names Kabini and Temash, that packs four CPU cores, a miniature Radeon graphics processor, and everything else you need for a functional PC onto a tiny slice of silicon. The true competition from Intel will be the Bay Trail part based on the Silvermont architecture, but it’s not slated to arrive until later this year. In the meantime, AMD will have something truly distinctive to offer: a quad-core SoC that’s fully PC compatible, very affordable, and fits into various sorts of sleek, slim systems. Kabini will aim for laptops—think ultra-thin systems with long battery life for under 500 bucks—and low-cost desktops. Meanwhile, Temash will target tablets between 10.1″ and 11.6″ in size that are roughly 10mm thick, maybe a bit less. Imagine a tablet that sits between the Microsoft Surface and Surface Pro in price, size, and performance and you’ll have the basic idea.

A true PC system on a chip

Some time in the past couple of years, pretty much everybody in the PC industry started calling their CPUs “SoCs” or systems-on-a-chip. It’s trendy, sounds like what Apple does, and is therefore entirely irresistible within a 100-mile radius of the San Francisco Bay. Although the definition of a “real” SoC is a little wobbly, Kabini/Temash may have the best claim yet to being the first true PC SoC.

A perspective-heavy block diagram of Kabini/Temash. Source: AMD.

Naturally, then, this chip packs in a ton of components. The headliners are undoubtedly the four “Jaguar” CPU cores, based on an evolution of the Bobcat microarchitecture used in Brazos, and the integrated graphics processor, which is derived from the same Graphics Core Next (GCN) architecture as the Radeon HD 7000-series discrete GPUs. The GPU includes dedicated media blocks, with UVD handling H.264 decoding and VCE handling encoding, and the chip’s north bridge acts as a traffic cop, routing requests to the SoC’s single-channel DDR3 memory controller.

All of those elements might be familiar from past AMD APUs, but this SoC also incorporates all of the I/O functionality that has traditionally been built into a separate south bridge chip or “Fusion controller hub,” as AMD calls it. Branching out from Kabini are four PCI Express x1 links, two SATA 6Gbps disk interfaces, an SD card controller, two USB 3.0 ports, eight USB 2.0 connections, a gaggle of display interfaces including HDMI and DisplayPort 1.2, and a dedicated four-lane connection for an optional discrete GPU. Oh, and legacy I/O like keyboard ports and such are in there, too. Makes you wonder if there aren’t secret connections for a Turbo button and an EISA card.

Integrating all of these things together on one chip saves power, reduces the physical footprint of a system, and cuts costs, too. That’s why we keep seeing more and more integration over time. Kabini simply takes that concept to a logical endpoint by bringing aboard pretty much an entire small-scale PC.

The key words above, by the way, are “small-scale.” AMD tells us these chips are being manufactured by two different foundry partners, TSMC and GlobalFoundries, at 28-nm process geometries. We haven’t managed to wrangle the chip’s exact transistor count or die size yet, but I’ve held one in my hand, and it’s tiny. There has been some talk about how this SoC is closely related to the chips going into the PlayStation 4 and Xbox One—and it is, quite closely—but Kabini is scaled way down. The PS4, for instance, has eight Jaguar cores and 1152 GCN shader ALUs, while the Xbox One reportedly has eight cores and 768 shader ALUs. This chip has four cores and 128 shader ALUs. The memory bandwidth disparity is similarly huge between Kabini and the consoles, more than an order of magnitude. Although they share quite a bit of DNA, Kabini and Temash are aimed at much lower cost and power targets than the chips AMD has built for Sony and Microsoft.
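
To put rough numbers on that disparity: a single 64-bit channel of DDR3-1600, the fastest memory Kabini supports, tops out at

$$\frac{64\ \text{bits} \times 1600\ \text{MT/s}}{8\ \text{bits/byte}} = 12.8\ \text{GB/s}$$

while Sony quotes 176 GB/s for the PS4’s 256-bit GDDR5 subsystem, which is roughly a 14X gap.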

The Jaguar core

Although its bigger CPUs haven’t been as competitive as hoped lately, AMD has had a nice run with the Bobcat core used in the Brazos platform. Bobcat came out of the gate using out-of-order execution and only one thread per core, and as a result, it was about 20% faster than the Atom in our tests, especially in cases where applications weren’t readily multithreaded. Now, Intel has committed to a similar template for the upcoming, all-new Silvermont Atom architecture, with an emphasis on improving per-thread performance. Meanwhile, AMD has revised its low-power microarchitecture in a multitude of ways both big and small, and the result is the evolutionary step known as Jaguar.

Jaguar brings a few principal improvements over the prior generation in terms of power efficiency and performance, which are essentially two sides of the same coin these days. A host of tweaks throughout the core has produced a 22% gain in instruction throughput per clock, although that gain is more like 15% if you don’t factor in the impact of the larger L2 cache. Either way, the generational advancements are substantial. Also, Jaguar has been retooled for better frequency-voltage response, in part via the addition of a couple of pipeline stages, so the chip should consume less power at a given clock speed. Finally, the core has been tweaked for better power efficiency in other ways, too, including some unit redesigns and an expansion of the ability to gate off the clock signal from portions of the chip that are currently idle.

Even greater performance increases are possible by harnessing extensions to the x86 instruction set, and Jaguar adds support for a whole range of those, including the SIMD alphabet soup that is SSE 4.1, SSE 4.2, and AVX. Also supported are AES-NI encryption acceleration and F16C format conversions. Other new features suggest Jaguar may find its way into server systems soon, including the expansion of physical addressing to 40 bits and improved support for OS virtualization.
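
For the curious, software can confirm the presence of these extensions at runtime with the CPUID instruction. Here’s a minimal sketch in C using GCC’s <cpuid.h> helper; the bit positions are the standard CPUID leaf 1 ECX feature flags, not anything Jaguar-specific:

```c
#include <cpuid.h> /* GCC/Clang helper for the CPUID instruction */
#include <stdio.h>

/* Standard CPUID.1:ECX feature bits; these are x86-wide definitions,
 * not anything Jaguar-specific. */
#define BIT_SSE41 (1u << 19)
#define BIT_SSE42 (1u << 20)
#define BIT_AES   (1u << 25)
#define BIT_AVX   (1u << 28)
#define BIT_F16C  (1u << 29)

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1; /* CPUID leaf 1 unavailable */

    printf("SSE4.1: %s\n", (ecx & BIT_SSE41) ? "yes" : "no");
    printf("SSE4.2: %s\n", (ecx & BIT_SSE42) ? "yes" : "no");
    printf("AES-NI: %s\n", (ecx & BIT_AES)   ? "yes" : "no");
    printf("AVX:    %s\n", (ecx & BIT_AVX)   ? "yes" : "no");
    printf("F16C:   %s\n", (ecx & BIT_F16C)  ? "yes" : "no");
    /* Production code would also check the OSXSAVE bit (27) and use
     * XGETBV before relying on AVX, to confirm OS register support. */
    return 0;
}
```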

Functional block diagram of the Jaguar core. Source: AMD.

Above is a functional block diagram of the Jaguar architecture. Although there are tweaks throughout the core that contribute to the IPC gains, the most sweeping changes are reserved for the floating-point unit, which is a total redesign. The new FPU is 128 bits wide, twice the width of Bobcat’s, and is responsible for executing many of those extended SIMD instructions like SSE and AVX. With single-precision datatypes, the execution hardware can perform four multiplies and four adds per cycle. For double-precision math, the rate is one multiply and two adds per clock.
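
Taking those issue rates at face value, the theoretical peak for a quad-core, 1.5GHz part like the A4-5000 works out as follows. (This is a paper ceiling, not a measured result.)

$$\text{SP: } 4\ \text{cores} \times (4\ \text{mul} + 4\ \text{add}) \times 1.5\ \text{GHz} = 48\ \text{GFLOPS}$$
$$\text{DP: } 4\ \text{cores} \times (1\ \text{mul} + 2\ \text{add}) \times 1.5\ \text{GHz} = 18\ \text{GFLOPS}$$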

AMD says support for 256-bit wide AVX extensions is achieved by “double-pumping” the 128-bit execution units. In this case, “double pumping” means data are fed through the units in two passes, but the units do not run at twice the base clock frequency, as the Pentium 4’s integer ALUs did.

The physical floorplan of a single Jaguar core. Source: AMD.

Kabini’s four revised Jaguar cores are fed by a 2MB L2 cache shared via a common interface that connects to each core individually. Sharing a cache in this way has several benefits. In light workloads where one or more cores are inactive, the busy cores will effectively have more L2 cache capacity available to them, improving per-thread performance. Meanwhile, because the L2 cache replicates the contents of the cores’ L1 caches, the L2 can act as a probe filter for coherency traffic, facilitating more efficient multitasking.

AMD has put some work into the L2 interface, which makes sense since it’s the cores’ only path to the rest of the system. The L2 interface runs at the full speed of the CPU cores and has built-in smarts, including the ability to store L2 tags, so it knows which portion of the cache to light up when the time comes to access one of its four 512KB banks. When those L2 cache banks aren’t needed, they’re clock gated to save power. AMD further conserves power by clocking the L2 arrays at half the frequency of the CPU cores.

GCN graphics scaled down

Kabini is the first APU to incorporate a graphics block based on AMD’s GCN architecture, and this addition grants the SoC a rich suite of graphics- and compute-focused features, including support for the DirectX 11.1 graphics API and the OpenCL media and compute API. Also, crucially, the presence of GCN hardware makes Kabini compatible with the latest AMD Catalyst graphics drivers, which should translate into solid compatibility with the latest applications and relatively frequent driver updates.

Kabini’s integrated graphics processor – logical diagram. Source: AMD.

As we’ve already noted, Kabini’s graphics have been scaled down pretty massively in order to fit into the power envelopes in question. The chip has only two GCN compute units, or CUs, with a total of 128 shader ALUs and eight texels per clock of filtering capacity. A single render back-end offers four pixels per clock of blending throughput. For this class of product, these choices are sensible, but we’ve already established how much grander the scale is in this chip’s console siblings. Compare, also, to the Radeon HD 7790; that $149 graphics card has 14 compute units, for a total of 896 shader ALUs, and 16 pixels per clock of ROP throughput. So, you know, don’t expect the world from Kabini’s graphics, even though they’re likely to be the best in their class.

Heck, I’m a little surprised AMD was able to squeeze its full-fat desktop GPU architecture into a chip of this class in any form. AMD has made a few adjustments to adapt GCN to this sort of deployment. This is, in fact, a newer version of GCN than you’ll find in most current Radeons; it includes some instructions to facilitate memory sharing between the CPU and IGP. Also, in Kabini, the number of banks in the local data share in each CU has been reduced from 32 to 16. Beyond that, as far as we know, the only other power-saving measures are at the physical design level, where transistor selection was optimized for low-power operation.

Power management

Power management in an SoC like this one is paramount. As AMD’s Sam Naffziger told us, if all portions of Kabini were turned on at once in a typical system, it would drain the battery in less than an hour and probably melt part of the system case. These chips can only fit into their prescribed power envelopes by constantly adjusting themselves.

To that end, Kabini has a fairly sophisticated power management setup, similar in basic capability to what’s built into AMD’s larger Trinity and Richland mobile APUs. At its heart is an onboard 32-bit microcontroller with its own memory that takes inputs from a range of sources across the chip, including power monitors in the CPU cores, the GPU, the display interface, and the FCH. The power controller estimates total power use based on activity and can require an individual unit to ramp down its voltage and clock frequency in order to prevent the chip from exceeding its power budget and overheating.

Power sharing opportunities across the chip. Source: AMD.

Kabini includes power gates for each of its four cores and for its IGP, so power to these sections of the chip can be turned off entirely when one of those entities is idle. The combination of dynamic voltage and frequency scaling, power gating, and intelligent monitoring of power opens up opportunities to share power headroom between different portions of the chip via a mechanism AMD calls Turbo Core. The concept is straightforward: if the GPU is idle and the CPU cores have work to do, the GPU can be shut down and its power budget shifted to the CPU cores. With more headroom, then, the CPU cores can increase voltage and frequency beyond their usual limits.
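
As a toy model of the bookkeeping involved, the reallocation amounts to summing activity-based power estimates and handing any slack under the TDP to whichever blocks are busy. The sketch below is our own illustration of the concept, not AMD’s firmware algorithm:

```c
/* A toy model of Turbo Core-style budget sharing. This is our own
 * illustration of the concept, not AMD's firmware algorithm. */
typedef struct {
    double estimated_watts; /* activity-counter-based power estimate */
    int    busy;            /* does this block have work queued? */
} block_t;

/* Sum the estimates for active blocks; anything left under the TDP
 * is headroom the busy blocks may convert into voltage/frequency. */
double reclaim_headroom(const block_t *blocks, int n, double chip_tdp)
{
    double committed = 0.0;
    for (int i = 0; i < n; i++) {
        if (blocks[i].busy)
            committed += blocks[i].estimated_watts;
        /* idle blocks are power-gated and contribute ~0W */
    }
    double headroom = chip_tdp - committed;
    return headroom > 0.0 ? headroom : 0.0; /* never exceed the TDP */
}
```

In the real chip, of course, the controller feeds that headroom back into voltage and frequency selection rather than just reporting a number.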

The user experience is often dominated by the performance of a single core. Source: AMD.

Similarly, a single active CPU core could borrow headroom from its inactive neighbors to range up to higher clocks temporarily. This provision can increase single-thread performance and improve the user’s sense of system responsiveness.

All good in theory, right? The strange thing here is that, at launch, only a single model of Temash supports Turbo Core: the 8W A6-1450 quad-core intended for tablets. None of the Kabini-derived parts do. They all can save power via Kabini’s DVFS scheme, but they can’t shift power around to extract more performance headroom.

AMD does have another trick up its sleeve, also named after forced induction, that may offset any loss of performance from the lack of Turbo Core: it’s called Turbo Dock. This feature is intended for dockable tablets like the Asus Transformer series. When the tablet is detached from its keyboard dock, the SoC inside will operate under the burden of a relatively low power limit, to preserve battery life and reduce heat. Once the tablet is docked, the TDP limit is raised, potentially to twice the limit of slate mode. AMD expects to achieve up to ~30% higher performance via this trick. We’ll have to see it in action in an actual product, of course, but the theory sounds good.

The products: A- and E-series APUs

Kabini and Temash are code names. Out in the market, AMD will use a different nomenclature to refer to these chips. The Kabini lineup will officially be known as the 2013 AMD Mainstream APU Platform, and it will include both A-series and E-series models, as outlined in the table below:

| Model | Radeon | TDP | CPU cores | CPU clocks | L2 cache size | Radeon ALUs | GPU clocks | Max DDR3 speed |
|---|---|---|---|---|---|---|---|---|
| A6-5200 | HD 8400 | 25W | 4 | 2.0GHz | 2MB | 128 | 600MHz | 1600MHz |
| A4-5000 | HD 8330 | 15W | 4 | 1.5GHz | 2MB | 128 | 500MHz | 1600MHz |
| E2-3000 | HD 8280 | 15W | 2 | 1.65GHz | 1MB | 128 | 450MHz | 1600MHz |
| E1-2500 | HD 8240 | 15W | 2 | 1.4GHz | 1MB | 128 | 400MHz | 1333MHz |
| E1-2100 | HD 8210 | 9W | 2 | 1.0GHz | 1MB | 128 | 300MHz | 1333MHz |

AMD says the A6-5200 will compete against Intel’s low-end Core i3 processors, while the A4-5000 will go up against slower Pentium models, and the E-series offerings will stack up against even slower Celerons. The company claims Kabini “completely outclasses” those Pentium processors, and it expects the chip to “dominate” the low-end notebook market.

While those are bold words, AMD touted similar positioning with its original Brazos platform. That was two years ago, of course. Today’s Pentiums and Celerons are faster, and Brazos is in no position to match them in a fair fight. Kabini’s extra performance should be instrumental in helping AMD recapture lost ground there.

The Temash lineup is known as the 2013 AMD Elite Mobility Platform, and it only includes three A-series models:

| Model | Radeon | TDP | CPU cores | CPU clocks | L2 cache size | Radeon ALUs | GPU clocks | Max DDR3 speed |
|---|---|---|---|---|---|---|---|---|
| A6-1450 | HD 8250 | 8W | 4 | 1.0/1.4GHz | 2MB | 128 | 300/400MHz | 1066MHz |
| A4-1250 | HD 8210 | 9W | 2 | 1.0GHz | 1MB | 128 | 300MHz | 1333MHz |
| A4-1200 | HD 8180 | 3.9W | 2 | 1.0GHz | 1MB | 128 | 225MHz | 1066MHz |

In tablets, all three of the A-series chips above will sit between the Atom and the Core i3. In other words, Windows 8 tablets based on Temash should be faster and a little more power-hungry than Atom tablets, but they shouldn’t be quite as big, bulky, or expensive as Core-powered slates like the Microsoft Surface Pro. That device isn’t so much a tablet as the open-faced sandwich version of an ultrabook.

The A4-1200 has a tighter power envelope than both of AMD’s existing tablet chips, the Z-60 and Z-01. Those offerings also have dual 1GHz cores, but they’re rated for TDPs of 4.5W and 5.9W, respectively, and they have fewer shader ALUs (80, versus 128 here). Considering Jaguar’s IPC improvements, it’s fair to expect the A4-1200 to deliver better CPU performance, better graphics performance, and better battery life than the Z-60 and Z-01.

Temash will also appear in what AMD calls “small screen touch notebooks.” In those systems, the chips will again slot in between the Atom and Core series, but they’ll have direct competition from low-end Pentium and Celeron processors.

The Kabini whitebook

Our sample Kabini system was a 13-inch notebook PC powered by the fastest 15W variant of Kabini, the A4-5000. This notebook isn’t a production system. Rather, it’s a “whitebook” bereft of corporate branding and assembled solely for testing purposes.

The system’s 13″ display lacks touch-screen capabilities, but it has a 1080p resolution, a matte coating, and what looks to be an IPS panel. Inside the chassis, there’s a single 4GB DDR3 DIMM, a 1TB Hitachi hard drive with a 5,400-RPM spindle speed, and a 45Wh battery. Connectivity includes USB 3.0, DisplayPort, VGA, and Ethernet.

At 3.83 lbs and 0.87″ thick, this thing is a little heavier and thicker than your average ultrabook. It’s still very thin and light, though, and AMD tells us that similar configurations could cost just $499 out in the wild. That would be a good $100-200 more affordable than the cheapest ultrabooks.

We asked AMD whether this whitebook was representative of a typical Kabini configuration. We were told that it’s “in the middle of what you might see.” The company expects the most inexpensive Kabini notebooks to be priced at just $399. Thanks to the processor’s tight power envelope, PC makers will have a wide range of display sizes to choose from—and there will no doubt be some touch screens in the mix, as well.

Our testing methods

We compared the performance of the A4-5000 whitebook to that of four systems:

  • A premium ultrabook, the Asus Zenbook Prime UX31A, which has a 17W Core i5 processor and is priced at $1,100 right now. Retail notebooks based on the A4-5000 shouldn’t cost anywhere near that much, but the Zenbook Prime gives us a high-water mark for performance in the ultrathin category.
  • A low-end ultrathin laptop, the Asus VivoBook X202E. This system has a 17W Core i3 CPU backed by single-channel memory, and it costs $399. In terms of both pricing and performance, this should be one of the most direct competitors to upcoming laptops based on the A4-5000.
  • An Atom-powered Windows 8 tablet, the Asus VivoTab Smart ME400C, which is priced just south of $430. This is one of the lowest-power Windows 8 systems on the market today. Its Atom Z2760 processor manages to squeeze dual 1.8GHz, Hyper-Threaded cores into a Lilliputian 1.7W power envelope. The ME400C is obviously not in the same league as the A4 whitebook, but it provides us with a performance baseline for an ultra-low-power x86 config.
  • A Mini-ITX desktop build based on AMD’s E-350 mobile APU. The E-350 is the A4-5000’s predecessor. It has two Bobcat cores, integrated Radeon HD 6000-series graphics, and an 18W thermal envelope. We were hoping to procure a notebook based on the E-350 (or the slightly quicker E2-1800) to run battery life comparisons, but we weren’t able to get one in time for our review. This desktop build is the next best thing; it will let us see how much Kabini has moved the ball forward.

You’ll find the full specs of those machines in the table below.

One more thing to note: the Atom Z2760 processor doesn’t run 64-bit software. That’s not a deficiency of the silicon; rather, it’s a product segmentation move by Intel. Either way, we had to test our tablet using 32-bit versions of our benchmark apps. In instances where those apps were available in both 32-bit and 64-bit versions, we tested 32-bit builds on the Atom, 64-bit builds on the Core processors, and, in order to provide a frame of reference, both 32-bit and 64-bit builds on the A4-5000. In such instances where multiple versions of the same benchmark were run, you’ll see 32-bit runs labeled clearly in the graphs.

We ran every test at least three times and reported the median of the scores produced. The test systems were configured like so:

| System | Asus ME400C | Asus UX31A | Asus X202E | AMD A4-5000 whitebook | Gigabyte E350N-USB3 test system |
|---|---|---|---|---|---|
| Processor | Intel Atom Z2760 1.8GHz | Intel Core i5-3317U 1.7GHz | Intel Core i3-3217U 1.8GHz | AMD A4-5000 1.5GHz | AMD E-350 1.6GHz |
| Platform hub | Integrated | Intel HM76 Express | Intel HM76 Express | Integrated | AMD Hudson M1 |
| Memory size | 2GB | 4GB (2 SO-DIMMs) | 4GB (1 SO-DIMM) | 4GB (1 SO-DIMM) | 4GB (2 DIMMs) |
| Memory type | LPDDR2 SDRAM at 800MHz | DDR3 SDRAM at 1600MHz | DDR3 SDRAM at 1333MHz | DDR3 SDRAM at 1600MHz | DDR3 SDRAM at 1066MHz |
| Memory timings | 6-8-8 | 11-11-11-28 | 9-9-9-24 | 11-11-11-28 | 6-6-6-19 |
| Audio | Intel SST codec with 6.2.9200.25166 drivers | Realtek codec with 6.0.1.6710 drivers | Via codec with 6.1.0.1100 drivers | Conexant HD audio with 8.64.42.0 drivers | Realtek ALC892 with 6.2.9200.16497 drivers |
| Graphics | Intel Graphics Media Accelerator with 9.14.3.1099 drivers | Intel HD Graphics 4000 with 9.17.10.3071 drivers | Intel HD Graphics 4000 with 9.17.10.3071 drivers | Radeon HD 8330 with 13.101-130507a-156998E drivers | Radeon HD 6310 with Catalyst 13.5 beta drivers |
| Hard drive | SEM64G 64GB SSD | Adata XM11 128GB SSD | HGST Z5K500 500GB 5,400-RPM | Toshiba MQ01ABD100H 1TB 5,400-RPM | Crucial m4 256GB SSD |
| Operating system | Windows 8 x86 | Windows 8 Enterprise x64 | Windows 8 x64 | Windows 8 Pro x64 | Windows 8 Pro x64 |

Thanks to AMD and Asus for providing our test systems.

We used the following versions of our test applications:

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

As always, we’ll begin our performance comparison with some synthetic tests. The first of those is a simple measure of memory bandwidth.

The A4-5000 and E-350 are both limited to single-channel DDR3 memory, although the A4 supports higher speeds—1600MHz instead of 1066MHz. The Core i3 and Core i5 both support dual-channel DDR3-1600, but in the systems we tested, only the Core i5 machine had both of its memory channels populated.

The A4-5000 enjoys a sizable boost in memory bandwidth over the E-350 thanks to the higher memory frequency. However, the Core i3 seems to be able to extract more bandwidth out of a single SO-DIMM than the A4 does, even though the X202E’s memory runs at only 1333MHz. A look at cache performance should give us some insight as to why.

SiSoft Sandra’s more elaborate memory and cache bandwidth test is multithreaded, so it captures the bandwidth of all caches on all cores concurrently. The different test block sizes step us down from the L1 and L2 caches into main memory.

The A4’s caches keep up with the Core i3’s until the 256KB block size, after which they fall behind. If you read our architectural exposé earlier, you’ll know Kabini has four 32KB L1 data caches (one per core) supplemented by a 2MB pool of L2 cache, which is shared among the cores. Since this test is multithreaded, the 256KB block overflows the cores’ combined L1 capacity and spills into the L2, so the results suggest AMD’s newcomer has faster L1 but slower L2 cache than the Core i3. (Remember, the Core i3 is a dual-core processor, so we’re comparing two Ivy Bridge cores to four Jaguar cores.) The L2 performance picture isn’t entirely surprising, since Kabini’s L2 cache runs at half the CPU clock speed.

In any case, the A4’s L1 and L2 cache performance is considerably higher than that of the E-350. The E-350 even falls behind the Atom, which has less than a tenth the power envelope, in this test. Of course, both the E-350 and the Atom have only two cores, while the A4-5000 has four.

Sandra also includes a new latency testing tool. SiSoft has a nice write-up on it, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. We’ve also taken to reporting the results in terms of CPU cycles, which is how this tool returns them. The problem with translating these results into nanoseconds, as we’ve done in the past with latency measurements, is that we don’t always know the clock speed of the CPU, which can vary depending on Turbo responses.

This test isn’t multithreaded, which explains why the A4’s latency goes up at the 32KB block size and then again above 2MB. The E-350 exhibits a similar pattern, but its latency rises right after the 512KB block size. That’s because it has half as much L2 cache (only 1MB), and that cache is split between the two cores. Each core therefore has only 512KB of L2 at its disposal.
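
Sandra’s tool is proprietary, but the underlying technique, chasing a pointer through a randomly permuted buffer so the prefetchers can’t guess the next address, is easy to sketch. Here’s a simplified illustration in C (our own, not SiSoft’s code):

```c
#include <stdint.h>
#include <stdlib.h>
#include <x86intrin.h> /* __rdtsc() */

/* Estimate average load-to-use latency, in TSC ticks, by chasing a
 * pointer chain laid out in random order. TSC ticks approximate core
 * cycles only when the clock is pinned. Note that Sandra's "in-page
 * random" pattern additionally keeps each hop within a memory page to
 * sidestep TLB misses; this sketch skips that refinement. */
uint64_t chase(size_t bytes, uint64_t iters)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *order = malloc(n * sizeof(size_t));

    /* Fisher-Yates shuffle to build a random visiting order... */
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    /* ...then link each slot to the next one in that order. */
    for (size_t i = 0; i < n; i++)
        buf[order[i]] = &buf[order[(i + 1) % n]];

    void **p = (void **)buf[0];
    uint64_t start = __rdtsc();
    for (uint64_t i = 0; i < iters; i++)
        p = (void **)*p; /* each load depends on the previous one */
    uint64_t ticks = __rdtsc() - start;

    if (p == NULL) abort(); /* keep the chain from being optimized away */
    free(order);
    free(buf);
    return ticks / iters; /* average ticks per load */
}
```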

Synthetic CPU performance

The latest version of AIDA64 includes some synthetic CPU tests that can give us a sense of Kabini’s branch prediction effectiveness and its performance with the AVX instruction set. FinalWire explains the mechanics of those tests in detail on its website, but we’ll give you the Cliff’s Notes version below.

We’ll start with the CPU Queen test, which FinalWire describes as a “simple integer benchmark” that assesses each processor’s branch prediction effectiveness.

This test is nicely multithreaded, which explains the rough doubling of performance from the E-350 to the A4-5000. The A4 is running at a 100MHz lower clock rate, and the Bobcat core has one or two fewer pipeline stages than Jaguar. Still, the A4’s performance is more than twice the E-350’s, which suggests Jaguar’s branch prediction is indeed more accurate than Bobcat’s.

Next up is PhotoWorxx, which simulates photo processing workloads. FinalWire says this test focuses on integer and memory performance. AVX instructions are used here.

No doubt thanks to its AVX support, the A4 comes fairly close to the Core i3 in this test.

The CPU Hash, FPU Julia, and FPU Mandel tests are all written in assembly, and they all utilize AVX instructions. CPU Hash measures encryption performance using the SHA1 hashing algorithm. The FPU Julia and FPU Mandel benchmarks measure single- and double-precision floating-point performance, respectively, using fractal computations.

The A4-5000 outruns the Core i3 in the hash test. It lags a little behind its rival in both floating-point tests, but it’s hugely quicker than the E-350 across the board. The increase from Brazos to Kabini here is much larger than one would expect from just doubling the core count. Kabini’s AVX support and redesigned FPU probably deserve the bulk of the credit for the size of the gains.

Productivity

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so AES encryption and decryption, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.
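
The speedup is easy to appreciate once you see what the instructions do: one full AES round becomes a single instruction. A minimal sketch using the AES-NI intrinsics follows; key expansion is omitted, so the round_keys array is assumed to have been precomputed.

```c
#include <wmmintrin.h> /* AES-NI intrinsics; compile with -maes */

/* Encrypt one 16-byte block with AES-128, given the 11 expanded round
 * keys. Key expansion is omitted for brevity. */
__m128i aes128_encrypt_block(__m128i block, const __m128i round_keys[11])
{
    block = _mm_xor_si128(block, round_keys[0]);        /* whitening */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, round_keys[i]); /* rounds 1-9 */
    return _mm_aesenclast_si128(block, round_keys[10]); /* final round */
}
```

Without AES-NI, each of those rounds expands into a long series of table lookups, shifts, and XORs, which is where the dedicated hardware earns its keep.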

The A4-5000 and the Core i5 are the only processors in the mix that support AES acceleration. That acceleration pays substantial dividends in TrueCrypt.

The Twofish algorithm isn’t accelerated, but the A4-5000 still gives the Core i3 a minor whupping there. It’s also more than twice as fast as the E-350—and the Atom.

7-Zip file compression and decompression

The Core i3 has a clear advantage when it comes to data compression in 7-Zip, but the two chips handle decompression at about the same rate.

SunSpider JavaScript performance

In this JavaScript-focused web browsing test, the A4-5000 winds up closer to the E-350 than to the Core i3. Our past results suggest SunSpider favors single-threaded performance, which may explain the disparity. (Keep in mind the A4 has twice as many cores as the E-350.)

The Core i5 does perform better than the Core i3 here despite its slightly lower base clock speed, but remember, it also has Turbo Boost and considerably more memory bandwidth at its disposal.

Image processing

The Panorama Factory photo stitching

The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. We asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so requires an awful lot of manual data entry, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

The A4 is again slower than the Core i3, but it’s much faster than the E-350.

Well, that is, unless you’re running the 32-bit version of the app, which seems to be much slower than the 64-bit version. The Atom Z2760’s lack of 64-bit support puts it at a substantial disadvantage.

Video encoding

x264 HD video encoding

We’ve devised a new x264 test, which involves one of the latest builds of the encoder with AVX2 support. To test, we encoded a one-minute, 1080p .m2ts video using the following options:

--profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr

The source video was obtained from a repository of stock videos on this website. We used the Samsung Earth from Above clip.
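
Assuming placeholder input and output file names, the full invocation would look something like this:

x264 --profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr -o output.mp4 source.m2ts

Note that the resize video filter requires an x264 build compiled with libswscale support.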

The A4 is almost three times quicker than the E-350 and the Atom Z2760. However, it’s a tad slower than the Core i3. These results largely echo those from AIDA64’s AVX-enabled synthetic benchmarks.

Accelerated applications

The benchmarks we’ve run so far have made use of the CPU cores only. The ones on this page and the next tap into the SoC’s integrated graphics processor for general-purpose computing tasks.

LuxMark OpenCL rendering

LuxMark uses OpenCL to render a 3D scene using an OpenCL-accelerated ray tracing algorithm. Since OpenCL code is by nature parallelized and relies on a just-in-time compiler, it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both claim to support AVX.

We’ll start with CPU-only results. These results come from the AMD APP driver for OpenCL, since it tends to perform well on both Intel and AMD CPUs.

For some reason, the A4 falls behind the E-350 when we run LuxMark on the IGP only. Kabini has much faster integrated graphics on paper, so we may be looking at a driver optimization issue or some other software hiccup.

Pair the CPU and IGP together, and the A4-5000 trounces the Core i3. AMD’s decision to dedicate plenty of die area to graphics doesn’t just bode well for games; it pays dividends in compute tasks like this one, as well.

The Atom is absent from these results, because even with the APP runtime installed, it wouldn’t run LuxMark properly. The application started, but it complained of a lack of OpenCL-capable devices and wasn’t able to proceed with rendering.
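
Checking what a given system actually exposes takes only a few lines of C against the standard OpenCL platform API. A minimal sketch, with error handling mostly omitted:

```c
#include <stdio.h>
#include <CL/cl.h> /* link with -lOpenCL */

/* List every device the installed OpenCL runtimes expose. On a system
 * like our Atom tablet, this would turn up no usable devices. */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devs, &ndev) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("platform %u: %s\n", p, name);
        }
    }
    return 0;
}
```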

Musemage

This photo editing application features OpenCL acceleration. It also includes a built-in benchmark, which applies a set of filters to a photo and spits out a score at the end. That’s what we used.

Musemage’s OpenCL acceleration is kind to the A4-5000. The AMD chip races ahead of even the Core i5 from our premium ultrabook.

WinZip 17.5

For the last couple of versions, WinZip has featured a parallel processing pipeline with OpenCL support. The pipeline allows multiple files to be opened, read, compressed, and encrypted simultaneously, all with hardware acceleration.

We tested WinZip by compressing a 1.17GB directory containing about 150 small text and image files, a couple dozen medium-sized PDF files, and 14 large Photoshop PSD files. We tested first with OpenCL disabled in the options, and then with OpenCL enabled, to get a sense of the performance benefits GPU acceleration would yield. Each operation was timed with a stopwatch.

Without OpenCL, the A4-5000 compresses our test archive substantially slower than the Core i3 does. Once we enable OpenCL, the tables are turned—but only because IGP acceleration actually slows down the Core i3. For a fair contest, we should compare the A4’s accelerated compression time (89 seconds) to the Core i3’s regular time (77 seconds). The A4 still ends up at a disadvantage, but the gap between the two chips shrinks from 25% to about 13%.

By the way, you’ll notice that the Atom is included in both sets of results. That’s because WinZip doesn’t hide the OpenCL setting on our Atom-powered tablet. However, it doesn’t look like the setting actually changes anything. Compression times are the same regardless of whether the checkbox is ticked or not.

HandBrake

There are now public builds of HandBrake available with an OpenCL-accelerated version of the x264 encoder—and an option to enable or disable OpenCL when encoding.

We tested by encoding a 1080p version of the Looper trailer into 720p format. We used the encoding options outlined in the screenshot below, with the constant frame rate setting enabled.

Here, too, we tested both with and without OpenCL enabled. The OpenCL setting wasn’t exposed on the Intel systems, however, so we presented the results in the same graph.

The gain from OpenCL acceleration on both the A4-5000 and the E-350 is extremely minor. At least, it seems that way until we look at total encoding times for our test video, which was 3649 frames in length.

OpenCL cuts encoding times by five seconds on the A4 and 24 seconds on the E-350. Which is, you know, better than nothing. It’s still not enough to give the AMD chips an edge over the competition from Intel, however.

The Elder Scrolls V: Skyrim

Our Skyrim test involved running around the town of Whiterun, starting from the city gates, all the way up to Dragonsreach, and then back down again.

Testing was done at 1280×720 using the game’s “Low” quality preset. Our Atom system had to sit this one out, since it couldn’t run the game properly at any settings.

| Frame time (ms) | FPS rate |
|---|---|
| 8.3 | 120 |
| 16.7 | 60 |
| 20 | 50 |
| 25 | 40 |
| 33.3 | 30 |
| 50 | 20 |

Let’s preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

To get a sense of how frame times correspond to FPS rates, check the table above.

We’re going to start by charting frame times over one representative test run for each system. (That run is usually the middle one out of the five we ran for each card.) These plots should give us an at-a-glance impression of overall playability, warts and all. You can click the buttons below the graph to compare the different solutions.


Right away, it’s clear that the A4 delivers a huge graphics performance improvement over the E-350. Also, the A4 achieves very consistent frame times overall, even if the plot line isn’t particularly low. (40 ms works out to about 25 FPS, for the record.) The E-350 and the Core i3 are both all over the place, and their frame times are clearly higher on average.

Only one chip beats the A4, and that’s the Core i5. Of course, that chip is equipped with dual-channel memory, whereas the A4 is limited to only a single channel—and real-time graphics is a very bandwidth-intensive task.

Now, we can slice and dice our raw frame-time data in several ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. Average FPS is widely used, but it has some serious limitations. Another way to summarize performance is to consider the threshold below which 99% of frames are rendered, which offers a sense of overall frame latency, excluding fringe cases. (The lower the threshold, the more fluid the game.)

The average FPS and 99th percentile results confirm our initial observations: the A4 is second only to the Core i5, and it’s well ahead of both the E-350 and the Core i3. However, the A4’s 46.3 ms 99th-percentile frame time is a little on the high side if you’re hoping for fluid animation.

By the way, those 99th-percentile figures only capture a single point along the latency curve, but we can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph—and especially the last 5% or so—is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

The A4 and the Core i5 both manage to keep frame latencies consistent throughout about 98-99% of the frames. That’s the kind of consistency we’d expect from a good discrete desktop GPU.

Finally, we can rank the cards based on how long they spent working on frames that took longer than a certain number of milliseconds to render. Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario. You can click the buttons below the graph to switch between different millisecond thresholds.
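
Both summary metrics are easy to derive from a raw frame-time log. Here’s a sketch of the arithmetic as we understand it, for illustration rather than as our actual tooling:

```c
#include <stdlib.h>

/* Comparison callback for qsort(): ascending order of frame times. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* 99th-percentile frame time: 99% of frames were rendered in this
 * many milliseconds or less. */
double percentile_99th(double *frame_ms, size_t n)
{
    qsort(frame_ms, n, sizeof(double), cmp_double);
    return frame_ms[(size_t)(0.99 * (double)(n - 1))];
}

/* "Badness": total time spent working on the portion of each frame
 * beyond a threshold, e.g. 50 ms (20 FPS). */
double time_spent_beyond(const double *frame_ms, size_t n, double threshold)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        if (frame_ms[i] > threshold)
            total += frame_ms[i] - threshold;
    return total; /* in milliseconds */
}
```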


The Core i5 spends comparatively so little time working on frames over 50 ms that its bar is too thin to show up in the graph above. The A4 trails reasonably closely, although it’s at a disadvantage in the 33.3-ms rankings because of its relatively high average frame times. Still, 4368 milliseconds (or 4.4 seconds) over that threshold isn’t bad out of a 90-second run.

From a seat-of-the-pants perspective, Skyrim feels fluid enough to be playable on the A4—at least in this test run, which involved multiple characters and detailed geometry but no combat. The high average frame times mean the game isn’t as silky-smooth as it would be on a decent desktop gaming rig. Still, frame times are low and consistent enough to make the game playable.

Battlefield 3

We tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

As in Skyrim, we tested at 1280×720 using the “Low” quality preset. Again, our Atom system sat out this round of testing.



Let’s not mince words: none of these systems is fast enough to run Battlefield 3 acceptably, even at these very low detail settings. The A4 achieves the lowest, most consistent frame latencies of the bunch, which is commendable, but neither it nor the Core i5 delivers what we’d call a playable experience. Clearly, folks hoping to game on their Kabini-powered laptops will have to pick less demanding titles.

Subjective impressions

Empirical benchmarks tell us a lot, but they don’t communicate how quick and responsive a system feels when running day-to-day tasks. After completing our suite of empirical tests, we spent some time using the A4-5000 whitebook in order to get a feel for it.

Since slow storage plays a huge part in perceived responsiveness, we swapped out the A4-5000 whitebook’s 5,400-RPM hard drive and replaced it with a 256GB Crucial m4 solid-state drive. The idea was to see how snappy a Kabini system could be under ideal conditions.

Our verdict: the A4-5000 whitebook is noticeably slower than our premium ultrabook, the Zenbook Prime UX31, but the difference is small—much smaller than you’d imagine.

Oh, sure, we noticed some slightly longer pauses when skipping from page to page across the web. And there were other minor slowdowns, such as when scrolling down content-rich pages or opening applications. However, we never had the impression that Kabini was struggling to keep up with input, which is often the case with slower, Atom-powered systems—and with machines based on Kabini’s predecessor. More importantly, the Kabini system never felt slow to the point of frustration; it just wasn’t quite as snappy as a $1,100 ultrabook.

Out there in the real world, A4-5000-powered laptops probably won’t come fitted with 256GB SSDs, and they certainly won’t compete head-on with premium ultrabooks. Instead, they’ll be saddled with mechanical storage and made to fight it out with similar machines powered by Intel’s Core i3 and Pentium CPUs. In those matchups, the responsiveness difference may be imperceptible. We certainly didn’t get the impression that the A4-5000 whitebook was noticeably slower than our Core i3-powered VivoBook X202E, despite what the benchmark numbers on the previous pages indicate.

What about gaming? Well, after the promising showing in Skyrim and the, er, somewhat less promising results in Battlefield 3, we thought we’d try some more casual titles to see if the A4-5000 handled those any better. We didn’t really have time to benchmark these games, but we did load up Fraps and keep an eye on reported frame rates while playing.

Our first candidate was Counter-Strike: Global Offensive, a snazzed-up sequel to Counter-Strike: Source. At 1280×720 using the lowest possible detail settings, frame rates hovered between 20 and 50 FPS, and the game ranged from smooth and playable to choppy and not-really-playable. Heavy combat saw frame rates drop into the teens, which had a direct impact on our kill-to-death ratio.

You can play CS:GO on the A4-5000, but the integrated graphics will drag you down in serious multiplayer skirmishes.

Next up was Dyad, an abstract indie game that’s a favorite of our own Geoff Gasior. Dyad has oodles of kaleidoscopic eye candy, but it ran better on our Kabini whitebook than CS:GO. Frame rates hovered in the 30-50 FPS range at 720p, which was playable, albeit somewhat less buttery-smooth than on a desktop gaming PC. Dyad is definitely a game you can enjoy on the A4-5000.

We rounded out our subjective gaming tests with Ilomilo, an Xbox Live port that’s now available through the Windows 8 app store. This game runs from the Modern UI environment, and unfortunately, Fraps isn’t able to monitor frame rates inside it. Playing the game, however, it was clear that the A4-5000 had no problems maintaining a smooth, fluid experience. A touch screen would have made things even better… too bad our whitebook doesn’t have one.

Based on our results so far, I’d say the A4-5000 is more than qualified to handle casual games, and it treads close to the playability threshold in more serious titles. In some of those, it’s fast enough at the lowest detail settings; in others, like Battlefield 3, performance isn’t sufficient to make the game playable.

Considering this chip is expected to appear in sub-$500 notebooks, I’d say that’s a pretty good overall showing.

Battery run times

We tested battery life twice: once running TR Browserbench 1.0, a web browsing simulator of our own design, and again looping a 720p Game of Thrones episode in Windows Media Player. (In case you’re curious, TR Browserbench is a static version of TR’s old home page rigged to refresh every 45 seconds. It cycles through various permutations of text content, images, and Flash ads, with some cache-busting code to keep things realistic.)

Before testing, we conditioned the batteries by fully discharging and then recharging each system twice in a row. We also used our colorimeter to equalize the display luminosity at around 100 cd/m².

The A4-5000 whitebook achieves much longer run times than the Core i3-based VivoBook. It even edges out the larger, Core i5-driven Zenbook in our video playback test. The Zenbook stays awake an hour longer in the web-browsing run, though.

These are tricky comparisons to make, though, because the systems don’t all have the same battery capacities and displays. We can’t compensate for the display differences, but we can normalize the data based on the capacity of each battery. The following results show normalized run times in minutes per watt-hour. (For the record, the UX31A has a 13″ 1080p screen, while the X202E and ME400C spread the same 1366×768 resolution across 11.6″ and 10.1″ panels, respectively.)

These normalized numbers show the A4-5000 actually comes very close to the Zenbook in the web-browsing run—and it’s substantially better in the video playback test.
If all of these systems had a 50Wh battery, the Kabini whitebook would have stayed up 6.9 hours in the web test, compared to 7.4 for the Zenbook. That’s a rather small difference. Both runs are also within spitting distance of the “all day battery life” nirvana.
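
For those following along at home, the normalization is simple division, and the 50Wh projection just scales it back up:

$$\text{normalized} = \frac{\text{run time (minutes)}}{\text{battery capacity (Wh)}}, \qquad t_{50\,\text{Wh}} = \text{normalized} \times 50$$

The whitebook’s 6.9-hour web-test projection, for instance, implies about 8.3 minutes per watt-hour, or roughly 6.2 hours on its actual 45Wh pack.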

Conclusions

AMD has achieved two things with Kabini.

One, the company has delivered a substantial across-the-board performance increase over Brazos, its previous low-power mobile platform, without increasing the power envelope. In fact, Kabini is more power-efficient in spite of the higher performance. AMD has cut the power envelope from 18W to 15W, and that now includes the integrated Fusion controller hub. Also, as we noted earlier, Kabini features new power management mojo to further improve energy efficiency.

Two, AMD has come very close to matching the CPU performance of Intel’s ultrabook-bound Core i3 processors—at least in a single-channel memory configuration, which seems to be commonplace in low-end notebooks. That near-competitiveness on the CPU side is accompanied by better graphics performance and, as we saw, superior battery life. Kabini is a single-chip solution, too, while the Core i3 requires a separate chipset. Kabini’s smaller die size and higher power efficiency could make for lighter, more compact systems.

If AMD’s official product positioning charts are any indication, the A4-5000 may very well wind up priced lower than the Core i3-3217U. Should that be the case, then PC makers may be able to build better systems for the money using the AMD silicon. A4-powered offerings might have better displays, say, than their Intel counterparts. We were surprised by the inclusion of a 13″ 1080p panel on the A4-5000 whitebook, but AMD says we can expect similar goodness from $499 retail offerings. Finding a similar display on a $500 Intel machine is difficult, if not impossible, right now.

Although Kabini and Temash look poised to spawn some pretty attractive products in the coming weeks, we can’t help but look at this SoC and the Jaguar CPU architecture and think about the remaining, untapped potential. Any conversation about that subject has to start with the lack of Turbo Core in all but one of the current products based on this chip. The 15W quad-core Kabini A4-5000’s CPU cores top out at 1.5GHz. Surely a single core could reach 2GHz with Turbo Core enabled.

That’s just the beginning of the tuning opportunities, we think. Currently, Kabini and Temash use digital activity counters to measure utilization and estimate power, just as AMD’s Trinity chip did at its launch. Since then, AMD has taken the same Trinity silicon and released a new lineup of products under the Richland code name that include higher CPU and GPU clock speeds courtesy of smarter power management routines. Those algorithms improve on Trinity’s dynamic behavior via more extensive profiling in AMD’s labs and by taking advantage of the temperature sensors embedded in Trinity/Richland silicon. According to AMD power guru Sam Naffziger, Kabini and Temash have thermal sensors built into them, as well, but they’re currently only used as a last-resort safety mechanism. One could easily imagine AMD doing a refresh of this product lineup using the same Kabini/Temash silicon with more refined power management firmware. Naffziger admitted to us that there’s even more opportunity to extract headroom by monitoring temperatures outside of the chip, at the platform level, as well.

What’s more, Kabini and Temash don’t support Windows 8’s connected standby mode. AMD elected to focus its efforts on two things instead: quick resume from sleep and fast reconnection to Wi-Fi networks. Those are sensible choices, but AMD could add support for connected standby mode in a future refresh, and we expect lower platform operating power would come along with it.

Really, that’s just the beginning. AMD says a Jaguar core takes up about 3.1 mm² of die area at 28 nm, virtually the same size as an ARM Cortex-A15. This is arguably AMD’s first “true” SoC, and its lowest power envelope is 3.9W, which limits it to relatively thick tablets. Beyond the power management and platform-level work we’ve mentioned, there are opportunities for further integration and power savings in future chips. For instance, this SoC has redundant paths to memory for the Jaguar CPU cores and the Radeon IGP. Those could be unified in a future chip, saving power without compromising performance. With the Jaguar microarchitecture in its arsenal, AMD appears to have the core technology needed to take a PC-like experience into even smaller devices in the future; the firm just needs to do the engineering work to make it happen.

Comments closed
    • swaaye
    • 6 years ago

    My 8 year old Athlon 64 2.2 GHz notebook gets 515ms on Sunspider in Chrome. In Truecrypt it only bests Atom in the AES encrypt test. Obviously multiple cores pay off, if you can use them.

    Oh and the 130nm Athlon 64 isn’t exactly low power (lol). This notebook is a beefy 15.6″ DTR.

    • ronch
    • 7 years ago

    Just one last comment, if I may.

    Years ago I thought about VIA being a good platform for those markets where folks aren’t too performance-sensitive but at the same time price-conscious. Guess that market is pretty much owned by Intel and AMD now. Too bad VIA is too slow to use its previous advantage.

    • ronch
    • 7 years ago

    Btw, there’s no mention of audio. With the FCH on-die, does it mean Kabini will interface directly to an audio codec? Couldn’t find any Kabini system diagram that explicitly shows it. I’d also be curious what would happen if an OEM decides to use a separate audio controller such as, say, a C-Media 8738, but I can’t imagine anybody doing it given Kabini’s market positioning.

    • spigzone
    • 7 years ago

    Where AMD will C-R-U-S-H Intel … that 600lb Jaguar in the pipeline .. the one based on a cut down PS4 APU .. a killer gaming laptop or entry level desktop on the cheap, especially running next gen games, that Intel has NO answer for into the foreseeable future. THAT’s a market segment AMD could totally dominate.

      • ronch
      • 7 years ago

      Yeah… For the next 6 months, sadly.

    • ronch
    • 7 years ago

    I just read at Anandtech (or was it PCPer?) that Jaguar refers to a 4-core module. Thus, a Jaguar module consists of 4 CPU cores along with up to 2MB of L2 cache similar to the way a Bulldozer module consists of 2 tightly linked cores with up to 2MB of L2. And so the idea of these so-called modules lives on…

    • kamikaziechameleon
    • 7 years ago

    so basically as a product that matters it isn’t anything to run out and buy? but as far as it forecasting future products, get excited. Intel has shown us some improvements that might make AMD’s progress a bit late to the table. This next year will be very interesting indeed.

    • FuturePastNow
    • 7 years ago

    Looks like a great low-power processor. Sadly I expect almost all manufacturers to stick it in disgusting 16-17 inch “laptops” with 1366×768 screens like they did with Brazos.

      • ronch
      • 7 years ago

      I’m not sure how market research firms can say AMD’s APUs are storming the market. For every AMD PC sold there must be ten other PCs with Intel inside. And even those few AMD-powered PCs are encased in cheap chassis and unexceptional displays, or most of them anyway. Sadly, AMD has never really gotten rid of its cheap image.

        • calyth
        • 7 years ago

        Well, it’s the OEMs that take their silicon to build machines, and it seems that OEMs always stick it together in the crappiest way possible.

        I run an A10-5800K at home. It’s a cheap box that lets me run the steam games that I already have, and I don’t exactly play the graphic intensive stuff. I know where I could cheap out and where I couldn’t, and the box is pretty damn stable (other than using the newest LAN drivers from the Asus board, which started funny, but that’s a realtek problem, not AMD).

        Short of building devices with decent yet cheap parts themselves (or commission an OEM to build to their specs), I’m not sure how they could shake the “badly made” cheap image, and build a “good value” cheap image.

          • ronch
          • 7 years ago

          [quote<]Short of building devices with decent yet cheap parts themselves (or commission an OEM to build to their specs), I'm not sure how they could shake the "badly made" cheap image, and build a "good value" cheap image.[/quote<] Well, AMD isn't exactly seen as making 'badly made' products, generally speaking, although there will always be haters not just of AMD but any other company out there. Thing is, being a second-source x86 CPU provider inevitably makes AMD look like the 'other' CPU brand. Their long history of providing 'better value' also inevitably makes them look like the 'cheap' CPU brand. Unfortunately, it's a little hard to shake off. Unless they can produce another K8 (in terms of market-dominating performance), I don't think people would be willing to pony up at least $500 for their CPUs. Being the 'other' brand doesn't help too: Hyundai can produce a Benz killer but it will always be seen as a cheaper me-too automobile. Can Hyundai price their automobile like an E- or S-class? Not if they want to sell lots of it. Their success in winning the Xbox One and PS4 deal marks a major turning point in AMD's history. No longer are they competing solely in the PC space, going head to head with Intel in terms of performance and cutting prices sharply when they couldn't beat Intel. This has been going on far too long and it's a relief to see AMD venture successfully into other adjacent markets. Unfortunately, it seems like they're repeating the 'value' pricing strategy that they've relied on in the x86 PC industry with the console deals: One has to wonder just how much money these console wins will earn AMD. Then again, maybe it's all part of AMD's plan to shrug off their cheap image or their realization of the undeniable realities of the computer chip industry.

            • calyth
            • 7 years ago

            Well, getting hold of the console deals should hopefully give them enough cash to keep going for a while. I don’t see them making any headways on the performance PC front with their current offerings, and the phone/tablet market is just switching from one market where they’re being dominated to another which they’re being dominated.

            I suppose it’s a question of whether they really want to be in the limelight for a certain market; by that I mean maybe they might choose to stick with a rather unglamorous market for a while.

    • ronch
    • 7 years ago

    It’s ironic how AMD is selling far more ‘little core’ CPUs than ‘big core’ CPUs, considering it cost them far more to design Bulldozer/Piledriver than Bobcat/Jaguar. Obviously, price has a lot to do with it but still…

      • dpaus
      • 7 years ago

      With the process improvements they’re making, they have a real opportunity to make a ‘mega-CPU’ in the next cycle; 8 or even 12 ‘Excavator’ cores with 4/6 FP units.

      I really don’t know if there’s enough of a market to justify it, but I’d love to see them produce such an 8-core CPU with a deeply-integrated 7970 GPU on-die and 8 GBytes of fast RAM. Who cares if it has a 250W TDP? It’s still less than the total of the discrete components. Maybe Sony could market it as a PS4++ 🙂

        • tipoo
        • 7 years ago

        Even on 28nm, that would be one massive chip. Yield rates would be pretty low, thus higher cost or lower profit or both. The PS4 uses something like the 7850 and tiny Jaguar cores, yet still has a massive APU with reported yield issues, same as the One which is even more conservative on the GPU.

        • ronch
        • 7 years ago

        250W TDP? God I hope they’re not crazy enough to do it. It may consume less power than two separate chips but just imagine the cooling you’d need for such a monster. There’s no way you can cool it by air.

          • Anonymous Coward
          • 7 years ago

          It’s really more of a question of thermal density than total wattage, but yeah I think they’d need a purpose-made cooler. I guess the whole thing would need to be soldered onto a motherboard and the cooler just bolted on.

        • Anonymous Coward
        • 7 years ago

        [quote<]I really don't know if there's enough of a market to justify it, but I'd love to see them produce such an 8-core CPU with a deeply-integrated 7970 GPU on-die and 8 GBytes of fast RAM.[/quote<] Looking at the die sizes, they could stick a couple of BD modules onto a 7970 and still end up smaller than some of the things nVidia has launched. They'd have to have a really good handle on 28nm yields before trying it.

    • ronch
    • 7 years ago

    I wish some reputable OEM would take an A6-5200, design a tiny motherboard around it using all the SoC’s available connectivity options, put it in a decent-looking enclosure along with a nice 2.5″ HDD or something, and sell it for maybe less than $200. I’d gladly buy one for all my Facebook ‘friends’.

    Ok, maybe not, but I’d gladly buy one for the wife since all she does is type documents and play light games. And yes, it’d be perfect for our office too, where we don’t need Ivy Bridges and more desk space can’t hurt.

      • dpaus
      • 7 years ago

      I plus thee!

        • ronch
        • 7 years ago

        One up-thumb deserves another one! So… click!

    • Anonymous Coward
    • 7 years ago

    It seems to me that Jaguar is surprisingly good vs. Bobcat, especially in games. Four threads, improved FP, and a 2MB L2 seem to go a long way. A four-way shared L2 is a bit unconventional, but the capacity was probably near-essential.

    If the developing world has a healthy PC gaming market (does it?), it seems like AMD would be foolish not to make another version more suitable for gaming on the desktop, say with 3x or 4x the GPU and dual-channel memory. It might be a perfectly fine alternative to a console for many people, and its close relationship to the PS4 and XB1 should keep it relevant. This would get close to Trinity and friends, but the platform cost would be lower.

    I really wonder how well a dual-module Bulldozer descendant at maybe 3.5GHz would fare against four Jaguar cores at 2.0GHz in games.

    New GPU designs are a dime a dozen for them; now I think they should do the same with Jaguar SoCs.

    Eh… I should do some work…

      • ronch
      • 7 years ago

      Well, you can grab an A6-5200 and compare it to an A10-5800K clocked at 3.5GHz. That should be about as close as you’ll get to the specs you mentioned.

        • Anonymous Coward
        • 7 years ago

        Yeah, I’d be tempted to grab a motherboard or mini-PC with an A6-5200.

        I hope TR gets around to a review with a PCI-E GPU sometime. I remember that Bobcat was crap with or without such a GPU, but maybe a quad Jaguar will impress.

        Also, it will be interesting to see how Intel does with Silvermont’s faster 1MB L2s. I wonder if the smaller size will be a problem for serious games. Size vs. speed. AVX might also make a difference.

    • LoneWolf15
    • 7 years ago

    I’d really like to see what this setup could do if it were redone with dual-channel support. I think the GPU would benefit more than the CPU; or, of course, it might just balance better when there is high simultaneous demand from the CPU and GPU.

    • HisDivineOrder
    • 7 years ago

    I think AMD has released a number of great CPU architectures around this price point and in this device market. None have really taken root, usually because AMD CPUs always end up meaning low-cost hardware. And yes, I get that AMD is making the argument that OEMs can include better hardware (i.e., displays) by reducing the cost of the CPU.

    But how many here think that’s what’s going to happen? Because what I see happening is the same old, same old.

    A lower-cost PC with a lower-class display (1366×768 TN with the narrowest of viewing angles, but now it’ll have horribly unresponsive 5-point touch!), a gray metal-like plastic shell that bends like bamboo and creaks like a 70-year-old jogger’s knees, 4 gigs of ghetto 1333 DDR3 from Anonymous Manufacturer #5, a 500GB 5400RPM hard drive from Seagate or Toshiba, and Windows 8 non-Pro non-x64. The keyboard will miss keystrokes, the touchpad will be non-Synaptics, and the tiny el discounto fan they found will whine like a GeForce FX 5800 owner living in Texas in a tiny closed room in mid-July, an hour after the A/C went out.

    Then they’ll knock the price down $20 from $399 to $379 and call it progress. Intel will promptly launch Bay Trail, drop prices just enough to woo every OEM back to mainly Intel, and Rory Read will rest his head on his desk to hide all his frustrated weeping. With Bay Trail dominating the low end (and especially tablets, which are rapidly consuming the whole low end) and Haswell ruling the roost at the high end, AMD’s room for growth becomes increasingly narrow and niche.

    That puts AMD in a precarious position. Good thing they have discrete GPUs as a place where they’re doing reasonably well, right? Except… nVidia continues to strut in the discrete GPU market despite AMD giving its customers every big publisher game worth buying this year. Hell, nVidia is selling out of $1k GPUs and will soon be selling out of $650 GPUs, too, both based on hardware designed to replace $450-500 GeForce 580 parts over a year ago. They’re still selling gangbusters of 680s and 670s for $400-500 despite those parts being built to replace 560 Tis that cost users $200-300. They’re even almost a lock to sell out of crappy-small tablets with tiny displays and a controller grafted on for $350, despite the fact that the Nexus version of the same hardware (but with a bigger screen) would cost $200. Yeah, I’m sure the plastic of the controller and buttons costs more than the larger, better screen of the Nexus. Yeah.

    In comparison, AMD can’t even sell a $550 GPU for $350-400 or a $450 GPU for $275-$300 without free games. So I suspect AMD’s hold on the discrete GPU market is shaky at best. The low end is increasingly relying on APU’s, but Intel’s solutions there are improving rapidly at both the low end and the high end. Mid-range (between Atom and i3/i5/i7) holds little appeal for most users as the low-end increasingly gets to the “good enough” area and the high end is so blazing by comparison it makes the mid-range look like all compromise.

    So when I say AMD won’t get OEMs to do much more than drop their prices by $10 or $20, I’m also saying that won’t do much for them. Bay Trail is coming like the Silver Surfer, and AMD is the semi-advanced race of victims living on the next little non-Earth “low-end” world that’s just had the misfortune of falling under Intel’s shadow.

    “I HUNGER!!!!!” Intel needs to expand into the lowest of the low end, an area it has mostly ignored, because of the proximity of “low end” to “good enough.” Intel can no longer ignore this area, and AMD will, going forward, find no respite there. This has been the safe place AMD retreated to any time it was smacked down by Intel.

    But Intel can ill afford to let ARM own the el cheapo devices, so now they must do what they’ve always been reluctant to do: make a decently performing low-end CPU for devices in the $200-300 range, instead of building (and constantly refreshing without really upgrading) a CPU that was meant to compel users to upgrade after experiencing its crappy performance (re: every prior Atom). By extension, this means AMD’s long-held sanctuary has finally become Intel’s next big market and focus. The days of letting AMD linger in the murky, plastic-y, cheap corners of sub-$500 are over. Intel is facing a storm of ARM and must shore up its entire market in order to weather it.

    So when I look at this CPU, I think… it’s decent, but it’s going to do nothing at all to solve any one of the million problems keeping the company’s processor products from taking off. AMD SHOULD have focused on netbooks years ago, when Atom was frustrating us all. They waited until those got replaced by tablets before they got around to releasing something. Now they’re holding off on true tablet-focused processors until… well, until something else steals the public’s interest.

    Then I guess they’ll show up again a year late and a few million dollars short.

    All imo.

      • sschaem
      • 7 years ago

      If OEMs had released products based on this chip six months ago, it would have been a winner.

      AMD’s products are decent but always too late (by 12 months). This is truly their #1 issue to solve.
      Jaguar needed a better turbo and a better memory controller, but I guess it’s better to get the chip out this way than to wait another six months.

      I wonder if MS is using AMD x86 CPUs in their Xbox server infrastructure… maybe that type of deal will be AMD’s saving grace.

      • Xamir21
      • 7 years ago

      Actually, when Intel’s average selling price (ASP) drops below $100 and toward $50 Atom pricing, it’ll kill their ability to finance new fabs. As transistors shrink, the cost of fabs is rising rapidly, and considering that Intel’s tick-tock model is breaking and its current fabs are partially utilized and a wasting asset, you have to wonder if Intel hits a tipping point and financially disintegrates.

      Every Intel fanboy cites the Bay Trail Atom as a saving grace, but even a multiple performance gain over a miserable Atom is a less miserable Atom. We now know from benchmarks on retail Haswell i7-4770s that the old AMD Trinity design still outperforms Intel’s latest and greatest in graphics and games. The upcoming AMD Kaveri, utilizing HSA fusion of CPU and GPU, should wipe out any Intel CPU in terms of price and performance.

      Finally, Intel has seen what the iPad mini has done to demolish the big iPad in terms of sales. It’s also causing Apple’s profit margins to crash, and so goes the company.

        • chuckula
        • 7 years ago

        [quote<]Every Intel fanboy cites the Bay Trail Atom as a saving grace, but even a multiple performance gain over a miserable Atom is a less miserable Atom.[/quote<]

        Uh… Atom is competing with ARM, not with Steamroller. In that regard, current Atoms are already in the same ballpark as the Cortex A15, and Silvermont delivers substantial performance improvements while hitting a lower power envelope than the Cortex A15 at those performance levels.

        [quote<]We now know from benchmarks on retail Haswell i7-4770s that the old AMD Trinity design still outperforms Intel's latest and greatest in graphics and games.[/quote<]

        We know that desktop Haswell has a substantially smaller IGP than Trinity, which is why it annihilates Trinity at any processing-intensive task and has a lower power envelope. Oh, and on the desktop, where Intel didn't bother to put in a heavy-duty IGP, Haswell trails the top-of-the-line Trinity parts by 15-20%… or actually, it destroys the top-of-the-line Trinities when you put in a two-year-old mid-range GPU or better. Let's see how that 19-watt Richland does against a 15-watt Haswell. From what I've seen, AMD is eager to advertise the existence of low-wattage parts, but has done just as much to make sure that a legitimate review site like TR never gets a sample for testing.

        [quote<]Finally, Intel has seen what the iPad mini has done to demolish the big iPad in terms of sales.[/quote<]

        What's really funny is that the ARM/AMD/anti-Intel squad are now ascribing the failures of products that have nothing to do with Intel to Intel. The full-size iPad is *dying*!!! Therefore Intel is doomed, because it had nothing to do with the full-size iPad!!!

        For historical accuracy, I'd just like to point out that you aren't drinking Kool-Aid, because they used Flavor Aid at Jonestown.

        • Stonebender
        • 7 years ago

        “Actually, when Intel’s average selling price (ASP) drops below $100 and toward $50 Atom pricing, it’ll kill their ability to finance new fabs. As transistors shrink, the cost of fabs is rising rapidly, and considering that Intel’s tick-tock model is breaking and its current fabs are partially utilized and a wasting asset, you have to wonder if Intel hits a tipping point and financially disintegrates.”

        Using this logic, all of the fabs churning out ARM chips are already toast and will have a hard time financing the move to smaller process nodes. Meanwhile, Intel, with billions in the bank, has already converted more than one fab to 14nm and is on track for 10nm in less than two years.

          • Xamir21
          • 7 years ago

          Logic based on false assumptions only provides false conclusions.

          Since ARM cores are many times smaller than typical Intel x86 cores, companies like Qualcomm, MediaTek, and numerous other ARM licensees can afford to sell phone and tablet chips in the $10 to $20 range. They also don’t have to build the cost of idle fabs into their chip prices the way Intel does.

          The success of this formula is why Qualcomm’s market value has now greatly surpassed Intel’s.

            • Stonebender
            • 7 years ago

            And these companies are at the mercy of the fabs producing their chips, fabs that aren’t making much money because the margins on these tiny ARM chips are minuscule. These fabs are going to find it increasingly difficult to move to smaller process nodes.

            • chuckula
            • 7 years ago

            [quote<]Since ARM cores are many times smaller than typical Intel x86 cores,[/quote<]

            Sigh… once again you take a false statement that fits the ARM hypefest and apply it to Intel for no reason. I've heard it a million times before: "ARM cores are tiny!" That's very special, and let's assume it's true for the sake of argument.

            Chuckula: When was the last time you bought an ARM core?
            Xamir21: I have one in my phone, you idiot! They sell Billions and Billions! Intel is doomed!
            Chuckula: Wrong answer. You have a SoC in your phone that includes, amongst many other things, some ARM cores.

            Now, let's look at SoCs from just one of the major manufacturers out there who has made boatloads of money: Apple. Here's a nice Wikipedia article dedicated to Apple's SoCs: [url<]http://en.wikipedia.org/wiki/Apple_System_on_Chips#Apple_A6X[/url<]

            Notice anything interesting? Even Apple's lower-end smartphone chips aren't tiny, and the original A5X [b<][i<]was slightly larger than a full-scale quad-core desktop Ivy Bridge[/i<][/b<].

            Interesting note: the supposedly horrible Clover Trail+ Atom, on the same 32nm process as Apple's most advanced chips, has a die size of… wait for it… 65 mm^2. Source: [url<]http://www.techpowerup.com/cpudb/1551/atom-z2580.html[/url<]

            Before you come up with some disingenuous retort about performance, I strongly suggest you look at how Clover Trail+ is doing compared to the extremely high-end ARM phones that are just hitting the market now.

            So please tell me again how Intel is doomed, DOOMED I SAY, because their SoCs are smaller than competing ARM solutions…

            • NeelyCam
            • 7 years ago

            Needs more trolling. And cowbell

            • chuckula
            • 7 years ago

            Sorry Neely, I wasn’t trolling that time. I’ll come up with something if you feel troll withdrawal though.

            • NeelyCam
            • 7 years ago

            I know; it scored zero on my trollscale. A little bit would’ve been cool, but don’t go ‘full chuckula’… Save that for when Haswell is out.

            On that note, is the launch June 3rd (Monday)? So I can expect to read Anand’s review on Sunday at 9pm Pacific?

            • Hattig
            • 7 years ago

            [quote<]the original A5X was slightly larger than a full-scale quad-core desktop Ivy Bridge[/quote<]

            Wasn't the A5X 45nm? No wonder a 22nm Ivy Bridge is smaller; that's two full shrinks. A process node advantage is one of the benefits Intel has up its sleeve, but this comparison is a little off. In addition, that Samsung 45nm process was dirt cheap compared with a cutting-edge process.

            The vast majority of Apple's SoC area is devoted to GPU functionality, because they believe high-end mobile devices with retina displays need fast, snappy graphics, otherwise the user experience is rubbish. To be honest, in advanced SoCs the CPU die size isn't the main consideration unless you want four of them on the SoC. Jaguar is a tiny die, and the new Atom will have a tiny die; it's all the stuff around the CPUs that matters: the choice of GPU and its configuration, the video decode/encode, the offload engines, the I/Os, and so on.

            • chuckula
            • 7 years ago

            [quote<]Wasn't the A5X 45nm? No wonder a 22nm Ivy Bridge is smaller[/quote<]

            Completely irrelevant. I'm not saying the A5X has anywhere near the performance of Ivy Bridge. I'm saying it was economically viable for Apple to make chips on 300mm wafers that are bigger than a full-blown quad-core Ivy Bridge part, also made on 300mm wafers. The fact that Ivy has far more transistors doesn't affect the economics of the die count per wafer.

      • Coran Fixx
      • 6 years ago

      We’re gonna need a bigger boat…

      I completely agree with your statements about the end of the world at the bottom of the market. It seems like we’re headed for a three-segment bottom end: a $300 Win 8 touch laptop you can bend back and use as a tablet, a $200 tablet, or the third choice, “I’ll make do with my phone.”

      Since I’ve destroyed my Nexus 7 and don’t have a laptop, I’m kind of looking at those options. I’ve seen the TR-maligned VivoBook selling for $430, and there’s a Celeron 847 machine selling for $290. I’m considering one of those options (or fixing the Nexus), or maybe just becoming a phone freak.

    • ronch
    • 7 years ago

    Honestly, AMD’s naming conventions are really berserk nowadays. With Intel, I only need to look at the first digit of the 4-digit model number to know whether it’s a Sandy, an Ivy, or a Haswell. The i3, i5, and i7 tell me where it sits in the price list, and the last 3 digits of the 4-digit model number give me a clue about its specific speed bin. And apart from Core, there’s Pentium, Celeron, and Atom. Somehow, those names are easier to remember and grasp than the pure numbers and letters AMD is using these days in place of crummy names like Phenom and Duron. And I thought those names were bad.

    With AMD, ‘FX’ denotes all CPUs using the Bulldozer/Piledriver cores, but you need to specify the [b<]whole[/b<] model number (e.g., FX-8350) to know what it is, because AMD never labels these chips 'FX-8', 'FX-6', or 'FX-4', which would be kinda awkward anyway.

    With its APUs, things get [i<]somewhat[/i<] better because the chips are tiered under the A10, A8, A6, and A4 monikers, but also worse because of all the chips AMD sells under those monikers. The A8 brand can be used by Llano, Trinity, Richland, etc. A6 and A4 can cover the previous three [u<]plus[/u<] Jaguar. It was easier back with Brazos, where Bobcat was strictly sold under the E-series, C-series, and Z-series labels.

    I know I could probably pull up a table somewhere and see how all of AMD's model numbers stack up, but I'm not even going to. Companies need to label their chips in a way that is not only appealing to consumers, but easy for them to understand as well. As it is, AMD's branding just went from bad to worse.

      • chuckula
      • 7 years ago

      Yeah, it is a little sad when AMD has ginned up a naming convention even more convoluted than Intel’s. At least with Intel there is [url=http://ark.intel.com/<]ARK[/url<], where I can plug in Intel's confusing model numbers and get the vital statistics of the CPU.

      • abw
      • 7 years ago

      [quote<]The A8 brand can be used by Llano, Trinity, Richland[/quote<]

      Right, i3, i5, and i7 are used only for a single CPU generation…

        • JumpingJack
        • 7 years ago

        He meant that i3, i5, or i7 designates the price bucket, while the next digit tells you the generation…

        i3-2xxx, i5-2xxx, or i7-2xxx is the Sandy Bridge generation
        i3-3xxx, i5-3xxx, or i7-3xxx is the Ivy Bridge generation
        i3-4xxx, i5-4xxx, or i7-4xxx is Haswell
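
        As a toy Python sketch of that decoding rule (illustrative only; real Intel SKUs have exceptions, such as the i7-3930K mentioned further down, which carries a '3' but is Sandy Bridge):

        GENERATIONS = {"2": "Sandy Bridge", "3": "Ivy Bridge", "4": "Haswell"}

        def decode(model: str) -> str:
            # "i5-3317U" -> tier "i5", generation digit "3"
            tier, _, number = model.partition("-")
            gen = GENERATIONS.get(number[:1], "unknown")
            return f"{model}: {tier} tier, {gen}"

        for m in ("i3-2100", "i5-3317U", "i7-4770K"):
            print(decode(m))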

          • ronch
          • 7 years ago

          Thanks. That’s pretty much it.

          • abw
          • 7 years ago

          Is that different from A10/A8/A6 giving the price segment, while the first digit of the 4-digit model number gives the generation?

          The A10-5800 is replaced by the A10-6800…

      • dragontamer5788
      • 7 years ago

      You can tell if something is Temash, Kabini, or Richland by the “Elite Mobility”, “Mainstream”, or “Elite Quad-core” taglines.

      [url<]http://news.softpedia.com/newsImage/AMD-Introduces-Richland-The-2013-Elite-Performance-Processors-for-Notebooks-2.jpg/[/url<]

      Keep an eye on the bottom of the logo:

      Temash == "Elite Mobility"
      Richland == "Elite Quad-core"

      Otherwise, it's a Kabini (generally named just "Quad-core"). The most cost-effective chips on all ends are named "Essential".

      On the other hand, the Intel scheme is misleading to a general audience. The Intel i7-3770K and the Intel i7-3517U have a 360% difference in performance… and a 450% difference in power usage. Sure, both are Ivy Bridge i7s, but that means jack $*** about their performance or power characteristics.

        • ronch
        • 7 years ago

        These ‘Elite Mobility’ and ‘Elite Quad-core’ branding schemes only serve to confuse the average consumer even more. I wish they’d just copy the way Intel labels its chips. Don’t worry, AMD, it’s not patented, so Intel won’t sue your ass off; and by that I mean copy the logic, not the brands themselves.

        Bottom line: if even AMD fanbois get confused by AMD model numbers and say they find Intel’s numbering scheme a lot more sensible, then AMD’s model numbering scheme is really crazy.

          • dragontamer5788
          • 7 years ago

          I don’t doubt that AMD’s model numbers are crazy. But we can’t pretend that Intel’s scheme is the best either.

          Let’s start with this: both companies know how to make a good system. The Xeon and Opteron numbering systems are both extremely consistent and extremely intuitive. (Xeon E3 has one socket, Xeon E5 has two sockets, Xeon E7 has four sockets. The higher the number, the faster. Ditto for Opteron, except it’s Opteron 3xxx / 4xxx / 6xxx.)

          However, Intel’s i3 / i5 / i7 series are all screwed up, almost to the point that they’re blatantly lying to consumers, IMO. Consider the i3-3240 and the i7-3537U: which one is faster?? (Answer: the i3-3240, by approximately 70%… and it’s still some 10% faster even accounting for the i7’s turbo.)

          Similarly, some i5s are quad-core while others are not. It’s not quite as inconsistent as the i7 names… but it’s definitely a bit arbitrary. (Which desktop chip has the better iGPU, the i3-3225 or the i5-3570? Whoops: the i3-3225 is equipped with HD 4000 graphics, while the i5 makes do with the slower HD 2500.)

          When you see “i7”, you don’t know if you’re getting a quad-core, a 6-core monster, or a 2-core laptop chip. Intel describes all of these as i7. You don’t know if you’re getting HD 4000 or HD 3000 integrated graphics. You don’t know how much Intel is cutting corners on your chip. Hell, even if you see “i7-3xxx”, you don’t know if you’re getting Sandy Bridge or Ivy Bridge (e.g., the i7-3930K is a Sandy Bridge chip).

          I’m not claiming that AMD’s system is perfect either… far from it. But Intel’s i3 / i5 / i7 scheme is extremely imperfect… and misleads consumers.

          Again, both companies seem to know what they’re doing with server chips. But consumer-grade chips are forced into arbitrary numbering schemes… perhaps in some twisted attempt by marketing to blatantly lie to consumers.

            • NeelyCam
            • 7 years ago

            [quote<]i3-3240 and i7-3537U[/quote<]

            I don't think this one really matters, since one is a laptop part and the other is a desktop part. Nobody would be comparing these two and trying to figure out which one to pick, so I don't consider it lying. However:

            [quote<]i3-3225 or the i5-3570[/quote<]

            Yep, that's a bit misleading…

            • dragontamer5788
            • 7 years ago

            [quote<]I don't think this one really matters, since one is a laptop part and the other is a desktop part. Nobody would be comparing these two and trying to figure out which one to pick, so I don't consider it lying.[/quote<]

            Here's the problem: people familiar with the desktop i7 series will assume that i7s are quad-cores with Turbo and Hyper-Threading enabled. (Outside of the LGA2011 Extreme series of hex-cores… the rest of the desktop line of i7s are quad-cores plus Hyper-Threading.) In comparison, a desktop i5 is a quad-core (no Hyper-Threading), and a desktop i3 is a dual-core with Hyper-Threading. In fact… ALL i3s are dual-cores with Hyper-Threading. (Mobile, desktop, everything.)

            So, what do you think an i7-3537U is? The answer is… a dual-core with Hyper-Threading. Wait… wat? [url<]http://ark.intel.com/products/72054/[/url<] (At least Turbo is still enabled.)

            Ultimately, the i3 / i5 / i7 moniker is a big marketing name. It describes how much you paid for the system, but that is it. The name tells you absolutely nothing about the system specs. The i7-3537U's closest desktop analogue seems to be the i3-3225… believe it or not.

            Here's another problem with Intel's naming scheme: there are a billion different kinds of "HD 4000". The HD 4000 on the i3-3225 is clocked at 650MHz, while the HD 4000 on the i7-3537U is clocked at 350MHz. (At least AMD distinguishes the Radeon 7660D from the Radeon 7660G for the differently clocked desktop vs. mobile versions… though not for the differently clocked A10-5700 vs. A10-5800K.) So if you go to review sites and look up "HD 4000", you really don't know whether you're seeing the performance of a severely throttled ULV chip or the unconstrained power of a desktop chip.

            Yeah… they gotta fix these numbers somehow.

            • NeelyCam
            • 7 years ago

            [quote<]Here's the problem: people familiar with the desktop i7 series will assume that i7s are quad-cores with Turbo and Hyper-Threading enabled.[/quote<]

            The reason I don't think it's such an issue is that 90% of people don't understand anything you just said. It's the detail-oriented nerds like us who sweat these kinds of details; luckily, we know enough to figure out how to hit the sweetest performance/price spot and not pay a 10% optimization penalty for not knowing all the specs of the exact chip model we're buying.

            HT dual-core vs. non-HT quad-core? For most people, it makes zero difference. An i7 is generally better than an i5, and an i5 is generally better than an i3. And iGPU performance doesn't really matter that much to most folks out there, as long as the system handles YouTube, Netflix, and Hulu without major issues.

            • dragontamer5788
            • 7 years ago

            [quote<]HT dual-core vs. non-HT quad-core? For most people, it makes zero difference. An i7 is generally better than an i5, and an i5 is generally better than an i3.[/quote<]

            But even then, that's not right. An Ivy Bridge i3 (i.e., the i3-3225 at 3.3GHz) is probably better than a Nehalem i7 (e.g., the i7-960 at 3.2GHz). Considering how much faster the later models are, it's important to know whether your chip was made in 2008 or 2012.

            I mean, if we're going by the "bigger number is better" measure (ignoring generations), then AMD's insane numbering system is suddenly just fine as well. But that is certainly not right either >_<. After all… A10 "is generally better" than A8… which is generally better than A6… which is better than A4, which is better than E2, which is better than E1.

            The "generation problem" of i3-3225 vs. i7-960 is somewhat solved by AMD's naming scheme, btw. AMD seems to create new tiers as processors get faster. Llano was A4, A6, and A8, but today's Richland chips are A6, A8, and A10. So all A10s are "better" than all A8s. (And the weaker Kabini chips are reaching Llano levels of performance… and are being called A4s and A6s.) Of course, AMD's naming scheme has issues of its own; I'm just saying it at least solves the generation problem.

            • ronch
            • 7 years ago

            And we thought the P-rating system was misleading! LOL

    • sschaem
    • 7 years ago

    The same stick of RAM delivers 20% lower bandwidth with the Jaguar memory controller…

    DDR3 is not a new standard, so there’s not much hope AMD will ever get this “fixed”.

      • Firestarter
      • 7 years ago

      Memory bandwidth and latency are also a compromise between power, die area and speed.

        • mczak
        • 7 years ago

        That is true (and I think the bandwidth is actually OK for this chip).
        However, while you can’t expect latency numbers like those of an Ivy Bridge core, the latency numbers for both the L2 cache and memory are simply atrocious. Granted, for the L2 cache there’s some excuse (a rather large L2 shared between cores won’t be good for latency), though being twice as slow as the even-lower-power Atom isn’t a good thing.
        But memory latency being even ~20% higher (after you take core clocks into account, as they mostly aren’t relevant for memory latency) than the on-chip-FSB-saddled low-power Atom is definitely just terrible. This kind of memory latency would be considered bad even for an old-style traditional FSB.
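
        To make that "take core clocks into account" step concrete, here is a back-of-the-envelope sketch in Python; the cycle counts and clock speeds are invented placeholders, not numbers from the review:

        def cycles_to_ns(cycles: float, clock_ghz: float) -> float:
            # One core clock tick lasts 1/clock_ghz nanoseconds.
            return cycles / clock_ghz

        # Hypothetical: the same 200-cycle memory latency reading means very
        # different wall-clock latency on chips with different core clocks.
        for name, cycles, ghz in (("slow-clocked chip", 200, 1.5),
                                  ("fast-clocked chip", 200, 2.0)):
            print(f"{name}: {cycles_to_ns(cycles, ghz):.0f} ns")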

      • OneArmedScissor
      • 7 years ago

      There’s nothing to “fix.” Even low-end Sandy/Ivy Bridge parts have the advantage of an L3 cache, ring bus, and memory controller all synced to the much higher CPU clock.

      Jaguar just has a tiny L2, a low-power bus, a sawed-off memory controller, and fixed clocks with crippled turbo.

      Bobcat actually ran the L2 at half the core clock. That may still be the case.

        • Anonymous Coward
        • 7 years ago

        [quote<]Jaguar just has a tiny L2, a low-power bus, a sawed-off memory controller, and fixed clocks with crippled turbo.[/quote<] It's amazing how well it performs given all this. If you laid out the spec for Jaguar next to a K8's, you might think the K8 would easily dominate. It would be lovely if TR would really dig into historical questions like Jaguar vs. K8, with explanations.

        • Anonymous Coward
        • 7 years ago

        [quote<]Bobcat actually ran the L2 at half the core clock. That may still be the case.[/quote<] I hadn't heard that about Bobcat, but TR has claimed half speed for Jaguar. The doubled latency vs. Atom could be nicely explained by a half-speed L2. Interestingly, the bandwidth is looking good.

        Interesting choice to run the L2 at half speed. I wonder what the motivation was. My best guess is that they thought the power savings were worth it.
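
        A tiny sketch of that half-speed-L2 argument, with an invented pipeline latency purely for illustration:

        L2_PIPELINE_CYCLES = 17  # hypothetical L2 latency, counted in L2 clocks

        for divider in (1, 2):  # full-speed L2 vs. half-speed L2
            # An L2 clocked at core/divider makes each L2 cycle cost
            # 'divider' core cycles, so the measured latency scales with it.
            print(f"L2 at core/{divider}: {L2_PIPELINE_CYCLES * divider} core cycles")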

    • clone
    • 7 years ago

    Did they say there’s a chance of a 1080p 13″ display with this CPU that’ll also still be price-competitive?

    I can get a discount with HP, and if they make one, I’ll grab it and stuff an SSD inside. For the right price, what a great combination.

    • ronch
    • 7 years ago

    This is an interesting piece of silicon, if only because everything but the kitchen sink’s been thrown in there. But really, I think this is OK for folks who just surf the Net, watch movies, and do light office stuff. Students who don’t play games, moms, grandparents, MS Office office PCs (say that again)… they’re all good candidates for Kabini. Not sure I’d buy one of these, since I’m more interested in AMD’s higher-tier APU offerings, but I’m not locking the door.

    • maroon1
    • 7 years ago

    There must be something wrong with the i3 benchmarks.

    The HD 4000 in the i3-3217U and the one in the i5-3317U both have a 350MHz base clock with a 1.05GHz turbo.

    Yet for some reason the difference between the two is massive. The i5 gets more than twice the fps in the gaming benchmarks!!! How is that possible??

    Also, Tom’s Hardware puts the HD 4000 in the i3-3217U above the A4-5000:
    [url<]http://www.tomshardware.com/reviews/kabini-a4-5000-review,3518-7.html[/url<]

      • HTarlek
      • 7 years ago

      Because the i5 has Hyper-Threading (appears to be 4 cores instead of 2) and the i3 doesn’t?

      My bad; the i3 DOES have Hyper-Threading. I thought it didn’t.

      • Damage
      • 7 years ago

      The Core i3 we tested has a single-channel memory config.

        • thecoldanddarkone
        • 7 years ago

        And that’s why I made sure I got an Ivy Bridge with dual-channel memory… OEMs……

          • vargis14
          • 7 years ago

          Oh yeah, Sandy/Ivy Bridge CPUs, especially the Celerons and Pentiums, need two sticks of memory so they can run in dual-channel mode; otherwise they can feel very sluggish and not at all snappy.

          My neighbor bought an Ivy i3 system with Win 8 a month ago, and he was not impressed with it at all. So I took a quick look at it and found it only had a single 4GB stick of memory, and he was using the Intel graphics, too; no dedicated card. He happens to know how to do a little bit with a computer, so I told him to buy another 4GB stick of memory at Microcenter.
          He could not believe how much of a night-and-day difference it made for his system. I quickly corrected him that it was not just doubling the memory to 8GB; it was the doubling of memory bandwidth, from 10GB/s to 20+ GB/s, to feed the graphics unit in the CPU that sped up his system’s performance. He is a happy camper now.
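
          The rough arithmetic behind those bandwidth figures, assuming DDR3-1333 (a guess; the comment doesn't name the module speed): peak bandwidth is the transfer rate times 8 bytes per transfer, per 64-bit channel.

          def ddr3_peak_gb_s(mt_per_s: int, channels: int) -> float:
              # 64-bit channel -> 8 bytes moved per transfer
              return mt_per_s * 8 * channels / 1000.0

          print(ddr3_peak_gb_s(1333, 1))  # ~10.7 GB/s, single channel
          print(ddr3_peak_gb_s(1333, 2))  # ~21.3 GB/s, dual channel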

        • My Johnson
        • 7 years ago

        I think I have dual-channel memory in my 17W Core i3. Specifically, I have this:

        [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16834200702[/url<]

        It's now $359, which is forty dollars less than I paid. Anyway, do you want me to run any benchmarks on it? Let me know.

          • NeelyCam
          • 7 years ago

          Wow, over 1/3 of the reviews are one-star reviews…

            • My Johnson
            • 7 years ago

            Yeah, but those giving it one-star reviews are being rather brutal about it. The only issue I’ve had with it, other than it costing a mere $400, is that I have to reset the wireless adapter on my home network on occasion. It’s so minor an issue that I haven’t sought out a solution.

            Otherwise, for $400 it’s rather impressive. And it’s now only $369.

          • vargis14
          • 7 years ago

          Yep, you’re running dual channel. It comes with a 4GB stick along with a 2GB stick. If you run the SiSoft Sandra Lite memory bandwidth test, it should hit right around 20GB/s.

            • My Johnson
            • 7 years ago

            It’s coughing up 15.25 GB/s aggregate with that benchmark.
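
            One hedged explanation for landing between the single- and dual-channel peaks: with mismatched 4GB + 2GB sticks, Intel's Flex Memory mode interleaves only the first 2GB + 2GB as dual channel and runs the remaining 2GB single channel. The DDR3-1333 speed below is an assumption; the thread doesn't state it.

            single = 1333 * 8 / 1000.0  # ~10.7 GB/s per channel
            dual = 2 * single           # ~21.3 GB/s interleaved

            # Crude capacity-weighted average: 4GB interleaved, 2GB not.
            print(4 / 6 * dual + 2 / 6 * single)  # ~17.8 GB/s, the right ballpark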

    • ptsant
    • 7 years ago

    Having read several reviews, I believe the chip is a victim of its own success. I mean, the performance is so good that most are tempted to compare it with higher-tier chips. Maybe AMD wanted to present things this way, but in the end I don’t know whether it’s better marketing to show Kabini as an Atom-crusher (same price range) or as a decent alternative to much more expensive chips.

    For me, a 12″ Kabini notebook would be ideal; it would completely obliterate my 10″ tablet and replace my aging 13.3″ notebook for 95% of tasks. (BTW, 12″ is the smallest size with a full-size keyboard.)

      • HTarlek
      • 7 years ago

      [quote<]12″ is the smallest size with a full-size keyboard[/quote<] Citation? I remember dozens of 10″ netbooks that had decent keyboards with full-size keys.

        • ptsant
        • 7 years ago

        [quote<] Citation? I remember dozens of 10″ netbooks that had decent keyboards with full-size keys. [/quote<]

        Just the fact that *someone* said something on the internet doesn't mean it's true (other than potentially generating self-referential circles where (a) cites (b), who cites (c), who cites (a)). So I usually don't bother to cite. Here, however, we can find some decent sources.

        The [b<]Department of Labor[/b<] (here: [url<]http://www.osha.gov/SLTC/etools/computerworkstations/components_keyboards.html)[/url<] says that "Generally, the horizontal spacing between the centers of two keys should be 0.71-0.75 inches (18-19 mm) and the vertical spacing should be between 0.71-0.82 inches (18-21 mm) (Figure 7)." It's not just about the [b<]size[/b<] of the keys, but also about the [b<]spacing[/b<].

        However, the Samsung NC10, considered to have "a roomy keyboard" (here: [url<]http://www.laptopmag.com/review/laptops/samsung-nc10.aspx)[/url<], has keys at 93% of full size with a pitch of 17.7 mm (http://mail2web.com/blog/2008/09/samsung-nc10-netbook/). Its bigger brother, the NC20, is explicitly marketed as having a "standard keyboard" (here: [url<]http://www.samsung.com/ae/consumer/computers-peripherals/notebook/netbook/NP-NC20-KA01AE-features).[/url<]

        One of the smallest notebooks with a "full-size" keyboard is the HP dm1, an 11.6″ notebook, and the VAIO "hybrid" (here: [url<]http://www.sony.co.uk/product/vn-duo/svd1121z9e,[/url<] explicitly mentions an 18mm pitch). I really don't think there is anything with an 18mm key pitch under 11″, but really, for a "full size" keyboard you usually want 12″. I'm too lazy to calculate, but the key sizes + spacing + key count probably exclude a 10″ product. The exercise is left to the readers (see the sketch below).
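
        Taking up that "exercise left to the readers", a rough Python sketch; the 14-column key count and the 16:9 panel assumption are mine, not from the linked sources:

        import math

        PITCH_MM = 19.0  # "full size" horizontal pitch per the OSHA figures above
        COLS = 14        # keys across the number row (` 1-0 - = Backspace)
        keyboard_mm = COLS * PITCH_MM  # ~266 mm of bare key width

        def panel_width_mm(diagonal_inches: float) -> float:
            # Width of a 16:9 panel with the given diagonal.
            return diagonal_inches * 25.4 * 16 / math.hypot(16, 9)

        for diag in (10.1, 11.6, 12.1):
            print(f'{diag}" panel: {panel_width_mm(diag):.0f} mm wide '
                  f"vs. {keyboard_mm:.0f} mm of keyboard")

        On those assumptions, a 10.1″ panel is only ~224mm wide against ~266mm of keyboard, and the two numbers only meet around 12″, which lines up with ptsant's rule of thumb.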

          • HTarlek
          • 7 years ago

          Thanks for that 🙂 But all I meant was that we had a few of those 10″ netbooks at work, and they all had keys that were the same size as the keys on my desktop keyboard. But I also remember the spacing being less, so if that’s considered ‘part of the spec’ then you’d be right.

    • ptsant
    • 7 years ago

    [quote<] For instance, this SoC has redundant paths to memory for the Jaguar CPU cores and the Radeon IGP. Those could be unified in a future chip, saving power without compromising performance. [/quote<] Well, it appears that this was a design decision. The GPU does massive serial reads and would pollute the CPU cache if they went through the same path. So the two paths exist in order to protect the cache, which is most useful for random reads that cannot be prefetched.

      • sschaem
      • 7 years ago

      It’s trivial to set caching behavior from the origination point.
      Serial reads from the GPU that should be uncached can already be marked uncached.
      The CPU memory controller already has all that built in.

      The issue is that the two pieces of silicon are A LOT less integrated than we think…

    • Bubster
    • 7 years ago

    Good review. It would be nice to see a Pentium in there.

    Weirdly, the Tom’s review has the i3 demolishing the Kabini in Skyrim.

    [url<]http://www.tomshardware.com/reviews/kabini-a4-5000-review,3518-7.html[/url<] The i3 should not be that bad (only about 20% slower than the i5). Edit: I see the Asus X202E has some really bad throttling problems.

    • BabelHuber
    • 7 years ago

    This is the best Kabini review I’ve read so far; thanks.

    Kabini/Temash looks very promising from a technical point of view.

    Now AMD needs to get its act together and support OEMs in creating devices.

    Since Intel wants to put its next-gen Atoms into Android tablets as well, I can only hope that AMD has corresponding projects of its own (e.g., Android GPU drivers).

    A smart OEM could release a Transformer-like device that allows installing different OSes without voiding the warranty.

    Just imagine such a device running Android, Linux, and Windows.

    The Asus TF700 can already dual-boot Ubuntu Linux and Android, but of course you need to void your warranty to do so.

    I think lots of enthusiasts would love to get their hands on a real open device; I certainly would. So go AMD!

    • dragosmp
    • 7 years ago

    Best review I’ve seen of the new CPU. You picked the right opponents, and the CPU and memory synthetics show more of the inner workings of the cores than something like PCMark would have. A point on FCAT: there does appear to be a strong correlation between FRAPS-based latency and FCAT for single-GPU systems; as such, time is probably better spent testing more apps/games rather than running FCAT.

    edit:grammar

    • dpaus
    • 7 years ago

    [quote<]near-competitiveness on the CPU side is accompanied by better graphics performance and, as we saw, superior battery life[/quote<]

    You guys made one massive, fatal error in your testing: you forgot to consult NeelyCam. He's provided reams and reams of links and pointers and, most importantly, [i<]opinions[/i<] that clearly, clearly show that this pathetic attempt by AMD to compete is doomed, as it has far less computing power than [i<]any[/i<] i7, its GPU can't begin to touch the GTX 780, and it sucks power voraciously compared to bottom-feeding Atoms. You didn't even get chuckula to write some pithy comparative screeds. I just don't know [i<]how[/i<] you expect us to take you seriously!

    But, seriously… Like you, I'm struck by the platform's significant potential for improvement, and the promise that holds given its already-impressive efficiency. And while I don't consider gaming the definitive measure that you do, I can't help but assume that at least one or two vendors might take the platform savings of this over an i3 or i5 and put some of it into a mobile Radeon on those dedicated lanes, to make a pretty decent gaming laptop (hopefully with that 1080p IPS screen) at an affordable price.

    I hope AMD can find the engineering and financial resources to make the obvious 'tock' happen. And the sooner the better; i.e., about the time Bay Trail comes out.

      • NeelyCam
      • 7 years ago

      [quote<]Like you, I'm struck by the platform's significant potential for improvement[/quote<]

      Actually, I remember having these dirty thoughts when I first learned about Brazos. The compact core, the awesomely cost-effective package… I think I used the word "breathtaking". Alas, the performance ended up being a disappointment.

      One of the points of Brazos was that the layout was largely synthesized (as opposed to hand-drawn). The great benefits of this approach are that fewer design resources are needed (absolutely important for AMD right now) and that the design can be transferred easily to other process technologies. I think AMD took the same approach with Kabini as well. The downside is that it's harder to tweak the layout to get the circuits running really fast. The automated tools are getting better, and there is a lot of development in that area… but currently there are limits to how perfect a synthesized layout can be. Brazos couldn't go very fast, and I think Kabini will have the same problem. Meanwhile, Intel, with its near-unlimited resources, can customize the layout of all critical blocks and tweak the design to go superfast when needed.

      There is also a cost aspect to this. If you want to get things to go fast, you have to pay the price in power and area. High performance is not what Jaguar was aimed at… instead, it was aimed at low power and high efficiency. Getting Kabini to go fast essentially means overclocking/overvolting. Think of pushing Piledriver to 4.5-5GHz… that's sort of like trying to push Jaguar to 2-2.5GHz. It wasn't meant for that, and it won't be efficient.

      There are two limitations on max clock frequency: 1) power consumption (P is proportional to fV^2), and 2) the ability of the circuitry to operate fast. Intel's transistors are faster and its layouts are optimized for speed; that's why Intel chips can run fast. For those, average/long-term power consumption is the limiter, so Turbo is used widely across the board. For a synthesized layout, limit 2) hits earlier, and it may not be a power consumption issue so much as the fact that the circuits just cannot run that fast even if you raise the voltage to crazy levels. I have a feeling that this is what was limiting Brazos, and is limiting Kabini. So we might see a 2GHz Kabini, but it's going to burn a crazy amount of power. Anything power-efficient is going to run slower, as that's what the chip was targeted at. It will be interesting to see what happens with Bay Trail, since it sounds like that chip was optimized for the 2-15W range (while Haswell is more like a 10-80W range).

      By the way, as we saw, Ivy Bridge with a dual-channel memory configuration beat Kabini pretty much across the board. Kabini's graphics advantage is only there compared to single-channel IB systems, and it's not that much more expensive to add a second stick to an Ivy Bridge system to improve its GPU performance significantly. So it's just a question of cost: a few dollars more would uncripple Ivy Bridge, and Kabini would be in trouble.
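
      A quick worked sketch of limitation 1): dynamic switching power goes as P = C·f·V², and since higher frequency usually demands higher voltage, power climbs much faster than the clock. The voltage/frequency pairs below are invented for illustration:

      def relative_power(f_ghz: float, v: float,
                         f0: float = 1.5, v0: float = 1.0) -> float:
          # Dynamic power relative to an (f0, v0) baseline, from P ~ f * V^2.
          return (f_ghz / f0) * (v / v0) ** 2

      # Hypothetical: 33% more clock that needs 15% more voltage.
      print(relative_power(2.0, 1.15))  # ~1.76x the baseline power

      That ~1.76x figure is loosely in line with the 15W-to-25W TDP jump raddude9 points out below for the 2GHz A6-5200, though the real voltage step is not public.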

        • Anonymous Coward
        • 7 years ago

        [quote<]So it's just a question of cost: a few dollars more would uncripple Ivy Bridge, and Kabini would be in trouble.[/quote<] Ah, the amazing effect of competition. Anyway, judging by Sony and MS, we can imagine these Jaguar cores being used more aggressively than AMD has done so far (i.e., dual channels, more GPU). Four Jaguar cores might be quite sufficient.

        • dpaus
        • 7 years ago

        OK, who are you, and what have you done with our resident fanatical Intel fanboi? And the answer better be good, because if I find out that this post – which I mostly agree with – was actually written by NeelyCam… Well, let’s just say I’ll be in therapy for months.

        But there is still a small chance that it’s you because, as usual, you’ve fixated on one aspect of Kabini’s performance and completely missed the larger picture. Specifically: you keep going on and on about how difficult it will be to increase Kabini’s clock speed. Neely, Neely, Neely… You’re like those kids I see at stoplights in their hopped-up subcompacts. They grin at me like idiots as they rev the snot out of their 1.2-litre 4-bangers while I calmly sit behind the wheel of a 400 HP V8. They foolishly try to make their little engines go faster instead of enjoying the light, nimble handling their cars came with, bone stock.

        I think there’s lots of room to improve Kabini’s performance without touching clock speeds (at least until the design is migrated to either GF’s or TSMC’s next process node). As I mentioned in an earlier post, Anand has listed [url=http://www.anandtech.com/show/6976/amds-jaguar-architecture-the-cpu-powering-xbox-one-playstation-4-kabini-temash<]several places for potential performance gains[/url<] in his article, none of them clock speed. Scott & Cyril specifically mention the redundant memory paths as a potential power saving, but I say keep the power, merge the memory access channel logic, and make it dual channel (of course, I can see why you wouldn't want to consider that, as dual-channel memory access is one of Ivy's few architectural advantages over Kabini). There's still room to expand the FP to 256 bits, and increasing the size of the L1 cache blocks would significantly uncork memory access. But as I also mentioned elsewhere, I'm no CPU designer, so I'd love to hear David Kanter's take on it. Hopefully, he's using his long weekend to write it up 🙂

        As far as the use of automated layout tools goes, I totally agree with you (which I find quite disturbing), but I think that in the long run it's the right path for them to follow. As you also note, the tools can only continue to improve, and at some point they'll negate Intel's human-capital advantage, at a much lower cost.

        Despite my reputation as a frothing-at-the-mouth AMD fanboi (not quite equal to your corresponding reputation as an Intel fanboi, but I'm working on it), I'm looking forward to Bay Trail too. I've mentioned previously that I look forward to the day when my 'good enough' Windows computing device can be carried in my pocket. I could get seriously interested in a 7-8″ 'netbook' form factor running Windows 9 (i.e., the point at which I hope they've come back to their senses) with enough on-board storage for my modest needs (64-128 GBytes should do it), plus the ability to connect HDMI and DisplayPort monitors and a USB 3.x 'docking station' for all other peripherals. As long as I can run any productivity software I want on it, and maybe some casual games, I'll be happy to make a few sacrifices in exchange for the portability. And I think we're getting close…

          • NeelyCam
          • 7 years ago

          [quote<]if I find out that this post - which I mostly agree with - was actually written by NeelyCam... Well, let's just say I'll be in therapy for months.[/quote<] Somebody hacked my account. I know who it is, and the perpetrator will be punished severely. I'll start using 20-character passwords from now on, so a post like that one will never happen again

            • dpaus
            • 7 years ago

            Widespread two-factor, biometric authentication is arriving just in the nick of time.

            Give them a smack upside the head for me too – totally ruined my Saturday morning.

          • NeelyCam
          • 7 years ago

          [quote<]And I think we're getting close...[/quote<] What you want is Bay Trail's 14nm successor… or actually, whatever the 14nm Atom tock is (the one after Airmont; I haven't found the codename anywhere… maybe it's Skymont?). I bet that stuff is magical… like Apple magical.

        • raddude9
        • 7 years ago

        [quote<]So we might see a 2GHz Kabini, but it's going to burn a crazy amount of power.[/quote<]

        Or… it might have exactly a 25W TDP… Did you miss page 2 of the review? It clearly shows the A6-5200, which is a 2GHz quad-core Kabini with a 600MHz GPU and a 25W TDP. So no need for any speculation: a 33% clock increase (over the 1.5GHz quad-core) increases the TDP by 66%. Not a great return, but expected for a chip targeted at low power.

        Your mentioning the single-channel memory situation got me thinking. Well, apart from thinking that 90% of people with such a config will never have the technical knowledge to upgrade, which short-changes them out of potential performance. My other thought was that Kabini does quite well with its single channel, but how much better would it do with a bit more GPU and a single channel of GDDR5 instead of DDR3, a kind of "PS4-lite" chip? I don't think the higher memory latency would really hurt the low-MHz cores (and sure, the GDDR5 would burn an extra few watts), but the extra bandwidth could boost games considerably. So apart from adding some more turbo-core features to Kabini, it's another potential improvement, and one that should be relatively straightforward to make, seeing as the PS4 has it already.

          • strata8
          • 7 years ago

          [quote<]Or… it might have exactly a 25W TDP… Did you miss page 2 of the review? It clearly shows the A6-5200, which is a 2GHz quad-core Kabini with a 600MHz GPU and a 25W TDP. So no need for any speculation: a 33% clock increase (over the 1.5GHz quad-core) increases the TDP by 66%. Not a great return, but expected for a chip targeted at low power.[/quote<] Don't forget that the GPU clocks also increased by 20% 🙂
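
          Checking that arithmetic with the numbers quoted in this sub-thread (the 500MHz base GPU clock is inferred from strata8's 20% figure, not stated outright):

          cpu_boost = 2.0 / 1.5 - 1   # +33% CPU clock
          gpu_boost = 600 / 500 - 1   # +20% GPU clock
          tdp_boost = 25 / 15 - 1     # +66.7% TDP (15W A4-5000 -> 25W A6-5200)
          print(f"CPU +{cpu_boost:.1%}, GPU +{gpu_boost:.1%}, TDP +{tdp_boost:.1%}")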

          • Azkor00
          • 7 years ago

          TDP does not correlate directly to actual power usage. It’s just a reference for the amount of power delivery needed, the heat that must be dissipated, etc. In fact, most CPUs use a lot less power than their rated TDP. AMD Trinity chips, on the other hand, have been known to throttle the processor back under heavy workloads to keep it within the TDP.

        • Hattig
        • 7 years ago

        @NeelyCam: “So we might see a 2GHz Kabini, but it’s going to burn a crazy amount of power.”

        By “might” you mean “will”. The “A6-5200”, IIRC.

        And that “crazy amount of power” is a 25W TDP according to AMD, with the GPU running 20% faster as well.

        This will compare very nicely to the 35W mobile Pentiums.

          • Hattig
          • 7 years ago

          AMD released the new Jaguar Opterons today.

          The quad-core Opteron X1150 runs at up to 2GHz and has a 9W TDP (although elsewhere it says the TDP range is 9W-17W). No graphics, but it still has 8x PCIe, USB 3.0, SATA, etc. The price is $64.

          I take that to mean this chip can actually make use of higher clocks in single-threaded code.

    • Star Brood
    • 7 years ago

    Clearly we are witnessing great strides in mobile development. Now that AMD has almost caught up to Ivy Bridge on ULV, Haswell comes out next week…

      • willmore
      • 7 years ago

      So, maybe a $800 Haswell laptop will tie a $500 Kabini laptop? Yeah, that’s a hard decision.

        • Klimax
        • 7 years ago

        Considering i3 devices are in the same price range as the reviewed unit, an i5 clearly won’t be necessary… (and prices won’t move with the new CPUs).

    • odizzido
    • 7 years ago

    Nice review of a nice product. I would like to see some 9″ or 10″ laptops using the A4-1200 and an SSD, with a decent screen and 4 gigs of RAM. It will never happen, but I can dream…

      • willmore
      • 7 years ago

      I love my Acer AspireOne 722, which has a C-50 and a 1388×768 10.1″ display. Update it with one of these chips, and it would be a dream. As it is, it ties my SB-era Pentium B940 laptop on graphical tasks, and at much lower power. To get the same CPU (or better) performance and a way better GPU at the same or lower power? Wow, that would be wonderful.

        • raddude9
        • 7 years ago

        Hate to nitpick, but the Acer AspireOne 722 had an 11.6-inch, 1366×768 display. Perhaps you’re thinking of the AspireOne 522, which came in a few flavors, one with a 10.1-inch 1024×600 display and another with a 1280×720 display.

        I know because I have the AspireOne 522 with the 720p display and a C-60. Once I’d fine-tuned Windows and upped the RAM with a 2GB stick I had lying around, it turned into a great machine that could run rings around other netbooks.

        I’m with the OP: I know they’re not trendy these days, but I’d like to see a new-age 10-inch netbook with one of these new chips in it. I’d be sorely tempted.

          • A_Pickle
          • 7 years ago

          If I could get a sub-$300 netbook with USB 3.0 I’d be sorely tempted.

          • willmore
          • 7 years ago

          No, you’re right on the dimensions. I just keep remembering the ad for it, which incorrectly listed the size as 10.1″, and for some reason I can’t get my brain to update that. Thanks.

        • odizzido
        • 7 years ago

        I have the same laptop. It’s made kinda cheaply, but it’s the best laptop I’ve ever owned. I would love to get an updated one.

    • codedivine
    • 7 years ago

    Hi folks. Just wondering: which browser did you use for SunSpider? It wasn’t immediately clear from the description.

      • Cyril
      • 7 years ago

      We used Chrome 26 for performance testing and the older Chromium build for battery testing (to be consistent with the battery life results from our mobile suite).

    • jensend
    • 7 years ago

    SunSpider 0.9.1 is three years old, and it was already not a very good benchmark then.

    Kraken is much more indicative of real-world JS perf.

      • Cyril
      • 7 years ago

      We used SunSpider 1.0, not 0.9.1. I believe 1.0 came out [url=https://www.webkit.org/blog/2364/announcing-sunspider-1-0/<]less than a month ago[/url<].

        • jensend
        • 7 years ago

          Did you just edit the testing methods page to update that? I could swear I saw 0.9.1 on the methods page when I wrote that. Wait… oops, I had another of your CPU reviews open in another tab for comparison and looked at its methods page by mistake. D'oh!

          Anyway, the new version of SunSpider does correct a couple of consistency problems, so as not to be [i<]entirely[/i<] meaningless (strong praise indeed). But it doesn't address the basic problems. The test suite is outdated and completely unrepresentative of real-world JS. SunSpider was the best JS benchmarking tool available in 2007. By 2009, with the development of JIT'd JS engines, it was [url=http://mac-os-forge.2317878.n4.nabble.com/Iterating-SunSpider-td179773.html<]starting to show serious signs of obsolescence[/url<]. (Note that the power-saving problem that guy mentioned is the main fix in SunSpider 1.0, only four years late. The other problems haven't been addressed.)

          • Cyril
          • 7 years ago

            [quote<]Wait… oops, I had another of your CPU reviews open in another tab for comparison and looked at its methods page by mistake. D'oh![/quote<] Yeah, this is the first time we've used 1.0. It definitely said 1.0 on the testing methods page all along, though… updating that list was the first thing I did before writing up the test commentary. 🙂 Edit: We'll give Kraken a look for next time.

            • jensend
            • 7 years ago

            [quote<]Edit: We'll give Kraken a look for next time.[/quote<]

            Great, thanks. A number of other well-known tech sites have either switched to Kraken or thrown both tests into a test suite along with [url=https://blog.mozilla.org/nnethercote/2012/08/24/octane-minus-v8/<]Octane[/url<] and several others.

            Even a good JS test isn't marvelously representative of overall browser performance. To get an accurate view of that, one needs to throw in at least some DOM tests and possibly a bunch of other things (CSS, Canvas, and so on). But since you're comparing processors, not browsers, since no current overall browser benchmark is all that great, and since JS performance is of some independent interest, I think Kraken will do the job.

    • mczak
    • 7 years ago

    The latency graph on page 4 is presumably missing the decimal point on all the numbers, because 40 clocks for an L1d access? Don't think so…
    Regardless, I'd say the L2 and especially the memory latency of Kabini are simply pathetic. Considering the clocks, memory access is actually slower on Kabini than on Brazos, and getting beaten by Atom (where, again, the clock difference makes the gap quite a bit larger) is just silly, given that Atom still operates with an "on-chip FSB" rather than a traditional IMC.
    Surely that must be the #1 weakness of Kabini, and a mistake Silvermont is IMHO unlikely to make.
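
    For context on how graphs like that are produced: cache and memory latency testers boil down to pointer chasing, where each load depends on the previous one, so the time per step converges on the latency of whichever level of the hierarchy the buffer fits in. A minimal C sketch follows (not Sandra's actual method; the buffer and iteration counts are arbitrary, and 8 bytes per pointer assumes a 64-bit build).

        /* Sketch of a pointer-chasing latency test: each load depends on
         * the previous one, so time/step ~ latency of the cache level
         * holding the buffer. The shuffle defeats hardware prefetching. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N     (1 << 20)     /* 8 MB of pointers: past Kabini's L2 */
        #define ITERS (1L << 24)

        int main(void) {
            void **buf = malloc(N * sizeof *buf);
            size_t *idx = malloc(N * sizeof *idx);
            for (size_t i = 0; i < N; i++) idx[i] = i;
            srand(1);
            for (size_t i = N - 1; i > 0; i--) {   /* Fisher-Yates shuffle */
                size_t j = rand() % (i + 1), t = idx[i];
                idx[i] = idx[j]; idx[j] = t;
            }
            for (size_t i = 0; i < N; i++)         /* link one random cycle */
                buf[idx[i]] = &buf[idx[(i + 1) % N]];

            void **p = buf;
            struct timespec a, b;
            clock_gettime(CLOCK_MONOTONIC, &a);
            for (long i = 0; i < ITERS; i++)
                p = (void **)*p;                   /* serialized dependent loads */
            clock_gettime(CLOCK_MONOTONIC, &b);

            double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
            printf("%.1f ns per load (p=%p)\n", ns / ITERS, (void *)p);
            free(idx); free(buf);
            return 0;
        }

    Shrink N until the buffer fits in L1 or L2 and the same loop reports that level's latency instead.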

      • Cyril
      • 7 years ago

      The Sandra L1 latency results are in line with those we got for desktop CPUs. See [url=https://techreport.com/review/23750/amd-fx-8350-processor-reviewed/3<]here[/url<].

        • mczak
        • 7 years ago

        Well, then they are also wrong :-). Even the linked SiSoft blurb shows the decimal point, so L1 latency is 4.0 clocks for SNB, as it should be, not 40. I don't know whose fault it is that the decimal point disappeared; my guess would be that some country settings got points and commas confused somewhere…
        BTW, the conclusion about why JavaScript is faster on the i5 than on the i3 is totally bogus ("despite the slightly slower clock speed"). Of course the reason is just Turbo: the i5 runs this at 2.6GHz (or thereabouts) whereas the i3 is stuck at 1.8GHz. The article already states that the test favors single-threaded performance, so Turbo really should kick in there.
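
        The locale theory is plausible and easy to illustrate: in a comma-decimal locale, 4.0 prints as "4,0", and a consumer expecting a period either stops at the comma (reading 4) or strips it (reading 40). A small C sketch follows; the de_DE locale and the comma-stripping step are assumptions for illustration, not Sandra's known behavior.

            /* Sketch: how "4.0 clocks" can turn into "40" across locale
             * boundaries. All values are hypothetical. */
            #include <stdio.h>
            #include <stdlib.h>
            #include <locale.h>

            int main(void) {
                double l1 = 4.0;                 /* SNB L1 latency, in clocks */
                char buf[32] = "4,0";            /* fallback if locale absent */

                if (setlocale(LC_NUMERIC, "de_DE.UTF-8"))
                    snprintf(buf, sizeof buf, "%.1f", l1);   /* -> "4,0" */
                printf("exported: %s\n", buf);

                setlocale(LC_NUMERIC, "C");      /* reader expects a period */
                printf("strtod:   %g\n", strtod(buf, NULL)); /* stops at ',' -> 4 */

                char mangled[32]; int n = 0;     /* or the comma gets dropped */
                for (char *s = buf; *s; s++)
                    if (*s != ',') mangled[n++] = *s;
                mangled[n] = '\0';
                printf("mangled:  %s\n", mangled);   /* "40": off by 10x */
                return 0;
            }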

          • Cyril
          • 7 years ago

          Updated the commentary for SunSpider. Waiting on Scott for the other thing.

            • mczak
            • 7 years ago

            Thanks! Nice article. I have to agree with the conclusion, though, that the lack of Turbo is a shame. A 30% or so CPU clock boost, while certainly not closing the single-thread performance gap to the ULV i3, would do wonders. (Granted, the ULV i3 _could_ have Turbo too; Intel just disables it because they'd rather sell you a ULV i5. But I can't see why AMD would want to sell you a hobbled one-module Trinity instead, so there's little reason not to enable it, especially on the higher-end versions, IMHO.)

          • Cyril
          • 7 years ago

          Just spoke with Scott and fixed the latency graph with the correct scale on the Y axis. Thanks for pointing this out!

          Edit: In case you’re wondering, the scale was wrong because Sandra inexplicably spits out these latency results without a decimal point (either period or comma) when you export to CSV or XML. We just never noticed the discrepancy.

    • Bensam123
    • 7 years ago

    Where are the FCAT benchmarks?!?!?!?11 XD

    Good article, I like the subjective test results too. Glad you guys decided to add those.

    Anyone think the chip in front of the laptop in the small jpg looks like a pizza box?

      • HTarlek
      • 7 years ago

      Well, the lower 2/3rds DOES appear to say ‘pizza’ – in Klingon.

        • Bensam123
        • 7 years ago

        [url<]https://www.google.com/search?q=red+baron+pizza&safe=off&source=lnms&tbm=isch&sa=X&ei=PXuhUfOeKIXMyQGg-IHwCA&ved=0CAcQ_AUoAQ&biw=1773&bih=995[/url<]

        Wonder if they were really hungry when they made the chip layout.

    • dpaus
    • 7 years ago

    Guys, the 3rd graph on page 4 has a mis-labeled legend; both the i3 and i5 results are labeled as i3. And a space went missing between ‘but’ and ‘the’ in the 3rd paragraph of Page 9.

    God, I feel like my sister the English teacher.

      • Cyril
      • 7 years ago

      Fixed, thanks.

    • WillBach
    • 7 years ago

    [quote<][...] its lowest power envelope is 3.8W, which limits it to relatively thick tablets [...][/quote<]

    The Exynos 5 Dual in the Nexus 10 can use a lot more than that. If Kabini can push more performance than the Exynos, we could see it in all sorts of Android tablets.

    Source: [url=http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown/13<]The ARM vs x86 Wars Have Begun: In-Depth Power Analysis of Atom, Krait & Cortex A15[/url<]

    Edited to add "can".

    • Anonymous Coward
    • 7 years ago

    So does this thing generally beat K8 clock-for-clock, then? I wonder how much of the floating point performance is dependent on new instructions. Anyway, a nice improvement by AMD.

      • abw
      • 7 years ago

      It depends on the software, but it seems equal or better in integer and as good in FP. I don't know whether AVX is used in software like Cinebench, but with those instructions in use it should be better in FP as well.

        • mczak
        • 7 years ago

        AMD promised something like 90% of the IPC of a K8 with Bobcat; IMHO they didn't quite get there (more like 85%), but Jaguar should just about do it.
        On the FPU/SIMD side, code could see a much larger improvement thanks to the widening of the units from 64-bit to 128-bit (it should beat K8 very easily there, as K8 only had 64-bit SIMD units too).
        (FWIW, I quickly profiled Cinebench 11.5 and came up with 70% scalar ops (mostly double precision) and 30% packed ones (nearly all integer or single precision). On the scalar ops the 128-bit SIMD unit won't really help, but it should for the packed ones. This version shouldn't use AVX, but AVX wouldn't help all that much anyway, since pretty much all it does is eliminate move instructions thanks to 3-operand addressing; and again, Jaguar would have to split 256-bit AVX instructions into two 128-bit ones, just like Bobcat had to split packed 128-bit instructions into two 64-bit halves.)
        If your SIMD code benefits from newer SSE features not previously supported on either Bobcat or K8, or uses some other more esoteric instructions (popcount, AES, or whatnot), then of course those specific bits can run much faster.
        Interestingly, all in all that's also about the IPC of Bulldozer, though, sparing that CPU line complete humiliation, Kabini at least won't reach the rather high (turbo) clocks of mobile Trinity/Richland…
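
        To make the scalar-versus-packed distinction concrete: a packed 128-bit instruction carries two doubles (or four floats) per operation, while a scalar one carries a single value, so only the packed share of the instruction mix benefits from Jaguar's widened unit. A small SSE2 intrinsics sketch in C (array contents are arbitrary):

            /* Sketch: scalar vs. packed SSE2 double-precision adds. A
             * 128-bit unit retires ADDPD in one internal op; Bobcat's
             * 64-bit unit had to split it in two, Jaguar does not. */
            #include <emmintrin.h>
            #include <stdio.h>

            void add_scalar(const double *a, const double *b, double *o, int n) {
                for (int i = 0; i < n; i++)
                    o[i] = a[i] + b[i];   /* typically ADDSD: 1 double per op */
            }

            void add_packed(const double *a, const double *b, double *o, int n) {
                for (int i = 0; i + 1 < n; i += 2) {  /* ADDPD: 2 doubles per op */
                    __m128d va = _mm_loadu_pd(a + i);
                    __m128d vb = _mm_loadu_pd(b + i);
                    _mm_storeu_pd(o + i, _mm_add_pd(va, vb));
                }
                if (n & 1)                /* odd tail element stays scalar */
                    o[n - 1] = a[n - 1] + b[n - 1];
            }

            int main(void) {
                double a[5] = {1, 2, 3, 4, 5}, b[5] = {10, 20, 30, 40, 50}, o[5];
                add_packed(a, b, o, 5);
                printf("%g %g %g %g %g\n", o[0], o[1], o[2], o[3], o[4]);
                return 0;
            }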

          • abw
          • 7 years ago

          The ratio of Bobcat to K8 was like 95% in integer but only 75% in FP, so overall it was 85%, spot on with your estimation. A mediocre split, although for a low-power general-purpose CPU, integer matters more.

          Thanks for the Cinebench profiling numbers. I thought it used fewer FP resources, since Bulldozer manages a 6× scaling ratio with 8 cores while it has only four FP clusters, although fully multithreaded ones.

            • mczak
            • 7 years ago

            These numbers for CB only include the instructions going to the SIMD cluster (I wanted to know how many are scalar FP and how many packed FP), so there could be more going to the integer core; I didn't capture that.
            Though one of the reasons why the shared SIMD unit may be a good idea, especially for FP code, is that since FP instructions tend to have quite long latencies (even more so on BD compared to other CPUs), it may easily not be possible to fill the chip's execution resources from one thread, since you'd need a boatload of independent instructions. Since, by definition, instructions from another thread are independent, it becomes much easier to fill those otherwise potentially idle units, so it's not unexpected to still see reasonable multithreaded scaling.
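
            The latency-hiding point can be shown in miniature: one dependent chain of FP adds is limited by add latency, while several independent chains (standing in for another thread's instructions) can approach the unit's throughput. A C sketch follows; the iteration count is arbitrary, and it must be built without -ffast-math so the FP loops survive as written.

                /* Sketch: one dependent FP chain exposes add latency;
                 * four independent chains fill the pipeline, much as a
                 * second thread's instructions would. */
                #include <stdio.h>
                #include <time.h>

                #define ITERS 200000000L

                static double secs(struct timespec a, struct timespec b) {
                    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
                }

                int main(void) {
                    struct timespec t0, t1;
                    double s = 0, a = 0, b = 0, c = 0, d = 0;

                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    for (long i = 0; i < ITERS; i++)
                        s += 1.0;               /* each add waits on the last */
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    printf("1 chain:  %.2f s (s=%g)\n", secs(t0, t1), s);

                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    for (long i = 0; i < ITERS; i += 4) {
                        a += 1.0; b += 1.0;     /* four independent adds */
                        c += 1.0; d += 1.0;
                    }
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    printf("4 chains: %.2f s (sum=%g)\n", secs(t0, t1), a + b + c + d);
                    return 0;
                }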

    • ssidbroadcast
    • 7 years ago

    [quote<]What's more, Kabini and Temash don't support Windows 8's connected standby mode. AMD elected to focus its efforts on speeding up two things: quickly resuming operation when coming out of sleep mode and ensuring quick reconnects to Wi-Fi networks. Those are sensible choices,[/quote<]

    More than sensible. Those two things matter in almost any end-user usage scenario. Nobody likes waiting for a Wi-Fi reconnect after opening up the laptop.

    • windwalker
    • 7 years ago

    Thank you for including your subjective impressions of system responsiveness.
    It’s much more useful information than how quickly it zips files and encodes media because nobody just sits around waiting for that to finish.

    • abw
    • 7 years ago

    Where are the power consumption graphs?…

    Annoying that such numbers don't seem to matter in what is allegedly a low-power CPU review…

      • Cyril
      • 7 years ago

      We tested pre-built laptops with different displays, storage configurations, etc. Measuring system power consumption wouldn’t tell us much. The battery-life figures (especially the normalized ones) on page 10 are much more enlightening.
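
      For anyone unsure what that normalization involves: it is just run time divided by battery capacity, which takes pack size out of the comparison. A trivial C sketch with made-up numbers:

          /* Sketch: normalizing run time by pack capacity so systems
           * with different batteries compare fairly. Figures made up. */
          #include <stdio.h>

          int main(void) {
              double runtime_min = 360.0;  /* hypothetical: 6 h per charge */
              double capacity_wh = 40.0;   /* hypothetical: 40 Wh pack */
              printf("%.1f min/Wh (avg draw %.2f W)\n",
                     runtime_min / capacity_wh,
                     capacity_wh / (runtime_min / 60.0));
              return 0;
          }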

        • abw
        • 7 years ago

        There's a downside to this method. Since the laptop has a recent battery, were you sure it had been given enough charge/discharge cycles to reach its final capacity?

        Measuring the power consumption would instantly reveal any discrepancy on that front, and it would have given an idea of the power drawn; you could have noted that these were partial results to be confirmed later.

        Anyway, that point aside, a good review overall.

          • Cyril
          • 7 years ago

          [quote<]Since the laptop has a recent battery, were you sure it had been given enough charge/discharge cycles to reach its final capacity?[/quote<]

          From the review:

          [quote<]Before testing, we conditioned the batteries by fully discharging and then recharging each system twice in a row.[/quote<]

          In my experience, after two discharges, battery-life results are repeatable in subsequent runs.

            • abw
            • 7 years ago

            Thanks for the tip…

          • Anonymous Coward
          • 7 years ago

          [quote<]There's a downside to this method. Since the laptop has a recent battery, were you sure it had been given enough charge/discharge cycles to reach its final capacity?[/quote<]

          I think you are confusing the behavior of some other type of battery with lithium-ion. Lithium-ion batteries will never gain total capacity from recharging, deep discharge, or anything else, except perhaps from operating at a more ideal temperature. The software monitoring the battery state might benefit from "training," however.

          (Interestingly, lithium-ion batteries irreversibly loose capacity simply by aging, regardless of usage. They loose capacity faster if they are stored fully charged, if they are used, or if they are stored at high temperatures.)

            • abw
            • 7 years ago

            Yes, those batteries are a mess to manage. NiCd and NiMH were simpler to use: lower capacity, but lower internal resistance as well.

            • Meadows
            • 7 years ago

            Loose?

            • Anonymous Coward
            • 7 years ago

            OK, you win.

        • smilingcrow
        • 7 years ago

        Only having battery times for web browsing and video playback is fine for tablets, but for laptops it can be useful to know how a system responds under a heavier load. That may be a weakness of this platform: on paper the TDP is high for the performance, so it may be inefficient at load, and it would be good to know.

        I used to check my laptops' power consumption by removing the battery, connecting an external monitor, and noting the idle and CPU-load figures. That gave me a baseline figure plus how much extra the system consumed under CPU load. Obviously the same can be done for the GPU by itself and for the CPU and GPU together.
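
        The method reduces to subtraction: wall power under load minus wall power at idle approximates the loaded component's extra draw, ignoring PSU losses. A trivial C sketch, with all figures made up:

            /* Sketch of baseline subtraction: load minus idle at the wall
             * approximates a component's draw. Figures are hypothetical. */
            #include <stdio.h>

            int main(void) {
                double idle_w     = 8.5;    /* hypothetical idle draw */
                double cpu_load_w = 16.0;   /* hypothetical all-core load */
                double gpu_load_w = 19.5;   /* hypothetical GPU-only load */
                printf("CPU delta: %.1f W\n", cpu_load_w - idle_w);
                printf("GPU delta: %.1f W\n", gpu_load_w - idle_w);
                return 0;
            }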

          • smilingcrow
          • 7 years ago

          It seems that the CPU is power efficient as is the platform overall but the GPU is like a sailor on a Saturday night; it drinks a lot. Doesn’t mean it’s inefficient just that it’s been allocated a lot of the power budget.

      • smilingcrow
      • 7 years ago

      It would be useful to see battery life under full CPU and CPU/GPU loads to get an impression of the worst case scenario.
      Notebookcheck.net go in for that and they have a review for Temash but no Kabini yet:

      [url<]http://www.notebookcheck.net/Review-AMD-A6-1450-APU-Temash.92264.0.html#c1160794[/url<]

      For the complete system, they recorded a maximum at the wall of 21.9W for the A6-1450, which seems to be an 8W part.

      • raddude9
      • 7 years ago

      There are plenty of other reviews on the net (although TR tends to be better) if you can't find exactly what you want. Try Tom's:
      [url<]http://www.tomshardware.co.uk/kabini-a4-5000-review,review-32695-13.html[/url<]
      or Anand:
      [url<]http://www.anandtech.com/show/6974/amd-kabini-review/6[/url<]
      or Techspot:
      [url<]http://www.techspot.com/review/671-amd-a4-5000-kabini/page2.html[/url<]
      or HotHardware:
      [url<]http://hothardware.com/Reviews/AMD-2013-ASeries-Kabini-and-Temash-Mobile-APUs/?page=7[/url<]
      You get the idea.

        • abw
        • 7 years ago

        Thank you for the linkery.

          • ronch
          • 7 years ago

          Honestly, any self-respecting hardware enthusiast can look for that information on his or her own instead of complaining that a particular review site skipped certain tests, don't you think?

            • abw
            • 7 years ago

            I'm not really a hardware enthusiast. Besides, a few samples won't give an accurate picture, statistically speaking.

            Weird that both TR and Hardware.fr, two valuable sites, don't give any power numbers.

            • ronch
            • 7 years ago

            [quote<]I'm not really a hardware enthusiast[/quote<]

            Really? You're participating in the discussion area of TR's AMD Kabini article and you're not a hardware enthusiast? So what are you here for? To pick fights with NeelyCam, Chuckula, and anyone else who doesn't agree with you or who you think is trolling? Crazy logic, man.

            • abw
            • 7 years ago

            No aggression from my side, only just deserts for some people.

            I have nothing against Neely, but Chucktrolla made the mistake of being aggressive and lied about me in the very first post where he responded to me, making a very ugly first impression. And since you only get one chance to make a first impression…

            • ronch
            • 7 years ago

            If there’s one thing I’ve learned here in the TR forums, it’s how to forget the past. I’ve clashed with Neely and Chucky on several occasions myself, and there are a few others apart from them too. Thing is, I’m not gonna get all riled up by folks who I don’t even know personally. In fact, I’ve come to be quite fond of some of the characters here and feel quite at home with the active ones. Welcome to the TR community. It sucks here, but it’s also fun.

            And yeah, if you’re after getting respect from people and being nice to each other, don’t go to TR, go to Facebook, where you know everyone in person (at least that’s the way it should be) and everyone’s careful not to say stupid things because folks who know them personally see what they post.

            • abw
            • 7 years ago

            In this case there's nothing that could be painful; it's just like shooting a barking dog…

    • chuckula
    • 7 years ago

    I post a comment asking for more Kabini reviews in the shortbread, and one appears here!

    TR truly performs magic.

      • abw
      • 7 years ago

      Chucktrolla is already here; only TrollyCam is missing…

      But not for long…

        • chuckula
        • 7 years ago

        Have you ever made a substantive post on this website?
        Have you ever even made a somewhat on-point or satirical troll?
        Are you even capable of forming complete sentences that are terminated with appropriate punctuation?

        I gather that the answer to all three of these questions is a rather emphatic no.

        • chuckula
        • 7 years ago

        [quote<]We were surprised by the inclusion of a 13" 1080p panel on the A4-5000 whitebook, but AMD says we can expect similar goodness from $499 retail offerings.[/quote<]

        And abw claims that only Intel makes outlandish predictions that will never come true.

          • abw
          • 7 years ago

          Screen prices will collapse, as they do for all mature electronics. There's a current trend toward cheapness, and manufacturers will have to provide products that are both cheap and have good screens; that, or they will rapidly die.

          Edit: so you have at least three different accounts…

            • NeelyCam
            • 7 years ago

            What’s the third one?

            I.e., you have no proof!

            • abw
            • 7 years ago

            He posted the first post in the thread, and I answered him no more than a minute later. It only took another minute or so for him to post a response and click three downvotes, when there were only our three posts and the TR article had been published for just a few minutes.

            Trolls are rarely smart, indeed.

            • NeelyCam
            • 7 years ago

            Ah – you’re going by the voting. Sorry – the clan is strong, and the votes fly fast.

            You have no proof that I’m chuckula/MadManOriginal.

            • abw
            • 7 years ago

            I didn't involve you other than as some kind of quasi-symmetric image of Chucktrolla, with him standing on the ugly side of the mirror; I never suggested that you were both a single character.

            As for the clan, I guess the said troll holds many seats in this desperate assembly.

            • chuckula
            • 7 years ago

            If abw thinks that Neely and I are the same person, then I take back everything I have ever said about BaronMatrix being the stupidest troll on this site.

            • ronch
            • 7 years ago

            I think I was the one who thought you and Neely are the same person. Or was it Bensam and Neely? Ah…

            • abw
            • 7 years ago

            They have different IQs, but they converge on the same FUD. Aren't Intel meds wonderfull?…

            • ronch
            • 7 years ago

            [quote<]wonderfull[/quote<]

            Funny how you attempt to insult their IQs when you fail to even spell a common word correctly.

            • abw
            • 7 years ago

            English is not my native language, and I don't even use it outside a few forums. Nevertheless, it's spelled well enough considering the said trolls' IQs…

            • ronch
            • 7 years ago

            Ok. Then switch to French or something. TR folks can speak at least a dozen languages. Part of the training at Camp Gerbil.

            • abw
            • 7 years ago

            Une intéressante idée; malheureusement, la plupart des membres ne comprendraient pas grand-chose. (An interesting idea; unfortunately, most members wouldn't understand much.)

            • ronch
            • 7 years ago

            I think I was the one who started this trend of suspecting or accusing people of using alternate usernames or having alter egos, to put it in a slightly joking manner. Guess it caught on.

            Do you have any idea how much this BOOSTS my self esteem??? 😉

            LOL

            • Meadows
            • 7 years ago

            Wait a minute. You believe that you said something [i<]nobody would think of downvoting[/i<] except for the accused?

        • ronch
        • 7 years ago

        Have a chill pill.

      • Sargent Duck
      • 7 years ago

      Don't know why you got downvoted for that. I gave you an upvote to try to balance it out.
