AMD’s A8-3500M Fusion APU


Computer chips become more complex over time. We know this in our bones by now, in various ways, whether it’s watching ever more functionality get crammed into smart phones or hearing the constant drumbeat of Moore’s Law. In recent years, we’ve watched the CPU rise from a single core to two, four, and even more. Cache sizes, clock speeds, and performance have grown over time, as well.

Even so, the sheer scope of AMD’s new processor—code-named “Llano” and creatively dubbed an “accelerated processing unit” (APU) rather than a CPU—may cause you to do a double-take. This one chip incorporates a whole host of elements, many of which used to reside in other parts of a PC: up to four traditional CPU cores, a north bridge, a DDR3 memory controller, a bundle of PCI Express connectivity, a moderately robust Radeon GPU with an associated UVD block for video acceleration, and a pair of display interfaces. That’s a mighty long list of capabilities consolidated into one piece of silicon, almost a system on a chip rather than a CPU surrounded by many helpers.

By integrating so many pieces together, Llano follows a trajectory for CPUs established long ago, when they first incorporated floating-point units. L2 caches were next to be assimilated, followed by memory controllers in AMD’s K8. The integration trend has really picked up steam in recent years, though, and the most fully realized example has been Llano’s primary competitor, Intel’s Sandy Bridge processor. Even though it follows Sandy Bridge by roughly half a year, Llano still feels like a notable milestone on the integration path, in part because AMD has covered a lot of ground in this single step—and in part because Llano has absorbed a familiar and relatively formidable Radeon GPU.

Integration is the hot trend because it offers two main types of benefits. First, bringing ever more components on the CPU die can reduce the size, cost, and power consumption of a computer system. Laptops have grown dramatically smaller and more capable in recent years, with longer battery life, thanks to creeping integration. Second, situating key computing resources together on the same die has the potential to improve performance substantially, especially if those components can take advantage of a shared pool of memory.

By christening Llano a “Fusion APU” and talking about the possibility of tools like OpenCL allowing the execution resources of the CPU and GPU to work together, AMD’s marketing machine has chosen to emphasize the second class of benefits. Make no mistake, though: Llano is about that first class of benefits, through and through.

Fusion’s first steps

Intel has been shipping CPUs based on its own 32-nm manufacturing process for well over a year, but Llano is the first chip from AMD and its manufacturing partner, GlobalFoundries, to ship in volume at 32 nanometers. GloFo’s 32-nm process is distinct from Intel’s in several ways, including the use of silicon-on-insulator layering and a “gate-first” approach to the construction of high-k metal gates. Together, these techniques have helped create the benefits one would hope to see from a process shrink. According to Dr. Dirk Wristers, GloFo’s VP of Technology and Integration, this 32-nm process offers a 100% increase in transistor density, along with a 40% increase in switching speed and a 40% reduction in energy required per switch, versus its 45-nm predecessor.

The upshot of these changes for Llano is room for more toys—a vastly increased transistor budget—and the potential for achieving higher performance in a relatively small power envelope.

| Code name | Key products | Cores | Threads | Last-level cache size | Process node (nm) | Est. transistors (millions) | Die area (mm²) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Penryn | Core 2 Duo | 2 | 2 | 6 MB | 45 | 410 | 107 |
| Bloomfield | Core i7 | 4 | 8 | 8 MB | 45 | 731 | 263 |
| Lynnfield | Core i5, i7 | 4 | 8 | 8 MB | 45 | 774 | 296 |
| Westmere | Core i3, i5 | 2 | 4 | 4 MB | 32 | 383 | 81 |
| Gulftown | Core i7-980X | 6 | 12 | 12 MB | 32 | 1168 | 248 |
| Sandy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 32 | 995 | 216 |
| Sandy Bridge | Core i3, i5 | 2 | 4 | 4 MB | 32 | 624 | 149 |
| Sandy Bridge | Pentium | 2 | 4 | 3 MB | 32 | | 131 |
| Deneb | Phenom II | 4 | 4 | 6 MB | 45 | 758 | 258 |
| Propus/Rana | Athlon II X4/X3 | 4 | 4 | 512 KB x 4 | 45 | 300 | 169 |
| Regor | Athlon II X2 | 2 | 2 | 1 MB x 2 | 45 | 234 | 118 |
| Thuban | Phenom II X6 | 6 | 6 | 6 MB | 45 | 904 | 346 |
| Llano | A8, A6, A4 | 4 | 4 | 1 MB x 4 | 32 | 1450 | 228 |
| Llano | A4 | 2 | 2 | 1 MB x 2 | 32 | 758 | |

The unnecessarily well-populated table above shows how Llano compares to a broad range of today’s desktop processors. As you can see, AMD actually has plans for two very different versions of Llano silicon, one with quad cores and another with two cores and just over half the transistors. The quad-core version is first out of the chute, and initially, AMD will offer dual-core models of its A-series APUs made from the larger chip with a couple of cores disabled. Eventually, the native dual-core variant will take over, because it should be much more economical to manufacture. (Since it’s not here yet, AMD hasn’t seen fit to divulge the dual-core Llano’s die size.)

Somewhat surprisingly, Llano’s transistor count eclipses all of its contemporaries, including the six-core Gulftown chip with 12MB of L3 cache. However, the larger concern is die area, because that determines the cost to make the thing. As you can see, the quad-core Llano at 228 mm² is slightly larger than the 216 mm² quad-core Sandy Bridge. The difference doesn’t seem so notable—until we consider that the bigger Llano will mostly do battle against the mid-size, 149 mm² Sandy Bridge. Of course, higher costs for AMD don’t necessarily mean higher prices for consumers—just lower profits for AMD.

An annotated look at the “Llano” die. Source: AMD.

Llano itself may be new, but the individual components that make it up are largely familiar. The CPU cores are based on the now-venerable “Stars” microarchitecture used across the current Athlon, Phenom, and Opteron lineups. In Llano, each of those cores has a full megabyte of L2 cache associated with it, double the amount used in Propus (Athlon II) and Deneb/Thuban (Phenom II). That addition may, in part, help offset the loss of the 6MB L3 cache used in the Phenom II. Mike Goddard, Chief Engineer of AMD’s client solutions, said the L3 cache was nixed for two reasons. First, the L3’s performance advantages were limited by the latency it added to memory accesses. Second, and probably most notably, the L3 cache presented a power consumption problem, because it had to stay awake when any one of the CPU cores was awake. The power-performance tradeoff apparently wasn’t worth it.

Block diagram of the AMD “Stars” CPU core. Source: AMD.

Goddard claimed Llano’s implementation of the “Stars” core achieves over 6% higher instruction throughput per clock than prior versions due to a number of small refinements. The biggest contributor there may be the larger L2 cache. The algorithm that speculatively pre-fetches data into that cache has been beefed up, too. Llano’s cores have larger reorder and load/store buffers, and the execution resources have been enhanced with the addition of a hardware divider unit. Those are the headliner tweaks, though Goddard hinted a number of more minor changes were included during the port to 32 nanometers, as well. The 6% figure doesn’t sound like much, but it is more than we expected out of probably the last hurrah for this microarchitecture, before Bulldozer takes over later this year.

Sumo wrestling among the Redwoods

Block diagram of the “Sumo” IGP. Source: AMD.

Llano’s integrated graphics processor is code-named “Sumo,” which is mildly disturbing because it offers us a glimpse of our code-named-spangled future, in which every portion of a chip has a proper name we can’t remember. Fortunately, Sumo is easy to describe with reference to another code name, Redwood, which is entirely familiar as a discrete graphics processor from the Radeon HD 5000 series—namely, the Radeon HD 5670. Sumo shares Redwood’s graphics architecture—with five SIMD engines, a total of 400 shader ALUs, 20 texels per clock of texture filtering capacity, and eight pixels per clock of ROP throughput—and feature set—including robust DirectX 11 support with hardware tessellation, up to 8X multisampled antialiasing in hardware, and additional AA possibilities in software. (The Sandy Bridge IGP, by contrast, supports only DX10 and 4X multisampled AA.)

Sumo’s one upgrade over Redwood is an updated video processing block, dubbed UVD3, that’s also used in Radeon HD 6000-series discrete GPUs. UVD3 adds support for Blu-ray 3D playback, MPEG4 decode acceleration, and fuller acceleration of MPEG2 video streams to the previous generation’s acceleration of the VC-1, H.264, and MPEG2 formats. AMD points out the MPEG4 support, in particular, is noteworthy because Intel’s Clear Video block doesn’t have it.


Although the Llano IGP has the same array of graphics resources as a Radeon HD 5670, it has to operate under a considerably different set of constraints. The discrete desktop Radeon HD 5670 runs at a very healthy 775MHz, while the fastest mobile variants of Llano’s IGP tick along at 444MHz. (The desktop versions run as fast as 600MHz.) That means the best mobile Llano IGP has theoretical peaks of 3.6 Gpixels/s of fill rate, 8.9 Gtexels/s of texture filtering, and 355 GFLOPS of shader compute power. That’s a little more than half the corresponding rates for a discrete Radeon HD 5670. The more notable constraint, though, is memory bandwidth. Thanks to its GDDR5 memory, a discrete Radeon HD 5670 has 64GB/s of bandwidth all to itself. The Sumo IGP, meanwhile, has to share two channels of DDR3 memory with Llano’s four CPU cores. With dual 1333MHz memory modules, Llano’s shared memory subsystem has less than a third of the 5670’s dedicated bandwidth.
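
If you want to check our math, those peak rates fall straight out of the unit counts and clock speeds (taking the 444MHz mobile IGP clock, counting a multiply-add as two FLOPs, and 64 bits per DDR3 channel):

\[
\begin{aligned}
\text{Fill rate} &= 8\ \text{pixels/clk} \times 0.444\ \text{GHz} \approx 3.6\ \text{Gpixels/s}\\
\text{Filtering} &= 20\ \text{texels/clk} \times 0.444\ \text{GHz} \approx 8.9\ \text{Gtexels/s}\\
\text{Shader math} &= 400\ \text{ALUs} \times 2\ \text{FLOPs} \times 0.444\ \text{GHz} \approx 355\ \text{GFLOPS}\\
\text{Memory bandwidth} &= 2\ \text{channels} \times 8\ \text{bytes} \times 1333\ \text{MT/s} \approx 21.3\ \text{GB/s}
\end{aligned}
\]

That last line is the crux of the bandwidth complaint: 21.3GB/s shared among four CPU cores and the IGP, versus the 64GB/s the discrete 5670 keeps to itself.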

Those limitations don’t make Llano’s IGP a poor one. On the contrary, this is surely the best integrated graphics solution we’ve ever seen. Still, when AMD starts talking about how Llano comes with “discrete level” graphics—a phrase we’ve heard often in reference to this product—one must remember that discrete graphics cards come in many forms, down to the $49 Radeon HD 6450, which is pretty anemic. The beefier Radeon HD 5670 easily outpaces the Llano IGP, but it will set you back $77 online. In terms of both graphics power and dollars, the stakes involved here are relatively low.

AMD appears to be acutely aware of how critical memory bandwidth will be to the graphics performance of Llano-derived APUs. The dual-channel DDR3 memory controller will support 1333MHz memory, both in its stock and low-power (1.35V) incarnations, across the entire A-series APU mobile product line. A few variants will support 1600MHz memory, and the desktop versions will push their DIMMs as high as 1866MHz. Capacity will top out at two DIMMs and 32GB in the mobile chips, while the socketed desktop versions will support four DIMMs and up to 64GB. Then again, those are some really big honkin’ DIMMs, as we say in the industry, so the practical limits may be lower for the time being.

Glue for adhesion, not Fusion

The final major components in the Llano die are the four PCI Express controller blocks. Each of them can feed eight lanes of second-generation PCIe connectivity, but one of those blocks of eight is dedicated to driving a pair of digital display outputs. The remaining 24 lanes can flex into various configurations. A common one would use 16 lanes to talk to a discrete GPU, four lanes to talk to the FCH or south bridge chip, and leave four lanes for general-purpose use.

Much of the rest of Llano is glue, finding a way to make all of these disparate components talk to one another and function together properly. This chip doesn’t have any major architectural modifications geared toward efficient integration; unlike Sandy Bridge, there’s no internal communications ring, no shared last-level cache, and no IGP participation in the Turbo mechanism. Instead, Llano’s internal links look much like the external links used before. In place of the Radeon’s dual memory controllers is a connection to Llano’s north bridge. In fact, Goddard said there are actually two links from the IGP into the north bridge, which makes sense historically given that the Redwood GPU has two 64-bit memory interfaces. A separate connection, dubbed the “Fusion compute link,” serves the same purpose as a PCIe interconnect between a CPU and a discrete GPU, allowing the IGP to access system memory coherently—that is, without spoiling the complex dance involving multiple CPU cache levels holding multiple copies of data, potentially in different states. Goddard stated that this communication channel will be important in the future for GPU computing applications, but he admitted the engineering team didn’t plumb the Fusion compute link to be especially high bandwidth. Instead, he expects AMD to invest more in this link going forward—that is, in future APUs.

When asked about the thorny problem of how Llano arbitrates between CPU and IGP requests for memory access, Goddard chose his words carefully. To paraphrase, he noted that fewer CPU-based algorithms require high bandwidth, while GPUs tend to be more tolerant of high latency. Some applications also have isochronous requirements (that is, they need a guaranteed stream of data at a certain rate). The result is a “very complex algorithm.” Goddard admitted the team wasn’t able to do everything it wanted to do on this front. “We think you’ll struggle to find a problem, but there are things we’d like to do differently next time.”

If you’re getting the sense that Llano’s brand of fusion is more like a couple moving into adjacent apartments in the same complex rather than moving in together, you’re on the right track. The plan is to move in together, eventually, but that’s down the road.

With that said, AMD Graphics CTO Eric Demers did note a couple of compute-focused provisions in the IGP that point to a more fully fused future. The first provision, called “pin-in-place,” allows the GPU to reserve a portion of system memory that it can access without traversing any operating system storage buffers—a performance enhancement. Discrete GPUs can use this function, as well; the data transfers then happen over a PCI Express link. The second, known as “zero copy,” works in conjunction with pinned memory and lets kernels running on the GPU modify the system’s virtual memory directly, rather than copying the data to graphics memory for modification. For systems where the CPU and IGP share the same physical RAM, the use of zero-copy pinned memory can potentially offer some nice performance benefits. Demers said this capability could be used both for 3D graphics, via an OpenGL extension, and for GPU computing via OpenCL. Then again, both pin-in-place and zero-copy have also been available in Nvidia’s CUDA toolkit since version 2.2, so developers can employ them on ION-based netbooks, too.
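
To make the zero-copy idea concrete, here is a minimal OpenCL host-code sketch of the pattern Demers described, written against the stock OpenCL 1.1 API rather than any AMD-specific extension. The CL_MEM_ALLOC_HOST_PTR flag requests pinned host memory, and map/unmap calls stand in for explicit copies; whether the result is truly zero-copy is the driver's call. The "scale" kernel and buffer size are our own inventions, and error handling is omitted for brevity.

```c
/* Zero-copy sketch for a shared-memory APU: the host maps the buffer
   instead of copying it.  Build with: gcc zerocopy.c -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *d) {"
    "    size_t i = get_global_id(0);"
    "    d[i] *= 2.0f;"
    "}";

int main(void) {
    enum { N = 1 << 20 };
    cl_platform_id plat;  cl_device_id dev;  cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* CL_MEM_ALLOC_HOST_PTR asks for pinned host memory the GPU can
       address directly; the driver decides if it is truly zero-copy. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                N * sizeof(float), NULL, &err);

    /* Map for writing: on an APU this should hand back the same pages
       the GPU will read, with no bulk copy over any bus. */
    float *p = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE,
                                  0, N * sizeof(float), 0, NULL, NULL, &err);
    for (int i = 0; i < N; i++) p[i] = (float)i;
    clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    size_t gsz = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);

    /* Map for reading: again a pointer hand-off rather than a copy. */
    p = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ,
                           0, N * sizeof(float), 0, NULL, NULL, &err);
    printf("d[1] = %f\n", p[1]);   /* expect 2.0 */
    clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);
    clFinish(q);
    return 0;
}
```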

Of power gating and ceramic impellers

Although the block diagrams for Llano are a mosaic of known quantities, AMD told us the major focus of its work on this chip was power savings. AMD has long trailed Intel on this front, and buying an AMD-based laptop has generally meant getting a bit of a discount on the overall system at the expense of shorter run times on battery. With Llano, the firm believes it has reached parity in this crucial arena with its much larger competitor.

One key to making it happen is the addition of a new type of logic: a power gate, which shuts off all power to a portion of the chip when tripped, eliminating not just active power but leakage power, as well. Intel has gated power for the individual cores of its processors since Nehalem, but to date, AMD has lacked that capability. No more. All four of Llano’s cores share the same voltage supply, but each core has a power switch associated with it. Whenever one of the cores becomes idle and enters the C6 power state, all power to that core is shut off. Even on what may feel like a busy system to the end user, there could be billions of cycles of unused time on multiple cores, a huge target for power savings.

Additionally, Llano is capable of entering a package-level C6 sleep state when all four cores are idle. In this state, voltage is lowered across the entire CPU rail, saving even more power.

Green indicates leakage power only; no clocks are running. Source: AMD.

Llano has a second power plane for its entire “uncore”: the IGP, UVD block, the graphics memory controller, and the north bridge. The uncore can operate at varying voltages and multiple, varying frequencies. According to Goddard, the uncore voltage is dynamically determined by a number of different inputs, including the north bridge’s power state, the GPU power state, PCI Express speeds, and the UVD workload. Several uncore elements have power gating, as well. The GPU and its memory controller are separately gated and will be powered down dynamically at idle, while the UVD block can simply be turned off by software when it’s not in use.

Besides saving power when idle, Llano is tuned to make the most of its available power envelope when active, thanks to AMD’s dynamic power scaling tech, known as Turbo Core. As you may know, Turbo Core is an answer to Intel’s Turbo Boost technology. The two are designed around the same basic principle, opportunistically grabbing more clock speed when the thermal headroom is available, yet they operate rather differently.

Intel’s Turbo Boost relies on a network of thermal sensors on the chip to help determine how much it can range up in clock frequency, while AMD’s Turbo Core uses only activity sensors on the die. Given this limited input, AMD must add additional intelligence offline, so it characterizes the power draw of its chips based on activity—Goddard called this a “big pre- and post-silicon exercise.” The firm then sets a Turbo Core policy for each model of CPU based on that research. By its nature, this estimate must be relatively conservative, because it must cover the whole range of chips selected to represent that model.

Turbo Core adds only one more P-state to the CPU’s repertoire, a single higher clock speed step; it then dithers between the two top clock frequencies as the activity-based power estimate will allow. In our Llano test chip, an A8-3500M processor, the difference between the two is rather large. The base clock speed is 1.5GHz, and the Turbo speed is 2.4GHz. There are no intermediate states like, say, 1.8GHz for four lightly loaded cores. Typically, the Turbo Core policy only lets the chip range above its base clock speed when just a portion of its cores is actively at work. In the Phenom II X6, for instance, Turbo allows up to three active cores to range to higher frequencies. We’re unsure what the policy is for our Llano APU, since we lack a utility that will properly report its clock speeds.
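
To illustrate what dithering between those two frequencies might look like, here is a toy model: a credit-based controller that banks thermal headroom at the base clock and spends it in bursts at the boost clock. Every wattage figure in it is invented for illustration; AMD's actual policy is fixed per model from its offline characterization work, and it may simply forbid boosting with all four cores busy.

```c
/* Toy model of Turbo Core-style dithering between two P-states.
   All wattage numbers are invented; the real policy is baked in
   per CPU model from AMD's pre- and post-silicon characterization. */
#include <stdio.h>

#define BASE_MHZ  1500
#define BOOST_MHZ 2400
#define TDP_W     35.0

/* Hypothetical activity-based power estimate for the whole package. */
static double estimated_watts(int mhz, int active_cores) {
    double per_core = (mhz == BOOST_MHZ) ? 10.0 : 4.0;  /* made up */
    return 8.0 /* uncore */ + per_core * active_cores;
}

int main(void) {
    double credit = 0.0;     /* joules of unspent thermal headroom */
    int active_cores = 4;
    for (int tick = 0; tick < 8; tick++) {          /* 1ms intervals */
        /* Boost whenever headroom has been banked, else fall back. */
        int mhz = (credit > 0.0) ? BOOST_MHZ : BASE_MHZ;
        credit += (TDP_W - estimated_watts(mhz, active_cores)) * 0.001;
        printf("tick %d: %d MHz (credit %.4f J)\n", tick, mhz, credit);
    }
    return 0;
}
```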


Our sense is that processors equipped with Turbo Core spend substantially less time resident at higher clock frequencies than those with Turbo Boost. Still, Goddard touted several of Turbo Core’s attributes as advantages over Intel’s approach. Among them is the fact that activity is measured digitally and therefore more precisely. Also, Turbo Core behavior is consistent and deterministic across all copies of a certain model of CPU, and performance doesn’t vary with the quality of the thermal solution in use. All of those things sound great on paper, but our sense is that AMD will abandon those principles just as soon as it can produce a chip with a thermal sensor network comparable to Intel’s.

Naturally, AMD’s activity-based power estimates for Llano include both the CPU and GPU cores on the chip. As we’ve already noted, the GPU doesn’t participate in Turbo Core’s clock frequency scaling. GPU activity may, however, eat up thermal headroom that would otherwise be available to the CPU cores; the GPU gets priority in such a case.

Another possibility is that programs causing particularly high power consumption could be run on one or both of Llano’s two major processor types, pushing the chip to exceed its total thermal envelope. In that case, a legacy CPU thermal throttling mechanism will kick in on the CPU side of the fence, reducing the CPU cores to a lower P-state and limiting the chip’s overall power draw and heat production. The IGP will continue to chug along as ever, true to its Redwood roots. AMD’s graphics division did introduce a power-based throttling feature called PowerTune in its Cayman GPU, but that mechanism hasn’t trickled down to its smaller GPUs yet, nor to the Sumo IGP.

The platform

Block diagram of the “Sabine” platform. Source: AMD.

The Llano platform has its own code name, as well: “Sabine.” Thanks to Llano’s broad integration of components, the Sabine platform is a two-chip solution that should have a smaller footprint than AMD’s prior efforts in this segment. AMD calls the single support chip in its APU platforms a “Fusion controller hub” or FCH, although the FCH is essentially the same thing as a traditional south bridge.

AMD offers two FCH options for Sabine. The A60M chip, already widely used in the low-cost “Brazos” platform, may see duty in relatively inexpensive Llano systems. We’d expect the A70M to be more prevalent thanks to its support for up to four USB 3.0 ports, a much-needed feature that Intel’s Sandy Bridge platforms don’t provide natively. For external hard drives and other devices capable of high-speed transfers, USB 3.0’s roughly 10X theoretical improvement over USB 2.0 can be a godsend.

AMD has built several features into the Sabine platform to make it more attractive. One is a dynamic screen brightness capability known as adaptive backlight modulation (ABM). ABM analyzes the image to be displayed and, when possible, reduces the backlight strength while raising the brightness of the pixels being displayed. The goal is to deliver a similar-looking on-screen image while using less power to drive the LCD, extending battery life.
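
Conceptually, ABM is a simple trade: perceived brightness is roughly the backlight level times the pixel value, so the driver can dim one and boost the other until the brightest pixel would clip. The sketch below is our own hand-wavy illustration of that idea, not AMD's algorithm; real implementations presumably work on histograms and ramp changes gradually to avoid visible pumping.

```c
/* Illustrative model of adaptive backlight modulation: dim the
   backlight, then boost pixel values so perceived brightness
   (roughly backlight x pixel value) stays about the same.
   The clamping policy here is invented for illustration. */
#include <stdio.h>

void abm_adjust(const float *in, float *out, int n,
                float *backlight /* in/out, 0..1 */) {
    /* Find the brightest pixel; we can only boost until it clips. */
    float peak = 0.0f;
    for (int i = 0; i < n; i++)
        if (in[i] > peak) peak = in[i];

    /* Dim the backlight to just cover the peak, then compensate
       with pixel gain so the product stays constant. */
    float new_backlight = peak;
    float gain = *backlight / new_backlight;
    for (int i = 0; i < n; i++)
        out[i] = in[i] * gain > 1.0f ? 1.0f : in[i] * gain;
    *backlight = new_backlight;
}

int main(void) {
    float px[4] = { 0.1f, 0.2f, 0.4f, 0.6f }, out[4];
    float backlight = 1.0f;
    abm_adjust(px, out, 4, &backlight);
    /* Perceived peak: 1.0 x 0.6 before vs 0.6 x 1.0 after. */
    printf("backlight %.2f, brightest pixel %.2f\n", backlight, out[3]);
    return 0;
}
```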

Another feature of note is a dynamic switchable graphics facility. Although it has a relatively powerful IGP, as these things go, Llano can be paired with a discrete GPU for higher-performance graphics. As with prior platforms, the system can switch between the integrated and discrete GPUs in order to save power or to deliver better frame rates, either via user direction or by making an automatic swap to the IGP when on battery power. What’s new here is another alternative, a dynamic switching capability that will choose the optimal GPU based on application profiles. For instance, a session of web surfing in the GPU-accelerated IE9 might use the lower-power Sumo IGP, which is adequate to the task, but firing up a game would cause the system to hand off rendering duties to the discrete GPU.

In our experience, the dynamic switching feature works pretty seamlessly, transitioning between the responsible GPUs with little delay or drama. However, changing from one type of switching mechanism to another—from manual to dynamic or vice-versa—involved some garbled screens and big, hairy delays on our review system. Still, we expect most folks will choose dynamic switching and never look back, especially because the system prompts the user to pick the appropriate GPU for unrecognized applications.

Dynamic switching operates in conjunction with another intriguing feature, the innocuously named Dual Graphics, a mobile version of AMD’s CrossFire multi-GPU teaming technology. AMD says the Llano IGP can cooperate with discrete Radeons in both the 5000- and 6000-series lineups. We’ve seen various attempts at teaming IGPs with discrete GPUs in the past, and they’ve been pretty uneven in terms of long-term support and compatibility. This incarnation comes with a big caveat right out of the starting gate, because it only works with games using DirectX 10 or 11. A great many games still use DX9, so Dual Graphics’ applicability is narrower than we’d like. Still, if the difference in throughput between the IGP and the discrete GPU isn’t too large, GPU teaming potentially makes some sense.

On evenly matched discrete GPUs, the preferred and most common method of divvying up the workload is to assign even-numbered frames to one GPU and odd-numbered ones to the other, a method known as alternate-frame rendering (AFR). On a pair of equally fast GPUs, AFR can achieve nearly twice the frame rate of a single chip. In the case of an IGP + GPU pairing that’s somewhat asymmetrical, performance isn’t likely to scale as well. AMD quotes a figure of “up to 75% additive performance” with Dual Graphics, and that’s a best-case number. To overcome more extreme asymmetry between IGP and GPU, Dual Graphics can use a 2:1 split in frame assignments between the discrete and integrated GPUs. You’re looking at more modest frame rate gains in such a configuration, but as Demers pointed out in a bit of a competitive dig, that’s better than Nvidia’s discrete GPUs, which can’t gain any additional performance by splitting the workload with IGPs from Intel or AMD. Whether Dual Graphics is useful for things other than scoring marketing points is something we’ll have to explore for ourselves.
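
The frame-assignment logic itself is easy to express. Here is our own sketch of the concept, not AMD's driver code, showing how the classic 1:1 AFR split generalizes to the lopsided 2:1 split described above:

```c
/* Frame-assignment sketch: "ratio" discrete frames for every 1 IGP frame.
   ratio = 1 gives classic alternate-frame rendering (AFR);
   ratio = 2 models the 2:1 split used for asymmetric pairings. */
#include <stdio.h>

const char *assign_frame(long frame, int ratio) {
    /* In each group of (ratio + 1) frames, the last goes to the IGP. */
    return (frame % (ratio + 1) == ratio) ? "IGP" : "discrete GPU";
}

int main(void) {
    for (long f = 0; f < 6; f++)
        printf("frame %ld -> %-12s (1:1)   %-12s (2:1)\n",
               f, assign_frame(f, 1), assign_frame(f, 2));
    return 0;
}
```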

The A-Series APUs and a whole bundle o’ Radeons

AMD is spinning Llano into a host of different models for the laptop market, most of them quad-core parts. Here’s a look at the lineup.

The mobile A-series APU line. Source: AMD.

All of these models fit into one of two power bands, either 35W or 45W, both aimed at mainstream laptops. AMD tells us a 25W Llano derivative is a possibility, too, but it hasn’t chosen to introduce one yet. We expect that introduction is much more likely to happen once the native dual-core Llano silicon is shipping.

As if the amount of technology stuffed into Llano weren’t complex enough, AMD’s product segmentation efforts have yielded three different tiers of A-series APUs—A8, A6, and A4—along with a corresponding trio of Radeon brand names for the IGP configurations. Although AMD hasn’t revealed exact prices for these mobile APUs (and isn’t likely to do so, since its customers are big PC builders, not consumers), we do have a sense of the basic product positioning, right in the meat of the laptop market. A4 APUs should go into laptops starting at around $500. A6-based systems should start at $600, and A8 systems at $700. (AMD’s Brazos-based E- and C-series APUs will continue to compete with Intel’s Pentium, Celeron, and Atom processors in systems below $400.) John Taylor, AMD’s Director of Product Marketing, reckons the A4 series will compete with low-end mobile Core i3 processors, while the A6 will straddle the Core i3 and i5 lineups, and the A8 will face higher-end Core i5s and lower-end Core i7s.

We’re sure you’ve fully absorbed all of those APU and IGP model numbers and their related specifications, so we’ll move on to the next step of your education in Llano branding. Not only do the three IGP configurations get their own Radeon model numbers, but adding a second GPU for Dual Graphics gives you two more models to track: the discrete GPU’s, and a new model number that AMD marketing has generated to reflect the combined power of the Llano IGP and the discrete GPU together.

Long-time readers might think I am making this up, but alas, it is not a joke. Here’s a matrix from AMD that explains the whole scheme.

Source: AMD.

I think that explains it, at least. Say, for example, you have a laptop with an A6 processor and a Radeon HD 6520G integrated GPU, and that laptop also has a discrete Radeon HD 6630M GPU on board. Their combined wonder-twin powers would add up to a “Radeon HD 6680G2” label on the box.

If, like me, you’re going to forget these model numbers in about 15 seconds, it may be useful to remember that an “M” at the end of the model signifies a discrete GPU alone, a “G” indicates an IGP alone, and “G2” denotes a Dual Graphics configuration. If that has you feeling more confident, this will knock you back down a peg. Only Llano systems equipped with dual-channel memory configurations are eligible for Dual Graphics operation and branding. The drop in IGP performance with a single DIMM apparently throws things far enough out of balance for AMD to scuttle the whole deal.

The test systems

Our attempts to place Llano in context involve a couple of very similarly configured laptops—one based on an A8 APU and another based on a Sandy Bridge Core i5—and a host of data from other sources. Our Llano test system is a pre-production Compal whitebook supplied by AMD.

The pre-release Llano review system from AMD

Behold, a USB 3.0 port!

This system is equipped with an A8-3500M APU. That’s one of the more desirable Llano-derived APUs, since it has a 35W TDP, quad cores at 1.5GHz with a 2.4GHz Turbo peak, dual channels of DDR3 at 1333MHz, and the fastest version of the IGP, the Radeon HD 6620G. This laptop is also equipped with a discrete GPU, a Radeon HD 6630M, and is capable of running in a Dual Graphics config. AMD would call this GPU tag team the Radeon HD 6690G2.

HP’s ProBook 6460b with a Core i5-2410M

For comparison to the Llano review unit, we ordered up the closest analog we could find in stock at Newegg, the HP ProBook 6460b pictured above. Like the Llano system, the ProBook has a 14″ 1366×768 display with a matte coating, a Hitachi 7K500 mobile hard drive, 4GB of RAM, an optical drive, and Windows 7. The processor in the ProBook is a Core i5-2410M, which we believe to be the closest competitor to the A8-3500M APU. Like the A8, the Core i5-2410M has a 35W TDP rating, but the i5-2410M has only two cores to Llano’s four. Thing is, these are much more potent cores, with a base clock of 2.3GHz, a Turbo peak of 2.9GHz, and quad threads thanks to the magic of Hyper-Threading. The i5-2410M also has the full-fledged HD Graphics 3000 edition of the Sandy Bridge IGP.

Although we couldn’t find a system with the exact same battery rating as the Llano test unit, we did get awfully close. The Compal whitebook has a 58 Wh battery, while the HP ProBook’s battery is rated for 55 Wh.

One major place where the ProBook differs from the Llano whitebook is its lack of a discrete GPU. We wanted to focus primarily on Llano and Sandy Bridge, so we didn’t bother with discrete graphics on the Core i5 system. We then disabled the discrete GPU in the BIOS on the Llano system for most of our tests. Both discrete and dual graphics will make an appearance, though, as you’ll see.

Oh, and I suppose this is as good a place as any to talk about the follies that went on behind the scenes as we prepared this article for publication. One of our major show-stoppers was the fact that we didn’t realize until after practically all of our testing was ostensibly complete that the HP ProBook had shipped with a single 4GB DIMM. That configuration robs its Core i5 processor of the bandwidth supplied by a second memory channel, reducing performance at times. We were forced to re-test everything, but since we already had results for the single-channel config, we’ve included those throughout the review, as well. That is, after all, apparently a valid, shipping configuration in pre-built systems.

We also ran into some rather grievous problems with the Compal whitebook’s power consumption. We think the primary problem was simply having used the wrong combination of BIOS settings in an attempt to disable the discrete GPU for battery life tests. Finding the correct setting, using careful observation on a watt meter, nearly doubled the Llano system’s run times in our battery life tests. We believe the scores we’ve finally reported are valid and reflect what you could likely expect from a similarly configured production system.

Our testing methods

With the exception of battery life, all tests were run at least three times, and we reported the median of those runs.

The test systems were configured like so:

| System | AMD A8-3500M test system | HP ProBook 6460b |
| --- | --- | --- |
| Processor | AMD A8-3500M APU 1.5GHz | Intel Core i5-2410M 2.3GHz |
| I/O hub | AMD A70M FCH | Intel HM65 |
| Memory size | 4GB | 4GB |
| Memory type | DDR3 SDRAM | DDR3 SDRAM at 667MHz |
| Memory timings | N/A | 9-9-9-24 1T |
| Audio | IDT codec | IDT codec with 6.10.6328.0 drivers |
| Graphics | AMD Radeon HD 6620G + AMD Radeon HD 6630M with Catalyst 8.862 RC1 drivers | Intel HD Graphics 3000 with 8.15.10.2361 drivers |
| Hard drive | Hitachi Travelstar 7K500 250GB 7,200 RPM | Hitachi Travelstar 7K500 320GB 7,200 RPM |
| Operating system | Windows 7 Home Premium x64 | Windows 7 Professional x64 |
| OS updates | Service Pack 1, DirectX Runtime (June 2010) | Service Pack 1, DirectX Runtime (June 2010) |
As I said, our comparative results for this article came from multiple sources. For our comparisons to desktop systems, you can see our test configurations on this page of our Core i7-990X review. For the configurations of the other mobile systems, see our review of the Asus K53E laptop.

Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Battery life

Since so much of Llano’s focus involves improving battery life, we might as well get those results on the table. For our first two run-time scenarios, we’ve compared our A8-3500M and Core i5-2410M test laptops against a range of other systems from past reviews. Obviously, our two main systems are most comparable to one another, with similar battery sizes and other specs, as we’ve noted. The first test is our own home-cooked web browsing test, TR Browserbench 1.0, which consists of a static version of the TR home page that cycles through different text content, Flash ads, and images, refreshing every 45 seconds. The next one is our video test, which involves continuous, looped playback of an episode of CSI: New York encoded with H.264 at 480p resolution (taken straight from an HTPC).

We aim to keep display brightness consistent across all of our test systems, where possible. In this case, our common touchstone was an Acer 1810TZ laptop at 50% brightness. Many of the other test systems had glossy display coatings and were at 40-50% brightness, as well. To match that illumination level with our primary A8 and Core i5 test systems with matte display coatings, we had to dial the brightness up to 70% on each. Oh, and we conditioned the batteries on all systems by fully discharging them and then recharging prior to testing.

The HP ProBook results marked “1C” are the single-channel memory configuration. As you can see, using a dual-channel config (similar to the one on the Llano system) reduces run times somewhat. In fact, the direct competition between the dual-channel config of the HP ProBook and the Llano test system looks mighty close to parity. The Core i5-2410M system manages 30 minutes more run time while web surfing, but the A8 lasts longer during the video test, perhaps thanks to its UVD block efficiently offloading H.264 decoding and playback from the CPU cores.

AMD also made some strong claims about Llano’s battery life while playing games, so we decided to test that, as well. We pulled up Battlefield: Bad Company 2 and left it running, full-screen, to see how long each laptop would last. The Llano test system’s discrete GPU was disabled in the BIOS, so we were relying entirely on both processors’ IGPs. Here’s what we found.

AMD wins this one by a mile. Now, perhaps one reason Llano has an advantage here is because its IGP isn’t capable of pushing up to higher clock frequencies when there’s thermal headroom available, while the Core i5’s can. That said, one solution is delivering clearly superior performance to the other in this scenario, and it’s not the one with shorter battery life, as we’ll soon see.

Versus desktop processors

Our first round of performance tests will compare our two mobile systems against a range of desktop processors in many of the components of our CPU test suite. We think these comparisons can be a nice backdrop for our A8-3500M-versus-Core i5-2410M contest, but remember the mobile processors have to work within much smaller power envelopes. We’ve thrown in results for a couple of the higher-end Sandy Bridge mobile CPUs where possible, to provide some additional context.

Stream memory bandwidth

This synthetic test measures the throughput achieved by the CPU and its memory subsystem. Although it’s not a real-world application, it helps us better understand the capacity of the CPUs and platforms being tested. As you can see, although the A8-3500M has dual channels of DDR3 memory at 1333MHz, it’s considerably slower in this bandwidth test than the dual-channel Core i5-2410M config.
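
For the curious, the guts of a STREAM-style test are nothing exotic. Below is a stripped-down sketch of the "triad" kernel; the official benchmark adds more kernels, OpenMP threading, and careful timing, but the principle is the same: the arrays are too big for any cache, so the loop runs at the speed of main memory.

```c
/* Minimal STREAM-style triad: measures sustainable memory bandwidth.
   Bytes moved per iteration: 2 reads + 1 write of 8 bytes each. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (20 * 1000 * 1000)   /* ~480MB across three arrays */

int main(void) {
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b),
           *c = malloc(N * sizeof *c);
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];          /* the triad */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("triad: %.2f GB/s (check: a[0] = %.1f)\n",
           3.0 * N * sizeof(double) / secs / 1e9, a[0]);
    free(a); free(b); free(c);
    return 0;
}
```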

Lest you think we have a problem with the Llano system, look closer at the rest of the results. The Phenom II X4 840 is a quad-core processor with an architecture very similar to Llano’s. The X4 840 achieves a little more throughput—but then its core clock speed is 3.2GHz, over double the A8-3500M’s base frequency of 1.5GHz. This test uses all four CPU cores, so Turbo Core can’t really help, either. Even with its enhanced pre-fetching algorithm, Llano can’t overcome the fact that it’s based on an older core running at a rather low clock speed.

SunSpider JavaScript performance

Ouch. In our first real benchmark, the A8-3500M finishes well behind ye olde Core 2 Duo E6400 (that’s a 65-nm Conroe, folks) and uncomfortably close to the Pentium 4-derived Extreme Edition 840. Then again, those are desktop processors subject to fewer thermal constraints. At least two of the A8’s four cores are essentially no help here—check the other results, and it’s clear this test isn’t widely multithreaded—and for whatever reason, Turbo Core doesn’t seem to be doing much, either.

7-Zip file compression and decompression

This is a nicely multithreaded application, and the A8-3500M delivers a more respectable showing. Still, Llano’s two cores at low frequencies can’t match the Core i5-2410M’s two higher-speed, higher-IPC cores.

TrueCrypt disk encryption

Yeowch! Close contest. This one would have been a clean kill for Intel had it not disabled the hardware AES acceleration in the Core i5-2410M for the sake of product segmentation. Because it did, both of our contenders are in the same boat: unable to achieve throughput to match the speed of a SATA 3Gbps disk interface.

The Panorama Factory photo stitching

Here’s yet another result where a nicely threaded test runs faster on dual Sandy Bridge cores than on quad Llano cores. This is a result you can feel, too. When stitching together a panorama, you’ll be drumming your fingers for 19 seconds longer with the A8-3500M.

x264 HD video encoding

Windows Live Movie Maker 14 video encoding

Depending on the program you’re using and the stage of the encoding process in question, the A8-3500M has the potential to be somewhat competitive with the Core i5-2410M, but the A8 is clearly slower overall.

Cinebench rendering

Valve VRAD map compilation

In both of our rendering tests, Cinebench and VRAD, the A8-3500M with four cores at 1.5GHz is slower than the desktop Phenom II X2 565, which has two similar cores at 3.4GHz. That makes intuitive sense, I suppose, but it tempts one to wonder whether AMD’s decision to give Llano four slower cores instead of two faster ones was really the right choice. The thing is, slower clock speeds generally mean lower voltages and thus much lower power consumption, so AMD’s choice was probably an easy one to make, especially for the mobile market.

In case you were looking for evidence that Turbo Core actually does something, look no further than the Cinebench results. CPU performance tends to scale nicely when going from a single thread to multiples in an easily parallelizable task like rendering. For instance, the Phenom II X4 840 is almost four times as quick in the multithreaded test as it is in the single-threaded one. The A8-3500M, though, is quite a bit faster in the single-threaded test than one might have guessed by looking at its multithreaded results. Looks like Turbo Core is offering a bit of a frequency boost when only one core is busy.

Source engine particle simulation

This test is intriguing for a couple of reasons. One, because it’s a fully multithreaded particle simulation ripped from a game engine where Llano is dramatically slower than the Sandy Bridge competition. Llano promises big things for mobile gaming thanks to its Radeon IGP, but it is possible those low-frequency CPU cores will hamper its gaming performance somewhat. Two, particle simulations in games are nicely parallel by nature, and consequently, I believe many games now use the GPU to handle such work. If so, Llano’s more robust IGP may be just what the doctor ordered. So point, counterpoint.

Versus mobile processors

Now we’ll consider the A8-3500M against a range of recent mobile CPUs, including AMD’s own Brazos-based APUs.

This is a different version of SunSpider running on a different browser than we saw a couple of pages back, so don’t be surprised by the much larger numbers.

When removed from the company of desktop processors and placed exclusively alongside other mobile CPUs—many of which, in this case, are relatively lightweight, low-power affairs—the A8-3500M’s CPU performance doesn’t look nearly as dire. In fact, the A8 keeps up pretty well with the Arrandale-based Core i3-370M and Core i5-450M, both of them dual-core, 32-nm CPUs. Only the Sandy Bridge-based processors, with their much more efficient CPU microarchitecture, really distance themselves from the Llano-derived APU. Meanwhile, the A8-3500M clearly outclasses the lower half of the processors tested, including the Turion II Neo and Pentium CULV chips.

GPU texture filtering quality

We’ve looked at Llano’s CPU performance in a couple of different contexts. Before we consider graphics performance, we need to understand another set of issues, though. You see, CPUs are all required to do the same work and must produce the correct answer every time. GPUs don’t work that way. Instead, they are constantly looking to fool the human eye, to produce enough frames to maintain a steady illusion of motion and keep things looking good—good, but not necessarily perfect, because there aren’t clearly defined standards for exactly how a rendered frame ought to look. Yes, Microsoft has ratcheted up some of the rules for DirectX graphics over time, and the top two GPU makers, AMD and Nvidia, have reached a sort of equilibrium on image quality in certain respects. However, not everybody plays by the same rules. As we’ve found out, Intel happens to play by its own rules, which are considerably more lax.

Yep, I’m gonna bust out the atomic flowers now. Have a look.

Output from the Radeon HD 6620G IGP

Output from the Intel HD 3000 IGP

The images you see above are the output from the Direct3D AF Tester, a little tool we use every time a new GPU architecture debuts in order to understand how its texture filtering hardware works. We’ve set this tool to use the highest possible filtering quality it can request, 16X anisotropic filtering with trilinear blending, and captured the result for posterity.

If you’re familiar with the output of this test, you’re probably shooting your Coke out of your left nostril all over your screen while pointing at the Intel result. If not, allow me to explain.

In the images above, you’re peering down a 3D-rendered cylinder or tube, and the inside surface of that tube has been covered with a simple texture map. The colored bands are what are known as mip maps, or increasingly lower resolution copies of the base texture mapped to the walls of the cylinder. The further you move from the camera, the lower the resolution of the mip level used. In the pictures above, the different colors show different mip levels. (Of course, mip maps don’t normally come in different colors. They look very much like one another and like the base texture. This test app colors them in order to make them easily visible.) Mip maps are a helpful tool in texture filtering because sampling from a single copy of the original, high-res texture can be work-intensive and, in a constrained grid of pixels, can produce excessive high-frequency noise, which is visually disruptive. In other words, a little bit of blurring and blending in the right places can be beneficial to the final result.

Alongside mip mapping, we’re layering on a couple of additional techniques to improve image quality. We’re using trilinear filtering to blend between mip levels, so that we don’t see abrupt transitions or banding. That’s why the different colors transition gradually from one to another. We’re also using anisotropic filtering, grabbing more samples for textures that exist at certain angles on the Z or depth axis—typically on surfaces stretching away from the camera, like floors, walls, and ceilings—in order to preserve sharpness that simple mip mapping would destroy. All of these things we take for granted in modern GPUs, which have custom hardware onboard to perform these functions.
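
Under the hood, mip selection boils down to a level-of-detail (LOD) calculation: measure how many texels a screen pixel covers and take a log2. Here is a simplified sketch of the isotropic version; real hardware computes the derivatives across 2x2 pixel quads, trilinear filtering blends between the two mip levels that bracket the result, and anisotropic filtering takes extra samples along the longer footprint axis instead of just blurring. Shortcuts that approximate this footprint math cheaply are exactly what the colored-tunnel results below expose.

```c
/* Simplified isotropic mip LOD selection.  dudx, dvdx, dudy, dvdy are
   the texel-space derivatives of the texture coordinates across one
   screen pixel.  Compile with: gcc lod.c -lm */
#include <math.h>
#include <stdio.h>

float lod_isotropic(float dudx, float dvdx, float dudy, float dvdy) {
    float px = sqrtf(dudx * dudx + dvdx * dvdx);  /* footprint along x  */
    float py = sqrtf(dudy * dudy + dvdy * dvdy);  /* footprint along y  */
    float rho = px > py ? px : py;                /* worst-case axis    */
    return log2f(rho > 1.0f ? rho : 1.0f);        /* mip 0 if magnified */
}

int main(void) {
    /* A floor receding from the camera: v changes 8x faster than u, so
       isotropic filtering must jump three mip levels to avoid aliasing;
       anisotropic filtering would keep the detail sharp instead. */
    printf("LOD = %.2f\n", lod_isotropic(1.0f, 0.0f, 0.0f, 8.0f));
    return 0;
}
```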

Trouble is, doing all of the sampling and blending required to combine these techniques, especially with anisotropic filtering, is really hard work. In the bad old days, GPU makers used a particular shortcut in order to avoid some of that work, skimping on the amount of sampling for surfaces that aren’t parallel to one of the screen’s edges by reducing the level of mip map detail used. This optimization would generally leave floors and walls looking crisp, since they’re parallel to a screen edge, but other surfaces would lose detail.

Our tunnel test with colored mip maps is designed to show which surfaces are getting more or less detail. The Radeon IGP’s results are very close to ideal—nearly a perfect circle, indicating all surfaces receive the same filtering treatment regardless of their angle of inclination. That circle is also relatively small and tight, indicating that the Radeon isn’t transitioning to smaller textures (and thus reducing texture detail) until it becomes beneficial to do so for the sake of filtering quality.

The Intel IGP’s results, on the other hand, are pretty horrific. The colors flare out to the four corners of the screen in an indication that mip map detail is reduced dramatically on surfaces at a 45° angle from the screen edges. Floors and walls might look OK, just so long as you don’t lean and cause the camera to tilt. Do that, and texture sharpness will drop dramatically.

Default-quality filtering patterns from the Radeon X1950 XTX, Radeon HD 2900 XT, and GeForce 8800 GTX

We’d have to reach pretty far back in time in order to find a pattern this poor on a GPU from Nvidia or AMD, prior to the DirectX 10 generation at least. Even the Radeon X1950 XTX’s filtering hardware was crafty enough to avoid skimping at 45° angles of inclination, instead aiming for in-between angles like 22.5° and 67.5°.

The Intel IGP’s pattern also shows some funny, irregular lines in it, indicating a fair amount of error in its level-of-detail calculations at certain angles. In motion, that’s likely to produce some additional texture crawling or visual noise.


Output from the Radeon HD 6620G IGP

Output from the Intel HD 3000 IGP

Above is one quick example of how the Intel IGP’s reduced level of detail affects in-game image quality. This is a shot from Portal 2, and I’ve magnified the center of the screen to twice its original size in each dimension. In the top shot, from the Radeon, the gray walls of the tunnel are covered with a detailed, stone-and-concrete texture. Below, in the Intel IGP image, the walls are a blurry, muddy mess of gray. Granted, my example isn’t the greatest. The impact of Intel’s reduced quality would be more obvious on a texture with higher color contrast or with a tight, repeating pattern. The effect is also more noticeable in motion, when you’re looking up this shaft, you spin a bit, and the detail disappears. Still, the difference here is fairly dramatic, if you squint long enough to see it.

The point of this little exercise is simply this: as we evaluate GPU performance, we should keep in mind that the Intel IGP is doing less work—less sampling and blending—and thus producing a lower-quality result than the A8’s integrated Radeon. The bar charts full of benchmark results won’t show you that, but it’s real, and it’s an important component of overall GPU quality. Also, I should note that texture filtering is just one aspect of graphics image quality. Although we haven’t had time to explore it fully, it appears the Intel IGP lacks internal mathematical precision in other respects. Using it, we regularly noticed obvious “screen door” dithering artifacts that weren’t visible on the Radeon IGP.

Synthetic GPU performance

We’ll kick off our IGP performance tests with a quick look at some synthetic benchmarks. The test above is intended to measure texture filtering performance. For the reasons we discussed on the last page, the results tell an interesting story. At lower filtering quality levels, the A8’s IGP has a big lead over the Intel IGP. As the filtering quality level rises, the performance of the two solutions converges—no doubt because the Radeon IGP is doing increasingly more sampling and blending work than the Intel HD 3000.

Notably, the discrete Radeon is measurably faster than either of the integrated solutions, thanks to higher GPU throughput rates and higher memory bandwidth.

These next two tests are meant to gauge GPU shader arithmetic. We used the defaults for ShaderToyMark, since it’s all about stressing shader performance. We configured Unigine Heaven to use DirectX 10—common ground for these GPUs, since the Intel IGP can’t handle DX11—and set the shader quality level to “high” while the texture and filtering settings were at “low.” The screen resolution was set to 1366×768, native for both of the test systems.

In both of these shader-oriented tests, the A8’s integrated Radeon HD 6620G is about a third faster than the Intel HD 3000. The discrete Radeon HD 6630M is again faster than either, and Unigine gives us our first glimpse of Dual Graphics in action. In this case, going dual offers a noteworthy increase in frame rates.

Civilization V

Civ V offers a range of opportunities for performance testing, and we’ll avail ourselves of all of them. First up is a texture compression test that uses DirectCompute shaders. This is one of those Fusiony-type applications that play to Llano’s strengths. It also wouldn’t run on the Intel HD 3000 IGP.

The next test populates the screen with a large number of units and animates them all in parallel. Civ V will run this benchmark without updating the screen, to test raw CPU performance, and with a full slate of graphics updates, to measure the performance of the total solution.

If you remove the graphics portion of the workload from the equation, the Core i5-2410M is clearly the faster CPU. Once you involve the IGPs, the Core i5’s performance drops to a quarter of its prior level, and the A8 is faster overall.

Civ V offers two further in-game benchmarks, and we tested them at the settings you can see below. The first of those tests comes from the scenes in the game where you see your leader character, richly rendered with lots of shader effects. These scenes aren’t really part of the core gameplay, but they do make for a decent test of pixel shader performance.

Finally, there’s the in-game benchmark proper, which involves a late-game scenario where the screen is richly populated.

Again, if we don’t ask the GPUs to render the scene, then the Core i5 runs the game’s core simulation quicker than the A8-3500M. The 3500M does manage to produce updates at a clip of nearly 60 FPS, though, so it’s certainly adequate to the task. When running the entire game with the CPU and GPU working together, the A8-3500M simply trounces the Core i5-2410M.

You may have noticed that the Dual Graphics solution hasn’t fared well in Civ V. All we can tell you is that those results are correct, and AMD tells us there are some teething problems with Dual Graphics in the BIOS of our pre-release Compal review laptop. We’ll have to test a production system and see whether its Dual Graphics implementation is more solid.

Portal 2

Since Portal 2 isn’t too hard on the GPU, we tested it with its max quality settings at different levels of edge antialiasing. As you can see, the Llano IGP is faster with 8X AA than the Sandy Bridge IGP is without any antialiasing.

Battlefield: Bad Company 2

Here’s one of those instances where the performance gap between the AMD and Intel IGPs means playable frame rates on one but not the other. We’ve seen similar gaps in Civ V and Portal 2, as well. It is possible to dial back the quality settings in at least two of those games (though not Civ V) and squeeze better frame rates out of the Intel IGP, but the Llano IGP removes any doubts.

DiRT 3

The A8-3500M proves to be about twice as fast as the Core i5-2410M in DiRT 3‘s DirectX 9 mode. You can get a little more eye candy out of the Llano IGP by using DX11 mode, but then we couldn’t compare directly to the Intel HD 3000, obviously. We had hoped to allow Dual Graphics to strut its stuff for us in DX11 mode, but unfortunately, we ran into major screen corruption problems. Again, AMD pointed to a BIOS-level issue with our test system and Dual Graphics, so we weren’t able to work around it.

Borderlands

Borderlands is, uh, borderline on both of these IGPs. As you can see, we didn’t have much room left to reduce image quality in order to gain higher frame rates. The answer might be dropping to a lower, non-native display resolution. We expect at least the Llano IGP to run the game well with that concession.

USB transfer rates

We have one last round of tests for those who aren’t feeling completely inundated by now: a quick look at USB transfer rates via HD Tach. The A70M FCH chip’s native support for USB 3.0 is a potential platform-level advantage for AMD. To see whether it delivers, we docked an Intel X25-E 64GB SSD (yes, that’s an enterprise-class SLC drive—very fast) into a Thermaltake BlacX 5G docking station.

I think it’s safe to say USB 3.0 could be a major selling point for Llano-based laptops. Many folks with laptops this powerful will want to make use of external storage, and the A70M’s USB 3.0 ports achieve real-world transfer rates four to five times as fast as USB 2.0. CPU utilization is reasonable, too, considering how much more data is snaking through that pipe. Of course, it’s possible for Sandy Bridge laptops to add competent USB 3.0 support via a peripheral chip, but it will come at the expense of a little added power draw, motherboard real estate, and cost—and it probably won’t be as widely offered. Our HP ProBook test system, for instance, lacks USB 3.0.

Conclusions

We’ve had a sense of what Llano might be for many months now, and the conventional wisdom has been fairly well established. Everyone expected a CPU that couldn’t keep pace with Sandy Bridge and an integrated graphics processor that would surely outdo Intel’s HD 3000. The key issues, then, would be about the tradeoffs involved and about which you value more, CPU power or GPU power.

To me, the question with CPUs these days often seems to be, “Are they fast enough?” For many parts of our daily computer use, CPUs really do seem to be sufficiently quick that performance no longer feels scarce. Quite a few laptop buyers have been willing to compromise on CPU power for the sake of portability, with Atom-based netbooks being the extreme (and rather popular) example of that compromise.

Meanwhile, the question about IGPs is similar, but inverted. “Are they fast enough?” That is, is any integrated graphics solution fast enough to matter, or should we simply recommend a discrete GPU to the would-be mobile gamer? What is the value of a superior IGP if it’s still too slow to make any sort of difference in regular use?

Both of these questions vexed me before I got my hands on a Llano-based system, because I lacked the context to answer them. Now that we’ve considered our test results, I have a better sense of the landscape. You are, of course, free to make what you will of our data, but this is my answer.

The A8-3500M’s four CPU cores are routinely and consistently slower than the Core i5-2410M’s two cores, sometimes by margins that are borderline embarrassing. Intel has opened up a monstrous lead on this front, and the few architectural enhancements in Llano’s cores aren’t sufficient to narrow the gap appreciably. However, you’ll need four threads to take best advantage of the Core i5-2410M, and with four threads in play, the A8-3500M doesn’t look too bad. The A8’s performance is comparable to Intel’s older Arrandale-based Core i5-450M, a generation back architecturally from Sandy Bridge. That puts the A8 solidly in the same class as other big-boy mobile CPUs, a clear cut above budget ultraportable chips like Intel’s Consumer Ultra Low Voltage models and such.

Meanwhile, AMD has Intel utterly outclassed in integrated graphics. You’ve seen our discussion of texture filtering quality and our performance results. Where the Llano IGP delivers playable frame rates in some of the latest games, Intel’s HD 3000 treads on the edge of uselessness. Add in a host of other considerations, including the vastly superior hardware feature set of the Radeon IGP and its ability to partake of the sweet, sweet stream of Catalyst driver updates, and this is a difference between the products that truly matters. Now, that’s probably more true in the consumer market than the corporate one, but any buyer who thinks he might want to play games or make use of 3D graphics in any capacity should strongly consider choosing a Llano-based laptop.

That choice is made considerably easier since AMD appears to have achieved rough parity with Intel in terms of battery life—maybe the biggest surprise of the day, and it’s a pleasant one. If the production laptops turn out right, we may no longer need to advise our friends and readers that buying an AMD-based laptop means receiving an iffy discount in exchange for accepting shorter run times. We’ve never liked that tradeoff.

The deciding factor, of course, is the quality of the production laptops. If the big PC makers can translate what we’ve seen of the A-Series APUs into systems that are as sleek, cool-running, and endurance-endowed as their Sandy Bridge counterparts, AMD should have a hit on its hands.

Comments closed
    • Mr Bill
    • 8 years ago

    Well, this review convinced me to try a new laptop. I picked up the HP dv6-6140us; this model has the A8-3500M. I’ll let you know in the forums how it performs.

    • link626
    • 8 years ago

    since amd can’t keep up with the SB i5, it will have to price this against an SB i3 with nvidia gt525 graphics.

    since an SB i3 + optimus gt5xx can be had for about $600, you just can’t justify paying more than $600 retail for the A8-3500 alone (with no discrete gpu).

    in order to be a warm or good deal then, the A8-3500 would have to be sub-$600 retail, with $499 sale price.
    a good Core i3 SB laptop can be had for $450 now.
    so a $50 premium for ati 6620igp is reasonable.

    how much extra is the 6620igp worth to you ?

    • Antias
    • 8 years ago

    I wonder how it will handle Skyrim?
    I’m poised ready (any time over next 4 mths) to buy a new laptop around the 12-13″ size range (I travel a LOT and that size is perfect for plane fold down trays) and I want an “adequate” gaming laptop to amuse myself on long plane flights and late night hotel room distraction.
    By adequate I mean I’m hanging for Skyrim/Diablo 3 etc and medium settings would keep me happy.

    I’m prepared to wait till just before Skyrim comes out (11/11/11) though so I’m guessing by then i’ll have plenty of laptops to choose from and Llano will have matured to a point where I can make an informed decision…

    Waiting is NOT something I like doing… LOL

    • UberGerbil
    • 8 years ago

    I’m a little puzzled about the nature of the “Fusion compute link” between the CPU and GPU parts of this design. Given that it’s cache-coherent, I wonder if it is some derivative of Hypertransport, or if it’s something else entirely; it seems unlikely that it’s a PCIe derivative. But it would have been (technically) interesting if the Llano designers had gone all-out for full cHT, especially if they’d included another set of links in the design. Then we could have dual-socket Llano implementations, where each socket has its own GPU — and wouldn’t that make for some fun Crossfire setups (particularly along with a discrete GPU)? Even more fun, a pair of dual-core Llanos might actually offer better performance than a single quad, not just because of the paired IGPs but also because each socket would have its own memory controller and memory channels, mitigating the contention between the CPUs and IGPs.

    Of course, I said *technically* interesting, because there are all sorts of other issues: drivers for that kind of crossfire implementation, the costs involved with dual-socket motherboards, the unsuitability of this for mobile platforms, the undesirability (from AMD's point of view) of making pairs of (presumably less-profitable) dual-cores more performant than a single quad, etc. Given that most Llano chips would end up getting used in a single-socket setup anyway it would be added die and design cost for little return. But if AMD were to do a high-end design with a new socket it would be an intriguing way to go -- and the Crossfire aspects would be something Intel would be ill-positioned to match -- which of course makes me wonder if this is a direction they've considered for Trinity.

    • BaronMatrix
    • 8 years ago

    And remember that this is the 3500M @ 1.5GHz. There’s the 3530MX @ 1.9GHz, and AMD tends to scale fairly linearly with clock, so add 400/1500 × 100 for close to a 30% increase, plus a few % for BIOS optimizations.
    And the graphics scores would be even better with DDR3-1600 – though it’s only available on the 3530MX.

      • Joe Miller
      • 8 years ago

      Also, it seems the turbo does not work well on Llano. Once they improve it, we will see much better results, though we might have to wait for Trinity to have good turbo.
      I do wonder: is it that Intel has patents on its implementation, or that AMD wanted to have more battery life and hence tried to implement a really efficient but complicated algorithm for Turbo Core?

      • maroon1
      • 8 years ago

      No, AMD doesn’t scale linearly with GHz in most benchmarks (same thing for any other processors)

      Also, if you look at the turbo clock of 3500M (2.4GHz) and 3530MX (2.6GHz) you would notice that they are very close

      You are only looking at base clock and ignoring the turbo clock which is close between the two
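For what it’s worth, the arithmetic the two commenters above are leaning on is easy to check. A quick sketch follows; the clocks come from the thread itself, and the linear-scaling premise is BaronMatrix’s assumption, not a measured result.

```python
# Clock-scaling arithmetic for the two comments above. The clocks are
# quoted in the thread; linear scaling with clock is an assumption.
base_3500m, base_3530mx = 1.5, 1.9    # GHz, base clocks
turbo_3500m, turbo_3530mx = 2.4, 2.6  # GHz, turbo clocks

base_uplift = (base_3530mx / base_3500m - 1) * 100
turbo_uplift = (turbo_3530mx / turbo_3500m - 1) * 100

print(f"Base-clock uplift:  {base_uplift:.1f}%")   # ~26.7%, "close to 30%"
print(f"Turbo-clock uplift: {turbo_uplift:.1f}%")  # ~8.3%, maroon1's point
```

Which figure matters depends on whether the workload lets Turbo Core engage: sustained multithreaded loads run closer to base clocks, lighter loads closer to turbo.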

    • Chrispy_
    • 8 years ago

    Now that’s what I call a gaming laptop. *Enough* gaming horsepower to last for a sensible amount of time. My gaming attention span is about two hours, not one.

    It’s also nice to see that AMD spent so much transistor, bandwidth, and power envelope on the IGP. Civ V’s renderless and rendering tests show that the HD 3000 hampers the i5’s performance by a factor of TEN, where the A8’s performance is merely halved. That was an eye opener for me - why bother with an IGP at all when the primary use of it (gaming) cripples your shiny Sandy Bridge architecture to the point of uselessness? Balance is key, and whilst the few non-gaming uses of a GPU (most notably video decoding) are legitimate, gaming is the obvious and most important use of a GPU.

    To put it in context, I still use an E6600 in my media PC with a GTX 460 (768MB) driving an HDTV. At 4xAA and medium to high settings, pretty much every modern game I own runs well on that 5-year-old CPU. I have no doubt that the occasional bout of low framerate is still GPU limited, too, since the issue goes away if I drop down to 720p.

    Gaming requires so much GPU power and so little CPU power these days that Sandy Bridge is overkill in terms of cost and speed when used with anything other than the very highest current mobile GPUs.

      • maroon1
      • 8 years ago

      Not everyone uses a laptop for gaming. And some people play only web games, or light games like Worms Reloaded or Torchlight. Intel graphics can easily handle those games. So, you don’t need anything faster for those types of games.

      Also, remember that techreport benchmarked most of the games on high or medium settings. Portal 2 was benchmarked at high settings, and yet the HD 3000 got 29 fps (with AA off), which is somewhat playable. The fps would be much higher if they had tested it at medium settings.

      My point is that intel graphics are not as bad as AMD fanboys claim here. Read comment #41, that guy claims that intel IGP can’t play anything more intense than Solitaire.

        • BlackStar
        • 8 years ago

        Intel graphics are bad, but for a different reason entirely: driver support. Intel stops releasing new drivers too quickly and leaves their chips with known (and sometimes dangerous, i.e. bluescreen) bugs.

        Even if Intel HD graphics offers acceptable performance and image quality (debatable, given the test results here), you’ll still get problems with WebGL, graphics errors in games and missing features (no compute capabilities, GL3 rather than GL4, DX10 rather than DX11). Given the choice, an AMD or Nvidia GPU would still offer a superior experience.

          • maroon1
          • 8 years ago

          Could you prove all your claims ? Talking is easy.

          Intel is actually updating their drivers. The latest one fixes a lot of bugs and problems and does improve the performance (read the release notes):
          http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=3231&DwnldID=20037&ProductFamily=Graphics&ProductLine=Laptop+graphics+controllers&ProductProduct=Intel%C2%AE+HD+Graphics&lang=eng

          You need to show me proof that Sandy Bridge graphics have problems with WebGL. I searched Google and didn’t see any results. The Intel HD 3000 supports DX10.1, and the latest driver adds support for OpenGL 3.1.

          Also, DX11 support is not going to be useful in the real world when using integrated graphics. First of all, most games are still DX9. Second of all, DX11 kills performance. Try to run DiRT 3 in DX11 instead of DX9 and you get a drop in fps with little difference in image quality. Try to run Dragon Age 2 on very high (DX11) instead of high (DX10) and you get a huge drop in performance (not even Llano can handle it).

          The DX11 argument is very, very weak when you are talking about integrated graphics. DX11 is not going to be useful for casual gamers.

            • swaaye
            • 8 years ago

            I think that if a notebook user knows he is going to play mainstream D3D9+ games, he should probably avoid Intel. Intel simply is not as serious about gaming as AMD or NV. Maybe their practices about driver updates and driver quality are changing now with Sandy Bridge, but I’m not going to bet on that yet.

            But for anyone who isn’t going to be playing 3D games, I think any IGP is good enough at this point as long as it can accelerate Flash and the usual DVD/BD compressions. I’m not sure that I’d recommend Llano to these people because I don’t believe in buying a CPU that is clearly slower than the alternatives just because it has a neat gamer IGP.

            For that matter, if someone is looking to buy a notebook for gaming, I’d avoid Llano here too unless the chip ends up in something small where its high integration actually provides value (ie more perf per watt over discrete options) beyond being cheap.

            • sweatshopking
            • 8 years ago

            you’re talking about brand new chips. nobody said they didn’t make drivers for brand new chips. they said they STOP quickly, and leave relatively new chips with bad drivers, and no support.

    • Joe Miller
    • 8 years ago

    Great review! No other site did comparison of image quality.
    And I will again buy AMD for the graphics – I can manage with slower CPU, but I do not want to use that poor Intel graphics.

    • spigzone
    • 8 years ago

    If one doesn’t NEED a workstation, AMD ‘fusion’ chips trounce Intel ‘fusion’ chips for the simple reason that AMD chips can do everything Intel chips can do, but Intel chips can NOT do everything AMD chips can do. Intel’s CPU time advantage is all but irrelevant in a multi-core/multi-tasking age, when those cpu/time-intensive operations can be done in the background while one carries on with other tasks, whereas AMD’s GPU advantage is VERY relevant when playing current top-tier computer games, which are playable on AMD chips and functionally UNplayable on Intel chips.

    AMD achieving parity with or even exceeding Intel on run time makes it a no-brainer.

    • loophole
    • 8 years ago

    I believe there’s a small error in the table on the first page with regards to the transistor count for Thuban – it should be 904M rather than 751M which would correlate with the die size numbers.

    Other than that looking great so far – thanks for the review!

      • Damage
      • 8 years ago

      Fixed. Thanks.

    • maroon1
    • 8 years ago

    Did techreport use the latest graphic driver for sandy-bridge ?

    The latest driver boosts fps in some games:
    http://communities.intel.com/thread/21735?wapkw=%28hd3000%29

      • Damage
      • 8 years ago

      We did grab the latest version before testing. Version number is on the testing methods page.

        • A_Pickle
        • 8 years ago

        You guys are so damn pro, it makes me cry every time.

    • dpaus
    • 8 years ago

    from p. 2:
    “one of those blocks of eight is dedicated to driving a pair of digital display outputs. The remaining 24 lanes can flex into various configurations. A common one would use 16 lanes to talk to a discrete GPU, four lanes to talk to the FCH or south bridge chip, and leave four lanes for general-purpose use.”

    Hmmm, does that mean that the A8-3500M is limited to just two simultaneous displays? (not counting a potential discrete GPU, that is). Any word on what the limits of those two displays are? I’m thinking for business applications here, not gaming.

      • Hattig
      • 8 years ago

      I want to know if Llano can support daisy-chained displays via DisplayPort.

      Two DisplayPort outputs could therefore support multiple displays each.

    • dale77
    • 8 years ago

    Seemed like a very good article when I read it last night. I did wonder why Llano seemed to perform worse than Sandy in “media encoding,” as I would have expected this to be GPU-accelerated. Did TR choose software acceleration for Sandy but not for Llano? I did also wonder why the desktop parts were in the frame.

      • Farting Bob
      • 8 years ago

      In fact most media encoding is still CPU-based, despite GPUs being damn near perfect for it in theory. The CPU allows for some quality encodes, though; the GPU only seems to work on more basic instructions. It’s also not as mind-numbingly fast as I was expecting when it first came out: twice as fast (with far less quality and customization allowed) rather than 10 times as fast, as some thought before.
      Don’t know if it’s CUDA/OpenCL not being ideal yet for the various methods used to encode H.264, or if nobody has really sat down and worked out how to make an encoder as efficient as the CPU-based ones are. Only a few (commercial) encoders support GPU acceleration (AMD has one, NV has one, and a few third parties); it just hasn’t caught on yet.

    • Spotpuff
    • 8 years ago

    Great review and really interesting approach from AMD. I had counted them out for a while now but it seems like their approach makes more sense for the mobile space.

    Also, I read the last line as “AMD should have shit on their hands” and was like O.o

    • Beomagi
    • 8 years ago

    Very curious to see whether dual-channel memory affects Llano IGP performance.

    • LiquidSpace
    • 8 years ago

    How much did Intel,pay,you for this review?

      • OneArmedScissor
      • 8 years ago

      How,many,commas,can,we,abuse,,,,,,,,,,,,,,,,,,,,,,?

        • sweatshopking
        • 8 years ago

        ALL OF THE COMMAS!!!!!!

          • anotherengineer
          • 8 years ago

          You are alive? Where did you go? Find a new job :D

          I wish they would have a dual core at 2.5GHz stock and 2.8 turbo with 400 shaders; I think that probably would have been even better for games.

          The SATA 6Gbps and USB 3.0 are looking nice though. Pick up one of these for $600 and drop in an SSD and you have a very decent all-around laptop.

      • khands
      • 8 years ago

      I would guess, given their rather glowing review, $0, possibly -$50.

    • jensend
    • 8 years ago

    Since the GPU performance is very bandwidth-limited (less than 1/3 of the bandwidth the discrete part has; see also Anandtech’s preview of the desktop parts where scaling with faster memory is shown), I bet that in a lot of games the dual-core A4s with only 160 shaders (40% of the A8 reviewed) will perform nearly as well. A4 parts which aren’t just A8s with parts disabled will also have more room for higher clocks and Turbo. So I think the smaller-die A4s may be quite interesting on the low end for both laptops and desktops.

      • Hattig
      • 8 years ago

      The mobile A4s have 240 shaders, and I agree that they will probably perform very much like the 400 shader mobile chips except in shader-heavy (i.e., mostly DX11) games. Don’t ask me why the desktop A4s will only have 160… albeit running a lot faster presumably. I am looking forward to the desktop reviews, especially overclocking of the GPU (what can 32nm SOI do for graphics?)

      • mczak
      • 8 years ago

      I think you slightly overestimate how well the lower-end parts will perform.
      Don’t forget the desktop version should be more bandwidth-limited than the mobile one at DDR3-1333, since the desktop one is 1/3 faster.
      So while a 160-shader part will perform better than the 40% of the shaders may indicate, it’ll probably still be only about 60% or so of the performance of the 400-shader part.
      The 320-shader version, though, would probably have been quite close to the 400-shader version – hence the lower clock…
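That ~60% ballpark falls out of a simple bottleneck split. Here is a toy model, a sketch only: it treats frame time as a shader-bound share plus a bandwidth-bound share, and the 50/50 split is an assumption for illustration, not measured data.

```python
# Toy model behind the estimate above: bandwidth is the same across
# parts, so only the shader-bound share of frame time shrinks as the
# shader count grows. The 0.5 split is an illustrative assumption.
def relative_fps(shader_ratio, shader_bound=0.5):
    """FPS relative to the 400-shader part for a given shader-count ratio."""
    frame_time = shader_bound / shader_ratio + (1 - shader_bound)
    return 1 / frame_time

print(f"160 shaders (40%): {relative_fps(0.40):.0%}")  # ~57%
print(f"320 shaders (80%): {relative_fps(0.80):.0%}")  # ~89%
```

The same shape explains why halving shaders costs far less than half the performance on a bandwidth-starved IGP.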

      • dragosmp
      • 8 years ago

      The A6 could also be a good match – same CPU cores as the A8, but only 320 Shaders. Memory bandwidth should be less of a bottleneck. Can’t wait to see the whole range of Llano APUs tested.

      • CBHvi7t
      • 8 years ago

      Check out http://www.realworldtech.com/page.cfm?ArticleID=RWT042611035931 – there you will find a discussion of the memory impact.

    • slaimus
    • 8 years ago

    It looks like both the design and manufacturing hit the mark.

    The CPU/GPU combination is extremely balanced. You have to take into account that the much higher gaming performance is the result of both CPU and GPU. Intel also seems to have a power consumption problem on the GPU side for SB, as even in the desktop SB reviews, using the integrated graphics significantly increased the power draw.

    Also, after the horrible 65nm node, it looks like AMD’s manufacturing side has gotten much better. The power consumption numbers are great, especially considering it is a much larger chip. We just need to wait until Bulldozer to see how much high-end headroom is in the 32nm node though.

    • Vasilyfav
    • 8 years ago

    Just as expected: way ahead on IGP performance and image quality, hopelessly behind on CPU performance, in fact scarily so, uncomfortably close to 2004-2006 era processors and AMD’s own E-350.

    K10.5 just cannot compete with nehalem/sandy architecture.

      • flip-mode
      • 8 years ago

      Most people would say, “so what”. A8-3500M offers plentiful CPU performance for most people. So while it looks bad in the graphs, and while many of us strongly dislike such performance disparity, for the average person it just doesn’t matter and they’d probably be better off with the stronger GPU of the A8-3500M than the stronger CPU of the Core i5 2410M.

      As for my own tastes, I would not be comfortable getting the A8-3500M. I’d rather have a i5-2410M machine and discrete graphics. Heck, the real sweetness is the i7-2620M. That’s the best choice for a mobile CPU out there in my opinion, but expensive.

        • swaaye
        • 8 years ago

        I think that “most people” couldn’t care less about the GPU as long as it runs the GUI ok. But these folks might appreciate a SSD and high CPU speed to make one super smooth notebook.

          • OneArmedScissor
          • 8 years ago

          That’s kind of like saying most people would be happier with a big LCD touch screen to control everything in their car instead of a cigarette lighter lol.

          This GPU is not exactly a huge chunk of silicon. They were using about that much for their 55nm IGPs, so it’s just something that’s going to be there, whether you “value” it or not, and it’s not putting the cost of your computer through the roof like a SSD might.

          It’s not as if AMD really traded CPU capability to have it. These are their new mass-produced CPUs, and that’s what their market share dictates they have to stick to in laptops. It’s no different than how Intel mass-produces the comparatively much weaker Pentium and Celeron chips for most of what they sell.

          I’m sure AMD would love to build SSD controllers and flash into their CPU chips, but we’re going to have to wait a little bit longer for that. :p

            • swaaye
            • 8 years ago

            My point is simply that an SSD will bring clearly tangible gains while a 400 shader gamer GPU will not for people who aren’t playing games. For a lot of computer users any IGP beyond one that can accelerate Flash and HD video is immaterial.

            Once SSDs hit that $1/GB mark I think notebooks are going to see a mini performance revolution. A 120GB SSD is all that a lot of people really need. How many people do you know with 320+ GB drives and 30GB in use? ;)

            • kc77
            • 8 years ago

            An SSD also adds SUBSTANTIAL cost to any light notebook, laptop, or desktop. They are pricing these things below the cost of i7/i5 + discrete, which means around the 500 – 700 mark, which is the mainstream. SSDs are not in the mainstream yet. Maybe at some point, but they would have to drop quite a bit before their impact offsets the cost.

            Now, in terms of the GPU, to suggest that it’s not needed is neither believable nor accurate. People might not know what a GPU is, but you can best believe that people at Best Buy will get questions like, “Can you play games on this?” I try not to go to Wors…I mean Best Buy often, but that question gets asked all of the time.

            Hell I get that at work when people need recommendations and you would be surprised how many people use laptops as their main family computer. I don’t know how people do it… but they do.

            • swaaye
            • 8 years ago

            You just mostly repeated OAS.

            I don’t really care about how much SSDs cost or whether clueless people want to buy a GPU that will sit mostly idle. What I’m saying is that an SSD will bring tangible performance gain whereas a 400 shader gamer GPU will not for non-gamers.

            If one plans to play actual 3D games then of course the Llano GPU has value. I’m not disputing that. I am however disputing that it has some magical advantage over Intel hardware when you aren’t a 3D gamer.

            • flip-mode
            • 8 years ago

            I’m telling you: people don’t notice SSD. I’ve installed them for three people here at work – besides myself – and the results are 3 for 3: they can’t tell the difference.

            I myself can tell the difference, but it does not change the computing experience.

            Neither will an i5-2430M change the experience, and the average person is likely never going to be able to tell the difference at all – so why bother.

            A better IGP stands a good chance of changing the experience for casual gamers (that’s a lot of people) and for people who watch video on their computer (that’s a lot of people).

            My mother is a good example of the “average person” I’m talking about. She watches /tons/ of video on her laptop. That is quite honestly the most intensive thing she does with her laptop. If she were to drop her laptop today and need a recommendation – why would I recommend Sandy Bridge to her when it’s more expensive, she doesn’t need the power, and Llano is going to be better with video related tasks?

            • swaaye
            • 8 years ago

            I really wonder how much better Llano is for video than even say GMA X4500 or HD 3200 (IGP). I mean once you have the basic H.264/VC-1 and Flash accel, you’re pretty much good to go.

            You may be right about the SSD not getting noticed by some people. It’s the most obvious performance improvement I’ve felt in years, but yeah, I have seen it go unnoticed by those who sit in just web browsers and email apps all day long.

            In the category of unnoticed performance, I also know of people who happily play games at 15 fps and those Intel IGPs can probably do that for them. That’s not to say that they wouldn’t be happy to go faster but they often don’t research why their games are slow or even care.

            I need to stop pondering what the masses want or will notice in their computers. For me, Llano is only interesting if it goes into a 12-13.3″ notebook because it would bring interesting gaming performance down there. And I would probably put a SSD in the machine. ;)

            • Hattig
            • 8 years ago

            Llano does DivX too. Admittedly Intel’s video decode/encode is now very good, then again even your basic ARM SoC can do 1080p decode these days without a problem. It’s all about gaming and compute (e.g., physics), and Llano has that advantage over Intel right now, and in many ways that capability makes it more future proof.

            I’m looking forward to Trinity myself though – that should fix the CPU aspect – not that Llano is ‘bad’, indeed for most people it would be more than adequate.

            • boomshine
            • 8 years ago

            “GMA X4500” i wanna throw my laptop in the trash can because of that chip!

            • swaaye
            • 8 years ago

            Trying to game on it? ;)

            • boomshine
            • 8 years ago

            yeah 3D facebook game called Godswar.

            • WaltC
            • 8 years ago

            It’s not only that–but synthetic cpu benchmarks have become somewhat of a very bad joke in recent years. Just to point out a couple of ostentatious examples in this otherwise very fine review:

            *Java Spider bench:* We are contrasting the difference between ~1/4 of one second’s time lapse with ~1/2 of one second’s time lapse, and purporting that this result is a meaningful differential value. However, there are no consumers anywhere who can “notice the difference” between 1/4 and 1/2 of one second’s duration... ;) Yet the benchmark itself paints the very misleading portrait of “2x as fast” (~250ms vs. ~500ms) while failing to point out that, quite unlike frame-per-second gpu game benches, the difference between 250ms and 500ms is completely imperceptible by human beings. Despite the appearance of the benchmark results, the difference is so small that it can only be measured by a machine, as human beings would not register such a microscopic difference.

            *Panorama Factory photo stitching:*

            “Here’s yet another result where a nicely threaded test runs faster on dual Sandy Bridge cores than on quad Llano cores. This is a result you can feel, too. When stitching together a panorama, you’ll be drumming your fingers for 19 seconds longer with the A8-3500M.”

            Wow, I mean “19 seconds of finger drumming” just seems like an eternity, when expressed as it is above, doesn’t it? The fact is, though, that the great majority of people running such software in real life will not, under any circumstances, sit and stare blindly at their monitor screens, drumming their fingers, until their “photo stitching” completes. No, they will either run it in the background while they do something else, or else run it on another machine while they do something else on their primary machine, or run it on their primary machine at night while they sleep--etc. In all of those far more probable real-world examples, a difference of “19 seconds” amounts to nothing, literally (most especially if the AMD solution can be had for much less $ than the Intel solution.)

            While I am certainly not trying to detract from the fact that Sandy Bridge is certainly faster than Llano at particular tasks, especially while running particular cpu benchmarks, it is also a fact that for the majority of people the difference in gpu performance is far more perceivable and “noticeable” than whatever differences they can perceive between cpus of similar economic class. Which brings me to my last point:

            *CPU benchmark/compiler optimization:* I believe that Tech Report was one of many web sites running the recent story about AMD’s recent resignation from BAPCo, and that TR suggested that VIA and nVidia might also have resigned as BAPCo partners. The resignations are the ultimate form of protest when cpu benchmarks *are known to be slanted* in terms of favoring one cpu architecture/compiler over another, as AMD has demonstrated about BAPCo’s SYSmark (a synthetic cpu bench I have never liked despite the fact that AMD was a partner for many years.) The reasoning here is fairly simple: Intel has a vested interest in designing and supporting synthetic cpu benchmarks which paint its cpus in a far better light than the cpus of its only cpu competitor of note, AMD. Not only that, but Intel has far more money to spend in the creation and proliferation of such synthetic cpu benchmark code than does AMD. By contrast, and I think the contrast here is dramatic, it is far more difficult for Intel (or AMD) to optimize gpu software so that one architecture/compiler can be seen to systematically out-perform its competition--unless it actually does systematically outperform its competition.

            So it *may* well be that even the Llano vs. Sandy Bridge cpu story has yet to be accurately told, and we already know this is true for the Bulldozer vs. Sandy Bridge story (since BD has yet to be released.) Far more importantly, though, is the fact that when the economics of such comparisons are ignored or otherwise trivialized, then *true comparisons* will remain forever beyond the pale, regardless of whatever amount of synthetic cpu benchmark optimization is or is not occurring at the present time.

            Edits: typos

            • Damage
            • 8 years ago

            I realize responding to this message could end up with me being sucked into a whirling vortex of pure crazy, but I’d like to make a few quick points.

            -The Panorama Factory is a real application, and timing it while it stitches together a panorama isn’t in any known sense of the term a “synthetic benchmark.”

            -Arguing that 19 seconds’ difference in a *single operation* is not a meaningful performance delta is astoundingly dishonest.

            -That test stitches together four or so eight-megapixel pictures from my rather old camera (note the CRT display in the picture). My new camera takes 18-megapixel pictures, so the difference in a single panorama stitch 2011-style would likely be over twice what we measured here.

            -I give up.

            • Hattig
            • 8 years ago

            A good benchmark for future casual gaming capability (i.e., how good a CPU and GPU are), looking ahead, would be some JavaScript canvas and WebGL tests. A lot of casual games will soon be provided on the web without Flash (or Java), and canvas and WebGL will feature heavily here. As people keep a laptop for three or four years, it would be good to get an idea of future capability.

            • swaaye
            • 8 years ago

            Isn’t that essentially purely CPU and browser determined? One thing I’ve noticed is how much faster even my old EeePC is these days thanks to Firefox’s speed gains since 4.0. It’s great to see an application actually getting faster over time. Usually it’s the opposite.

            If this performance is CPU determined, for games it could end up with one wanting the fastest CPU possible for the best experience vs. a middling, acceptable experience.

            • Hattig
            • 8 years ago

            Generally Javascript canvas operations and WebGL are accelerated by the GPU – IE, Firefox and Chrome already provide this, and Safari will on Mac OS X Lion. Of course the browser has to hook into the OS to accelerate the functionality, but once done the graphics will be nice and fast. The game engine would run in Javascript though, which is CPU dependent. Therefore it would seem to be a fair platform test!

            Also it would show how good the OpenGL drivers are…

            • flip-mode
            • 8 years ago

            Your post is dead on, but it also shows a conversation going strange places. Why? If you’re using a Llano machine for intensive workstation workloads, then you’re not doing it right.

            That doesn’t negate TR’s benches at all. TR’s benches accurately reflect the status of Llano as a pretty darn good mobile “APU” for “the average person”. Llano is for watching videos, playing some low-intensity games, doing some office productivity tasks, and surfing the web.

            For any computationally intensive task, a fricking 1.5 GHz *anything* is a silly choice. Beyond that, Llano is a poor choice. Sandy Bridge is the correct answer. And again, I'd like to give props to the i7-2620M. Much love to you Damage!

            • Arag0n
            • 8 years ago

            It’s not like you are using Panorama Factory all day, Damage… most of the day you are usually doing other things…

            The biggest drops in performance for me have always been the “out of memory” problems, when you need 4GB and you have only 2GB of RAM. Everything else makes things slower, but you can still multitask and do something while waiting for the task to be done…. Usually when something takes more than 2-3 minutes, I try to keep working and just go back for the result. Once a task takes more than a few seconds anyway, I wouldn’t care whether it takes 2 minutes or 3.

            • kc77
            • 8 years ago

            The question isn’t what Damage does all day, but what the average consumer does all day and what that workload looks like. Average workloads are moving toward the GPU. This is why Nvidia, AMD, and yes, even Intel are concentrating on it.

            I have an i7, and the only time I can see the difference is when I do video encoding or compilation of source code, and even a lot of that is attributable to the RAIDed SSDs. They really shine in enterprise settings, but who is running multiple VMs other than those in tech? Otherwise there just isn’t that much of a difference that the end user is going to notice in the day-to-day operations of a mainstream user.

            What they will notice is how playback looks and runs on YouTube or HD movies. They will notice the diff between 15 FPS and 30+ when gaming. Some of the differences are so large you are talking about the diff between AA or none, or native res or not. That’s VERY noticeable, even to a child.

            • WaltC
            • 8 years ago

            I’m amazed that you have such a hard time with my posts…;)

            The context of the thread is that synthetic cpu benchmarks play a far smaller role in the systems people purchase than do demonstrable gpu benchmarks, and my comments were to that end. IE: what I said was *even if* the synthetic cpu benchmarks were accurate depictions of raw cpu processing power, and *even if* there is no evidence of synthetic cpu benchmark/compiler optimization, people will *still* find the gpu specs more compelling at the time of purchase. If that’s “pure crazy” then I suppose we need a lot more of it…:)

            • A_Pickle
            • 8 years ago

            “For a lot of computer users any IGP beyond one that can accelerate Flash and HD video is immaterial.”

            The almost constant growth of the PC gaming market disagrees with you. That argument worked three years ago. It doesn’t anymore -- people are demanding richer and richer content from their PCs. If they didn’t, why do phones have better graphics chips than PCs have had for a while?

          • flip-mode
          • 8 years ago

          I have seen first hand that most people don’t care about or even notice the difference between SSD and HDD.

        • Beomagi
        • 8 years ago

        Sure, more power – but in a mobile I’d rather have a small machine. A chip like this in a super-slim sub-12″ machine will handle my gaming fine. Use the space saved for batteries and a smaller chassis.

      • jensend
      • 8 years ago

      Did you even read the review? In the mobile CPU tests it traded blows with the Arrandale i3 and i5 mobile processors. It’s generally not as fast as Sandy Bridge, but it was decidedly faster than the two Penryn-based notebooks and much, much faster than Brazos.

      No clue why anyone would pay attention to the pages where it’s compared to desktop processors. *NEWS FLASH* -- desktop processors are faster than laptop processors! News at 11! Llano desktop parts will be clocked significantly higher (e.g. 2.7 GHz instead of 1.8), will often be paired with faster chipsets and memory, and may have other differences as well.

        • swaaye
        • 8 years ago

        Ok so AMD’s new mobile quad core sometimes gets close to Intel’s >1 year old dual cores. Neat.

        I like how it obliterates the Turion Neo X2 though. 12/13″ subnotes are one place that Llano needs to go. It would bring unseen gaming performance and a relatively capable CPU to that form factor.

      • OneArmedScissor
      • 8 years ago

      “K10.5 just cannot compete with nehalem/sandy architecture.”

      No, a 1.5 GHz CPU obviously can’t keep up in a lot of things with nearly 3 GHz CPUs. There is no reason to spin this into another, “AMD can’t design competitive CPU cores,” tall tale.

      Look at the battery life every website is getting out of this rather large chip. AMD didn’t go with a modified Athlon II core because they don’t know what they’re doing, any more than Intel pushed the Core 2 CULV platform instead of a Nehalem variant because they didn’t know what they were doing. Nobody questioned that, and yet, here we are…

      This has already been the reality for at least the past two years. No CPU core design is inherently “better” than another.

      • Hattig
      • 8 years ago

      You might want to check the mobile performance page comparisons, not the desktop performance comparison page.

      Mobile Llano is meant to hit a sweet spot of CPU performance (enough, i.e., mainstream), GPU performance (viable gaming at laptop native resolution), and power consumption (6hrs+ browsing, 5hrs+ video). The review seems to suggest that they aren’t too far off target with Llano.

      The review also highlights a major issue with Intel’s HD3000 graphics – when you do use them, they use a lot of power (look at the gaming battery life graph), yet you get poor image quality and low frame rates. HD3000 is nothing more than a desktop graphics accelerator in reality.

      A couple of outstanding issues with Llano. One is turbo core not seeming to be enabled for single/dual-threaded benchmarks. The other is dual-graphics. I look forward to seeing a review of A4, A6 and A8 based laptops, pitted against comparably priced Intel laptops – we can then get performance/dollar scatter graphs!

      Thanks for the review.

      • ronch
      • 8 years ago

        We all realize what Llano is and what it is not. However, it’s about being able to buy a product that presents a curious advantage over the Core i3 or i5, particularly in mobile usage. And that’s where Llano looks mighty compelling at Best Buy next to Intel + HD graphics.

        • helboy
        • 8 years ago

        It all depends on how much clout Chipzilla has over the OEMs, really!! The OEMs like HP, Dell, Toshiba, and Acer know they have a performer in Llano. But as is the case now, they will be arm-twisted by Chipzilla not to market any good configuration and to offer those that are absolutely meaningless in terms of value. And YES, I have done my research. More than anybody, I believe, because I have been looking for a good notebook with the right combination (at least for me) – dual core, powerful battery, good graphics, value for money – for two years now. I had to settle for something that I knew could have been better. I could have put together better.
        No manufacturer, ABSOLUTELY no one, has come out with any of this on an AMD platform. Why???!! If the OEMs start producing the best configurations with AMD platforms AND fearlessly market them, then Chipzilla will be history in no time flat!! Start betting now … hehehe :)

        NB: yeah, I know I am bordering on fanaticism, but I couldn’t help it after reading about Llano’s performance ;)

    • wierdo
    • 8 years ago

    Great review, the IGP image tests were quite shocking lol.

    I’m curious if you guys had the chance to test the Llano’s IGP using high performance (1800?) memory, just to see if it’s a worthwhile upgrade to a laptop of this sort.

    Would be nice if whichever OEM sells these will list high performance memory as an upgrade option (yeah I know it’ll be overpriced, but still).

    • tahir2
    • 8 years ago

    Nice article, covers the gaming aspect really well I think.
    It especially gives a nice indication that comparing with SB graphics is not really an oranges-to-oranges comparison; ref: texture filtering.

    Some questions if I may, what RAM speed did you use in the Llano system? Not listed in the test system page. Did you try different speeds of RAM as it seems to make a difference?

    Nice review; even though I had read reviews at Anand’s and Tom’s place already, I found this one easier to read, with all the information I would find important tested quite thoroughly. Still a bit confused about the battery life – hopefully you can go back to that once a production, bug-free unit is available to retest.

    Cheers Scott…

    • cjcerny
    • 8 years ago

    If AMD’s marketing department was smart, they would realize that the IGP of this APU is powerful enough to stand on its own, and totally do away with the confusing dual and discrete GPU stuff. Consumers don’t have a clue about that stuff, and the IGP on this thing is strong enough to just slim down all the marketing confusion by doing away with those other options.

    • sweatshopking
    • 8 years ago

    Really, it’s not going to matter. Intel OWNS retailing. AMD can make the best chip on the market, and has, and moved nowhere. stores sell INTEL. amd will need a few YEARS of superior chips before they start to grow. cpu’s aren’t gpu’s. people don’t know what they’re buying and listen to what the sales team are saying. and what sales teams say is “intel is better”

    oh, and you can welcome me back. i’ll post in the forums about my trip, and some fundraising we want to do to put a well in for my daughter’s village….

      • derFunkenstein
      • 8 years ago

      welcome back, brah.

      • wiak
      • 8 years ago

      well, that’s called monopoly, and didn’t intel sign a “don’t do that” agreement with the FTC?

        • derFunkenstein
        • 8 years ago

        Yeah, like Intel ever abides by that sort of thing. They’ll tiptoe right back to the line and taunt regulatory bodies with abandon.

        No different than any other company, mind you.

        • sweatshopking
        • 8 years ago

        that’s not going to fix 2 decades of sales education.

    • colinstu
    • 8 years ago

    What’s with dropping the i5-2520 from the TrueCrypt bench graph? (and other AES equipped mobile chips)

    • esterhasz
    • 8 years ago

    First, a big “thank you” for the detailed transistor count / die size chart. That really helps with understanding cost context. Highly appreciated.

    Llano seems to be a rather good compromise for many consumer usage patterns. On the professional side things are a bit more complicated, I guess: office work does not (yet?) profit from the GPU in any tangible way (I honestly do not notice a difference between GPU acceleration on and off in a Web browser), and many tasks in analytics, content creation, programming, etc. still profit the most from high single-thread performance.

    Ironically, Llano would be a fine match for the OpenCL friendly Mac OS X and programs like the new Final Cut Pro X. The 65W desktop version in a Mac mini would make a neat entry level editing station…

    • ronch
    • 8 years ago

    Compared to Llano, Sandy Bridge looks unnecessarily powerful on the CPU side but too weak on the GPU side. Meaning, you have the power to do lots of Excel number crunching but can’t play a game at decent frame rates. Llano, at least, lets you play after work.

    Personally I think AMD makes a good point with Llano. Most people will find the K10.5 cores adequate for most tasks (future-proof is another matter). Laptops with discrete graphics have always been around but Llano makes it possible to have practically the same level of graphics performance at mainstream prices. I expect most people to find it more attractive.

      • shank15217
      • 8 years ago

      What do you mean by unnecessarily powerful?

        • khands
        • 8 years ago

        The cost/balance/performance ratio is skewed towards performance for SB at the loss of cost and balanced performance, while Llano is more even.

      • Chrispy_
      • 8 years ago

      Take the Civ V tests: when not using the IGP at all, the i5 runs ten times quicker than when the IGP is involved.

      Using incredible over-simplification, it demonstrates that *90% of the i5’s CPU power is being utterly wasted!*

    • Krogoth
    • 8 years ago

    It looks like AMD got themselves a serious contender in the mainstream arena.

    CPU performance doesn’t matter that much anymore in the mainstream arena; it hasn’t for years. Llano was never meant to be a performance CPU; it was meant to be a mainstream part from the get-go. It still achieves adequate CPU performance. A Bulldozer-based Fusion will likely have more respectable performance (Bloomfield/Lynnfield-level performance on a per-clock basis).

    Llano’s decent IGP performance permits gaming with acceptable performance and image quality. Sure, it isn’t going to handle 2 megapixel gaming, but you would have gotten a discrete solution for that.

    Intel is going to have to answer back with an even more potent IGP without sacrificing power efficiency. ;)

    Competition FTW.

      • derFunkenstein
      • 8 years ago

      WHAT THE FUCK KROGOTH IS POSITIVE ON SOMETHING

        • Krogoth
        • 8 years ago

        http://www.youtube.com/watch?v=vFgXF0a_Yw4&feature=related

          • derFunkenstein
          • 8 years ago

          mad? no. mocking you? yessss

    • DeadOfKnight
    • 8 years ago

    Meh, I’ll wait for Trinity. Hopefully by then they will have turned dual-graphics into something more impressive.

      • willmore
      • 8 years ago

      If my laptop weren’t dead, I might wait, too. Maybe even for IVB. But, it’s dead. Good luck to you.

      • ronch
      • 8 years ago

      Me too, although Llano is looking mighty attractive. Time to turn on my EQ-boost feature.

    • obarthelemy
    • 8 years ago

    It seems Llano performance is heavily RAM-constrained. What are the characteristics of the AMD laptop’s RAM?

    • DeadOfKnight
    • 8 years ago

    Damage wrote: “When AMD starts talking about how Llano comes with ‘discrete level’ graphics—a phrase we’ve heard often in reference to this product—one must remember that discrete graphics cards come in many forms.”

    In this case they are talking about mobile discrete graphics. It may not perform as well as the discrete desktop Radeon HD 5670, but with these numbers it looks like it comes in close to the Mobility Radeon HD 5650. Impressive.

      • jjj
      • 8 years ago

      the thing here is that Intel was talking about “discrete level” in SB and AMD kept saying “we’ll show them what discrete means”, so now they get to gloat about it.

    • willmore
    • 8 years ago

    “Yeowch! Close contest. This one would have been a clean kill for Intel had it not disabled the hardware AES acceleration in the Core i5-2410M for the sake of product segmentation.”

    Yeowch, indeed, sir. Nice.

      • basket687
      • 8 years ago

      I can’t really see how hardware AES acceleration benefits the average (or even above average) user.

        • A_Pickle
        • 8 years ago

        That’s the fault of software developers, who routinely avoid encryption at all costs. EVERYTHING should be encrypted.

        • willmore
        • 8 years ago

        Hardware acceleration of AES pretty much makes Truecrypt free. I wouldn’t have a machine that didn’t encrypt its drives. Hardware AES keeps that policy from costing me a bunch of CPU and, in mobile applications, *power*.
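For anyone curious what “free” means in practice, a rough throughput probe is sketched below. It is not a TrueCrypt benchmark; it assumes the third-party cryptography package, whose OpenSSL backend uses AES-NI automatically when the CPU exposes it.

```python
# Rough AES throughput probe (a sketch, not a TrueCrypt benchmark).
# Assumes the third-party "cryptography" package; its OpenSSL backend
# picks up AES-NI on its own when the hardware supports it.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
data = os.urandom(256 * 1024 * 1024)  # 256 MiB of input

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
start = time.perf_counter()
ciphertext = enc.update(data) + enc.finalize()
elapsed = time.perf_counter() - start

print(f"AES-256-CBC: {len(data) / elapsed / 1e6:.0f} MB/s")
```

With AES-NI enabled, a probe like this typically reports many hundreds of MB/s per core; without it, expect a fraction of that, which is exactly the CPU time (and battery) being talked about here.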

    • odizzido
    • 8 years ago

    This is a pretty nice chip. If I were in the market for something above a C-50 laptop, this is what I would get.

    • RtFusion
    • 8 years ago

    I would totally get a Llano based laptop to replace my Acer ASPIRE 5710-5517, 15.6 inch laptop, with an Intel Core i3 330M because the GMA HD GPU that is coupled with it just can’t do proper DXVA acceleration with some of my H.264 content. When it works well, I can play back 3-4 720p videos at the same time but at other times, performance drops, picture lags behind, gets blocky, and in some videos, playback stops entirely when I try to change subtitle languages (I use Haali media splitter to handle my subs). When I run into these problems, I have to change the decoder from DXVA acceleration to one that uses the CPU. I don’t like this since it increases CPU usage, means more power usage, means more heat output, means more fan noise. And to stop from changing decoder all the time, I just have to settle for one that uses the CPU, ffdshow video decoder (the DXVA ones I’ve used are MPC-HC internal one, ffdshow DXVA decoder, and Microsoft’s DTV-DVD video decoder; CoreAVC decoder refuses to recognize the GMA HD as being capable of doing DXVA acceleration and defaults back to using the CPU, but works great on my desktop which uses an HD 4870).

    I’d also like to ditch this laptop soon anyway for more graphical performance and longer battery life (this thing only lasts less than two hours browsing). I am not concerned with CPU performance, as 4 cores is more than enough for my mobile computing needs.

    Anyway, great review on the chips, I really can’t wait for the top-end Llano mobile parts to come out soon.

    • BobbinThreadbare
    • 8 years ago

    The comparison of texture filtering in Portal 2 would be a lot easier to view if the images were side by side instead of top and bottom.

      • Disco
      • 8 years ago

      Using a ‘mouse-over’ effect would also be much better than top and bottom. I seem to remember that you guys used that technique in some image quality tests a few years ago.

    • kizzmequik_74
    • 8 years ago

    Better power figures usually mean better temperatures, so I wonder how small a chassis this chip could be stuffed in?

    An 11.6″ or 12″ machine with this chip is a mightily tempting proposition.

    • willmore
    • 8 years ago

    Okay, I want to know what sociopath thinks that a single memory channel configuration is a valid one.

    Stand still, laddy!

    Okay, I thought of *one* valid situation–when the maker realized how *stupid* it is to max out the slots by putting in low density DIMMs that the user will just replace with higher density memory so that they can get an acceptable system.

    So, now, yeah, I think I want a Llano laptop with just one 4G SO-DIMM in it as I can’t imagine using a system with anything less than 8G these days. (Will Win7 even boot in a 4G system?)

      • FuturePastNow
      • 8 years ago

      I’d be pissed if I bought an HP Probook and it came with 4GB spread across two DIMMs. With a 4GB DIMM and an empty slot, it can be easily upgraded.

      With 2x2GB, I’ve not only got to buy *two* 4GB DIMMs, I’ve got two uselessly small ones that I have to throw in a drawer and see every time I open that drawer for the next ten years, until I finally throw them in the trash.

        • willmore
        • 8 years ago

        *facepalm* Did you read the part of my post that starts with “Okay, I thought of *one*….”?

        Twitch too early, much?

        • yuhong
        • 8 years ago

        Ha, I have a laptop that originally came with two *512MB* DDR2 modules. Later, one was replaced with a 1GB DDR2 module for a total of 1.5GB of RAM.

      • odizzido
      • 8 years ago

      If someone doesn’t run any games and just uses a word processor it makes sense….but then you might as well get an E350 or something.

        • willmore
        • 8 years ago

        True, the 1C intel box does well on battery performance and even a few benchmarks!

          • mczak
          • 8 years ago

          I am actually impressed by how little performance hit the IGP takes with one channel.
          The cpu benchmarks are 0-20% lower (less than 10% on average), which is about what could be expected (it’s actually a bit more than I expected).
          But the GPU results are “only” 20% or so lower, too – rob Llano of half its bandwidth and you’d get little more than half the performance (for the GPU benchmarks – I doubt it would make any difference for the cpu). Granted, one reason for this is that the HD 3000 has lower absolute performance, but I’m thinking the use of the L3 cache for the IGP also plays some part here in making it less dependent on memory bandwidth.

      • NeelyCam
      • 8 years ago

      It improves battery life somewhat, as shown in the article.

        • willmore
        • 8 years ago

        I was actually impressed by how much it affected battery life. It almost makes me want to see tests for 2x2GB, 1x4GB, and 2x4GB configs. They already have all but the 2x4GB. The question that would answer is: will more memory save more power (HD spindown) than it will use?

    • FuturePastNow
    • 8 years ago

    I wonder if AMD will sell a “workstation” version of Llano branded as an Opteron processor with FirePro graphics. That would have to be worth a few sales to businesses.

      • dpaus
      • 8 years ago

      Good idea, but it’ll have to be Bulldozer-based, I think.

        • willmore
        • 8 years ago

          Check back next year when Trinity comes out. You never know. :)

          • khands
          • 8 years ago

            I’m so looking forward to Trinity :D

        • FuturePastNow
        • 8 years ago

        I don’t think AMD can wait, the more I think about it, the more I think they have to do something like this.

        It would allow companies like HP and Dell to sell business desktops and notebooks that hit nearly every marketing bullet point: low cost, low power consumption, quad cores, “professional graphics” and brand names that someone somewhere may not associate with crap. For no additional engineering work.

        Of course, it won’t be made by Intel, which means businesses still may not buy it, but AMD’s got to try.

      • shank15217
      • 8 years ago

      Why do you need Bulldozer for that? Desktop Llanos are sufficiently powerful for most CAD/CAM stuff.

    • NeelyCam
    • 8 years ago

    Good review.

    It would’ve been nice to see the battery life listed as “minutes/Whr”, to take into account the different battery sizes for all the systems (of course, that would lose some meaning if the screen illuminations varied).
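That normalization is trivial to apply if the battery capacities are known. A sketch follows, purely to show the unit conversion; both the runtimes and capacities below are hypothetical placeholders, not figures from the review.

```python
# Normalizing battery runtime by battery capacity, as suggested above.
# All numbers here are made up for illustration, not review data.
systems = {
    "Llano test system": {"runtime_min": 372, "battery_wh": 58.0},
    "Sandy Bridge system": {"runtime_min": 351, "battery_wh": 55.0},
}

for name, s in systems.items():
    print(f"{name}: {s['runtime_min'] / s['battery_wh']:.1f} min/Wh")
```

As the comment notes, the metric only means much if screen brightness and workload are held constant across systems.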

    • Arag0n
    • 8 years ago

    If I had to buy a laptop right now, Llano would definitely be my choice.

    Good review, guys!

    When do you expect to have the review of the desktop versions ready? The A8-38XX?

      • srg86
      • 8 years ago

      Just to give an opposite view: my only need for the IGP is KWin desktop effects; everything else I do likes a faster CPU, so personally I’d go Intel.

      For me it’s meh, but Llano would be a great buy for my parents, etc.

        • Arag0n
        • 8 years ago

        Most applications benefit from a proper GPU even if you don’t know why. Windows feels much more responsive with a good GPU, so you work with less of that laggy, unhappy feeling. With no big tradeoffs I would go Llano, but if you’re aiming at $650+ laptops, Intel is still the way to go.

      • willmore
      • 8 years ago

      I agree. My laptop died suddenly 9 months ago. I thought, “Oh, I’ll wait for SNB; it’ll have good graphics and a great CPU.” The graphics were a bust. Even at laptop resolutions, I can’t play many of the *modest* games that I want to. Okay, maybe Llano will be the ticket…

      Yes! Four cores that work as well as two with HT, and *way* better graphics.

      If there were a dual-core SNB with discrete graphics that was power-competitive with Llano, I’d consider that, but no one seems to put discrete graphics in a laptop unless they stuff a quad core in there and price it over a grand. *sigh* Llano seems just fine.

    • thesmileman
    • 8 years ago

    “…you’re probably shooting your Coke out of your left nostril all over your screen while pointing at the Intel result.”

    My initial reaction was that the image was encoded wrong or that it might be some weird browser issue where it mis-loaded the image.

    It looks like a 1-year-old working on a coloring book.

      • willmore
      • 8 years ago

      I think the term is “bad slide show”.

      • willyolio
      • 8 years ago

      Ye gads. Framerates are bad enough, but the fact that the image quality is about five generations behind AND slower just made me rule out any Intel graphics, period. I wouldn’t play anything more intense than Solitaire on an Intel IGP.

      • esterhasz
      • 8 years ago

      Yeah, a 1-year-old on LSD.

    • willyolio
    • 8 years ago

    AMD really, really needs to play up the graphics capabilities to market this processor. We know it can do something Intel can’t: play modern video games. We know visuals sell things much better than pie charts and numbers.

    Sadly, I also know that AMD’s marketing sucks, and they’ll just slap a dozen newly designed stickers on the laptops and consider that brand awareness…

      • ronch
      • 8 years ago

      Well, what do you expect from the guys who came out with ‘Phenom’, ‘Duron’, ‘Sempron’ and ‘Athlon’?

      Now they’ve realized their names suck, given up, and confuse us with purely alphanumeric branding.

      Someone should lend AMD a marketing guy who can come up with good branding schemes/names.

        • BlackStar
        • 8 years ago

        I don’t know what you are smoking, but the Duron and Athlon names are legendary and well-respected all around.

          • ronch
          • 8 years ago

          Why so defensive? Do you understand what I was saying? I’m saying Intel comes up with prettier names for its products. AMD product names aren’t nearly as flashy as Intel’s, and for good reason: Intel spends more money on marketing. They even pay other companies to come up with their logos and names.

          You’re talking about the actual product. I’m talking about product names. Read.

            • Krogoth
            • 8 years ago

            Both companies come up with brand names that are extremely “dorky,” for lack of a better term, in the eyes of the average Joe. Core and Phenom aren’t exactly words you would use to describe something prestigious or powerful.

            The average Joe only sees numbers and assumes bigger numbers = better!

            • FuturePastNow
            • 8 years ago

            They make up wacky names because they can’t trademark real words.

            • derFunkenstein
            • 8 years ago

            Dude, the term “phenom” is used to describe young stud athletes all the time.

            • willyolio
            • 8 years ago

            I was facepalming constantly in the weeks after AMD announced “Phenom”: 95% of the users on tech sites didn’t realize it was an actual word, and most of them couldn’t figure out how to pronounce it.

            • UberGerbil
            • 8 years ago

            Well, the intersection between stud athletes and users on tech sites is pretty minimal.

            • willyolio
            • 8 years ago

            *sigh*

            Phenom doesn’t specifically mean star athlete; it generally refers to any person who shows amazing skill or amazing potential. There are chess phenoms and video-gaming phenoms.

            I would have hoped that the intersection between users on tech sites and users who know about dictionary.com would be larger.

            • UberGerbil
            • 8 years ago

            *sigh* I’m well aware of that. I was just playing off derFunk’s post. Unfortunately, the intersection between tech site users and people who so obsessively nitpick others that they completely miss the joke is distressingly high.

            • derFunkenstein
            • 8 years ago

            It was the first thing I could think of that wasn’t PHENOMinal. Such a large word is probably too much for Kroger.

            • Krogoth
            • 8 years ago

            ROFL, all of you guys are taking this silly argument too far.

            Brand names are serious business! Marketing drones would love you guys, since you are providing them job security!

            Protip: I always knew the meanings behind the Phenom, Sempron, Athlon, and Duron brand names. It still doesn’t make them any less silly.

            • derFunkenstein
            • 8 years ago

            We’re humans. We give things names. It makes much more sense to those who actually buy things than the (arguably more logical) alphabet soup of motherboard names that Gigabyte, MSI, and ASUS employ. The alphabet soup makes sense (ASUS P5 is always socket 775, P6 is 1366, P7 is 1156, and P8 is always 1155, for example) but it’s difficult for the average person to parse. So we get names.

            • BlackStar
            • 8 years ago

            You consider Core 2 Duo and i3/5/7 good product names? Nice one, thanks!

            In my country i3/5/7 are military designations for people with health issues or special needs. i7 refers to people completely unfit for service; i5 to people who are not allowed to hold guns. That’s some great brand naming right there.

            As for Core 2 Duo, it sounded stupid when it was introduced and it still sounds stupid now that it’s been phased out.

            Intel shouldn’t have abandoned the Pentium brand. It was the last good name they managed to come up with.

            As an aside, Phenom, Duron and Athlon all have roots in real words that actually mean something. Look them up if you ever get the chance.

            • swaaye
            • 8 years ago

            You don’t enjoy Itanium and Xeon? 🙁

            • willyolio
            • 8 years ago

            Does anyone enjoy Itanium? I thought it was a money loser for Intel.

            • swaaye
            • 8 years ago

            I’ve thought it might be neat to buy an old one off eBay to play with.

            • UberGerbil
            • 8 years ago

            Well, some folks at Intel have claimed it didn’t lose money overall. In a narrow sense, that’s probably not true, especially when you consider the many other ways those talented people and resources could’ve been employed. But in a broader, strategic sense, it may very well be fair.

            Itanium enabled Intel to get a toehold in an industry segment that even now x86 is only starting to grow into, it educated them on what those customers would demand from x86 systems, and it let them hang onto some of the customers who otherwise would’ve departed to IBM or Sun or Fujitsu or whatever non-x86 thing HP might’ve done when they wound up PA-RISC. The Itanium business was a multi-billion-dollar yearly business, albeit counting full system and service revenue, most of which was going to HP. But at least the core silicon money was going to Intel rather than one of its competitors.

            And meanwhile, Intel was learning a lot. They got to experiment with a lot of interesting CPU ideas that a lot of smart people thought might be hugely important (and thus dangerous for Intel if they were first implemented successfully by someone else). And in the event that many of those ideas didn’t work out, at least those failures weren’t attached to x86; anyway, most experiments end in failure, but that doesn’t make them not worth doing.

            • dpaus
            • 8 years ago

            [quote<]most experiments end in failure, but that doesn't make them not worth doing[/quote<] Wasn't it Tom Peters who said, "If you're not failing now and then, you're not really learning anything"?

            • UberGerbil
            • 8 years ago

            He probably did, but he would be neither the first nor the only.

            • bhtooefr
            • 8 years ago

            Of course, I’ve heard claims that, had Intel not gone with Itanium, Itanium still would’ve existed; it just would’ve been fabbed by someone else – TI, IBM, TSMC, one of those – and only used in HP systems.

    • boomshine
    • 8 years ago

    Decent CPU performance for a laptop (for a quad-core at 1.5GHz) plus great IGP performance 🙂

    • swaaye
    • 8 years ago

    I think that this chip is a prime candidate for building a very nifty 13.3″ notebook.

    Look at it decimate that Turion Neo X2. I had a 12″ HP dv2z with that and a discrete Radeon 3450, and this would just walk all over that machine while potentially still having 50% more battery life.

      • derFunkenstein
      • 8 years ago

      It needs to drop down to 25W before you can do that. Maybe a “real” dual-core instead of a crippled quad core will help.

        • OneArmedScissor
        • 8 years ago

        13.3″ isn’t really very different from 14″. Lots of laptop lines run the same configurations from 13.3″ to 17″. Even the much smaller 12″ Lenovos use 35W TDP CPUs. There’s no separate north bridge to pile on top of that, so it sort of is like having a 25W CPU of a previous generation.

          • derFunkenstein
          • 8 years ago

          My comment was in the context of the HP swaaye was talking about, which is one of those “axe head” tapered designs.

        • swaaye
        • 8 years ago

        I was thinking about the power output of that dv2z. The CPU was something like 17W, the 3410 about 8W and then there were also 690M and SB600. It was a rather steamy little machine and also rather slow, especially with Flash because the discrete Radeon 3xxx series can’t accelerate it.

      • FuturePastNow
      • 8 years ago

      I’d love a 13″ notebook with the A8-3500M. It doesn’t need a discrete GPU or an optical drive (in fact, I’d prefer that space be used for more battery). A 35W processor shouldn’t be a problem in a smaller chassis, especially since the FCH doesn’t use much power, as long as the chassis doesn’t need to be stupidly thin.

    • r00t61
    • 8 years ago

    Unfortunately, PC laptop vendors are still going to give us cheap plastic chassis with keyboard flex, and low-resolution, 16:9 aspect ratio TN panels with washed out colors and bad viewing angles whose deficiencies will be papered over with a glossy screen. Not to mention bad touchpads that still don’t recognize multi-touch gestures and fans that sound like my old 60 mm Delta at load.

    This is all part of the death spiral of reduced quality that has infected most laptops over the years.

      • OneArmedScissor
      • 8 years ago

      Look at business laptops, not Best Buy shelf models.

        • kamikaziechameleon
        • 8 years ago

        It’s the fact that someone would produce a product that they know is dysfunctional garbage. That’s what is so annoying about the glut of trashy mobile solutions. Desktops are guilty of the same horrid design, too, but while you can circumvent the horrid desktop offerings by constructing your own to-spec machine, the laptop market is still a giant pile of compromises.

        Apple wins for having one of the best builds out there right now, but it still falls short of my desired product, and not because of price: they don’t offer a discrete graphics card in any 13″ machine (my preferred mobile size, since it fits into most bags easily and isn’t annoying to actually carry around). The day a company offers the marriage of materials, components, and features I want, all in a size/weight I like, I’m down to buy it at almost any price under $2,500. Right now no one offers that, so I don’t have a laptop.

          • OneArmedScissor
          • 8 years ago

          You’re preaching to the choir. I probably hate how modern PCs are built more than anyone here, and I’m regularly chastised for it.

          Why? Because people buy them anyway. You can’t win.

      • Anvil
      • 8 years ago

      And there, my friends, is Apple’s whole business model.

        • r00t61
        • 8 years ago

        I’ve been looking for a laptop for a while, with a 17″ 16:10 matte screen, which I assumed was going to be an easy search. Apparently my ONLY choice is the friggin’ MacBook Pro at $2.5K?!? Not a single PC laptop vendor sells this configuration anymore; it’s all 16:9 glossy TN crap.

        What kind of topsy-turvy world are we living in where only Apple meets my product criteria? Next it’ll be dogs and cats living together.

          • swiffer
          • 8 years ago

          You shop badly. Look at the HP EliteBook 8740w for ~$2K.

            • Anonymous Coward
            • 8 years ago

            $2k vs $2.5k, I’d definitely get the Apple. I would probably pay 50% more for an Apple that seemed to have similar specs.

            • DeadOfKnight
            • 8 years ago

            Enterprise notebooks are even more overpriced than Apple. I’d probably go with the MBP as well.

            • no51
            • 8 years ago

            They do have the warranty built into the price. My HP nw8240 had a 3-year warranty. I had to send it in because the Wi-Fi was crapping out, and they fixed it without taking money from my pocket.

            • r00t61
            • 8 years ago

            I looked at it on your suggestion. It has last year’s non-Sandy Bridge CPU and a sort-of-middling 1680×1050 panel resolution, plus a workstation-class FirePro that I wouldn’t use.

            The integrated fingerprint sensor definitely makes up for its otherwise general overpriced enterprise crappiness, however.

            Besides, I’ve had too many good friends laid off from HP over the years to seriously consider using their stuff.

            You give advice badly. But try again soon.

      • Krogoth
      • 8 years ago

      Because laptops used to be business-only (starting at $1,499 for the low end) in the pre-2000 era, OEMs paid extra attention to chassis quality and details.

      Once we entered the 2001-2010 era, more and more people wanted portability without it costing an arm and a leg. Compromises had to be made to make laptops more affordable, and build quality was one of the first victims. It is no surprise that you cannot find IPS panels or metal chassis in non-business-grade laptops; obtaining those features costs an arm and a leg.

    • dpaus
    • 8 years ago

    17 pages!?! Wow, when you promise to “do it right rather than first,” you really deliver 🙂

    • thesmileman
    • 8 years ago

    In the conclusion you say about CPUs:
    “Are they fast enough?”

    Then, about IGPs, you say the question is similar but inverted; however, you say the same thing as before:

    “Are they fast enough?”

    I am guessing that on the first one you mean “Aren’t they fast enough?”

    Good article!

      • Hdfisise
      • 8 years ago

      Also, it looks like the Civ 5 Late Game View No Render graph is missing, but that’s possibly just me nitpicking.

      Have to say I enjoyed the article too, and I’m glad you took the time to source a dual-channel laptop as well!

    • Da_Boss
    • 8 years ago

    At first, I was disappointed by the results. Then, I reminded myself that CPU performance has been pretty much sufficient for years now.

    I like the approach AMD is taking by committing so much silicon real estate to the GPU. I think it’s a more forward-looking philosophy than Intel’s, and it will hopefully pay off in the end.

      • willmore
      • 8 years ago

      I’m sort of curious how much we’re all going to kick ourselves/pat ourselves on the back when we look back a year or so from now, when GPUs are used for a lot more common tasks and people with Llano devices are beating the piss out of more expensive SNB devices.

      Don’t wake me, I like this dream!

    • ssidbroadcast
    • 8 years ago

    I think the Windows Movie Maker performance was bad because the CPU recognized that you were trying to encode [i<]Big Bang Theory[/i<]. Performance slowed down because the CPU was wincing at how horrible that show is.

      • thesmileman
      • 8 years ago

      What are you talking about this show is as awesome as a Justin Bieber concert marathon!

        • ssidbroadcast
        • 8 years ago

        Yeah, now benchmark your discrete soundcard with a Bieber album.

      • Palek
      • 8 years ago

      I actually really enjoy Big Bang Theory. The writers are well-versed in matters of nerd-dom and actually show some respect for nerds. I find the dialogue fast and quite funny. And, of course, Sheldon is the best character in sitcoms since, well, forever. (Disclaimer: I’m only up to the first few episodes of Season 2.)

      Out of curiosity: can you give me an example of what you consider a funny sitcom/show?

        • obarthelemy
        • 8 years ago

        I think Big Bang is OK, for a sitcom. I hate the laugh track, though.
        I prefer regular comedy shows: Community is still better (you might like Abed), and Arrested Development (discontinued) tops both by far (you might like… nope, it’s impossible to like any of them). 30 Rock, Modern Family, and The Middle are OK.

          • Palek
          • 8 years ago

          The laugh track comes with the sitcom territory, so it’s pointless to complain about it. Studio bigwigs don’t like taking chances; the absence of the laugh track will confuse the, how shall we say, cranially unburdened viewers out there and make them think the show is not funny. That’s why the likes of AD, 30 Rock, Scrubs, etc. are so few and far between.

          Besides, I find Big Bang consistently funny so the laugh track is not a distraction.

          Oh yeah, for comedies without the laugh track, Scrubs is great. Dr. Cox and the Janitor are awesome.

            • obarthelemy
            • 8 years ago

            I can see why you like Sheldon…
            and, you’re welcome.

            • Palek
            • 8 years ago

            Oh, sorry, thanks for the suggestions! I really should check out AD. Haven’t had the chance but it receives so much praise I might have to buy it off Amazon.

            And by the way, I’m no scientist, nor do I pretend to be, and I would probably punch Sheldon in the face regularly if I were forced to live with him. But as a comedy character he is simply brilliant.

        • no51
        • 8 years ago

        The IT Crowd.

      • BobbinThreadbare
      • 8 years ago

      *audience erupts into laughter*
