AMD’s A8-3850 Fusion APU

Ever get the feeling you’re in an ever-repeating scenario like the one from the movie Groundhog Day? That’s been the mood here in Damage Labs lately. We finished our review of AMD’s mobile A-series APUs, and then we turned around and began testing the desktop variants of the same chip. Apparently, we’re being given the chance to write about APUs every day until we get it right. Trouble is, if our 10,000-word treatment of the mobile version wasn’t sufficient, we’re doomed to more sequels than a Ubisoft video game franchise. You see, the chip code-named “Llano” makes perfect sense in laptops, but its placement among desktop processors is somewhat more precarious.

The desktop A-series APUs

To understand why Llano’s footing on the desktop isn’t quite so sure as it is in laptops, you’ll first want to understand the chip itself. Of course, we’re going to point you to our massive and righteous coverage of the chip’s architecture and implementation, first and foremost. For those of you who aren’t, you know, really strong readers, we’ll offer the Cliff’s Notes version.

AMD calls this chip an APU, or accelerated processing unit, but most of the world would simply call it a highly integrated CPU, very much like its direct competition from Intel, the Sandy Bridge processors. Llano is AMD’s first processor based on GlobalFoundries’ 32-nanometer-plus-acronyms (SOI HKMG DSL) chip fabrication process, and the incredible transistor density offered by this process has allowed the integration of a whole host of functions into a single die. The chip’s four CPU cores are a tweaked version of the “Stars” core found in Phenom II processors. A move to 1MB of L2 cache per core and some incremental improvements to the execution engine have netted a claimed 6% increase in per-clock instruction throughput. The “Sumo” integrated graphics processor is the spitting image of the discrete “Redwood” GPU used in the Radeon HD 5670, though its UVD video processing block has been updated with broader codec acceleration. An integrated memory controller gives both the CPU and GPU cores access to twin channels of DDR3 memory, and 24 lanes of second-gen PCI Express connectivity offer a path to expansion cards and peripherals. Oh, and there are dual display output paths, so the chip can drive two monitors at once.

Phew. That’s a lotta stuff.

There are fairly straightforward reasons why a highly integrated chip like this one is best suited for laptops. Integration saves space and power, both of which are at a premium beneath a laptop’s keyboard. Also, Llano’s value proposition relies pretty heavily on its relatively beefy integrated graphics solution. In a laptop, you’re generally stuck with whatever GPU the system’s manufacturer chose to use. On the desktop, where popping in a video card is literally a snap, questions must be asked about the worth of that IGP.

Anyhow, we’ll explore those potentially difficult questions soon enough, but first we should discuss a number of other matters, starting with the basic specs of the new A-series APU lineup.

The A-Series models and key specs. Source: AMD.

The mobile versions of Llano are constrained by relatively tight 35W and 45W power and thermal envelopes, but the desktop versions get to breathe freely. As a result, both CPU and IGP clock speeds are substantially higher. The mobile A8-3500M we reviewed had a base CPU clock of 1.5GHz and a 444MHz IGP. By contrast, the desktop A8-3850’s four cores run at 2.9GHz, and the IGP clock is up to 600MHz. Those gains come at a price, in the form of steep increases in power consumption. The A8-3850 APU we have for review sports a very, um, healthy 100W TDP, nearly triple that of the A8-3500M.

We were expecting to see AMD’s Turbo Core dynamic clock speed technology deployed across the Llano lineup, but it hasn’t worked out that way. Only the 65W parts feature Turbo Core, while the 100W versions stick with the tried-and-true formula of aggressive base clock speeds.

AMD tells us the two 100W APU models shown above will start selling soon, although we’d better relay the exact wording, since it’s a little slippery. The official statement says: “Global availability will begin to ramp July 3rd.” It’s possible availability may be wider in some international markets, particularly China, initially. When it does hit stores over here, the A8-3850 will set you back $135, while the A6-3650 will list for $115. (We don’t yet have pricing on the 65W models, but AMD expects them to be priced pretty close to their 100W counterparts.)

The A8-3850 is the model we have on hand for review today, and its $135 list price puts it smack-dab between a couple of Sandy Bridge-based offerings from Intel. The most obvious competitor is the Core i3-2100, a dual-core, 3.1GHz processor that currently sells for $125. However, the i3-2100 is gimpy in one key area: graphics. Its HD 2000 IGP has only six of Sandy Bridge’s 12 execution units enabled. The closer competition is a slightly newer model, the Core i3-2105, which has the exact same CPU specs but includes the fully realized HD 3000 version of the Intel IGP, with all 12 execution units enabled. Intel very quietly introduced the Core i3-2105 in May, probably because Llano was close on the horizon. The i3-2105 is going for $140 online, making it the most direct competitor to the A8-3850. We have both the i3-2100 and i3-2105 on hand for testing, of course.

Despite their similar pricing, the matchup between the A8-3850 and the Core i3-2100/2105 involves a little bit of asymmetrical warfare. Llano has four cores, while the Core i3 has two—though it does support four execution threads, thanks to Hyper-Threading. The Intel processors are based on a much smaller slab of silicon, and they’ll fit into a much tighter 65W thermal envelope, as well.

For a lot of do-it-yourself PC builders, the A8-3850 may have an even more formidable challenger in the form of an incumbent offering from AMD. The Phenom II X4 840 has four cores at 3.2GHz, a 95W TDP, and sells for just $105 right now. That formula has made the X4 840 a favorite system guide rec here at TR. Of course, the X4 840 lacks integrated graphics, but for anyone looking to build a system with a discrete GPU, it may be the better choice.

As you might have guessed, Llano chips are probably more likely to make their way into relatively low-end systems, especially the type where a small footprint might be prized—small-form-factor desktops, all-in-one (AIO) systems a la the HP Omni series, and so on. For what it’s worth, AMD expects A6-based systems to start at about $500 and A8-based ones to start at around $600. Also for what it’s worth, we doubt many of those smaller PCs or AIOs will incorporate the 100W parts. The 65W parts are better suited to small-form-factor desktops and are likely to be more popular with the big PC builders. What’s more, we think the 35W/45W mobile APUs are the better fit for all-in-ones.

We have one more sad duty to perform on the pricing and positioning front, and that relates to the Dual Graphics feature of the A-series APUs. As you may know, the Llano IGP can team up with a discrete GPU of similar power via AMD’s CrossFire technology to deliver higher frame rates than either single GPU could achieve. In response to this fact, AMD’s marketing folks have created a host of new brand names—some for the Llano IGPs, and others representing the combined power of the Llano IGP and a discrete GPU.

Confused? You’ll be more confused once you understand it. Here, have a look at this table.

So here’s the deal. The left column shows various models of discrete Radeon GPUs. Across the top are the two Radeon brands assigned to the A8 and A6 APUs, Radeon HD 6550D and 6530D, respectively. In the middle, the strings ending in “D2” or “A2” show the model numbers of the combined Dual Graphics solutions, which may be affixed to a PC at Best Buy.

This is the point where Scooby looks into the camera, tilts his head, and says “Ruh?”

But think about it: all you need to do is look at the “Radeon HD 6550A2” sticker on a retail system, write it down, come home, and Google for it. Eventually, you’ll find a chart like the one above. Then, you can correlate the 6550A2 to the combination of the integrated Radeon HD 6530D and the discrete Radeon HD 6450A. After that, you can attempt to find a desktop graphics card analogous to the Radeon HD 6450A in order to get a feel for its approximate performance. Several hours later, you can give up, secure in the knowledge that no one really knows.

We do, however, know that the A8-3850’s IGP and our discrete Radeon HD 6670 interlock to become the Radeon HD 6690D2. Test results are forthcoming.
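For what it’s worth, the two combinations this article pins down decode like so. This little lookup covers only the pairs named above; the full chart has many more entries we won’t pretend to know:

```python
# Dual Graphics brand names -> (integrated GPU, discrete GPU), limited to the
# two combinations spelled out in the text above.
dual_graphics = {
    "Radeon HD 6690D2": ("Radeon HD 6550D", "Radeon HD 6670"),
    "Radeon HD 6550A2": ("Radeon HD 6530D", "Radeon HD 6450A"),
}

igp, discrete = dual_graphics["Radeon HD 6690D2"]
print(igp, "+", discrete)  # Radeon HD 6550D + Radeon HD 6670
```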

Turn your dial to Socket FM1

Because we can never get enough code names, let’s throw out a couple more. The mobile platform for Llano processors is code-named “Sabine,” as you surely recall, and its desktop equivalent is named “Lynx.” Lynx has, er, a few different spots than AMD’s prior desktop platforms. Since Llano incorporates so many types of connectivity, a reworked CPU socket was unavoidable. Ladies and gentlemen, say hello to Socket FM1 and one of the first chips to drop into it, the A8-3850 APU:


The Socket FM1-based A8-3850 (left) and the Socket AM3-based Phenom II X4 840 (right)

We’re a little surprised to see how far AMD went to preserve physical continuity amidst the electrical change. FM1 remains a traditional socket, with the pins still protruding from the underside of the CPU package. Since virtually all of Intel’s socketed products and AMD’s own Opterons have made the transition to an LGA-style scheme where the pins sit on the motherboard, we half expected a change. Socket FM1 is even compatible with the same heatsinks and retention mechanisms used for other recent AMD sockets, from AM2 through AM3+. We had no trouble fitting an older tower cooler onto our A8-3850 APU. The FM1 pinout is clearly different, though, with a total of 905 pins and a different gap arrangement, so the astute user shouldn’t have any confusion about which chips fit into which sockets.

The non-astute user will still totally mash those pins, but that can’t be helped.

Aside from the obvious physical differences, the desktop versions of Llano have a few other enhanced talents, including the ability to support higher DDR3 memory speeds of 1600 and 1866MHz. That capability is noteworthy because the higher-clocked IGP in those desktop APUs is almost certainly being held back at times by memory bandwidth limitations. Even so, we’d expect the vast majority of pre-built Llano desktops to ship with 1333MHz memory, at least initially.

For enthusiasts rolling their own PCs, though, the price difference between quality branded 1333MHz and 1600MHz DIMMs is almost nothing, so 1600MHz RAM could prove popular in that narrow window where folks care about graphics performance but not enough to purchase a discrete graphics card. (Both of those dudes will be really pumped about it.) On the other hand, 1866MHz memory still carries a substantial price premium—about $45 for a pair of 4GB DIMMs. That will change over time, though. The folks at Corsair tell us the spec for higher-speed DDR3 DRAMs has been approved by the JEDEC standards body, and DRAMs conforming to that spec are on the way. Once 1866MHz-capable DRAM chips are available in sufficient volume, that price premium should evaporate.
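Since the IGP’s appetite for bandwidth is the whole reason those speed grades matter, here’s a back-of-the-envelope sketch of theoretical peak throughput for a dual-channel DDR3 setup. These are peak numbers only; real-world figures come in lower:

```python
# Theoretical peak bandwidth for dual-channel DDR3: each 64-bit (8-byte)
# channel moves one transfer per MT/s, so peak GB/s = MT/s x channels x 8 / 1000.
def ddr3_peak_gbps(mts, channels=2, bytes_per_transfer=8):
    return mts * channels * bytes_per_transfer / 1000.0

for grade in (1333, 1600, 1866):
    print(f"DDR3-{grade}: {ddr3_peak_gbps(grade):.1f} GB/s")
```

DDR3-1600 works out to 25.6 GB/s of theoretical peak versus 21.3 GB/s for DDR3-1333, which is why the cheap upgrade is worth considering for IGP use.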

Of course, an all-new CPU platform requires all-new motherboards, like the one we used for testing, the Gigabyte A75M-UD2H, pictured above. The UD2H is a microATX board loaded with ports and connectors of all types.

AMD offers two “Fusion controller hub” south bridge chips for the Lynx platform, the A75 and the A55. Call me crazy, but I believe these are the exact same chips known as the A60M and A70M on the mobile side of the fence. The A75 is the pricier, higher-end model, distinguished by its support for two newer I/O standards, 6Gbps SATA and USB 3.0. Obviously, the Gigabyte board above is based on the A75. A whole host of the usual suspects will be introducing Lynx-based motherboards, and we have a roundup of some of the first boards today.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.
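In sketch form, that reporting step looks like this. The run count and scores below are invented for illustration; the point of the median is that one hiccup run can’t drag the reported number around the way it drags the mean:

```python
import statistics

# Report the median of at least three runs; the median damps the effect of a
# single outlier run better than the mean does. Scores here are hypothetical.
runs = [57.2, 49.8, 57.5]  # one run hit a background-task hiccup

print(statistics.median(runs))          # 57.2
print(round(statistics.mean(runs), 1))  # 54.8
```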

The test systems were configured like so:

Gigabyte 890GPA-UD3H (890GX north bridge, SB850 south bridge)
  Processors: Athlon II X3 455 3.3GHz, Phenom II X2 565 3.4GHz, Phenom II X4 840 3.2GHz, Phenom II X4 975 3.6GHz, Phenom II X4 980 3.7GHz, Phenom II X6 1075T 3.0GHz, Phenom II X6 1100T 3.3GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1333C7 DDR3 at 1333MHz, 8-8-8-20 2T
  Chipset drivers: AMD AHCI 1.2.1.263
  Audio: integrated SB850/ALC892 with Realtek 6.0.1.6235 drivers

Asus P5E3 Premium (X48 north bridge, ICH9R south bridge)
  Processors: Pentium Extreme Edition 840 3.2GHz, Core 2 Duo E6400 2.13GHz, Core 2 Quad Q9400 2.67GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 at 800MHz (7-7-7-20 2T), 1066MHz (7-7-7-20 2T), or 1333MHz (8-8-8-20 2T), depending on the CPU
  Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
  Audio: integrated ICH9R/AD1988B with Microsoft drivers

Asus P7P55D-E Pro (P55)
  Processors: Pentium G6950 2.8GHz, Core i3-560 3.33GHz, Core i5-655K 3.2GHz, Core i5-760 2.8GHz, Core i7-875K 2.93GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 at 1066MHz (7-7-7-20 2T) or 1333MHz (8-8-8-20 2T), depending on the CPU
  Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
  Audio: integrated P55/RTL8111B with Realtek 6.0.1.6235 drivers

Intel DX58SO2 (X58 north bridge, ICH10R south bridge)
  Processor: Core i7-990X 3.46GHz
  Memory: 12GB (6 DIMMs) Corsair CMP12GX3M6A1600C8 DDR3 at 1333MHz, 8-8-8-20 2T
  Chipset drivers: INF update 9.1.1.1020, Rapid Storage Technology 9.5.0.1037
  Audio: integrated ICH10R/ALC892 with Realtek 6.0.1.6235 drivers

Gigabyte X58A-UD5 (X58 north bridge, ICH10R south bridge)
  Processors: Core i7-950 3.06GHz, Core i7-970 3.2GHz, Core i7-980X Extreme 3.3GHz
  Memory: 12GB (6 DIMMs) Corsair CMP12GX3M6A1600C8 DDR3 at 1333MHz, 8-8-8-20 2T
  Chipset drivers: INF update 9.1.1.1020, Rapid Storage Technology 9.5.0.1037
  Audio: integrated ICH10R/ALC889 with Realtek 6.0.1.6235 drivers

Asus P8P67 Deluxe (P67)
  Processors: Core i3-2100 3.1GHz, Core i5-2400 3.1GHz, Core i5-2500K 3.3GHz, Core i7-2600K 3.4GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 at 1333MHz, 8-8-8-20 2T
  Chipset drivers: INF update 9.2.0.1016, Rapid Storage Technology 10.0.0.1046
  Audio: integrated P67/ALC889 with Microsoft drivers

Gigabyte A75M-UD2H (A75 FCH)
  Processor: AMD A8-3850 2.9GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 at 1333MHz, 8-8-8-20 2T
  Chipset drivers: AMD AHCI 1.2.1.296, AMD USB 3.0 1.0.0.52
  Audio: integrated A75 FCH/ALC889 with Realtek 6.0.1.6235 drivers

Jetway NC94FL-525-LF (NM10)
  Processor: Atom D525 1.8GHz
  Memory: 4GB (2 DIMMs) Corsair CM2X2048-8500C5D DDR2 at 800MHz, 5-5-5-18 2T
  Chipset drivers: INF update 9.1.1.1020, Rapid Storage Technology 9.5.0.1037
  Audio: integrated NM10/ALC662 with Realtek 6.0.1.6235 drivers

MSI E350IA-E45 (Hudson M1 FCH)
  Processor: AMD E-350 1.6GHz
  Memory: 4GB (2 DIMMs) Corsair CMD8GX3M4A1333C7 DDR3 at 1066MHz, 7-7-7-20 2T
  Chipset drivers: AMD AHCI 1.2.1.275
  Audio: integrated Hudson M1/ALC887 with Realtek 6.0.1.6235 drivers

They all shared the following common elements:

  Hard drive: Corsair Nova V128 SATA SSD
  Discrete graphics: Asus ENGTX460 TOP 1GB (GeForce GTX 460) with ForceWare 260.99 drivers
  OS: Windows 7 Ultimate x64 Edition
  Power supply: PC Power & Cooling Silencer 610W

Also, for integrated graphics testing, we used the following system configs:

Intel DH67BL (H67)
  Processors: Core i3-2100 3.1GHz, Core i3-2105 3.1GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 at 1333MHz, 8-8-8-20 2T
  Chipset drivers: INF update 9.2.0.1016, Rapid Storage Technology 10.0.0.1046
  Audio: integrated H67/ALC892 with Realtek 6.0.1.6235 drivers
  Graphics: Intel HD 2000 (i3-2100) and Intel HD 3000 (i3-2105) with 8.15.10.2361 drivers; Radeon HD 6670 with 8.862-110607a-12-249E drivers

Gigabyte A75M-UD2H (A75 FCH)
  Processor: AMD A8-3850 2.9GHz
  Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 at 1333MHz and 1600MHz, 8-8-8-20 2T
  Chipset drivers: AMD AHCI 1.2.1.296, AMD USB 3.0 1.0.0.52
  Audio: integrated A75 FCH/ALC889 with Realtek 6.0.1.6235 drivers
  Graphics: Radeon HD 6550D and Radeon HD 6670 with 8.862-110607a-12-249E drivers

Thanks to Asus, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

The test systems’ Windows desktops were set at 1920×1200 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we ran Cinebench’s multithreaded rendering test.
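From samples like the ones that meter logs, total task energy falls out of a simple integration. The sketch below uses invented (time, watts) pairs, not our actual logging code:

```python
# Trapezoidal integration of (seconds, watts) samples into joules. The sample
# values are made up for illustration; the power meter logs similar pairs.
samples = [(0.0, 95.0), (1.0, 168.0), (2.0, 171.0), (3.0, 169.0), (4.0, 97.0)]

energy_j = sum(
    (t2 - t1) * (p1 + p2) / 2.0
    for (t1, p1), (t2, p2) in zip(samples, samples[1:])
)
print(f"{energy_j:.1f} J over {samples[-1][0]:.0f} s")  # 604.0 J over 4 s
```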

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

We typically start with some synthetic tests of the CPUs’ memory subsystems, just to weed out any new readers who might be intimidated by such things. Can’t become too successful, you know. These results don’t track directly with real-world performance, but they do give us some insights into the CPU and system architectures involved. The graph for this first test is pretty crowded, even though we’ve tried to be selective, choosing only a subset of the CPUs tested. This test is multithreaded, so more cores—with associated L1 and L2 caches—can lead to higher throughput.

Peer closely at the 4MB block size, and you’ll see that the A8-3850 achieves much higher throughput than the Phenom II X4 840. That’s the result of the A8’s larger 1MB L2 caches ganging up. In fact, the A8-3850’s throughput at 4MB is higher than the Phenom II X4 980’s, even though the X4 980 has a 6MB L3 cache. Otherwise, no major surprises here.
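Here’s the arithmetic behind that cache explanation. The per-core L2 capacities are those of the two chips; the even four-way split of the block is our simplifying assumption about how the multithreaded test divides its work:

```python
# Does each thread's share of a 4MB block fit in one core's L2 cache?
block_bytes = 4 * 1024 * 1024
threads = 4
per_thread = block_bytes // threads      # 1MB per thread, assuming an even split

llano_l2 = 1 * 1024 * 1024               # A8-3850: 1MB L2 per core
phenom_l2 = 512 * 1024                   # Phenom II X4 840: 512KB L2 per core

print(per_thread <= llano_l2)   # True: the slice stays in L2
print(per_thread <= phenom_l2)  # False: the slice spills out of L2
```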

The A8-3850 is a little slower in the Stream bandwidth test than its relatives in the AMD family tree. I’d blame the need to share bandwidth with the IGP, but we’re using a discrete graphics card for the strictly CPU-oriented portion of our tests.

The 3850’s memory access latencies are a little bit higher than the rest of the Phenom II/Athlon II lineup, as well. One contributor to this result is the A8’s larger L2 caches, which have 20 cycles of latency, versus 15 cycles for the Phenom II X4 840’s L2s. That’s a relatively minor factor, though. The additional latency must be coming from other places.
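For reference, latency tests of this sort typically use pointer chasing, where each load’s address depends on the previous load, so accesses can’t overlap. Here’s a sketch of that structure in Python; interpreter overhead swamps the actual memory latency, so the printed number illustrates the method, not real DRAM timings:

```python
import random
import time

# Build a random single-cycle permutation, then walk it. Because each access
# depends on the previous result, the loads are fully serialized, which is
# exactly what a memory-latency benchmark wants.
n = 1 << 20
order = list(range(n))
random.shuffle(order)
chain = [0] * n
for i in range(n - 1):
    chain[order[i]] = order[i + 1]
chain[order[-1]] = order[0]  # close the cycle

idx = 0
start = time.perf_counter()
for _ in range(n):
    idx = chain[idx]          # serialized, dependent accesses
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1e9:.1f} ns per dependent access (interpreter overhead included)")
```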

Battlefield: Bad Company 2

This is our first real-world performance result, and we should set the stage a little bit. We are testing CPU performance alone here, since all of the test systems including the Llano share the same type of discrete graphics card. We’ll test the A8’s integrated graphics later.

The A8-3850 delivers playable frame rates in this game, as do most of the modern CPUs we’ve tested, save for the low-power Atom and E-350 parts. The A8’s direct competitor, the Core i3-2100, is measurably faster. AMD has made some progress, however. Despite its 300MHz lower core clock, the A8-3850 clearly outperforms the Phenom II X4 840 in this test. Our guess is that this game engine takes good advantage of the 3850’s larger L2 caches.

Civilization V

The developers of Civ V have cooked up a number of interesting benchmarks, two of which we used here. The first one tests a late-game scenario where the map is richly populated and there’s lots happening at once. As you can see from the settings screen below, we didn’t skimp on the image quality settings for graphics, either; doing so wasn’t necessary to tease out clear differences between the CPUs. Civ V can also run the same tests without updating the screen, so we can eliminate any overhead or bottlenecks introduced by the video card and its driver software. We’ve reported those “no render” scores, as well.

The next test populates the screen with a large number of units and animates them all in parallel.

The results are fairly consistent across most of these tests. The A8-3850 largely trails the Core i3-2100, but it generally produces frame rates similar to the Phenom II X4 840’s.

F1 2010

CodeMasters has done a nice job of building benchmarks into its recent games, and F1 2010 is no exception. We scripted up test runs at three different display resolutions, with some very high visual quality settings, to get a sense of how much difference a CPU might make in a real-world gaming scenario where GPU bottlenecks can come into play.

We also went to some lengths to fiddle with the game’s multithreaded CPU support in order to get it to make the most of each CPU type. That effort eventually involved grabbing a couple of updated config files posted on the CodeMasters forum, one from the developers and another from a user, to get an optimal threading map for the Phenom II X6. What you see below should be the best possible performance out of each processor.

Our discrete GPU becomes the main performance limiter at higher resolutions, but the A8-3850 trails the Core i3-2100 by substantial margins at lower resolutions. The 3850 continues pulling off the rather neat trick of outperforming the Phenom II X4 840, at least.

Metro 2033

Metro 2033 also offers a nicely scriptable benchmark, and we took advantage by testing at four different combinations of resolution and visual quality.

You can see how the graphics card gradually becomes the primary performance limiter as the visual quality settings in the game rise. Our emerging narrative, which places the A8-3850 somewhat behind the Core i3-2100 and ever so slightly ahead of the Phenom II X4 840, isn’t disturbed by these results.

Source engine particle simulation

Next up is a test we picked up during a visit to Valve Software, the developers of the Half-Life games. They had been working to incorporate support for multi-core processors into their Source game engine, and they cooked up some benchmarks to demonstrate the benefits of multithreading.

This test runs a particle simulation inside of the Source engine. Most games today use particle systems to create effects like smoke, steam, and fire, but the realism and interactivity of those effects are limited by the available computing horsepower. Valve’s particle system distributes the load across multiple CPU cores.

Here’s a very nicely multithreaded application, and it gives us a sense of something important to note: although the Core i3-2100 has only two cores—with its four execution threads coming via Hyper-Threading—the Intel dual-core is able to outperform many AMD quad-core chips, including the A8-3850. In this age of very different CPU architectures, core counts and clock speeds are no clear indicator of how a processor will perform. In this case, the dual-core Sandy Bridge easily outruns the quad-core A8-3850, even though the application takes full advantage of four threads.
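To put numbers on that idea, here’s a toy throughput model. The per-core speed ratio and the Hyper-Threading uplift below are illustrative assumptions, not measurements; the point is only that per-core performance and SMT matter as much as the raw core count:

```python
# Toy model: throughput = cores x per-core speed x (1 + SMT uplift).
# The 1.8x per-core advantage and 25% Hyper-Threading uplift are assumed
# numbers for illustration, not measured figures.
def throughput(cores, per_core_speed, smt_uplift=0.0):
    return cores * per_core_speed * (1 + smt_uplift)

i3_2100 = throughput(2, 1.8, smt_uplift=0.25)  # two fast cores plus HT
a8_3850 = throughput(4, 1.0)                   # four slower cores
print(i3_2100, a8_3850)  # 4.5 4.0
```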

Productivity

SunSpider JavaScript performance

7-Zip file compression and decompression

Here’s a case where the A8’s four cores give it a real advantage over the Core i3-2100.
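In sketch form, the reason extra cores pay off here is that compression work splits into independent chunks. This toy version uses zlib from Python’s standard library, which releases the GIL while it works, so the threads genuinely run in parallel; 7-Zip’s LZMA threading is considerably more sophisticated, but the parallelism is the same idea:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Split the input into fixed-size chunks and compress them concurrently.
# zlib releases the GIL during compression, so threads scale with cores here.
def parallel_compress(data: bytes, workers: int = 4, chunk_size: int = 1 << 20) -> list:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: zlib.compress(c, 6), chunks))

data = b"the quick brown fox jumps over the lazy dog " * 100_000
compressed = parallel_compress(data)
assert b"".join(zlib.decompress(c) for c in compressed) == data
```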

TrueCrypt disk encryption

This full-disk encryption suite includes a performance test, for obvious reasons. We tested with a 500MB buffer size and, because the benchmark spits out a lot of data, averaged and summarized the results in a couple of different ways.
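For the curious, a TrueCrypt-style throughput test boils down to timing work over a big in-memory buffer and reporting MB/s. Python’s standard library has no AES, so SHA-256 stands in for the cipher in this sketch, and we shrink the buffer; only the shape of the benchmark matches:

```python
import hashlib
import time

# Time one pass over a large in-memory buffer and report throughput.
# SHA-256 is a stand-in here; TrueCrypt's benchmark runs its ciphers (AES,
# Serpent, Twofish) over the buffer instead.
buf = b"\x00" * (64 * 1024 * 1024)  # 64MB stand-in for the 500MB buffer

start = time.perf_counter()
hashlib.sha256(buf).digest()
elapsed = time.perf_counter() - start
mb_per_s = len(buf) / elapsed / 1e6
print(f"{mb_per_s:.0f} MB/s")
```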

TrueCrypt has added support for Intel’s custom-tailored AES-NI instructions since we last visited it, so encryption with the AES algorithm, in particular, should be very fast on the Intel CPUs that support those instructions. Those CPUs include the six-core Gulftowns, the dual-core Clarkdales, and Sandy Bridge.

The fastest CPUs in the chart above all share a common trait: hardware acceleration of AES encryption. The Core i3-2100’s Sandy Bridge cores would, in theory, support this feature, but the i3-2100 has had that support disabled at the factory. Intel would like you to buy a more expensive processor if you want that feature. Thus, the A8-3850 outperforms the i3-2100 by a pretty good-sized margin.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis. I’ve included explanations of each test from Dr. Müller below.

Particle Image Velocimetry (PIV) is being used for flow measurement in air and water. The medium (air or water) is seeded with tiny particles (1..5um diameter, smoke or oil fog in air, titanium dioxide in water). The tiny particles will follow the flow more or less exactly, except may be in very strong sonic shocks or extremely strong vortices. Now, two images are taken within a very short time interval, for instance 1us. Illumination is a very thin laser light sheet. Image resolution is 1280×1024 pixels. The particles will have moved a little with the flow in the short time interval and the resulting displacement of each particle gives information on the local flow speed and direction. The calculation is done with cross-correlation in small sub-windows (32×32, or 64×64 pixel) with some overlap. Each sub-window will produce a displacement vector that tells us everything about flow speed and direction. The calculation can easily be done multithreaded and is implemented in picCOLOR with up to 8 threads and more on request.
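The cross-correlation step from that PIV description can be shown in miniature. Real PIV code uses FFT-based correlation on 32×32 or 64×64 sub-windows; this brute-force version just demonstrates the principle on tiny hand-made “frames”:

```python
# Find the shift that best aligns a sub-window between two frames: try every
# candidate displacement and keep the one with the highest correlation score.
def best_shift(win_a, win_b, max_shift=2):
    h, w = len(win_a), len(win_a[0])
    best_score, best_dxy = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        score += win_a[y][x] * win_b[y2][x2]
            if best_score is None or score > best_score:
                best_score, best_dxy = score, (dx, dy)
    return best_dxy  # displacement vector -> local flow speed and direction

# A bright "particle" at (row 2, col 2) in frame A moves to (row 2, col 3) in B:
frame_a = [[0] * 6 for _ in range(6)]; frame_a[2][2] = 255
frame_b = [[0] * 6 for _ in range(6)]; frame_b[2][3] = 255
print(best_shift(frame_a, frame_b))  # (1, 0)
```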

Real Time 3D Object Tracking is used for tracking of airplane wing and helicopter blade deflection and deformation in wind tunnel tests. Especially for comparison with numerical simulations, the exact deformation of a wing has to be known. An important application for high speed tracking is the testing of wing flutter, a very dangerous phenomenon. Here, a measurement frequency of 1000Hz and more is required to solve the complex and possibly disastrous motion of an aircraft wing. The function first tracks the objects in 2 images using small recognizable markers on the wing and a stereo camera set-up. Then, a 3D-reconstruction follows in real time using matrix conversions. . . . This test is single threaded, but will be converted to 3 threads in the future.

Multi Barcodes: With this test, several different bar codes are searched on a large image (3200×4400 pixel). These codes are simple 2D codes, EAN13 (=UPC) and 2 of 5. They can be in any rotation and can be extremely fine (down to 1.5 pixel for the thinnest lines). To find the bar codes, the test uses several filters (some of them multithreaded). The bar code edge processing is single threaded, though.

Label Recognition/Rotation is being used as an important pre-processing step for character reading (OCR). For this test in the large bar code image all possible labels are detected and rotated to zero degree text rotation. In a real application, these rotated labels would now be transferred to an OCR-program – there are several good programs available on the market. But all these programs can only accept text in zero degree position. The test uses morphology and different filters (some of them multithreaded) to detect the labels and simple character detection functions to locate the text and to determine the rotational angle of the text. . . . This test uses Rotation in the last important step, which is fully multithreaded with up to 8 threads.

Despite some see-sawing back and forth in position across the individual tests, the i3-2100 and A8-3850 remain pretty closely matched overall, with the Core i3 retaining a small lead.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.

Windows Live Movie Maker 14 video encoding

For this test, we used Windows Live Movie Maker to transcode a 30-minute TV show, recorded in 720p .wtv format on my Windows 7 Media Center system, into a 320×240 WMV file appropriate for mobile devices.

Even though the two processors we’re comparing use very different approaches to getting the work done—one is only dual-core, for goshsakes—they achieve near parity in video encoding performance.

3D modeling and rendering

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

POV-Ray rendering

We’re using the latest beta version of POV-Ray 3.7 that includes native multithreading and 64-bit support.

Valve VRAD map compilation

This next test processes a map from Half-Life 2 using Valve’s VRAD lighting tool. Valve uses VRAD to pre-compute lighting that goes into games like Half-Life 2.

The near parity between the A8-3850 and the Core i3-2100 continues through our rendering tests.

Scientific computing

MyriMatch proteomics

Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He has provided us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of proteins. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.
In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
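The job-queue scheme described above is straightforward to picture in code. Here’s a simplified sketch of the pattern — our own illustration, not MyriMatch’s actual implementation, with a trivial stand-in for the real spectrum-matching work:

```python
import queue
import threading

def run_jobs(proteins, num_threads):
    """Split the database into num_threads * 10 jobs and let worker
    threads pull jobs from a shared queue until it is drained."""
    num_jobs = num_threads * 10
    chunk = max(1, len(proteins) // num_jobs)
    jobs = queue.Queue()
    for i in range(0, len(proteins), chunk):
        jobs.put(proteins[i:i + chunk])

    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job = jobs.get_nowait()   # grab the next job, if any remain
            except queue.Empty:
                return                    # queue drained: this thread is done
            result = [p.upper() for p in job]  # stand-in for spectrum matching
            with lock:                    # serialize access to shared results
                processed.extend(result)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed

# With 6714 proteins and 4 threads, each of the 40 jobs holds roughly
# 168 sequences, matching the example in the text.
```

Because each thread pulls a fresh job only when it finishes its current one, faster threads naturally take on more jobs, which is what keeps CPU idle time low.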

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. I should mention that performance scaling in MyriMatch tends to be limited by several factors, including memory bandwidth, as David explains:

Inefficiencies in scaling occur from a variety of sources. First, each thread is comparing to a common collection of tandem mass spectra in memory. Although most peptides will be compared to different spectra within the collection, sometimes multiple threads attempt to compare to the same spectra simultaneously, necessitating a mutex mechanism for each spectrum. Second, the number of spectra in memory far exceeds the capacity of processor caches, and so the memory controller gets a fair workout during execution.

Here’s how the processors performed.

STARS Euler3d computational fluid dynamics

Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here.

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.
The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. Charles tells me these CFD solvers are very floating-point intensive, but they’re oftentimes limited primarily by memory bandwidth. He has modified the benchmark for us in order to enable control over the number of threads used. Here’s how our contenders handled the test with optimal thread counts for each processor.

Neither the A8 nor the Core i3 is a paragon of scientific computing performance, but we’ve included these results for your edification.

Power consumption and efficiency

We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we ran Cinebench’s multithreaded rendering test.

We’ll start with the show-your-work stuff, plots of the raw power consumption readings. We’ve broken things down by socket type in order to keep them manageable.

We can slice up these raw data in various ways in order to better understand them. We’ll start with a look at idle power, taken from the trailing edge of our test period, after all CPUs have completed the render. Next, we can look at peak power draw by taking an average from the ten-second span from 15 to 25 seconds into our test period, when the processors were rendering.

As we noted in our review of the mobile Llano, lots of the engineering effort for this chip went into reducing power consumption at idle. That paid off in the form of better battery life in the mobile system, and we can see the corresponding reduction in idle power draw on the desktop side of things. The extensive integration and core-level power gating in Llano make it much more efficient at idle than the Phenom II X4 840. Even our Core i3-2100 test system draws a few more watts at idle than the A8-3850 system.

Once all four of the A8’s cores are occupied, though, we’re reminded of its 100W TDP rating. Power draw on our Core i3-2100 test rig is 38W lower under load. Also, even though it’s based on a 32-nm CPU at a lower clock speed, our A8-3850 system draws nearly as much power under load as our Phenom II X4 840 system. That suggests GlobalFoundries’ 32-nm process may not be paying the dividends one would expect, at least not at these relatively high clock speeds.

We can highlight power efficiency by looking at total energy use over our time span. This method takes into account power use both during the render and during the idle time. We can express the result in terms of watt-seconds, also known as joules. (In this case, to keep things manageable, we’re using kilojoules.) Note that since we had to expand the duration of the test periods for the Pentium EE 840, Core 2 Duo E6400, Atom D525, and AMD E-350, we’re including data from a longer period of time for those CPUs.

We can pinpoint efficiency more effectively by considering the amount of energy used for the task alone. Since the different systems completed the render at different speeds, we’ve isolated the render period for each system. We’ve then computed the amount of energy used by each system to render the scene. This method should account for both power use and, to some degree, performance, because shorter render times may lead to lower energy consumption.
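To make the bookkeeping concrete, here’s a small sketch — our own, with made-up numbers — of how idle power, peak power, and task energy fall out of a one-reading-per-second log of wall-socket power:

```python
def window_avg(samples, start, end):
    """Average power (W) over [start, end) seconds, given 1 Hz sampling."""
    window = samples[start:end]
    return sum(window) / len(window)

def task_energy_kj(samples, start, end):
    """Energy used during [start, end) alone, in kilojoules.
    At one sample per second, each watt-sample contributes one joule."""
    return sum(samples[start:end]) / 1000.0

# Hypothetical trace: 10 s idle at 40 W, a 30 s render at 120 W, 10 s idle.
trace = [40] * 10 + [120] * 30 + [40] * 10

print(window_avg(trace, 15, 25))      # peak draw, averaged over the 15-25 s span
print(window_avg(trace, 40, 50))      # idle draw, taken from the trailing edge
print(task_energy_kj(trace, 10, 40))  # render energy: 30 s x 120 W = 3.6 kJ
```

This is why a faster chip can come out ahead on task energy even with a higher peak draw: the render window it integrates over is shorter.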

The A8-3850’s combination of performance and power draw under load doesn’t make it the most efficient of CPUs, but it does represent a very slight gain over the Phenom II X4 840.

Integrated graphics performance

Now that we’ve beaten the CPU performance horse to death and well beyond, let’s take a look at integrated graphics performance. Before we start, I should mention a few considerations.

First, you’ll notice below that we’ve added the Core i3-2105 to the mix. We didn’t include it explicitly in the CPU performance tests because its CPU specs and performance are identical to the Core i3-2100’s. However, the 2105 has better integrated graphics and is probably the closest match, price-wise, to the A8-3850.

Next, we’ve added a few more configurations. We’ve tested both the Core i3-2100 and the A8-3850 with a relatively low-priced discrete graphics card, the Radeon HD 6670. We used the 1GB GDDR5 version of the 6670, which currently sells for $99.99. Having this card in the mix allows us to see how a relatively inexpensive discrete GPU compares to the IGP solutions. The 6670 is capable of running in a Dual Graphics team with the A8’s integrated Radeon, so we’ve included that, as well. We also benchmarked the A8-3850’s IGP while using 1600MHz memory (rather than the 1333MHz speed we used everywhere else). This config will give us a taste of how faster RAM speeds affect IGP performance.

Finally, although we are comparing the performance of the Llano and Sandy Bridge IGPs head to head, there are in fact major differences in texture filtering and image quality between them. The Intel IGP isn’t doing as much work and is producing a much lower quality result. For more on this issue, see this page of our mobile Llano review.

Bad Company 2

Yes, we used a relatively high resolution of 1680×1050 for much of our IGP testing. That’s in part because we had some trouble finding a common resolution exposed in the different video drivers we were using. We’d probably have tested at 1440×900, had it been consistently available.

Regardless, the A8’s IGP cranks out acceptable frame rates, with a low of 25 FPS. Our seat-of-the-pants evaluation during testing was quite positive. Obviously, the Intel IGPs can’t keep up; the HD 3000’s frame rates are roughly half the A8’s and are nowhere near playable.

Bumping the memory clock up to 1600MHz doesn’t do wonders for the Llano IGP, nor does it make that IGP much more competitive with the discrete Radeon HD 6670, which is unquestionably superior. That big gap between the Radeon HD 6550D IGP and the Radeon HD 6670 discrete GPU probably helps explain why there’s not much performance gain when Dual Graphics is enabled. Most likely, the two GPUs aren’t splitting work evenly; instead, the 6670 probably renders two frames for every one rendered by the IGP. That means performance won’t scale as well as it would in a true 1:1 teaming config.

Borderlands

Dual Graphics doesn’t make an appearance here because, unlike regular CrossFire setups, it’s not compatible with DirectX 9 games like this one. At these settings, the A8’s IGP can’t really deliver acceptable performance, and the Intel IGPs are hopeless.

Civilization V

This test uses a DirectCompute program to compress textures, making it a true GPU computing application. We weren’t able to get this test running on the Intel IGP in our mobile Llano review, but it works here. Even so, the Intel solution is under half the speed of the Radeon HD 6550D IGP.

Civ V‘s leader benchmark is a nice test of high-quality pixel shaders, and the results track with a lot of what we’ve seen already. There is a bit of performance uplift on the Llano IGP with 1600MHz memory in this test, interestingly enough.

The unit benchmark stresses the CPU quite a bit, and the Core i3-2100 comes by its higher score honestly.

This is the true test of pure gameplay performance in Civ V, and the gulf between the Intel and AMD IGPs is enormous. The one negative for AMD here is the Dual Graphics solution’s performance, which is less than ideal, to say the least.

StarCraft II

We should point out that, unlike some of these games, one can play StarCraft II reasonably well on an Intel IGP, just not at these image quality settings.

Interestingly enough, when both systems are using the discrete Radeon HD 6670, the A8-3850’s CPU performance becomes an issue. The Core i3-2100 cranks out higher average and minimum frame rates.

Portal 2

I take it you’re getting the picture by now. The A8-3850’s integrated graphics processor is two to three times faster than the Intel HD 3000 IGP. Meanwhile, the discrete Radeon HD 6670 is substantially quicker than any integrated solution.

Conclusions

Wow, we have much work to do in order to make sense of a tremendous amount of test data. Let’s start by considering CPU performance by itself—without graphics, that is—in the context of one of our famous value scatter plots.

As a CPU alone, the A8-3850 isn’t bad, but it isn’t exactly a revelation, either. Both the Core i3-2100 and the Phenom II X4 840 are slightly faster overall in our CPU test suite, and either one will set you back a little bit less than the A8-3850’s $135 asking price.

Add in the question of power efficiency, and the contrasts grow starker. The A8-3850 is very efficient at idle, especially versus comparable 45-nm AMD processors like the Phenom II X4 840. The A8-3850’s peak power draw under load, though, is dramatically higher than the Core i3-2100’s (and is comparable to the Phenom II X4 840’s). That difference in TDPs, from 65W for the Core i3 to 100W for the A8, is for real. In order to approach the performance of the Core i3-2100’s dual Sandy Bridge cores, AMD had to push four of its cores to the hairy end of the voltage curve.

If there is a saving grace for Llano on the desktop, it’s got to be the performance of its integrated graphics processor. We didn’t factor IGP speeds into the value plot above, but if you saw the last few pages, you know that it’s no contest—Llano’s graphics are over twice as fast as the Intel HD 3000. As we saw in our review of the mobile Llano, the difference between these IGPs is often the difference between playability and futility. Llano’s combination of superior IGP hardware and real Radeon drivers simply ends the conversation about which graphics solution is best.

We asked a couple of questions about CPUs and IGPs at the end of our review of the mobile Llano. First, we wondered whether mobile CPUs had grown sufficiently fast that additional performance didn’t count for much. Second, we wondered whether integrated graphics solutions were ever fast enough to add real value. Our answers to those questions led us to give a provisional thumbs up to the mobile version of Llano.

On the desktop, though, those same questions play out rather differently. The expectations for CPU performance are much higher, for one thing. We’d happily absorb the extra speed we could get out of a quad-core Sandy Bridge chip, if possible. For another, there’s not much of a CPU performance gap between the A8-3850 and its closest competitor, anyhow. Yes, the Core i3-2100 is faster, but its true advantage is substantially lower power draw under load. The graphics question looks different, too, in light of the fact that snapping in a $99 graphics card like the Radeon HD 6670 will nearly double your performance versus the A8-3850’s Radeon IGP. Consider that you’d save 30 bucks by going for a Phenom II X4 840, and that you could put that 30 bucks toward a discrete graphics card. Suddenly, you’re awfully close to making the A8-3850 seem irrelevant.

Indeed, the key to understanding a chip like this one is finding a relevant market for its strengths. Frankly, we’re struggling a bit with placing the A8-3850. This APU’s 100W TDP makes it unfit for small-form-factor desktops and all-in-one systems, where its relatively strong IGP would be an asset. The A8-3850 might be a good match for a home theater PC that does light gaming duty, but only if that box has good, dynamic cooling that can remain quiet at idle and range up to cover the relatively high TDP. If that kind of cooling is on tap, though, the combination of, say, a Core i3-2100 and a discrete GPU might serve better.

You get the idea. We could go on like this.

The truth is, Llano’s primary strength is as a mobile processor, and a 100W desktop variant takes it to an awkward place. Our sense is that the 65W models of the A-series APUs are likely to be more successful with major PC makers and more interesting to almost anyone with whom Llano’s basic value proposition resonates. Yes, the CPU performance of the 65W versions will be lower, but those models have the potential to deliver a more compelling mix of overall CPU and GPU performance within certain constraints, which is kind of the point of an extensively integrated APU.

Comments closed
    • indeego
    • 8 years ago

    Passmark after a month of getting good sample data [url=http://www.cpubenchmark.net/cpu.php?cpu=AMD+A8-3850+APU+with+Radeon+HD+Graphics]sees it as the best valued processor (@ stock)[/url]

    • MrDigi
    • 8 years ago

    Looks like the A8 IGP performance isn’t strong enough to make it much of a budget desktop gamer machine. Only marketing hope is selling it as a budget quad core to those that don’t realize an Intel dual core will outperform it. Likely Llano will be short lived since it clearly needs more power efficient CPU and GPU cores. This shouldn’t be a supprise that a process shrink didn’t create a miracle for AMD. Of course they can always drop prices to keep in the game.

    • reneedescartes
    • 8 years ago
      • dpaus
      • 8 years ago

      Just got back from your first Pride Parade?

      • Krogoth
      • 8 years ago

      here comes the banhammer!

        • dpaus
        • 8 years ago

        Dun. But so far, they missed the duplicate post in the “Air Conditioner Appreciation Day” shortbread.

    • Nullvoid
    • 8 years ago

    Is anyone able to hazard a guess as to how a 9800gt would compare to the integrated graphics on the a8-3850?

      • FuturePastNow
      • 8 years ago

      The 9800 will have quite a bit more gaming power, for a lot higher power usage, but won’t be as good at offloading video playback.

      • Krogoth
      • 8 years ago

      A8-3850’s intergrated GPU is faster than an ancient 9800PRO (R350). The only thing going for 9800PRO is memory bandwidth.

        • flip-mode
        • 8 years ago

        GT! GT! GT! GT! GEEZ!

          • Meadows
          • 8 years ago

          Hah owned.

      • mczak
      • 8 years ago

      A quick search seems to indicate the 9800gt would be about 25-30% faster (based on 3dmark scores).

        • willmore
        • 8 years ago

        I’m going to guess that if you use a larger resolution screen, you’re going to see even more advantage given to the 9800gt. Memory bandwidth helps out quite a bit there. I’d guess AA would benefit from it, too. But, numbers are better than guesses. :)

    • sluggo
    • 8 years ago

    ” Frankly, we’re struggling a bit with placing the A8-3850.”

    This may be because you’re not a high-volume PC OEM. HP and Dell have long used bottom-feeder GPUs on add-in cards in their mainstream PCs just to get to playable frame rates in games. Using this APU instead allows them to eliminate the add-in card altogether and still beat Intel-based systems that use those bottom-feeder cards in both performance and price.

    The Tech Report review of a chip like this is like Road and Track reviewing the Toyota Camry. It’s a yawn-fest, because Road and Track readers don’t really care about the Camry’s feature set. But I understand Toyota still manages to sell quite a few Camrys.

    You’ll see this part turn up in a lot of low/midrange entertainment/gaming SKUs, even if nobody here really wants one.

      • Bauxite
      • 8 years ago

      “The Tech Report review of a chip like this is like Road and Track reviewing the Toyota Camry. It’s a yawn-fest, because Road and Track readers don’t really care about the Camry’s feature set. But I understand Toyota still manages to sell quite a few Camrys.”

      /agree

      Prediction: AMD will “sell out” of llano in Q3 much like they did with brazos in Q1 (and for trolls/intel stock freaks: just like brazos it will be from high demand, not actual problems)

    • Meadows
    • 8 years ago

    The clock speed definitely killed this one.

    It’s fine and dandy that you reuse Phenom II cores on the 32 nm node, but any efficiency advantages aside, you have a 2.9 GHz chip AMD, for chrissakes. Against “mature” 3.7 GHz Phenom II chips of your own, nevermind the competition from intel.

    Don’t tell me you couldn’t at least breach >3 GHz and/or provide us with Black Edition binning. This is paltry.

      • NeelyCam
      • 8 years ago

      The clock speed was brought down to keep the chip within a certain TDP window; that IGP consumes a lot of power too.

      But I agree that the <3GHz clock speed is curious. I doubt that the 32nm SOI couldn’t handle it in a power-efficient way… was this a botched design?

      • FuturePastNow
      • 8 years ago

      Figure about 40W for a 32nm Redwood GPU and the northbridge… leaves 60 for the processor cores. About 3 GHz sounds right to me.

      I’m sure they’ll release faster ones in coming months.

      • Krogoth
      • 8 years ago

      No, the architecture isn’t up to par with Bloomfield and Sandy Bridge on CPU performance.

      Phenom II 980 BE (3.7Ghz) is still unable to catch up with the slower i5-760 (2.8Ghz) in number crunching, it barely catches up or beats it in video rendering/gaming. A 3.7Ghz Llano part would have done no better. The only thing it would have is an insane TDP.

      Llano is a mainstream part, so it is unlikely we will see a “Black Edition” variant. We will have to wait for Bulldozer units to see the next generation of “Black Edition” chips.

        • Meadows
        • 8 years ago

        [quote]expect have an insane TDP[/quote] Pardon me, Lord Grammar? What was that? I'm done with you Krogoth, the next time you respond to me with an error, I'll dismiss you using troll material, I promise you. I'll do that until you either learn your language, or stop responding to me. Both ways are fine by me.

          • Krogoth
          • 8 years ago

          Are you frustrated?

            • Meadows
            • 8 years ago

            Not at all.

      • derFunkenstein
      • 8 years ago

      This probably has more to do with chips that won’t run perilously closely to Bulldozer performance, I fear. And I say that as someone who has used AMD in PCs for several years and plan to continue to do so based on cost alone (performance has long been “good enough” for me).

        • Krogoth
        • 8 years ago

        I doubt it, it is probably more to do with it being expensive/difficult to make “faster” Llano bins then it is make Bulldozer chips of the same clockspeed. Llano is set as a mainstream/laptop part, so AMD has no incentive to make a unit that screams “Screw TPD, I want the MEGAHURTZ!” a.k.a P4 EE of old. When they already have X4 975 and X4 980 who already have taken up that job. ;)

    • historyfend13
    • 8 years ago

    I agree Intel has taken AMD to the cleaners numerous times in the performance area. However when it comes down to it, having a lower price and decent performance matters most to people, not benchmarks and FPS. I work at a large retailer and people generally purchases what has the best bang for the buck. Using the lower TDP chips will offer decent performance and I bet these will sell. Hell, I bought an Acer with the AMD C-50 chip inside of it. My main desktop went down and I have to use it as my back up. Is it the best or fastest? Not even close. Nonetheless, it works in a pinch and I have typed papers on it, worked on excel worksheets, watches some ripped movies and even some very light gaming. Total price: 320 for the whole system.

    Just because it doesn’t tear ass doesn’t mean the “good enough” factor fades away. This last few generations have been about “good enough” tech. It enables people who may not have 1K+ to spend on one system to have decent system performance at low cost. And it will get better with time.

    • link626
    • 8 years ago

    “Now, all you need to do is look at the “Radeon HD 6550A2″ sticker on a retail system, write it down, come home, and Google for it.”

    actually, all you need to do is to walk over to the laptop department at Bestbuy, and get on the internet over there.

    • d0g_p00p
    • 8 years ago

    Pretty sad to see a 2 core CPU beating up a native 4 core CPU. Granted this CPU is pretty damn old. The one very big plus I see coming out of these Fusion APU’s is the fact that they ship with video that can actually play games. Maybe dev’s can start making the integrated GPU the baseline or minimum requirements with PC titles. It’s still shocking that Intel cannot design *any* type of GPU that can play anything with a decent framerate and decent visuals. I hope this makes sense I am sick.

    I like where AMD is going with this. The CPU is fast enough so let’s start looking into total system performance. I think they tried this before but I don’t remember what is was called.

    I just hope Bulldozer shows more promise.

      • willmore
      • 8 years ago

      To me, it seemed like an interesting demonstration of what apps were well threaded and what weren’t. In addition, there seem to be some apps that just run better in Intel chips. I’d love to see an article detailing why that is so.

      I was looking to rebuild my desktop machine when SNB came out, but I *am* going to have a discrete GPU in the system, so I don’t feel like paying the cost of the IGP–both in silicon area, wasted power (if any, really), and in lower die yield. I’m sort of surprised we haven’t seen any SNB chips with *no* video at all. I guess it’s not worth binning them out.

    • srg86
    • 8 years ago

    As a CPU Geek, my thought is….meh Jack of all trades, master of none. Long live Sandy Bridge etc.

    But

    Should my parent’s PC die anytime soon though, this is exactly what I’ll recommend to them, it would suit them down to the ground. Vastly faster than their K8 Sempron with graphics ideal for anything they’d throw at it.

    • StashTheVampede
    • 8 years ago

    I’m in the market for a relatively inexpensive box for a small office (browser, specific network bound app, office apps). What do I choose:

    i3-2100 … OR
    A4/6/8 variant of Llano

    Very little CPU is going to happen on this machine. Getting the i3 seems somewhat like a bad deal because browsers/plugins are now using more GPU — making the Llano more attractive. CPU acceleration for these apps/plugins will be around a while.

      • willmore
      • 8 years ago

      Looks like a job for the A4!

        • NeelyCam
        • 8 years ago

        Looks like a job for a Brazos+lowest-end SSD to me.. The cash you save by choosing Brazos instead of i3 gets you to a 64GB system SSD price point.

        Or, you can pocket the money and get an HDD. Either way, getting an i3 is a waste of money for such a light-weight usage model

          • flip-mode
          • 8 years ago

          Wow, without him mentioning any hard budget you’re going to suggest that. I’d prefer a “real” CPU and a mechanical drive over a crippled CPU and an SSD.

            • Meadows
            • 8 years ago

            Never underestimate the SSD with regards to productivity (or any non-gaming usage, really).

            • NeelyCam
            • 8 years ago

            He said “relatively inexpensive box”, “small office (browser, specific networkd bound app, office apps)” and “Very little CPU is going to happen on this machine.”

            I took that to mean that he’s looking for a relatively inexpensive box. Also, the usage model doesn’t seem to require a “real” CPU.

            • willmore
            • 8 years ago

            No I didn’t. Did you, maybe, mean to reply to StashTheVampede?

            • NeelyCam
            • 8 years ago

            I didn’t reply to you – I replied to flip-mode who used “him” to refer to Stash (as far as I can tell).

    • LiquidSpace
    • 8 years ago

    So,in BC2 Atom D525 gets 14/10 and E350 gets 10/8.
    Why don’t you use less synthetic “Intel” benchmarks and more real time ones?
    this benchmark is a joke no doubt L0L

      • derFunkenstein
      • 8 years ago

      D525 has hyper threading and BC2 consistently has shown to stress >2 cores.

        • LiquidSpace
        • 8 years ago

        still Brazos is in average 50% faster in non-graphical apps and almost 2ice as fast in video playback and games as the crappy Atom.
        Atom is a trash compared to Brazos.

          • derFunkenstein
          • 8 years ago

          They’re both trash compared to real CPUs, what’s your point?

            • Krogoth
            • 8 years ago

            Brazos and Atoms are aimed at an entirely different market. It is like comparing 400-500HP muscle cars to your typical 80-120HP economy cars. Sure, the muscle cars will blow away the economy cars on the drag strip and race track. For daily commutes and driving, the muscle car isn’t much better than the economy car. The economy car doesn’t have the extra expenses that goes with owning a muscle car.

            • derFunkenstein
            • 8 years ago

            Don’t tell me that, brah. Tell LiquidSpace who think that E-350 is apparently a gaming platform.

            • LiquidSpace
            • 8 years ago

            my point is that your point is invalid and Brazos is superior to Atom in all respects,and that this benchmark is a joke.

            • NeelyCam
            • 8 years ago

            [quote]Brazos is superior to Atom [b]in all respects[/b][/quote] [i]Your[/i] point is invalid.

            • derFunkenstein
            • 8 years ago

            It’s not a “benchmark” it’s a real game.

            • LiquidSpace
            • 8 years ago

            I was talking about the whole benchmark.
            BC2 is a DX11 game,but there’s no mention about that,the crappy DX9 Intel IGP won’t stand a chance against the integrated Radeon used in Brazos in DX11 mode.
            secondly,he didn’t use any GPGPU programmes that support GPU acceleration.
            the quad core Liano consumes less power than the dual core I5 moblie.

            • Meadows
            • 8 years ago

            Krogoth, is that you? You spelt “moblie” again.

            • NeelyCam
            • 8 years ago

            It shouldn’t have, but it made me smile..

            You should be a freelancer editor to Semiaccurate. They [i]desperately[/i] need one...

      • NeelyCam
      • 8 years ago

      Atom is simply better in some cases. You should just accept facts as facts

    • NeelyCam
    • 8 years ago

    Did you use discrete graphic cards with E-350 and D-525 during idle power tests? Just wondering, because those idle power levels seem awfully low if you did.. And if you didn’t, the “Total energy over period” graph isn’t an apples/apples comparison.

    Personally, just like flip-mode, I’d be interested to see the power consumption tests with a power brick/PicoPSU and without discrete graphics… That would make it easier to compare the platform power numbers (discrete cards and high-power PSUs tend to mask the differences). Also, the point of Llano and SB is that most users don’t need discrete cards anymore, so it would make sense to show the power numbers using IGPs.

    One more thing: the “CPU performance per dollar” plot is nice and all, but I think what would be more useful is a “System performance per dollar”. There is a lot of ‘baseline’ cost in every system (case, mobo, HDD/SDD, memory…), and paying double for a CPU might only increase the system cost by 25%… while potentially providing a 50% performance boost. I admit that it’s difficult to determine what the “systems” in such comparisons should look like, but I think it’s still a more valuable metric than “CPU performance/dollar”

    If I put together a system that costs $800 without the CPU, I’ll gladly pay $300 for a CPU instead of $100 (for a system cost of $1100 instead of $900) if that gives me 2x the performance.

      • OneArmedScissor
      • 8 years ago

      Define “2x system performance.” It’s certainly not something dependent on synthetic benchmarks with arbitrary scales where one bar is twice as long as another, with no basis in reality. Nobody should be making recommendations on how to spend all the money that goes into an entire computer based on those instruments of the devil.

      Your “system” is never made that much faster by swapping one component, so I just don’t see the point. An SSD doesn’t even do that. There are too many potential metrics of comparison that are based entirely on personal use. The emphasis needs to be on individual components and [i<]exactly[/i<] what you're going to do with each one.

        • NeelyCam
        • 8 years ago

        [quote<]Define "2x system performance."[/quote<]

        Now you're arguing against using performance/dollar metrics in general. You have a valid point, but I was talking about replacing one performance/dollar metric with another one. To replace the CPU performance/dollar plots, I would just use the same "performance" metric that was already used, but replace the CPU cost with a system cost.

        And I agree with you; the "system" could end up having >100 permutations of various component options, but some reasonable systems could be derived somewhat easily (e.g., a gaming rig would focus on discrete graphics, a file server on storage, an HTPC on quiet cooling... however you'd want to define a system). There is no clear formula for any of this, and any metric like this could be argued for and against until the end of time. But I'm just saying that the current CPU perf/$ metric misses the key point of system cost.

          • OneArmedScissor
          • 8 years ago

          “Now you’re arguing against using performance/dollar metrics in general.”

          Well, pretty much. :p

          I don’t think this is a battle that can be won.

            • NeelyCam
            • 8 years ago

            Amen.

      • Damage
      • 8 years ago

      The test configs for the two sets of results for the Atom and E-350 are explained here:

      [url<]https://techreport.com/articles.x/20401/5[/url<]

      [quote<]Also, you'll see that we have two sets of results for the Atom- and E-350-based systems. The primary set comes from a configuration with a discrete GTX 460 graphics card and our 610W standard PSU connected. The second set was captured with only integrated graphics in use and our laptop-style power brick supplying the juice.[/quote<]

      Sorry, meant to include that in this review. Will have to add it.

      For power numbers for the A8-3850 and Core i3-2100 without discrete graphics, see here:

      [url<]https://techreport.com/articles.x/21207/10[/url<]

        • NeelyCam
        • 8 years ago

        Ok, it makes sense now – thanks!

    • flip-mode
    • 8 years ago

    Hey Mr. Wasson – is there any chance you could try testing the Llano system and maybe an SB system with the “brick PSU”? (Integrated graphics only.)

    It would be interesting to see how low the power consumption can go.

      • NeelyCam
      • 8 years ago

      ^ This.

      (FYI: my i7-2600k system with a PicoPSU idles at 22W)

      • Damage
      • 8 years ago

      I considered it, but the brick PSU tops out at 80W. Under a load spike, the A8-3850 would pop it like a zit.

      See here for Llano/i3-2100 power consumption without a discrete video card:

      [url<]https://techreport.com/articles.x/21207/10[/url<]

      I saw similar numbers from my test systems, but didn't have time to include them. 37W at idle is nice!

        • NeelyCam
        • 8 years ago

        These can handle much more than 80W. I’m using the first one with PicoPSU 150XT

        [url<]http://www.mini-box.com/12v-12-5A-AC-DC-Power-Adapter[/url<] [url<]http://www.mini-box.com/12v-16A-AC-DC-Power-Adapter[/url<] Not to beat a dead horse, but in that IGP-only comparison you seem to make a conclusion that Llano idles at lower power than SB, but Asus boards aren't that energy-efficient. MSI is much more so, and Intel's own boards generally have the highest power efficiency. I have an SB i7-2600k with Intel DH67GD, that PicoPSU, 3TB WD Green, 160GB Intel 320 SSD and 4x4GB of DDR3-1333-CL7, and the system idles at 22W - far lower than the 37W for the Llano system.

          • OneArmedScissor
          • 8 years ago

          Pretty much every website I’ve seen test with several desktop IGPs has found Sandy Bridge to use a little more power at idle than [i<]anything[/i<] else, even older Intel platforms. In laptops, it's just a notch below Core 2 and Llano in its ability to minimize power use, so that seems to paint a pretty consistent picture of the CPU itself making that difference.

          Sandy Bridge is the only mainstream CPU in existence right now that was designed to run the majority of the chip's transistors at 4 GHz, with a large, always-on L3 cache included in that. The GPU was also made to run at 1.35 GHz, which is ludicrously high compared to anything else. Is it really that surprising if Sandy Bridge is just ever so slightly leakier than the ones that don't do that?

          Completely opposite of Sandy Bridge, Llano is quite possibly one of the most conservative CPU updates there's ever been. No more cache than previous mainstream CPUs, no L3 cache that can't be shut off with cores, and moving from 55nm bulk all the way to 32nm SOI + HKMG, they [i<]dropped[/i<] their IGPs from 500-700 MHz to 400-600 MHz. That's half the Llano chip designed to run at an extremely low speed, and nothing additional tacked on that wasn't in the computer before.

          It doesn't make a world of difference, though. They're all very close and, considering their designs, do as well as could be expected to minimize power use and run the best they can when they're not.

            • NeelyCam
            • 8 years ago

            [quote<]Pretty much every website I've seen test with several desktop IGPs has found Sandy Bridge to use a little more power at idle than anything else, even older Intel platforms.[/quote<]

            From my three PCs, my conclusion is the exact opposite. I had an E4400 with G45, and it was idling at 28W. I have an i5-670 system with H55 and 2x4GB of memory, idling at 22W. My SB i7-2600k - twice the cores, twice the memory - idles at the same 22W.

            Regarding leakage, SB's cache can drop the supply lower when idling than the previous chips could - Intel presented some fancy SRAM cell trickery at ISSCC this year. And once stuff is power-gated, it doesn't matter much how fast it is when active - the leakage is determined by the power gate.

            • OneArmedScissor
            • 8 years ago

            Your conclusion is not a comparison of countless setups. Go look back at years-old SPCR reviews and you will find plenty of things that idled at or below 30W, with normal PSUs. They’d be considerably lower with a power brick. You don’t even have to look hard to find things in the 30W range on other sites that use bazillion-watt PSUs.

            Intel’s sorcery doesn’t change physics, it just compensates as best as possible for the design choices they made.

            Sandy Bridge’s higher level cache is not built out of low leakage, low speed transistors like it is in every other CPU. It’s made with the same transistors as the CPU cores, and they increased the cache speed from the 2 GHz range to up to 4 GHz. That’s like adding an extra few cores, but which aren’t power gated, because it’s connected to everything and stuck on. The same applies to the GPU if you’re using integrated graphics, which also was designed to run twice as fast as a traditional GPU, and isn’t going to be shutting off.

            They took a very brute force approach to make sure their smaller mainstream chips would keep up with Llano, and even though they pulled it off, that doesn’t change the fact that compromises were made. It’s not 100% better in every way than things that came before it, just most ways.

            Here’s a more real world comparison of just Llano and Sandy Bridge IGP power:

            [url<]http://www.hexus.net/content/item.php?item=30964&page=9[/url<]

            Intel isn't out to make the absolute lowest-power CPUs and never has been. Look at how much they avoid just making the most obvious improvements to Atom. I don't see why this is so hard to believe.

            • NeelyCam
            • 8 years ago

            [quote<]Your conclusion is not a comparison of countless setups. Go look back in years old SPCR reviews and you will find plenty of things that idled at or below 30w, with normal PSUs.[/quote<]

            Sure, super-old stuff was less power hungry, but it was also orders of magnitude (and I'm not exaggerating) slower. In recent history (say, the past three years), power efficiency has gone up dramatically, and idle power has come down. And when you find a quad-core Llano rig that idles at 22W or less, let me know.

        • jensend
        • 8 years ago

        I think some of the people here complaining about power consumption didn’t realize that the numbers in this review included a discrete card. 37W at idle is pretty good esp. for mobos with that much on them, and I bet the 65W parts, when coupled with mini-itx mobos designed with low power consumption in mind, will be quite reasonable indeed. Brick PSUs (rated for ~95W) should be a decent option with those.

    • Suspenders
    • 8 years ago

    I still prefer the PGA style pins on AMD’s processors over Intel’s LGA approach, so I’m at least happy it will be sticking around.

    • slaimus
    • 8 years ago

    I am also a bit confused about why the mobile Llano launch was relatively quiet while the desktop launch was more publicized.

    It looks like what AMD wanted was to compare their top-end A8-3850 with a top-end i7-2600 and show superior frame rates in games while using less power, as on the mobile side.

    I think they would have been better off giving reviewers the A6-3600 and comparing it to the i3-2100/2105 to give a better showing.

    Intel was much smarter during the SB launch, giving reviewers samples of the 2100, 2400, and 2500.

    • derFunkenstein
    • 8 years ago

    [quote<]Now, all you need to do is look at the "Radeon HD 6550A2" sticker on a retail system, write it down, come home, and Google for it. Eventually, you'll find a chart like the one above.[/quote<]

    Suddenly having a phone with a data plan makes sense.

    [quote<]On the other hand, 1866MHz memory still carries a substantial price premium - about $45 for a pair of 4GB DIMMs.[/quote<]

    That can't be right. Seems pretty cheap to me. :p

    • flip-mode
    • 8 years ago

    Idle power consumption is far more important to me than load power consumption, and in that regard the A8-3850 is putting on an excellent show!

    Too bad CPU performance isn’t better. As it stands, there are better choices than the A8-3850 for compute-intensive users, and better choices than the A8-3850 for HTPC users.

    If I was going to build an HTPC, I wouldn’t need anything more than an AMD E-350.

    If I was going to build a gaming computer, I’d want SB and a nice GPU.

    If I was going to build a general productivity machine, SB’s graphics are good enough.

    Llano needs more CPU performance to be worthy for the desktop. That’s the bottom line.

    Let’s hope Bulldozer puts on a better show than the nasty rumors suggest. Otherwise, the list of market niches that AMD’s CPUs are viable options in gets way too short.

      • Hattig
      • 8 years ago

      What about the people who are only building (buying) one computer to do two or three of the things you said, who are also on a budget? This is the majority of computer buyers.

      A general productivity machine that will get left on in idle a lot, that is used for light gaming? Llano is perfect.

      • Krogoth
      • 8 years ago

      What are you smoking?

      Llano has sufficient CPU performance for general desktop usage. It only falls short if you are doing workstation related tasks. In that case, SB might not be the best choice.

      Llano sits perfectly as a direct competitor against Intel’s mainstream offerings. The primary factor in this battle is platform cost, not CPU/GPU performance.

      Llano was never meant to be a workstation solution; that’s Bulldozer’s job. ;-)

        • flip-mode
        • 8 years ago

        Due to its idle power consumption alone, I’d call the A8-3850 the best choice for an office productivity desktop. Such a machine sits at idle almost its entire life.

        Llano’s idle power consumption really is superb.

        What would be really interesting to see would be a cut-down Llano – two cores and half the GPU – I wonder how low AMD could get its power consumption to go.

          • OneArmedScissor
          • 8 years ago

          It might change like 1W. The CPUs hardly use anything. Look at laptops. [i<]With[/i<] the screen, many of them hover around 7W now.

          The problem is the motherboards, with their overkill power phases/VRMs/whatever else is involved, and the bazillion doodads littered all over the PCB that pretty much go unused. If you want to save power in a desktop, [b<]cherry-pick your motherboard[/b<]. Ignore the rest.

          Desktop power use has not changed in years and years. Llano is not doing anything special, either.

      • NeelyCam
      • 8 years ago

      E-350 completely tanked in HTPC stuff – not being able to play Netflix HD is a dealbreaker for me. IMO, a dual-core SB/Clarkdale/Llano is a much better choice, as long as you can keep it quiet.

        • raddude9
        • 8 years ago

        Netflix HD is not important to most people. More important for the HTPC-owning movie buffs out there is proper 23.976fps support, something that Intel’s chips can’t seem to do without juddering. So the E-350 and Llano are the much better options.

        Personally, I’d wait for the 65W Llano; these will probably beat everything else for idle power as well as give the option of sitting-room gaming.

          • NeelyCam
          • 8 years ago

          I disagree. To me, Netflix support is paramount on an HTPC – having a small ‘glitch’ every 40 seconds pales in comparison to having constant glitching with Netflix HD streams. But these are matters of opinion.

            • OneArmedScissor
            • 8 years ago

            The issue you’re describing may only be a setup/software issue, though. I assume you’re referencing the very recent article on AnandTech reviewing a Zotac HTPC. Don’t you find it a bit suspect that they’d already reviewed numerous different Bobcat computers over the last 6+ months and never mentioned that before?

            • swaaye
            • 8 years ago

            Doesn’t Netflix judder on all PCs? Something to do with Silverlight and Windows, and perhaps Netflix’s PC encode. I’ve read that some of the set-top boxes are the way to go, or even a PS360. I’ve seen judder with it since day one, and there are giant AVSForum threads complaining about it; nothing has been improved.

            PS3 is also the only way to get 5.1 audio from Netflix, apparently. PC gets like a 2 channel 64k AAC stream.

            • NeelyCam
            • 8 years ago

            I understood the issue to be that Silverlight wasn’t hardware-accelerated, and the E-350’s CPU was too weak to handle it in software.

            But no – I don’t find it suspect that others haven’t mentioned it; Anandtech’s review was the only one extensively testing various video streams. Then again, I don’t have a Bobcat system, so I can’t test it myself – I just have to trust Anandtech. And I certainly wouldn’t take a plunge if a trusted site says Netflix doesn’t work. I can afford a more expensive option (like SB or Llano).

          • NeelyCam
          • 8 years ago

          Just found this:

          “We have seen that Intel is able to achieve 23.976 Hz refresh of sorts by disabling UAC on Windows 7 systems. There is currently an Intel driver in the works which removes this restriction (and I have my hands on it! Expect to see how it performs when I review ASRock’s CoreHT 252B in the coming days).”

          [url<]http://www.anandtech.com/show/4479/amd-a83850-an-htpc-perspective/7[/url<]

          A previous AnandTech review said this was a hardware issue on Intel IGPs, but now it looks like it was a software issue after all...?

            • willmore
            • 8 years ago

            I think the operative phrase is “of sorts”. I’d be curious to know what that is and WTF UAC has to do with it. That seems an odd subsystem to be involved in setting video refresh rates.

            • NeelyCam
            • 8 years ago

            I’m looking forward to that ASRock review. (I never thought I would say that.) It should have some interesting insights.

        • flip-mode
        • 8 years ago

        I did not realize the E-350 didn’t handle Netflix HD. Well, perhaps its loss is Llano’s gain.

          • NeelyCam
          • 8 years ago

          It could handle a lot of other stuff, but wherever the CPU was needed, it fell short.

        • raddude9
        • 8 years ago

        The E-350 not being able to play back Netflix HD seems to be caused by Netflix not enabling DXVA in their playback program; this will hopefully be fixed in the future.

    • dpaus
    • 8 years ago

    What is with this fetish over power consumption? I agree that it’s worth measuring and considering, but unless I’m running a server farm with 1,000 1U blade devices (which obviously wouldn’t be using this chip anyway), a difference of 50 watts under full load just doesn’t mean that much to me. 99% of the time, the system isn’t running at full load anyway, and at idle, the A8’s power draw is actually 6 watts [i<]less[/i<] than the i3-2100's. Cripes, accidentally leaving the bathroom light on wastes more energy in an hour than the difference between these two CPUs over a month of typical system usage.

    The corollary is that I'd like to have seen what AMD could do with this architecture and a 125W TDP. A significantly more powerful IGP, which the extra power would allow, and dynamic 'Turbo Boost' overclocking up to 3.4-3.6 GHz would significantly transform this architecture, and would make a compelling low- to mid-range OEM part for systems in a price range that can't justify a Bulldozer platform.

      • wibeasley
      • 8 years ago

      Two concerns beyond power consumption itself are (1) noise, cooling, and form factors, and (2) an indication of the limits/headroom of the architecture.

      I don’t understand where you’re getting the bathroom numbers. A light bulb consumes 15 to 65W. The difference in CPUs over a month is 4,320W (= 6W * 24hr/day * 30days/month).

        • Hattig
        • 8 years ago

        If you have, e.g., six 60W halogens* in the bathroom, maybe!

        * Those crappy halogens are a total PITA.

          • dpaus
          • 8 years ago

          You’re a bachelor, aren’t you? My girlfriend’s make-up mirror alone has 4x 100W bulbs, and the overheads are 3x 100W. Having said that, it doesn’t have any windows (it’s in a condo), so that may make it ‘non-typical’ after all.

            • NeelyCam
            • 8 years ago

            Your girlfriend is a tree murderer.

            My bathroom has LEDs – four of them, each consuming either 2.5W or 3.5W. That’s because I like trees.

            • dpaus
            • 8 years ago

            [quote<]Your girlfriend is a tree murderer[/quote<]

            Well, after 30 minutes in front of that mirror, she certainly has the killer look :-)

            [quote<]I like trees[/quote<]

            I [i<]knew[/i<] it!! Hey, hippie, guess what your Birkenstock sandals are made out of? Murdered trees and slaughtered animals!

            • NeelyCam
            • 8 years ago

            [quote<]Well, after 30 minutes in front of that mirror, she certainly has the killer look :-)[/quote<]

            Pics, or it didn't happen.

            [quote<]Hey, hippie, guess what your Birkenstock sandals are made out of?[/quote<]

            I only wear sandals made of sustainably harvested bamboo and voluntarily donated pixie hair.

            • dpaus
            • 8 years ago

            [quote<]Pics, or it didn't happen[/quote<]

            Remember that Motley Crue concert I mentioned? Let's just say she was in an appropriate period outfit - and that period was famous for thigh-high boots. Sadly, the camera battery was dead, but not to worry, the Def Leppard concert is still to come :-)

            • NeelyCam
            • 8 years ago

            [quote<]the Def Leppard concert is still to come :-)[/quote<]

            Carry some extra batteries. You owe it to the whole TR community.

            EDIT: I just got the "looks that kill" reference...

            • dpaus
            • 8 years ago

            [quote<]Carry some extra batteries[/quote<]

            No room for that once I pack the 2x4 I have to carry to keep other guys away from her. And when she started dancing to "Girls, Girls, Girls"... But I will get pics, just to keep ssk in line :-)

            [quote<]EDIT: I just got the "looks that kill" reference...[/quote<]

            Hmm, looks like liberal usage of pictures is a good idea after all...

            • Hattig
            • 8 years ago

            I’m surprised your girlfriend’s make-up hasn’t melted down her face with 700W of light bulbs around her.

            And I have a girlfriend – and natural light in the bathroom which is probably worth a few hundred Watts of artificial light.

        • Xaser04
        • 8 years ago

        I assume dpaus doesn’t mean 24/7 usage when he states “typical system usage”.

          • wibeasley
          • 8 years ago

          8 hours a day at idle is still well over 60W (1,440W = 6W*8*30).

          Dpaus, did you have “typical” in that sentence before your edit? Sorry if I missed that.

            • Goty
            • 8 years ago

            For the record, that 1440 number should not be in Watts. ;-)

            • NeelyCam
            • 8 years ago

            So true.

            I get exceedingly annoyed when I hear stuff like “this TV consumes less than 80W in an hour”

            • wibeasley
            • 8 years ago

            What units should I use to make the 60 vs 1440 comparison?

            • bitcat70
            • 8 years ago

            hehe! I’m sorry but if you don’t know what units to use maybe you shouldn’t be making that comparison. Any engineers or physics majors here?

            • Hattig
            • 8 years ago

            Squirrel-flights Per Hectare-Parsec

            • bitcat70
            • 8 years ago

            Yeah, but that second one looks like pixels.

            • dpaus
            • 8 years ago

            And you shouldn’t be mixing Imperial and Metric units!

            • NeelyCam
            • 8 years ago

            6W*8h/d*30d=1,440Wh
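For anyone following along, the distinction the thread is circling is power (a rate, in watts) versus energy (power over time, in watt-hours). A quick sketch using the 6W idle difference discussed above (the electricity price is an assumption for illustration, not from the review):

```python
# Power is a rate (W); energy is power integrated over time (Wh).
idle_delta_w = 6  # idle-power difference between the two CPUs, in watts

# 24h/day usage (the original 4,320 figure) vs. 8h/day:
wh_24h = idle_delta_w * 24 * 30   # 4320 Wh = 4.32 kWh per month
wh_8h = idle_delta_w * 8 * 30     # 1440 Wh = 1.44 kWh per month

# At an assumed ~$0.10/kWh, the 8h/day difference is ~$0.14 a month.
print(wh_24h, wh_8h, wh_8h / 1000 * 0.10)
```

So both posters' arithmetic is fine once the unit is watt-hours, not watts.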

        • dpaus
        • 8 years ago

        [quote<]Two concerns beyond power consumption itself are..[/quote<]

        Two good points, but neither would drive a purchase decision for me.

        [quote<]A light bulb consumes 15 to 65W[/quote<]

        You have a 15W light bulb in your bathroom? If you ever invite me over for dinner, remind me to pee before I get there.

          • wibeasley
          • 8 years ago

          15W CFL bulbs have roughly the brightness of a 60W incandescent:
          [url<]http://www.amazon.com/gp/product/B0014X5MK0/ref=pd_lpo_k2_dp_sr_2?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B002TMQSSK&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=0XDGR5ZY5Z5YVSSXJJM6[/url<]

          The kids' nightlight is 5W, and that's enough to not miss (or if I do miss, it's not because of the lighting).

            • NeelyCam
            • 8 years ago

            CCFLs are even better.

            • Vasilyfav
            • 8 years ago

            Way more light than a 60W, actually. In fact, the NeoLite ultra-low-mercury CFLs give off almost twice as much light as a 60W incandescent (1,000 lumens vs. 520).

            Incandescent bulbs are essentially 19th century tech. There’s no reason to use them anywhere in your house, except to make a room warmer.

            • Suspenders
            • 8 years ago

            I wouldn’t say there’s “no reason to use incandescents”; I find them far better for use in reading lamps, for instance.

            The main problem with CFLs is that they aren’t in fact very environmentally friendly (despite what people think), mainly because of the mercury in them. When I keep seeing people throwing them out with the regular garbage, I shudder thinking of landfills with millions of the damn things rotting away and releasing their mercury into the environment. LED lighting can’t come fast enough.

            • NeelyCam
            • 8 years ago

            [quote<]LED lighting can't come fast enough.[/quote<]

            +1. There are places (like BuyLighting.com) if you want to be ahead of the 'game'.

          • NeelyCam
          • 8 years ago

          Noise drives most of my purchase decisions these days; I hate PCs I can hear. My CULV laptop is the loudest PC I have – that whiny fan that kicks in whenever I view a website with Flash needs to go.

          Maybe I should wait until Haswell ultrabooks are out.. what’s that, 2013 or something?

            • swaaye
            • 8 years ago

            It is interesting how as the crowd here ages, we get all caught up on noise.

            Well, you can always stick something like a Scythe Ninja on your CPU. It’ll very quietly cool a ~130W CPU without too much trouble. ;-) My old Q6600 heater has been pushed far with one of those, and I’m excited by the prospect of it fitting LGA 1155. They fit AM2 as well, for that matter.

            • dpaus
            • 8 years ago

            [quote<]It is interesting how as the crowd here ages, we get all caught up on noise[/quote<]

            Having frittered away most of my youth riding motorcycles to rock concerts (where I [i<]always[/i<] had front-row seats), I now have significant hearing loss. And it just occurred to me that I don't consider the noise of a system to be a decision factor because maybe I'm simply not hearing most of it.

            • NeelyCam
            • 8 years ago

            I still waste time at metal concerts, but unlike 10 years ago, I now wear earplugs…

            [url<]http://www.etymotic.com/ephp/er20.html[/url<]

            I don't hear anything beyond 15kHz, so I've been re-sampling all my MP3s at 32kHz to be able to fit more metal in my MP3 player.

            • dpaus
            • 8 years ago

            Just saw New York Dolls, Poison, and Motley Crue the night before last. Peter Frampton, Alice Cooper, Bachman-Turner, Journey, Foreigner, Steely Dan, Heart, Def Leppard, and Blue Rodeo in the next 5 weeks. Most of those were on a “Summer Concert Megapass” that also includes a Jimmy Buffett concert in Toronto on July 16th, but I don’t think I’ll go to that one; I hate parrot droppings.

            • NeelyCam
            • 8 years ago

            Saw Poison when they were opening for Def Leppard. The guitarist was pretty good, but otherwise a bit disappointing. Will see Motley Crue in August – should be entertaining. Saw Alice in Chains a couple of months ago, but I thought the first opening act (Mastodon) was much better.

            I tend to focus on heavier progressive stuff like Meshuggah, Between the Buried and Me, Periphery, Animals As Leaders, Symphony X, and my latest discovery, Protest The Hero. Sometimes I squeeze in some Citriniti.

            Somewhat unknown metal acts are cheap thrills – tickets around $25-$35

            • dpaus
            • 8 years ago

            Oh, it’s going to p!ss you off big time to learn that $25 is the [i<]most[/i<] I paid for any of the above tickets. Seriously, the whole set of them - except for Alice Cooper, but including Supertramp, whom we saw two weeks ago - were $200/person. A total of 8 concerts, so $25 each. Last year, we saw the Steve Miller Band for $10 at the same venue (and they must have lost their shirts; I felt so bad for them).

            I do enjoy many less-known death-metal bands, as well as some up-and-coming groups. The lead guitarist for Tokyo Police Club is a next-door neighbour; he and my older son went to school together, and he spent many nights camping out in our back yard. Doesn't get me any free tickets, though :-)

            • NeelyCam
            • 8 years ago

            [quote<]Oh, it's going to p!ss you off big time to learn that $25 is the most I paid for any of the above tickets.[/quote<]

            $*(!&&*@#))$!!! The Poison/Def Leppard ticket was almost $100...

            • mutarasector
            • 8 years ago

            I saw Poison back before they were Poison, when they were still called ‘Paris’… Poison was a great band, but they were basically a rip-off of Kix, which a young Bret Michaels used to go see in PA clubs. They copied Kix’s whole stage show, right down to front man Steve Whiteman and his stage schtick, and they even took the name ‘Poison’ from a Kix tune of the same name.

            • dpaus
            • 8 years ago

            So, you’re saying that Poison is the Microsoft of the metal band industry?

            • mutarasector
            • 8 years ago

            Basically, yeah :P

            • NeelyCam
            • 8 years ago

            I had a Scythe Ninja on the Q6700 desktop I just gave away; my Clarkdale and SB rigs have passive cooling. My Dell LCD monitor is the loudest item in my system (annoying high-frequency whine…). I should get a new one; if you know of a 24″ LCD with >1920×1200 resolution, please let me know.

            • bhtooefr
            • 8 years ago

            Will 22.2″, 3840×2400 work? It’s not a good gaming LCD – 61 ms response time, 48 Hz max refresh (and the best way to get that is with a crapton of adapters and an Eyefinity ATI card – otherwise, you’ll have reduced refresh) – but it is IPS. IBM T221, in the DG5, DGM, or DGP variants. They go for about $300-500 on Yahoo Japan Auctions, and about $1,500 on eBay US.

      • mutarasector
      • 8 years ago

      [quote<]What is with this fetish over power consumption?[/quote<]

      Agreed. At these levels, power consumption concerns often seem to be more a case of the tail wagging the dog.

    • vvas
    • 8 years ago

    It’s simple: the 65W parts will end up everywhere inside OEM systems, which will finally receive decent graphics (since OEMs don’t really bother with discrete graphics cards these days, except for their “gaming” machines).

    The 100W parts, on the other hand, are the “benchmark” parts. I.e., their main purpose is to be sent to reviewers to make the chip look good. Other than that, I don’t expect much uptake. Same with the Core i3-2105, by the way.

      • willmore
      • 8 years ago

      This has been painfully true for laptops over the past couple of years. Since small desktops are pretty much laptops with an external screen, I imagine this same effect will be seen there. So, yeah.

      I expect to see the 100W parts in the build-your-own-PC section of computer stores, for when mom/dad ask you to build them a system. Then, if you get stuck using their machine at some point, at least you can play a *few* games on it. :-) For me, I just put a decent graphics card that I had left over in it. If I were to build him a new machine, I’d use one of these. Probably an A6.

    • sweatshopking
    • 8 years ago

    I know a lot of people who have computers, i think most of us do, and MAYBE 10% of them have a discrete card. they have a cpu, plug into the motherboard, and go with that. these people don’t want to spend $99 on a discrete card; they wouldn’t know what to do with it! they want CHEAP. that’s the only thing that matters, and if this chip can deliver pretty good performance at low upfront cost, then people will/should want it. I don’t think they will, because intel is king, but it would sure make my aunt leave me alone about why can’t my computer run x, y, z!

    • Hattig
    • 8 years ago

    Gotta say, even as someone who likes AMD, Llano on the desktop isn’t making a lot of sense primarily because of the high power consumption (what happened on the 32nm process, AMD?) and low clock speeds (again, what happened on the 32nm process?).

    The GPU part isn’t much faster than on mobile Llano (indeed on the desktop A6s it is equivalent to the mobile A8s), so the majority of the power use must be coming from the cores.

    In addition, it looks like the Dual Graphics drivers need some serious optimization. It’s those drivers that could take the combo well above the Intel-plus-discrete alternative.

    But as a proposition for OEMs Llano could be quite nice, and thus as a product that actually brings in money for AMD it could do well. Remember it is a mainstream part. Two of those cores can run the spyware and other crapware on the user’s PC, leaving two cores free. And the user will notice the upgrade over their old PC so they will be happy.

    These PCs will come with 1680×1050 displays at best, and the ability to run WoW and whatever mildly tasking casual games a user may play. It is also one less component for the OEM to support over the option of Intel + Discrete. In effect you get a $60 video card for a $30 increase in the CPU price, and that is quite nice for the OEM that makes the most of advertising that fact.

      • Krogoth
      • 8 years ago

      Are you reading the power consumption charts?

      Llano’s power consumption is lower than its 45nm Phenom II/Athlon II predecessors. It is almost as good as the Sandy Bridge units. That’s impressive considering that Llano is a much bigger piece of silicon than Sandy Bridge.

      For power efficiency, it is not as impressive. It still beats the 45nm Phenom IIs/Athlon IIs and some of the older 45nm Lynnfield and Bloomfield chips.

      The only chip that makes no sense on the power consumption front is the old-fangled, Netburst-based P4 840 EE.

      The whole power efficiency angle is being overplayed. It only matters to businesses who operate scores if not hundreds of systems on the same platform, where it adds up in the total cost of ownership. Enthusiasts never really pay much attention to power consumption unless they are shooting for silence. The average Joe doesn’t know or care about power consumption.

        • dpaus
        • 8 years ago

        [quote<]The whole power efficiency angle is being overplayed[/quote<]

        Hear, hear!

        [quote<]It only matters to businesses who operate scores if not hundreds of the systems which use the same platform[/quote<]

        That, and the IT director for Greenpeace. Oh, and a few members of the Sierra Club. And maybe those two dudes into IGP gaming. Walmart shoppers couldn't give a rat's @ss.

          • NeelyCam
          • 8 years ago

          Whatever power the PC “wastes” will also reduce the cost of heating your house during the winter.

          Low power consumption is good for low noise and green-epeen stroking. I support both.

            • dpaus
            • 8 years ago

            [quote<]Whatever power the PC "wastes" will also reduce the cost of heating your house during the winter[/quote<]

            And, of course, increase the cost of cooling it during the summer. So it all boils down to whether you live in, say, Arizona, or some God-forsaken, perpetually-frozen wasteland, like, say, Timmins.

            • NeelyCam
            • 8 years ago

            Worst place on earth: Urbana-Champaign. A hot/humid sauna during the summer, a freezing hell during the winter.

            • derFunkenstein
            • 8 years ago

            Welcome to the midwest. UC’s weather is no different than Indianapolis or Peoria, IMO. I’ve been in all 3 enough to know.

            • willmore
            • 8 years ago

            Hear, hear. I’m just down the road in Indy. Love the midwest weather: it goes from freaking cold to freaking hot with a side of humidity.

        • NeelyCam
        • 8 years ago

        EE 840 is hilariously bad. I particularly like the fact that when it [i<]finally[/i<] finishes the task, it starts idling at a higher power consumption level than the others are running with full load.

    • Anonymous Coward
    • 8 years ago

    I imagine this thing will show up in lots of generic inexpensive desktop PCs, high wattage or not. It gives AMD a potent weapon in the important “impressive looking specification checklist vs price” wars. Around here most desktop PCs in the stores are not especially small form factor.

    • codedivine
    • 8 years ago

    That load power consumption looks terrible.

      • Arag0n
      • 8 years ago

      Yup, but the point is that on a desktop you’re only under load a small fraction of the time, unless you’re running some number-crunching software.

      But! This is a problem for games, since they keep the chip under load for long stretches.

      • dragosmp
      • 8 years ago

      On another site they compared power draw with a similarly clocked Athlon II: Llano is better, sometimes significantly so, but still behind Intel. You can only do so much with a four-year-old architecture.

    • esterhasz
    • 8 years ago

    Concerning market positioning, the two things I can think of are “emerging markets” (which is misleading; the BRIC PC market has already “emerged,” and it is currently bigger in both unit sales and revenue than the US and EU combined) and “Facebook gaming” (those games will probably get more elaborate, and having a somewhat performant IGP may help).

    I’m really curious how that pans out for AMD.

    • piesquared
    • 8 years ago
      • Palek
      • 8 years ago

      Maybe if you had continued reading you would have gotten to the point where Scott mentioned he tested with both DDR3-1333 and DDR3-1600. The extra memory speed gave Llano a 2-3 fps boost in games.

      • HurgyMcGurgyGurg
      • 8 years ago

      They tested DDR3 1600 with the graphics test. Didn’t really help.

      [url<]https://techreport.com/articles.x/21208/15[/url<]

      Well, a 333MHz bump was a 7% boost.

      EDIT: Ninja'd

        • Arag0n
        • 8 years ago

        I saw a review with DDR3-1866 showing around a 10-15% boost. It’s not that much, but it points to how important memory is on this system….
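For what it’s worth, the bandwidth-vs-FPS scaling being debated here can be sanity-checked with some quick arithmetic. A minimal sketch using the figures quoted upthread (the 7% and 10-15% numbers are the commenters’ reports; the “headroom realized” metric is just illustrative arithmetic, not anything the reviews computed):

```python
# Rough sanity check: how much of the raw memory-bandwidth increase
# actually shows up as IGP frame-rate gains (figures quoted upthread).
def pct(new, old):
    """Percentage change from old to new."""
    return 100.0 * (new - old) / old

bw_1333_to_1600 = pct(1600, 1333)   # ~20% more raw bandwidth
fps_gain_tr = 7.0                   # TR's result: roughly a 7% FPS boost

bw_1333_to_1866 = pct(1866, 1333)   # ~40% more raw bandwidth
fps_gain_other = 12.5               # midpoint of the "10-15%" reports

print(f"1333->1600: +{bw_1333_to_1600:.0f}% bandwidth -> +{fps_gain_tr:.0f}% FPS "
      f"({fps_gain_tr / bw_1333_to_1600:.0%} of the headroom realized)")
print(f"1333->1866: +{bw_1333_to_1866:.0f}% bandwidth -> +{fps_gain_other:.0f}% FPS "
      f"({fps_gain_other / bw_1333_to_1866:.0%} of the headroom realized)")
```

In both cases only a fraction of the extra raw bandwidth turns into frames, which is consistent with the IGP being partly, but not purely, bandwidth-limited.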

      • Anonymous Coward
      • 8 years ago

      Good move!

      • raddude9
      • 8 years ago

      Yea, the results are a bit skewed compared to other review sites, mostly because they stuck with the old DDR3-1333 memory. Techspot has both DDR3-1333 and DDR3-1866 memory in its review, and in some benchmarks, like Handbrake, the faster memory gives the chip a 20% boost.

      Results like that make the Techreport review look biased.

        • Hattig
        • 8 years ago

        Right now, with DDR3-1866 costing significantly more than DDR3-1333/DDR3-1600, I think that in a cost-conscious system you really do need to consider that cost aspect. Combined with the additional $30 APU cost over a similarly performing Phenom II, you have enough for a discrete graphics card of similar performance to Llano.

        And it won’t stop it being perfect for mainstream users buying from retail. It’s true they care little for power consumption, absolute performance, and so on. This will play WoW and whatever casual game they want to play today just fine, even with DDR3-1333. So no issue.

          • raddude9
          • 8 years ago

          Granted, the 1866 RAM is a bit more expensive, but I was trying to make the point that faster RAM, even DDR3-1600, would probably make a difference in the CPU-specific tests, not just games.

            • FuturePastNow
            • 8 years ago

            I think DDR3-1600 would probably be better than 1866 if it’s really low latency, like CAS 6 or 7, and that’d still be cheaper than 1866. I’d like to see that tested.

            • Arag0n
            • 8 years ago

            Disagree. That may be true for CPU purposes, but in this case I think the bandwidth for loading textures and models into the GPU is the primary factor, given that textures and 3D models are usually hundreds of kilobytes or a few megabytes apiece, rather than small pieces of random data.

            • FuturePastNow
            • 8 years ago

            Regardless, it’s something I’d like to see tested.

            • Palek
            • 8 years ago

            I am somewhat confused by this exchange… The article clearly states that Scott tested Llano using DDR3-1600 memory. Please look at [url=https://techreport.com/articles.x/21208/3<]page 3 of the article[/url<]. The chart at the bottom says DDR3-1333 & DDR3-1600 for Llano. The additional memory speed is useless when the Llano IGP is inactive, so Scott used DDR3-1600 only for the Llano IGP tests.

            • FuturePastNow
            • 8 years ago

            He didn’t test DDR3-1866, which other reviewers found to make a big difference, and the DDR3-1600 he used wasn’t particularly special in terms of timings and latency (the testing methods page shows 8-8-8-20 2T, which is the same as it shows for the DDR3-1333 tested, so that may be a typo).

            So the other people in this thread want to see more DDR3-1866 tests, and I’d like to see tests with aggressively timed DDR3-1600, which is still cheaper than DDR3-1866 currently.

            We’re all just trying to find the best bang for the buck, mainly.

            • Palek
            • 8 years ago

            Gotcha.

            My understanding is that pure speed is much more important for graphics chips and tightened timings don’t do much, since graphics chips tend to transfer large contiguous chunks of data in one go rather than perform lots of small random accesses. I’m not sure improved timings would do much for the IGP directly, other than as a side effect of speeding up transaction completions for the CPU.

            Every little bit counts, but the returns here are probably minuscule, even less than going from 1333 to 1600. I could be wrong, of course…

            • raddude9
            • 8 years ago

            Nope, page 3 only says that the 1600MHz RAM was used for the integrated graphics testing:

            [quote<]Also, for integrated graphics testing, we used the following system configs:[/quote<]

            All of the other tests were done using 1333MHz RAM. Other reviews have clearly shown that the Llano chip greatly benefits from 1600MHz or faster RAM, even for CPU-bound tests.

            • Anonymous Coward
            • 8 years ago

            I think we have different definitions of “greatly”.

            • raddude9
            • 8 years ago

            To me a 24% difference is a “great” difference:

            [url<]http://www.techspot.com/review/418-amd-a8-3850-apu/page9.html[/url<]

            The Techspot review has an example of a 24% difference in performance when going from 1333MHz RAM to 1866MHz, and that was not in the integrated graphics tests; that was with Handbrake video encoding. Other CPU-bound benchmarks also show a greater than 10% difference. Which is the whole point of what I was saying. Techreport disregarded the faster RAM options in their review, only trying out the faster (but still only 1600MHz) RAM when it came to testing the integrated graphics. 1600MHz RAM is only a couple of dollars more expensive these days, so why didn’t they use it in all of their tests?
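As a quick cross-check of those numbers, a minimal sketch: the 24% Handbrake figure is from the linked TechSpot review, while the “realized” fraction below is illustrative arithmetic of my own, not something TechSpot reported:

```python
# Cross-check: raw bandwidth ratio of DDR3-1866 over DDR3-1333 versus
# the ~24% Handbrake speedup reported by TechSpot.
bw_ratio = 1866 / 1333        # ~1.40x raw memory bandwidth
handbrake_gain = 0.24         # reported encode speedup (1333 -> 1866)

# Fraction of the extra bandwidth that showed up as encode speed
realized = handbrake_gain / (bw_ratio - 1)
print(f"+{bw_ratio - 1:.0%} bandwidth -> +{handbrake_gain:.0%} encode speed "
      f"({realized:.0%} realized)")
```

So roughly 60% of the raw bandwidth increase showed up as encode speed in that test, which, if accurate, would indeed suggest the CPU cores are bandwidth-starved, not just the IGP.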

            • Palek
            • 8 years ago

            [quote<]nope, page 3 only says that the 1600Mhz Ram was used for the Integrated graphics testing.[/quote<]

            ...Which is exactly what I said in the post you just replied to. Let me quote myself:

            [quote<]The additional memory speed is useless when the Llano IGP is inactive, so Scott used DDR3-1600 only for the Llano IGP tests.[/quote<]

            • raddude9
            • 8 years ago

            My apologies, I thought you had taken the wrong meaning. Which I think is understandable because afterwards you said:

            [quote<]The additional memory speed is useless when the Llano IGP is inactive, so Scott used DDR3-1600 only for the Llano IGP tests.[/quote<]

            Which is completely and utterly wrong; faster RAM can make up to a 24% difference when it comes to CPU-bound tests:

            [url<]http://www.techspot.com/review/418-amd-a8-3850-apu/page9.html[/url<]

            This was the whole point of this thread, a point I think you missed.

            • Palek
            • 8 years ago

            No worries! 🙂

            And it’s my turn for the apologies, because the bench scores at TechSpot are quite impressive. I – mistakenly, as it turns out – assumed that the CPU cores in Llano were not starved for bandwidth, especially when a discrete graphics card was used. I seem to remember that even faster Phenom II CPUs were pretty much maxed out with DDR3-1333 and going with faster memory would not give them a performance boost. So, it appears that Llano can in fact make very good use of the extra RAM speed!

            Thanks for re-educating me! (^_^)b

            • willmore
            • 8 years ago

            But Phenom II has a large L3 cache. Llano is more like an Athlon X4 with a beefed-up L2 cache.

            • Palek
            • 8 years ago

            That was so obvious it was staring me in the face, yet I didn’t think of it. Thanks for pointing it out.

      • derFunkenstein
      • 8 years ago

      Damage had something to say about this today:

      [url<]http://twitter.com/#!/scottwasson/status/88662266597224449[/url<]

      [quote<]Llano DRAM speeds & Portal 2 FPS: 1333MHz/2T = 52; 1600MHz/2T = 53; 1866MHz/1T = 54. Wow, what a revelation.[/quote<]

      I'm a LITTLE surprised, because this means that either the Llano CPU isn't fast enough to drive Portal 2 to 60 FPS (not true, because with a 6670 it can run up to 85 FPS) or the GPU is too slow to make use of a 50% increase in system bandwidth. Either way, it seems that faster RAM is not warranted.

      I think it's the GPU, because it's running at only 600MHz. The discrete 6670 runs at 800MHz with 20% more shaders, so the 53->85 FPS jump is pretty reasonable if it's attributed entirely to the GPU.
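That back-of-the-envelope GPU-scaling argument can be checked numerically. A minimal sketch using the clock, shader, and FPS figures from the comment and quoted tweet above (the 1.2x shader factor and 53/85 FPS values come from there; treat them as rough):

```python
# Does the HD 6670's FPS advantage line up with its raw-throughput
# advantage over the Llano IGP? Figures from the comment and tweet above.
igp_clock, igp_fps = 600, 53        # A8 IGP: 600MHz, 53 FPS in Portal 2
disc_clock, disc_fps = 800, 85      # HD 6670: 800MHz, ~85 FPS
shader_ratio = 1.20                 # "20% more shaders"

throughput_ratio = (disc_clock / igp_clock) * shader_ratio
fps_ratio = disc_fps / igp_fps

print(f"theoretical throughput ratio: {throughput_ratio:.2f}x")  # 1.60x
print(f"observed FPS ratio:           {fps_ratio:.2f}x")         # 1.60x
# The close match supports reading the Portal 2 numbers as GPU-limited
# rather than memory-bandwidth-limited.
```

The two ratios land within a percent of each other, which is about as clean a confirmation of the "it's the GPU" reading as a napkin calculation can give.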

        • Arag0n
        • 8 years ago

        Then he should explain why other websites report 10 to 20% improvements… I don’t care why Portal 2 doesn’t seem to improve with faster memory; I only care that for half of the games it seems to do a pretty good job of improving speed. I think Scott’s tweet is out of place…. Does he actually read the other sites’ reviews?

        I’m sorry, but it looks like the “bad student” excuse: you fail an exam and point out that “most of the class failed,” despite not having studied at all. They missed doing the tests with 1866; there’s no excuse for that….

    • Krogoth
    • 8 years ago

    First post!

    Here comes the impending flame wars!

    Anyway, the article confirms what I have known about Llano. It is a mainstream desktop part; it was never meant to go up against Intel’s performance offerings. Despite this, it still offers sufficient CPU performance for the majority of computer users, who aren’t using their systems for real work. Its encoding performance isn’t too bad, but there are far better choices and values in this area (Phenom X6s, non-K Sandy Bridges).

      • axeman
      • 8 years ago

      I agree. The IGP is nice, but it will still be a tough sell. If AMD’s CPU designs were even a little closer to Sandy Bridge performance-wise (even if they couldn’t match it), this would be more interesting. Don’t get me wrong, having that much power in the IGP is a good thing; Intel’s IGPs are an irritating lack of real effort, IMO. But AMD was already toasting Intel on that front without needing to put the GPU on the CPU itself, and it hasn’t brought mad profits.

      When AMD was pretty much smashing Intel in the K8 days, they should have been putting every last dollar of R&D into the next generation of AMD processors while Intel was scrambling to replace Netburst with something less stupid. Instead, AMD bought ATI and brought out incremental improvements of the K8 design. Now Intel is resting on its laurels while AMD seems to have lost its way. This brings back memories of VIA for me. They couldn’t compete, so they focused on a niche market of ultra-low-power designs. AMD can’t compete, so now they’re focusing on graphics capabilities? For 95% of PCs sold, what drives the graphics output is about as important as what controls the USB ports. Not too much longer, and AMD might be as irrelevant as VIA. For Pete’s sake, get your act together, AMD. Anyone who remembers CPU prices before AMD brought out a competitive design probably doesn’t want AMD to slide back into irrelevance. Regardless of whether you stick with the Green or Blue camp, competition is good for us consumers.
      AMD should focus on CPU designs; they can shrink any of their GPU devices down to put on a chipset or CPU die at any time and smash Intel’s lousy graphics. The CPU is where they’ll live or die.

      WaltC, signing off.

        • Krogoth
        • 8 years ago

        CPU performance doesn’t matter as much as it used to for the majority of the computing market. Modern quad-core and dual-core chips can handle almost any mainstream task without too much fuss. The only areas where CPU performance still matters are the workstation, server, and HPC markets. That is where Bulldozer is going to go up against Intel’s current offerings; I suspect it will be an interesting battle. Llano was never meant to be a workstation part; it was always meant to be a mainstream product.

        GPUs matter far more for gaming performance, and while Fusion’s IGP is impressive, it still falls short of a decent discrete solution. It is only good for casual gaming at modest resolutions and in-game detail settings. The technology might be useful for next-generation consoles and portable gaming devices.

        Hopefully Llano will make Intel re-evaluate the importance of having decent IGP solutions in their mainstream platforms. Competition FTW!

        • HisDivineOrder
        • 8 years ago

        Interesting take on those times. You realize that even when “AMD was pretty much smashing Intel,” they still weren’t making more money, because Dell was the number 1 retailer of PCs, and Dell was Intel-only because Intel was using monopolistic tactics to keep Dell (and several other OEMs) exclusively Intel, with pricing that made it insane for any OEM who needed large numbers of processors to go with anything EXCEPT Intel.

        AMD was smashing in those days thanks mostly to enthusiasts, and enthusiasts (aka “those who build their own computers”) will never make a processor company rich. Just ask anyone who’s ever made a CPU, GPU, APU, or PPU whether the majority of their money came from the “build your own” crowd or the bulk, pre-built crowd. nVidia and ATI/AMD were as close to the enthusiast market as any company, but they sure did put a lot of their focus on those IGPs. Intel had the largest share of GPUs despite not having a single enthusiast product.

        Your opinion that AMD should have put more money into R&D is interesting because by your own admission, it was Intel’s mistake–not AMD’s superior R&D–that let AMD “smash them.” That mistake was Intel’s. Intel believed they could wade into the market, tell people what they should use, and that they’d just go use it, even if the rest of the industry was saying they wanted to go a different way (RDRAM vs DDR). Intel’s mistake of hubris led them to use Netburst which was better suited to using RDRAM (at least at first) and so Intel thought they could force the industry to go that way, no matter how much it cost.

        Clearly, that did not work. Even so, did Dell go AMD in the interim? No. No, they did not. In fact, Dell did not go AMD until AMD’s time “smashing” Intel was nearly done. And because AMD did not have the capacity to manufacture enough CPUs (and was contractually prevented from outsourcing fabrication to outside parties, since the agreement between AMD and Intel explicitly forbade it), they could never match Intel’s production capacity and pricing.

        Now throw on top of all that the fact that you want them to do some R&D. They did the R&D and determined the future was Fusion, so they had to own a GPU company. If Intel had not cheated to keep all that market share despite failing to deliver a superior product for YEARS, perhaps AMD would have had more money and could have easily afforded both their Fusion future AND a new CPU design. Alas, Intel cheated, took forever to admit they did AMD wrong, and longer still to pay up. Even then, the payout was pretty small compared to the profits Intel makes year after year thanks to the insane head start they got from all the gotchas they used against AMD back in the day.

        Your stunning inability to see all this makes me doubt your intentions.

      • flip-mode
      • 8 years ago

      I marvel at how you always already know everything and just read articles to get confirmation.

        • Krogoth
        • 8 years ago

        It is not that difficult to project Llano’s performance once AMD spilled the beans on its guts. It is just a tweaked K10.5L (Athlon II X4) with an HD 6830/6850 bolted on. Llano’s performance fits nicely with that kind of setup.

        The only uncertainty was its power consumption, since this was AMD/GF’s first commercial product built on the 32nm process.

          • swaaye
          • 8 years ago

          The GPU is Redwood-like so it’s basically a 6650/5650. It’s not Barts / 68xx.
