AMD’s A10-4600M ‘Trinity’ APU reviewed

One of the big stories in PC processors over the past few years has been AMD’s struggles to match the performance of Intel’s high-end desktop CPUs. The much-anticipated “Bulldozer” microarchitecture landed with a thud, unable to mount a serious challenge to the dominance of Intel’s Core i5 and i7 offerings. Meanwhile, Intel continues to crank out major improvements to these products at a pretty regular clip, as it did with the introduction of the 22-nm Ivy Bridge chips last month.

However, there is another, even bigger story unfolding in PC processors at the same time, and AMD plays a more intriguing role in it. As you may know, CPUs have swallowed up a whole host of other system components in the past few generations, from the memory controller to I/O and graphics. The reasons for this trend are several. Integration can sometimes deliver higher performance—bringing the memory controller onboard provided a nice boost, for instance—but it can also cut costs, reduce the physical size of the platform, and improve power efficiency. With the rise of mobile computing, Intel and AMD have pushed ever closer to the ideal of a single-chip PC solution.

In that context, last year’s introduction of the A-series processors, based on the chip code-named Llano, was a big stride forward for AMD. Llano achieved several important milestones at once. For one thing, it essentially matched Intel’s competing products for battery run time; parity on this front had long eluded AMD. For another, Llano was the first AMD processor to incorporate the Radeon graphics technology the firm had acquired years before by purchasing ATI. As you might expect, Llano’s graphics capabilities gave it instant credibility and a clear leg up on Intel’s anemic integrated graphics processor (IGP). We liked the mobile Llano variant well enough to consider it a viable alternative to Intel’s dual-core Sandy Bridge processors—perhaps even a superior choice for most folks, given the gap in graphics capabilities.

Llano had its limitations, though. The supply of 32-nm chips from AMD manufacturing partner GlobalFoundries was spotty for quite a while. The chip didn’t translate well to desktop-class power envelopes, in our view. And it looked very much like a first-generation effort in a lot of ways. AMD talked endlessly about CPU-GPU “fusion” and dubbed the A-series products “APUs,” for “accelerated processing units,” but Llano stopped well short of making the IGP into a true co-processor.


A Trinity chip. Source: AMD.

Today, AMD is ready to take the next step with the introduction of the second-generation APU known as Trinity. Nearly everything about it is new, from the CPU cores to the IGP and the various bits of glue that hold everything together. The integration in Trinity is more mature, with more benefits and fewer visible seams between the processor’s various components. Thus, although Trinity is manufactured using the same 32-nm SOI fabrication process as Llano, AMD claims Trinity doubles its predecessor’s power-performance ratio. That claim takes several forms; most prominently, there is a 17W version of Trinity that purportedly performs like a 35W Llano variant. If true, AMD ought to have a very nice offering to slide into the ultra-thin laptops that are all the rage these days.


An overview of the Trinity die. Source: AMD.

The annotated image above points out Trinity’s main components. The CPU portion of the chip includes four integer cores and two FPUs based on the “Bulldozer” microarchitecture. In fact, Trinity is the first chip to incorporate AMD’s “Piledriver” architectural updates. More on those shortly. Also updated from Llano is Trinity’s IGP, which is derived from the “Northern Islands” generation of Radeons. The memory controller remains a dual-channel affair, capable of supporting DIMMs up to 1866 MT/s, though 1600 MT/s is the top speed for mobile parts. Trinity’s media processing block still decodes a host of video formats but has learned a new trick: hardware-accelerated H.264 encoding. And for communication with the outside world, the chip has 24 lanes of PCI Express Gen2 connectivity. Gone is the HyperTransport link used in AMD processors for ages; this chip talks to its Fusion Controller Hub I/O support chip via dedicated PCIe lanes, instead.

| Code name | Key products | Cores | Threads | Last-level cache size | Process node (nm) | Estimated transistors (millions) | Die area (mm²) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sandy Bridge | Core i3, i5 | 2 | 4 | 4 MB | 32 | 624 | 149 |
| Sandy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 32 | 995 | 216 |
| Ivy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 22 | 1400 | 160 |
| Llano | A4 | 2 | 2 | 1 MB x 2 | 32 | 758 | |
| Llano | A8, A6, A4 | 4 | 4 | 1 MB x 4 | 32 | 1450 | 228 |
| Trinity | A10, A8, A6 | 4 | 4 | 2 MB x 2 | 32 | 1303 | 246 |

Trinity isn’t an especially large chip, as these things go, but it is a little larger than Llano—despite a lower transistor count estimate—and it’s quite a bit larger than the quad-core versions of Sandy and Ivy Bridge. Then again, the 22-nm Ivy Bridge quad is positively tiny.

Piledriver: somewhat heavier equipment

Trinity’s use of the Bulldozer CPU architecture gives it a host of features that Llano lacked, including AES encryption acceleration and AVX instructions for wider floating-point vector processing. Bulldozer’s basic layout also makes Trinity a very different beast than Llano. This architecture’s fundamental building block is a compute “module” that can process two threads simultaneously. Although AMD claims the module has two distinct integer cores, those cores share some key resources, including the instruction fetch and decode units, an L2 cache, and a floating-point math unit (FPU). The shared structures have been upgraded substantially from prior AMD CPUs, to better service two integer cores at once. Trinity has two of these compute modules, giving it four threads, four integer “cores,” and two FPUs. Each of those modules has 2MB of L2 cache. By contrast, Llano has four distinct cores, each with its own FPU and 1MB of L2 cache, with no sharing. (One similarity between Llano and Trinity is the omission of an L3 cache. AMD deemed the L3 a power efficiency liability in Llano, and it appears to have held to that conviction with Trinity.)

To date, Bulldozer’s performance hasn’t fulfilled the expectations created by its extended feature set. The desktop FX-8150 processor is barely quicker than the older Phenom II X6 in most cases, for instance, and its per-clock performance is actually lower than the prior-gen processor’s. Some of that is by design; Bulldozer is intended to run at higher clock frequencies, and it gives up some per-clock performance in order to do so. Still, the revised “Piledriver” CPU cores in Trinity have been tweaked for higher instruction throughput in each clock cycle.

Although some folks probably expected a quick fix for the Bulldozer architecture that would yield some sizeable performance gains, that doesn’t appear to be what’s happened. Instead, Piledriver incorporates a fairly broad range of improvements, none of which contributes much more than 1% to overall per-clock instruction throughput. (I believe the cumulative total is somewhere around a 6% IPC improvement, generally, but my notes are fuzzy on that one.)


Changes from Bulldozer to Piledriver. Source: AMD.

One of the most notable changes in Piledriver is support for a couple of new instructions. The addition of a three-component fused multiply-add instruction, FMA3, brings AMD in line with Intel’s plans for its upcoming Haswell chip. That should clear up any confusion about this workhorse of the AVX extensions. (Support for Bulldozer’s FMA4 instruction remains.) Furthermore, Piledriver allows quick conversions between 16- and 32-bit floating-point data formats via the F16C instruction, which debuted in the Intel camp on Ivy Bridge.
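For the curious, here’s a minimal sketch of how the two flavors differ from a programmer’s point of view, using the standard compiler intrinsics. (This is our illustration, not AMD sample code; the function names are ours, and the FMA3 form is the one Intel’s upcoming FMA-capable chips are slated to share.)

```c
#include <immintrin.h>  /* AVX and FMA3 intrinsics; compile with -mfma */
#include <x86intrin.h>  /* FMA4 intrinsics on GCC/Clang; compile with -mfma4 */

/* Both functions compute d = a*b + c on eight packed floats in one fused step. */

/* FMA3: three operands, so at the instruction level the destination
   must overwrite one of the sources. Piledriver adds this encoding. */
__m256 madd_fma3(__m256 a, __m256 b, __m256 c)
{
    return _mm256_fmadd_ps(a, b, c);
}

/* FMA4: four operands, so all three sources survive. Bulldozer
   introduced it, and Piledriver retains it. */
__m256 madd_fma4(__m256 a, __m256 b, __m256 c)
{
    return _mm256_macc_ps(a, b, c);
}
```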

Among the other tweaks to improve instruction throughput, the highest-impact change is probably the doubling in size of the L1 data cache’s translation lookaside buffer. The TLB caches recently used virtual-to-physical address translations, so a larger one means fewer slow page-table walks. Beyond that, nearly every part of the chip has been massaged, save for the execution units. The branch predictor is more accurate, thanks to an innovation borrowed from the Bobcat core. The integer and FP schedulers are more aggressive about retiring instructions, making them effectively larger without a structure size increase. And the hardware prefetcher can better predictively populate the L2 cache, in part because it has been tuned for client-style workloads (whereas Bulldozer is tuned for servers).

As sweeping as the changes may look on paper, they are apparently rather modest in their cumulative effect. However, performance boosts can come from other sources, and Piledriver has been optimized to achieve higher clock frequencies at lower power levels. AMD tells us Piledriver responds much better than Llano’s cores to changes in voltage, allowing wider latitude for clock frequencies and finer-grained control over those speeds. For a mobile-focused CPU (err, APU) like Trinity, such things tend to be especially helpful.

A new IGP based on, uh, proven technology

Trinity’s integrated graphics are a generation beyond Llano’s and are, in terms of basic capabilities, pretty well up to date. They’re also based on an older generation of discrete graphics chips, “Northern Islands,” most familiar from the Radeon HD 6900 series of video cards. AMD’s current GCN architecture didn’t make the cut.


Logical block diagram of Trinity’s IGP. Source: AMD.

There’s your requisite block diagram of the graphics portion of the chip. If you have really good glasses, you could count all of the units yourself. Trinity’s IGP has six SIMD engines and sports a total of 384 shader ALUs. Each SIMD engine has a texture unit capable of filtering four texels per clock, so the IGP totals 24 texels per cycle. The two render back-ends can blend eight pixels per clock.

None of those are numbers particularly breathtaking. Llano’s IGP has 5 SIMD engines, 400 ALUs, 20 texels per clock of filtering throughput, and dual render back-ends. Still, Trinity’s IGP should make better use of its resources. Trinity’s IGP trades up to a VLIW4 shader execution unit that is more area efficient. Llano’s VLIW5 design has a fifth “fat” ALU for certain types of functions, and the other four ALUs have a subset of its abilities. The Northern Islands shader core eliminates that fifth ALU and grants full and equal functionality to the other four units. This new arrangement seems to work well aboard the Radeon HD 6900 series. Northern Islands also brings some improvements in tessellation performance, thanks to improved buffering intended to manage the difficult data flow issue created by geometry expansion.

Importantly for AMD’s plans, the Northern Islands graphics core is better suited for non-graphics computing, too. The VLIW4 shaders should map well to a broader range of data sets, and this core adds the ability to execute multiple, independent kernels (or programs, essentially) at once, each with its own command queue and address domain.

None of those enhancements is likely to provide as much uplift versus Llano as one other change: higher IGP clock speeds. The fastest mobile Llano IGP runs at 444MHz, but Trinity’s IGP operates at frequencies as high as 686MHz. When combined with the architectural enhancements and the slight bump from five SIMDs to six, the higher clock speed should make Trinity’s IGP a considerable upgrade from Llano’s. Texture filtering capacity is nearly doubled, and other key rates are up by 40-50%, with the notable exception of memory bandwidth, which depends on the DIMM speed.
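If you run the numbers on peak texture filtering rates using the unit counts and clocks above (our arithmetic, not AMD’s), the “nearly doubled” claim checks out:

\[
\frac{24 \text{ texels/clk} \times 686\,\text{MHz}}{20 \text{ texels/clk} \times 444\,\text{MHz}} \approx \frac{16.5\,\text{Gtexels/s}}{8.9\,\text{Gtexels/s}} \approx 1.85\times
\]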

Although Trinity’s IGP isn’t based on the latest architecture, its associated media processing block is AMD’s most recent vintage. The UVD3 video decode engine adds support for the MVC extension to H.264 for stereoscopic 3D, for the MPEG-4/DivX format, and for decoding dual HD streams simultaneously. The brand-new VCE block throws hardware-accelerated H.264 encoding into the mix, too—something that’s important not just for performance and power efficiency reasons, but also for enabling new features like wireless displays.

Speaking of displays, Trinity can drive as many as four at once over HDMI, DVI, and DisplayPort. AMD has blazed the trail for DisplayPort adoption among consumer systems, and this chip supports DisplayPort 1.2 operation at up to 5.4 Gbps, including the daisy-chaining of multiple monitors on a single link. The APU can bundle sound into its digital display connections, as well—as many as four 7.1-channel audio streams, with broad support for digital encoding standards, including DTS Master Audio and Dolby TrueHD.

Better integration for power savings

Llano’s battery life is pretty good, but AMD claims Trinity is even better, with run times in some configurations extending as far as 10 hours. One very impressive number in this regard is the chip’s idle power draw of 1.08W. Battery run times will, of course, depend on more than just the CPU’s power consumption, but Trinity looks to be doing its part to conserve power.

Power-efficient performance should get a boost thanks to more capable power management and dynamic clock speed scaling. Llano could only trade power in one direction, with the CPU scaling up or down via Turbo Core depending on the needs of the IGP. Trinity’s IGP can join the game, now, too, allowing the whole chip to adjust its performance in response to the current workload.


How the A10 adjusts to different workloads. Source: AMD.

The example above shows how the A10-4600M’s IGP and CPU clock frequencies change in response to different workloads. A heavy CPU load with light graphics use results in a moderate IGP clock and higher CPU frequencies. The CPU speed then varies based on the number of threads active; with a single thread, the A10 CPU can reach 3.2GHz. On the other hand, for a GPU-intensive application with modest CPU needs, the IGP clock jumps up and the CPU speed scales down.

Since they’re based on Piledriver, the CPU modules in Trinity have a much more capable implementation of Turbo Core than Llano. Llano has only one P-state above its stock clock speed. The A8-3500M’s base frequency is 1.5GHz, and when Turbo kicks in, the clock jumps to 2.4GHz. Trinity has finer-grained control, with four P-states for Turbo Core. Trinity is also able to respond much more quickly to changes in activity and die temperatures, thanks to an onboard power-management microcontroller and an architecture that’s designed to operate well at different frequencies across a range of voltages.

One way Trinity manages to achieve such low power draw at idle is more extensive power gating. In addition to power gates for the IGP and the two CPU modules, this chip adds gates for the north bridge, the PCIe interface, and the display PHY. When those portions of the chip aren’t in use, they can be shut off entirely, eliminating even the leakage power that would otherwise be going to them.

The conservation effort extends to the rest of the platform, too. Trinity’s memory controller can adjust DRAM frequencies on the fly in order to conserve energy, and it supports the low-power DDR3 standard for driving DIMMs at 1.25V. The VRMs can make faster transitions, improving efficiency. Also, the number-one activity for nearly all computer users is now more economical: staring at a static screen. Trinity can refresh a static display from a single memory module, allowing the other DIMM to scale back or to power down. The chip has more buffering for display memory, too, which should save power that would otherwise be spent on memory I/O.

Accelerating accelerated computing

AMD has talked a good game about CPU-GPU convergence and accelerated computing for a while now, but it is also laying the foundation for true GPU-IGP cooperation. One key bit of plumbing on that front is something called the Fusion Compute Link. The FCL replaces the PCIe communication channel between the CPU and GPU in a merged chip like Trinity. Llano’s first-generation FCL had only modest bandwidth, but AMD promised to invest more in this connection over time. Trinity’s FCL is 128 bits in each direction. This connection allows the IGP to access the CPU’s memory space coherently, and it gives the CPU a window into the IGP’s dedicated frame buffer. Given the right programming model, which AMD is pioneering with its software work on the Heterogeneous System Architecture, the FCL could become important in future converged applications, where the IGP and CPU might team up to manipulate data in the same memory.
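To make the idea concrete, here’s a hypothetical OpenCL host-side sketch of the sort of allocation that could avoid copies on an APU. We’re assuming the driver backs CL_MEM_ALLOC_HOST_PTR buffers with memory the IGP can reach coherently over a link like the FCL; that’s the behavior HSA aims to formalize, not something AMD documents for this exact path. Error checking is omitted for brevity.

```c
#include <CL/cl.h>

/* Allocate a buffer in host-visible memory and map it so the CPU can
   fill it in place. On an APU, the IGP could then read the same pages
   coherently rather than waiting for a copy into its frame buffer. */
cl_mem make_shared_buffer(cl_context ctx, cl_command_queue queue,
                          size_t bytes, float **host_view)
{
    cl_mem buf = clCreateBuffer(ctx,
        CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR, bytes, NULL, NULL);

    /* Blocking map: returns a CPU pointer into the buffer's storage. */
    *host_view = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                    0, bytes, 0, NULL, NULL, NULL);
    return buf;
}
```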

The FCL augments the IGP’s primary path to system memory, which is a pair of 256-bit links (one in each direction) between the graphics memory controller and the north bridge.

Trinity is ready to support merged applications with discrete GPUs, too. Its IOMMU will allow the shaders in PCIe graphics cards to operate directly on main memory, and it’s capable of supporting GPU virtualization.

The new A-series APUs

| Model | TDP | Cores | CPU clock (GHz) | L2 cache | Graphics ALUs | Graphics clock (MHz) |
| --- | --- | --- | --- | --- | --- | --- |
| A10-4600M | 35W | 4 | 2.3/3.2 | 4MB | 384 | 497/686 |
| A8-4500M | 35W | 4 | 1.9/2.8 | 4MB | 256 | 497/655 |
| A6-4400M | 35W | 2 | 2.7/3.2 | 1MB | 192 | 497/686 |
| A10-4655M | 25W | 4 | 2.0/2.8 | 4MB | 384 | 360/497 |
| A6-4455M | 17W | 2 | 2.1/2.6 | 2MB | 256 | 327/424 |

Naturally, AMD has a range of Trinity-based APUs on offer. The fastest model is the A10-4600M, which we’ve already seen in our dynamic power scheme example. The A10-4600M is also the chip we have for review today. As you can see, it has all of Trinity’s cores and cache enabled, running at aggressive clock speeds. With its 35W TDP, the 4600M will serve nicely to illustrate AMD’s progress since the Llano-based A8-3500M we reviewed last year—and have brought back here for an encore. AMD expects the A10 series to make its way into laptops costing $700 and more, where it will compete with the lower end of the mobile Core i7 line and the high end of the Core i5 lineup.

We have a couple of laptops based on Intel chips in the same basic class as the A10-4600M for comparison, too. The Core i7-2670QM is a quad-core Sandy Bridge with a 2.2GHz base clock and a 3.1GHz Turbo peak that is selling in laptops costing $659 and up at Newegg. The Core i7-3720QM is an Ivy Bridge-based quad-core in the same price range, although it’s too new to have a robust selection of systems available. Those few that are available currently cost quite a bit more than $700. A bigger wrench in the works is the fact that both Intel chips have 45W TDP ratings, so they have more room from which to extract performance than the A10-4600M. The most direct competition from Intel at present may be the Sandy Bridge-based Core i7-2640M, a dual-core part with a 35W TDP that costs about $30 less than the i7-3720QM, but it is a near-run thing. AMD has positioned the A10 very close to those Intel quad-cores, obviously quite intentionally.

The rest of the lineup plays out much as one might expect. The A8-4500M will occupy systems costing $550 or more, facing off against the lesser Core i5s and greater Core i3s. The A6-4400M, with only one compute module (and thus two cores) enabled, will do battle with the Core i3 in laptops above the $450 mark. As far as we know, the A6 parts are actually quad-core Trinity chips with two cores disabled. AMD wasn’t willing to disclose any plans to produce a natively dual-core version of Trinity.

The most interesting Trinity parts, in our view, are the 25W and 17W models. The 17W version is the one destined for those ultra-thin MacBook Air clones, and we’re very much intrigued by its potential. It may prove to be a nice alternative to the dual-core version of Ivy Bridge, once the dual Ivy chip arrives later this summer.

The Trinity whitebook

Our Trinity APU sample came enclosed in a 14″ whitebook.

The system looks and behaves almost like a retail product, but it isn’t one. It’s etched with AMD’s corporate logo instead of a vendor’s badge, and it lacks the fit and finish of a commercial system. One tell-tale sign is the optical drive, which sticks out a little past the lower edge of the system’s body—not enough to snag on something, but enough to make it clear this is a prototype.

Unlike the Llano whitebook we reviewed last year, this one lacks a discrete GPU. The APU’s two memory channels are fed with two 2GB DDR3 SO-DIMMs clocked at 1600MHz. AMD threw in a 128GB Samsung 830 solid-state drive, as well. We ruthlessly replaced it with a 500GB WD Scorpio Black hard drive to keep our benchmark comparisons fair.

Speaking of benchmark comparisons, we had some trouble gathering adequate contestants for this match-up. The 13″ Llano whitebook made a return appearance, as you’d expect, but the dual-core Sandy Bridge notebook against which we compared it last year wasn’t available for an encore. We do, however, have a couple of quad-core Intel notebooks on hand: one based on Sandy Bridge, and another based on Ivy Bridge.

Those two notebooks are detailed below. They’re both larger than the Trinity and Llano whitebooks, with 15″ displays and thicker, heavier frames. Both are outfitted with GeForce GT 630M discrete graphics, which we didn’t use in our tests, and both have twice as much RAM as the AMD whitebooks—eight gigs—but we don’t expect memory capacity to be a constraint in any of our tests. The most notable difference is that the Intel notebooks have 45W processors. Keep that in mind as you see the results on the following pages; the Intel parts have a built-in advantage, since their power envelope is 10W larger.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.

The test systems were configured like so:

| | AMD A8-3500M test system | AMD A10-4600M test system | Asus N56VM | Asus N53S |
| --- | --- | --- | --- | --- |
| Processor | AMD A8-3500M APU 1.5GHz | AMD A10-4600M 2.3GHz | Intel Core i7-3720QM 2.6GHz | Intel Core i7-2670QM 2.2GHz |
| North bridge | AMD A70M FCH | AMD A70M FCH | Intel HM76 Express | Intel HM65 Express |
| Memory size | 4GB (2 DIMMs) | 4GB (2 DIMMs) | 8GB (2 DIMMs) | 8GB (2 DIMMs) |
| Memory type | DDR3 SDRAM at 1333MHz | DDR3 SDRAM at 1600MHz | DDR3 SDRAM at 1600MHz | DDR3 SDRAM at 1333MHz |
| Memory timings | 9-9-9-24 | 11-11-12-28 | 11-11-11-28 | 9-9-9-24 |
| Audio | IDT codec | IDT codec with 6.10.0.6277 drivers | Realtek codec with 6.0.1.6537 drivers | Realtek codec with 6.0.1.6463 drivers |
| Graphics | AMD Radeon HD 6620G + AMD Radeon HD 6630M with Catalyst 12.4 drivers | AMD Radeon HD 7660G with Catalyst 8.945 RC2 drivers | Intel HD Graphics 4000 with 8.15.10.2696 drivers; GeForce GT 630M with 296.54 drivers | Intel HD Graphics 3000 with 8.15.10.2462 drivers; GeForce GT 630M with 296.54 drivers |
| Hard drive | Hitachi Travelstar 7K500 250GB 7,200 RPM | WD Scorpio Black 500GB 7,200 RPM | Seagate Momentus 750GB 7,200 RPM | Seagate Momentus 750GB 7,200 RPM |
| Operating system | Windows 7 Ultimate x64 | Windows 7 Ultimate x64 | Windows 7 Professional x64 | Windows 7 Home Premium x64 |

Thanks to Asus for volunteering a quad-core Sandy Bridge laptop, and thanks to AMD and Intel for providing the other systems.

We used the following versions of our test applications:

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

Per our tradition, we’re going to start off by comparing the memory subsystems of our CPUs in a few synthetic tests.

Please note that the A10-4600M and Core i7-3720QM have higher-clocked memory than the other two offerings. Because of the discrepancy, the results below won’t paint a clear, unadulterated picture of memory controller efficiency. But they will show us something else. You see, the A10-4600M and Core i7-3720QM both support faster RAM than their predecessors. (Both can accommodate DDR3-1600 memory, while the A8-3500M and i7-2670QM are limited to DDR3-1333.) So we’re going to be able to see what dividends the faster memory support pays from one generation to the next.
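For reference, the theoretical peaks work out like so, given a 64-bit (8-byte) path per channel:

\[
2 \times 8\,\text{B} \times 1600\,\text{MT/s} = 25.6\,\text{GB/s}
\qquad
2 \times 8\,\text{B} \times 1333\,\text{MT/s} = 21.3\,\text{GB/s}
\]

In other words, the newer chips have roughly 20% more bandwidth headroom on paper.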

In this basic measure of memory bandwidth, the A10-4600M edges out the A8-3500M by about 13%. Our Ivy Bridge CPU enjoys a similar gain over its forebear. The A10 can’t come close to matching the Intel chips, though.

Next up: SiSoft Sandra’s more elaborate memory and cache bandwidth test. This test is multithreaded, so it captures the bandwidth of all caches on all cores concurrently. The different test block sizes step us down from the L1 and L2 caches into L3 and main memory.

The A10-4600M’s two modules’ worth of L1 and L2 caches manage to match the A8’s four cores’ caches nearly step for step in terms of bandwidth. Neither can keep pace with the Bridge sisters’ cache hierarchies, however.

Sandra also includes a new latency testing tool. SiSoft has a nice write-up on it, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. We’ve also taken to reporting the results in terms of CPU cycles, which is how this tool returns them. The problem with translating these results into nanoseconds, as we’ve done in the past with latency measurements, is that we don’t always know the clock speed of the CPU, which can vary depending on Turbo responses.
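The conversion is trivial when the clock is known:

\[
t_{\text{ns}} = \frac{\text{cycles}}{f_{\text{GHz}}}
\]

A 40-cycle access, for example, takes 12.5 ns at 3.2GHz but 17.4 ns at the A10’s 2.3GHz base clock, and that ambiguity is exactly what reporting in cycles avoids.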

Because it shares 2MB of L2 cache across each dual-core module, the A10 manages a lower latency than the A8 at the 2MB block size.

However, the A10 falls behind the A8 at every other block size, including those small enough to fit into the L1 and L2 caches. The culprit may simply be slower caches on Piledriver. In our desktop tests, Bulldozer fared even worse against the Phenom II X6 1100T, which is based on the same architecture as Llano.

Productivity

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so AES encryption, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.
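At the instruction level, the acceleration is about as simple as it gets. Here’s a minimal sketch (ours, not TrueCrypt’s actual code) using the standard AES-NI intrinsic; chips with the instruction retire a full round per aesenc, while Llano must synthesize the same round from many ALU operations.

```c
#include <wmmintrin.h>  /* AES-NI intrinsics; compile with -maes */

/* One round of AES encryption on a 128-bit block: a single
   instruction on Trinity and on Intel's Core i5/i7 parts. */
__m128i aes_round(__m128i state, __m128i round_key)
{
    return _mm_aesenc_si128(state, round_key);
}
```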

7-Zip file compression and decompression

SunSpider JavaScript performance

Trinity’s hardware AES acceleration gives it a massive lead over Llano in our first TrueCrypt test. Elsewhere, the differences are smaller. We do see a sizeable improvement from the A8 to the A10 in SunSpider, though, which is a very good thing. You might not compress or decrypt files every day, but most of us spend a big chunk of our day using JavaScript-heavy websites and web apps. The A10 promises to be solidly faster than the A8 in those.

Neither AMD APU can catch up to Intel’s quad-core offerings, of course, but that’s no great surprise. We didn’t expect AMD to catch Intel in raw CPU performance, especially not with a 10W power envelope handicap. Still, Trinity will have to do well on other fronts to distinguish itself.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. We asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.

We see the same story unfold in our image editing and video encoding tests: the A10-4600M edges out the A8-3500M by a decent margin, but it’s no match for quad-core Sandy and Ivy CPUs.

Accelerated applications

Trinity, Llano, Sandy Bridge, and Ivy Bridge all dedicate a substantial chunk of their die area to graphics. And, with the exception of Llano, they all have special-purpose video transcoding logic, as well. We sought to unleash all of those extra transistors in a few general-purpose applications, to see if the competitive picture would change at all.

LuxMark OpenCL rendering

We’ve deployed LuxMark in several recent reviews to test GPU performance. Since it uses OpenCL, we can also use it to test CPU performance—and even to compare performance across different processor types. And since OpenCL code is by nature parallelized and relies on a real-time compiler, it should adapt well to new instructions. For instance, Intel and AMD offer installable client drivers (ICDs) for OpenCL on x86 processors, and they both claim to support AVX. The AMD APP ICD even supports Bulldozer’s distinctive instructions, FMA4 and XOP.
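Each installed ICD shows up as its own OpenCL “platform,” so an application can see exactly what we’re describing by enumerating them. A minimal sketch:

```c
#include <stdio.h>
#include <CL/cl.h>

/* Print every installed OpenCL platform (i.e., ICD) and its device
   count. On our Ivy Bridge system, both the Intel and AMD APP
   platforms appear; on the AMD systems, only AMD's does. */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint i = 0; i < nplat; i++) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);

        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &ndev);
        printf("%s: %u device(s)\n", name, ndev);
    }
    return 0;
}
```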

First, a word about those missing bars in the graph. Sandy Bridge’s HD 3000 integrated graphics lack OpenCL support, so we couldn’t run LuxMark on the Core i7-2670QM’s IGP. Also, Intel’s ICD doesn’t support the AMD processors, so we were only able to run LuxMark on their integrated Radeon HD graphics and on their CPU cores using the AMD APP ICD. Ivy Bridge is the only processor that supports both AMD and Intel ICDs and has the ability to execute OpenCL code using its integrated graphics.

As we saw in our Ivy Bridge review last month, AMD’s APP ICD yields better results than Intel’s ICD when the IGPs are kept out of the running. The best results are obtained by combining the CPU with the APP ICD and the integrated graphics with their own OpenCL drivers. Regardless of the configuration, though, Trinity falls well behind both Ivy Bridge and Sandy Bridge. At the same time, it’s still nicely ahead of Llano.

The GIMP

AMD supplied us with a special build of The GIMP 2.8, which features a wealth of OpenCL-accelerated filters. Future GIMP builds will feature an entirely OpenCL-accelerated image processing pipeline, but the build we used did not. Here’s what AMD had to say on the subject:

The upcoming major release of GIMP is expected to move its main processing pipeline to use the GEGL library. . . . Knowing that OpenCL and GEGL are the future, the current OpenCL work is designed to impact GEGL, not the current GIMP pipeline. One consequence of aligning with GEGL is that the speed of adoption will rely on GEGL integration with GIMP. In current GIMP builds, there are special menus to use GEGL operation. There is other overhead as well. While we’re seeing nice speedups with OpenCL now, even better performance is expected once GIMP moves completely to the GEGL pipeline.

We tested by loading up an image from our camera, a 32-bit, 4272×2848 bitmap, running through 15 GEGL filters, and averaging the results. AMD says OpenCL kernel code is built when a filter is run for the first time, and this results in a “slight performance hit.” To compensate for that hit, we ran each filter four times and only recorded results from the last three runs.

Sadly, The GIMP’s GEGL operations weren’t available on our Intel systems. The menu simply didn’t show up, not even when we had the AMD APP ICD installed. Our results for the AMD systems tell us what we already know: Trinity is quicker than Llano. The difference is more pronounced here than in LuxMark, though.

WinZip 16.5

The latest version of WinZip features a parallel processing pipeline with OpenCL support. The pipeline allows multiple files to be opened, read, compressed, and encrypted simultaneously, all with hardware acceleration. Right now, though, WinZip’s OpenCL capabilities seem to be off-limits to Intel processors—again, regardless of what ICD is installed. The OpenCL switch in the WinZip settings would only appear on our AMD systems.

We tested WinZip by compressing, then decompressing, a 1.17GB directory containing about 150 small text and image files, a couple dozen medium-sized PDF files, and 14 large Photoshop PSD files. We timed each operation with a stopwatch.

OpenCL acceleration doesn’t do much for decompression, but it clearly pays off during file compression. Interestingly, Trinity sees greater overall benefits from hardware acceleration than Llano. The Intel CPUs are faster even without help from their IGPs, though.

CyberLink MediaEspresso

This user-friendly video transcoder supports AMD’s VCE and Intel’s QuickSync hardware transcoding blocks. Those are effectively black boxes without much programmability, so their output isn’t necessarily comparable—and neither is their performance, strictly speaking. From a practical standpoint, though, it’s helpful to see which solution will transcode videos the quickest. So that’s what we’re going to do.

For our test, we fed MediaEspresso a 1080p version of the Iron Man 2 trailer, and we asked it to convert the clip to a format suitable for the iPhone 4. We tested with full hardware acceleration as well as in software mode. Where the setting was available, we selected encoding speed over quality. The A8-3500M was only run in software mode, since it lacks hardware H.264 encoding.

Both VCE and QuickSync appear to halve transcoding times… except the latter looks to be considerably faster. We didn’t see much of a difference in output image quality between the two, but the output files had drastically different sizes. QuickSync spat out a 69MB video, while VCE got the trailer down to 38MB. (Our source file was 189MB.) Using QuickSync in high-quality mode extended the Core i7-3720QM’s encoding time by about 10 seconds, but the resulting file was even larger—around 100MB. The output of the software encoder, for reference, weighed in at 171MB.

IGP texture filtering quality

Among discrete GPUs, anisotropic filtering comparisons have become somewhat superfluous. Today’s solutions apply the same level of filtering quality with the same mipmap transitions at all polygon angles, which yields generally consistent results across different GPU makes and generations.

In the integrated world, though, things aren’t quite as rosy. We witnessed that first-hand when comparing Llano to Sandy Bridge last year. While Llano’s IGP had a nice, consistent filtering pattern, Sandy’s HD 3000 integrated graphics exhibited huge variations in filtering quality at different angles.

Happily, though, things have improved quite a bit with Ivy Bridge. Take a look:

Trinity

Ivy Bridge

Sandy Bridge

The patterns above are the output of our Direct3D AF Tester. In case you’re not familiar with it, here’s our explanation from last year’s Llano review:

In the images above, you’re peering down a 3D-rendered cylinder or tube, and the inside surface of that tube has been covered with a simple texture map. The colored bands are what are known as mip maps, or increasingly lower resolution copies of the base texture mapped to the walls of the cylinder. The further you move from the camera, the lower the resolution of the mip level used. In the pictures above, the different colors show different mip levels. (Of course, mip maps don’t normally come in different colors. They look very much like one another and like the base texture. This test app colors them in order to make them easily visible.) Mip maps are a helpful tool in texture filtering because sampling from a single copy of the original, high-res texture can be work-intensive and, in a constrained grid of pixels, can produce excessive high-frequency noise, which is visually disruptive. In other words, a little bit of blurring and blending in the right places can be beneficial to the final result.
Alongside mip mapping, we’re layering on a couple of additional techniques to improve image quality. We’re using trilinear filtering to blend between mip levels, so that we don’t see abrupt transitions or banding. That’s why the different colors transition gradually from one to another. We’re also using anisotropic filtering, grabbing more samples for textures that exist at certain angles on the Z or depth axis—typically on surfaces stretching away from the camera, like floors, walls, and ceilings—in order to preserve sharpness that simple mip mapping would destroy. All of these things we take for granted in modern GPUs, which have custom hardware onboard to perform these functions.

In a nutshell, we want the color patterns to map consistently to the geometry (so, in this case, we want them to be perfectly circular), and we want the transitions between each color to be smooth. Trinity’s Radeon HD 7660G integrated graphics have no trouble with either task. Ivy Bridge’s HD 4000 IGP also manages mostly circular patterns with smooth transitions, but if you look closely, you’ll see jagged lines where the red fades into the background checkerboard pattern. As for Sandy Bridge, well, the image speaks for itself.

Trinity

Ivy Bridge

Sandy Bridge

In a real-world example, the differences are plainly visible. Trinity and Ivy Bridge both give us nice, sharp textures at off-axis angles of inclination, while Sandy Bridge fails in a very noticeable way. Those textures only look sharp on Sandy’s IGP if we rotate the viewport to align the wall with the edge of the screen.

The Elder Scrolls V: Skyrim

Our Skyrim test involved running around the town of Whiterun, starting from the city gates, all the way up to Dragonsreach, and then back down again.

We tested at 1366×768 using the “medium” detail preset.

Now, we should preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

For example, imagine one hypothetical second of gameplay. Almost all frames in that second are rendered in 16.7 ms, but the game briefly hangs, taking a disproportionate 100 ms to produce one frame and then catching up by cranking out the next frame in 5 ms—not an uncommon scenario. You’re going to feel the game hitch, but the FPS counter will only report a dip from 60 to 56 FPS, which would suggest a negligible, imperceptible change. Looking inside the second helps us detect such skips, as well as other issues that conventional frame rate data measured in FPS tends to obscure.
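To check that math: 54 normal frames plus the hang and the catch-up frame come to

\[
54 \times 16.7\,\text{ms} + 100\,\text{ms} + 5\,\text{ms} \approx 1007\,\text{ms}
\]

which is 56 frames in roughly one second. The average says 56 FPS; the player feels the 100-ms hitch all the same.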

We’re going to start by charting frame times over the totality of a representative run for each system—though we conducted five runs per system to make sure our results are solid. These plots should give us an at-a-glance impression of overall playability, warts and all. (Note that, since we’re looking at frame latencies, plots sitting lower on the Y axis indicate quicker solutions.)

| Frame time (ms) | FPS rate |
| --- | --- |
| 8.3 | 120 |
| 16.7 | 60 |
| 20 | 50 |
| 25 | 40 |
| 33.3 | 30 |
| 50 | 20 |

From this vantage point, it’s obvious the A10-4600M and Radeon HD 7660G IGP combo pulls off the lowest, most consistent frame times of the bunch. Ivy Bridge and its HD 4000 IGP suffer from a greater number of latency spikes, and they seem to exhibit more variance in general, as well. Sandy Bridge is the worst of the bunch by far, with embarrassingly high frame latencies and a huge spike over 250 ms at the end of the run.

We can slice and dice our raw frame-time data in other ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. Though this metric doesn’t account for irregularities in frame latencies, it does give us some sense of typical performance.

Next, we can demarcate the threshold below which 99% of frames are rendered. The lower the threshold, the more fluid the game. This metric offers a sense of overall frame latency, but it filters out fringe cases.

Of course, the 99th percentile result only shows a single point along the latency curve. We can show you that whole curve, as well. With integrated graphics or single-GPU configs, the right-hand side of the graph—and especially the last 10% or so—is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

These latency curves are nice and neat, with no one solution crossing over to be slower than the other one in the last 5% or so. Sometimes things aren’t like that, as we’ll likely see shortly.

Finally, we can rank solutions based on how long they spent working on frames that took longer than 50 ms to render. The results should ideally be “0” across the board, because the illusion of motion becomes hard to maintain once frame latencies rise above 50 ms or so. (A 50-ms frame time is equivalent to a 20 FPS average.) Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario.
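For those who want to reproduce these metrics from raw frame times, the arithmetic is straightforward. Here’s a small C sketch of all three calculations; the simple percentile indexing is our shortcut, and any reasonable interpolation method will give similar answers.

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Given per-frame render times in milliseconds, compute average FPS,
   the 99th-percentile frame time, and total time spent beyond the
   50-ms "badness" threshold. */
void frame_metrics(double *ms, size_t n)
{
    double total = 0.0, beyond_50 = 0.0;
    for (size_t i = 0; i < n; i++) {
        total += ms[i];
        if (ms[i] > 50.0)
            beyond_50 += ms[i] - 50.0;  /* only the excess counts */
    }

    qsort(ms, n, sizeof(double), cmp_double);
    double p99 = ms[(size_t)(0.99 * (n - 1))];

    printf("average FPS:       %.1f\n", 1000.0 * n / total);
    printf("99th percentile:   %.1f ms\n", p99);
    printf("time beyond 50 ms: %.1f ms\n", beyond_50);
}
```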

No question about it: Trinity’s integrated graphics are fast. They’re substantially quicker than even Llano’s, and the contest with Intel’s solutions is really no contest at all. From a seat-of-the-pants perspective, only the A10-4600M and Radeon HD 7660G are really playable at these settings. Llano is borderline, and the Intel offerings are just too choppy.

Batman: Arkham City

We grappled and glided our way around Gotham, occasionally touching down to mingle with the inhabitants.

Arkham City was tested at 1366×768 using medium detail and medium FXAA, and with v-sync disabled.

Uneven frame times seem to be a fact of life with this game, and our integrated graphics solutions appear to exacerbate the problem. By the looks of it, though, Ivy Bridge has slightly lower and slightly more consistent frame times than Trinity. Llano achieved shorter latencies than Sandy Bridge overall, but it also suffered huge latency spikes a couple of times throughout the run. Those seemed to occur consistently no matter how many times we ran through the same sequence.

Well, isn’t that interesting? As poorly as Ivy and the HD 4000 did in Skyrim, they’re actually faster and smoother than the A10-4600M and Radeon HD 7660G across the board here—and by a fair margin, too. Perhaps Intel’s driver team has done some optimization work for Unreal Engine 3-based titles. Either that, or some of the integrated Radeon’s performance has been left untapped. Considering Trinity barely edges out Llano here, the latter seems more likely.

Battlefield 3

We tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

BF3 wasn’t really playable at anything but the lowest detail preset using these IGPs—so that’s what we used.

Trinity is back in the saddle, yielding lower, more consistent frame times than Ivy Bridge. Sandy Bridge, meanwhile, isn’t even in the running. Not only does it perform poorly, as evidenced by the plot above, but its HD 3000 IGP also has image quality problems. It fails to render shadows across the ground texture properly.

Yep. Trinity definitely leads the pack here. The average FPS figures might fool you into thinking Ivy Bridge is almost as fast, but a glance at our frame latency curve will show otherwise. The Ivy system’s frame times rise sharply for the last 10% or so of frames.

Battery run times

We tested battery run times twice: once running TR Browserbench 1.0, a web browsing simulator of our own design, and again looping a 720p Game of Thrones episode in Windows Media Player. (In case you’re curious, TR Browserbench is a static version of TR’s old home page rigged to refresh every 45 seconds. It cycles through various permutations of text content, images, and Flash ads, with some cache-busting code to keep things realistic.)

Before testing, we conditioned batteries by fully discharging and then recharging each system twice in a row. We also used our colorimeter to equalize display luminosity at around 100 cd/m². That meant brightness levels of 40% for the Trinity system, 70% for the Llano machine, 25% for the Asus N56VM, and 45% for the N53S. The Intel systems had larger panels than the AMD ones, though, so that might have impacted power consumption.

We should note one other caveat: our four machines didn’t all have the same battery capacities. The batteries in the two Intel notebooks both had 56 Wh ratings, but the Llano laptop had a 58 Wh battery, and the Trinity system’s battery was rated for 54 Wh.

It’s no surprise to see the Trinity whitebook pulling off longer run times than the two Intel notebooks, since those have bigger displays and more power-hungry CPUs. The leap over Llano is encouraging, though; coupled with our performance data, it suggests AMD has managed to deliver both higher performance and greater power efficiency without a die shrink.

Now, that said, Trinity’s power-efficiency lead over Llano might not be as huge as our web-browsing results suggest. Last year, after much fiddling with BIOS and control panel settings, we managed to squeeze 5.4 hours of web surfing out of the same Llano whitebook. We weren’t able to reproduce that result this time, but it’s worth keeping in mind.

Even if Trinity only gets you an extra hour of run time over its slower predecessor, though, that’s still a nice improvement.

Conclusions

The Trinity-based A10-4600M APU is an improvement on virtually all fronts versus its predecessor, the Llano-derived A8-3500M, purely on the strength of architectural updates. Although the Bulldozer CPU microarchitecture has been something of a disappointment on the desktop, its updated “Piledriver” module has delivered unambiguous benefits in this new mobile chip, in part because of a big boost in clock speeds. We even saw a nice gain in the lightly threaded SunSpider JavaScript benchmark, a fact that warms our cold, calculating hearts, since weak single-threaded performance has been one of our major concerns about recent AMD processors. The refreshed IGP in Trinity offers somewhat higher performance, as well, which is enough to make it the undisputed champ of this segment. And yes, Trinity manages to combine these performance gains with substantially better battery life than Llano within the same power envelope.

In pitching Trinity to the press, AMD repeatedly emphasized the subjective user experience and downplayed the importance of benchmarks. Take a look at our CPU performance results—even keeping in mind the 10W handicap the A10 had to deal with—and you’ll understand why they might not want to see that comparison emphasized too much. Still, it is a fair point to note that one can’t always perceive differences in CPU performance these days. During our time with the Trinity laptop, we found its snappiness for everyday web browsing and such to be virtually indistinguishable from our two Intel quad-core laptops. Of course, running heavier-duty applications is where our CPU tests and perception collide; there’s little arguing with a photo stitching result where the A10 takes 12 seconds longer than Sandy Bridge to complete the same task. Whether one will regularly notice the difference between the two will depend on how one uses the system.

AMD is doing some good work in helping to push heavy-duty desktop applications like the GIMP and WinZip toward GPU acceleration via OpenCL. Many others, including the x264 video encoder, are purportedly slated to get OpenCL support soon. Further adoption of OpenCL and GPU acceleration could transform some of the stickiest parts of the desktop usage model by making key applications more GPU-dependent than CPU-dependent. That’s huge. Presumably, AMD and its APUs would benefit from this change. However, the early returns from WinZip and LuxMark have shown four of Intel’s x86 CPU cores to be even faster than Trinity’s CPU-and-IGP tag team. AMD still has a lot of work to do before it can credibly claim to be fulfilling its vision of a better user experience via converged computing.

For now, the choice between AMD’s Trinity and the Intel competition is very much about priorities. If you value desktop application performance above all else, then Trinity probably isn’t for you. If you care about graphics and gaming, well, then Trinity may hold some interest. We don’t think that’s a minor point in the grand scheme of things. Laptops are rapidly becoming the most popular consumer PCs, and a great many consumers will want to play games on them at least some of the time. We’ve noted that one can’t always tell CPUs apart from the seat-of-the-pants experience. The results of our latency-focused gaming tests will tell you IGP performance deltas are much easier to perceive, at least in graphically intensive titles like the ones we tested. All of these IGPs are relatively wimpy graphics solutions, so you really want the best one possible. That’s one of the reasons we liked Llano, and Trinity gives us no reason to change our tune. Yes, Ivy Bridge’s IGP is much improved, but Trinity’s is enough better to erase any questions of supremacy on that front.

What we want, now, is to get our hands on an ultra-thin laptop with a 17W Trinity inside. If that setup proves to be reasonably competent for both all-around use and occasional gaming, then AMD may have set a new high-water mark of sorts in ultraportable computing. That would really be something.

Comments closed
    • Eldar
    • 7 years ago

    For an HTPC/light 1080p gaming system, will Trinity be the best option, especially from a value standpoint?

    • BaronMatrix
    • 7 years ago

    I’d have to say that Trinity is the next level of “platformance.” 4×4 was great when overloaded, Phenom was better, Ph2 better, and FX/Trinity are even better. That’s why AMD demoed with multiple heavy loads running.

    That’s what real power users need now because having 8-16GB RAM automatically means you can and will have lots of crap open. People tell me I can’t tell the difference between a machine that responds well and one that doesn’t.

    My Ph2 AM2+ does better in those cases than my i5 at work under the same % RAM load. I had to buy a couple of Opterons or I would have an FX now.

    BTW, Server 2008 Hyper-V doesn’t YET support AMD AVX, so FX is ahead of its time, but mainly because Intel went a different direction and fabs are much more expensive than in Hammer/Slot A days. ISAs do matter, and Intel changed their AVX/FMAC implementation twice while AMD was trying to add support. That held up BD several quarters and left them AGAIN in the position of not being fully compatible with the Win Scheduler.

    But then, Trinity is +500MHz in a lower envelope, so again, they can advance the design without disturbing the envelope.

    And interestingly enough, with the Ultrabook, there aren’t any Trinitys at Newegg yet. HP isn’t even advertising the Sleekbook for AMD on its site either. Makes you wonder if Intel isn’t slipping some additional “conditions” on delivering them…

    Ehh never mind.

    This post should get – 2000

      • chuckula
      • 7 years ago

      “This post should get -2000”

      It most certainly should, after you regurgitated idiotic marketing buzzwords like “platformance” that were spoonfed to you by what little is left of AMD marketing.

      • NeelyCam
      • 7 years ago

      That was a pretty weak attempt. What happened? You used to be a much better troll at S|A

    • LoneWolf15
    • 7 years ago

    Trinity looks quite interesting.

    The one thing that I don’t see a lot of people factoring in though is the components that back it up. Processors are generally fast enough now that Trinity should be great for most users. I just hope that it won’t be backed by cheap chipset/mainboard components in an attempt to make a cutthroat, race-to-the-bottom laptop design that doesn’t measure up for stability/reliability.

    There are times I’ve bought Intel, not because of the processor, but because of the mainboards. I’d really like to see AMD products achieve parity in that area, desktop and mobile.

    • Sam125
    • 7 years ago

    I’m guessing the people who’re still clinging to the notion that processor performance for the PC is relevant anymore are the ones hating on Trinity. It’s pretty clear that, in an age where Brazos and Atom are “good enough,” Trinity is going to be very successful. : )

      • Firestarter
      • 7 years ago

      Performance is always relevant. The general public is rather less willing to pay for it these days though.

        • Theolendras
        • 7 years ago

        Not really, it all depends on the software you are using. You might argue for longevity of the investment, though. But yeah, many usages can see improvements no matter what speed they’ve got.

      • LoneWolf15
      • 7 years ago

      Success isn’t always based on your metrics. The Pentium 4 sold quite well, as you may recall, beating out the Athlon XP overall.

        • Sam125
        • 7 years ago

        Northwood was really quite an excellent processor. Even the first P4 wasn’t that bad but it was eclipsed by the Athlons and even Intel’s own PIII Coppermine.

        I remember this quite well because I worked in computer retail at the time and gave the Intel rep a lot of grief for it. :p

      • swaaye
      • 7 years ago

      I sure do see a lot of AMD Exxx and Ax notebooks at the big box stores. Lots of Intel Pentium/Celeron/i3 stuff too. I’m sure it will fit right in.

    • maroon1
    • 7 years ago

    The gap between Trinity’s GPU and the HD 4000 is much smaller than the gap between Llano and the HD 3000.

    Not only in performance but also in image quality. Actually, I can’t even tell the difference between Trinity and HD 4000 image quality.

    As for CPU performance, the A10-4600M performs below the i5-2450M. Techreport didn’t test the i5-2450M for some reason, but you can always look at other reviews.

    Here, check the CPU benchmark comparison between the A10-4600M and i5-2450M:

    http://www.tomshardware.com/reviews/a10-4600m-trinity-piledriver,3202-13.html
    http://www.tomshardware.com/reviews/a10-4600m-trinity-piledriver,3202-14.html
    http://www.tomshardware.com/reviews/a10-4600m-trinity-piledriver,3202-15.html

    The i5-2450M wins in all CPU benchmarks, and the gap is sometimes over 30%.

      • halbhh2
      • 7 years ago

      On the whole, the article you link, once I looked beyond only the three pages you chose, provides a pretty convincing case for the A10, especially on the crucial, central question of performance vs. energy (and thus battery life):

      “This is probably the most impressive chart in today’s story. Despite the A10-4600M’s significant performance advantage in games, it manages to do its job using a little less power than the A8-3500M and about 10 W less than Intel’s Core i5-2450M.”

      My takeaway from your link: A10 is sharply superior. Thanks, Maroon1

    • Ashbringer
    • 7 years ago

    I have to say I’m disappointed, and very sad that I’ve stuck with AMD for so long. Up until today I’ve been an AMD fan, but now I’ve changed my mind. Intel clearly matched AMD in graphics, which is something I really wasn’t expecting Intel to do for years. I was hoping that Bulldozer would fix it, and then Trinity. I can assure you that the next PC I build will have an Intel processor in it.

    The tests show that Intel has nearly a 100% performance advantage, and sometimes even larger than that. That means it takes two Trinitys to match one i7 processor, and even then not really. The graphics are clearly not enough to fix the sluggish performance of their processor.

    The good news for AMD is that Trinity is probably more comparable to the i5 than the i7 chips, provided they drop the price sufficiently to compete with it.

      • boomshine
      • 7 years ago

      Intel clearly matched AMD in graphics — in what aspect?

        • Ashbringer
        • 7 years ago

        In that Intel’s latest can keep up against AMD, in terms of integrated graphics. Some cases winning, and some losing. The graphics chip itself is weaker, but because the CPU is so much stronger that games are nearly identical in performance with some differences.

        I honestly think Trinity is a dual core x86 processor. It would make sense of why it’s doing so poorly. Probably something like hyper threading but permanently on.

          • clone
          • 7 years ago

          I agree I’d like to see AMD push the bar even higher but seriously, are you kidding?

          Intel’s integrated graphics working adequately in a couple of commonly used online testing suites is not a testament to their brilliance….. I sold off my main system and was stuck using Intel integrated for a week until I lost it and bought a 6570….. random crashes in Civilization III, no less, a 2D game.

          I own both but when going integrated Intel is not an option, the worst part is Intel integrated is still not on the map when I look for IGP improvements, Nvidia bailed on integrated and AMD is the only innovator in the realm… at least for now.

            • Voldenuit
            • 7 years ago

            [quote<]Nvidia bailed on integrated [/quote<] More like they were forced out of the chipset business by intel refusing to license QPI (not to mention integrating the GPU on the CPU chip).

            • clone
            • 7 years ago

            Nvidia still builds AMD chipsets, but they’ve let them languish; had they not, they could have pushed from that position into mobile and desktop while furthering their “CPUs don’t matter as much as GPUs” argument.

            • Vasilyfav
            • 7 years ago

            You missed his point entirely. Despite all their innovations, their obsolete CPU performance is crippling their GPU to the point where all those innovations are irrelevant, because there’s no visible performance benefit for the end user.

            • clone
            • 7 years ago

            Didn’t miss anything.

            If I were buying a system with add-in graphics in mind I’d go Intel, but if building with integrated graphics, Intel is off the list.

            A CPU doesn’t fix pre-existing graphics driver issues nor overcome a lack of extended support, no matter how wonderful it is.

          • Voldenuit
          • 7 years ago

          [quote<]In that Intel’s latest can keep up with AMD in terms of integrated graphics. Some cases winning, and some losing. The graphics chip itself is weaker, but the CPU is so much stronger that games are nearly identical in performance, with some differences.[/quote<]

          Except you’re ignoring that intel’s much more expensive 45W quad-core SKU only ekes out a 5-10% advantage when it does win, but loses by nearly 50% (Diablo III) when it loses. I think it would be fairer to compare Trinity with dual-core Ivy and Sandy, where it is still behind on CPU, but not by as much, and clearly ahead on graphics. Such a comparison would also be fairer on battery run times, where the intel notebooks TR was testing had to power 2 extra cores.

          [quote<]I honestly think Trinity is a dual-core x86 processor. It would explain why it’s doing so poorly. Probably something like hyperthreading but permanently on.[/quote<]

          Then it’s a good thing that its natural opposition in the marketplace is dual-core Sandy and Ivy Bridge, assuming AMD has the good sense not to price it against intel’s quads (where it would lose).

          If you’re going to get a quad-core i7 laptop and want to game on it, you’d better be looking for something with a mid-to-high-end discrete GPU. That will absolutely demolish any mobile part that AMD can field. No one’s disputing that. But if you need an affordable or compact system (where there is no space for a dGPU) and want some gaming gusto, Trinity is a pretty good option.

          • boomshine
          • 7 years ago

          So don’t say “Intel clearly matched AMD in graphics” if Intel only catches up. Also, Intel’s CPU is helping the Intel graphics pull more FPS. BTW, comparing the graphics detail of Trinity versus the Intel HD 4000, AMD clearly won; it’s not about FPS alone.

          • flip-mode
          • 7 years ago

          [quote<]I honestly think Trinity is a dual-core x86 processor.[/quote<]

          Whatever. It’s a hybrid core. It’s not quite quad, but it’s definitely more than dual. Either way, it’s beating a true quad core.

          I understand being disappointed in AMD. I’m living through that myself. I’ve been on AMD chips since 1999. I’ll be going Intel for my next build whenever that day comes. So I understand the disappointment, but I think you are saying strange things because of it. This supposed “dual core,” as you would call it, is solidly beating the previous-generation quad core and using less power while doing it. AMD says it functions like a quad core, and the benchmarks definitively support that. Zambezi didn’t live up to expectations at all, but Piledriver is a very decent improvement over Zambezi.

            • Arag0n
            • 7 years ago

            I would recommend waiting for Vishera before dropping AMD… Bulldozer seems like the Windows Vista of AMD. You can buy one, and it will mostly work pretty well, but it’s not a really good option in the market. Looking at this review makes me feel that AMD can pull some tricks from the hat and make the next iteration of Bulldozer a real Phenom II successor: cheaper than Intel, as powerful as Intel in some areas, and not a big drop in others.

    • sschaem
    • 7 years ago

    A 17W Trinity looks like a perfect match for 99% of laptop users.

    What matters to people is not whether a file unzips in 1.3 seconds or 1.6 seconds,
    but how long they can use the web and consume media. And Trinity gets an A++ on that.

    For the few people who need more CPU power and less GPU power, Intel is there…

    (I can already feel the downvotes 🙂)

      • brucethemoose
      • 7 years ago

      This.

      • Krogoth
      • 7 years ago

      Exactly.

      CPU performance doesn’t matter that much anymore outside of workstations/servers. The new game in the mainstream market is power efficiency. Intel has done remarkably well in this, but AMD is catching up.

      This will help spark price wars, which means we will have affordable ultraportables that can handle any non-workstation/server application.

    • sweatshopking
    • 7 years ago

    you guys stop fighting.

      • ronch
      • 7 years ago

      No. World War III will be started in TR forums such as this one.

    • flip-mode
    • 7 years ago

    I’m wondering about some strictly CPU benchmarks comparing Trinity and a Zambezi quad core at the same clock speed.

      • sircharles32
      • 7 years ago

      Might as well add a Deneb quad in there also (You know, for a baseline).

        • flip-mode
        • 7 years ago

        Yep, good idea. At same clock speed.

    • maxxcool
    • 7 years ago

    229 posts!? It was 109 when I left work yesterday…. Holy crap.

    • phileasfogg
    • 7 years ago

    I found this comment on page 2 particularly interesting:

    [i<]The brand-new VCE block throws hardware-accelerated H.264 encoding into the mix, too—something that's important not just for performance and power efficiency reasons, but also for enabling new features like wireless displays.[/i<]

    So, are Messrs. Wasson and Kowalski implying that AMD’s OEMs will soon offer Intel-like WiDi functionality on Trinity-based laptops? I don’t know if Llano-based laptops offer this feature (unlikely, IMO), but if AMD’s software team can do better than WiDi 2.0 with Trinity, that could be a super-nice bonus for Trinity.

    • gamoniac
    • 7 years ago

    Nice review, but I was a bit surprised to see an eval version of WinZip being used for benchmarking. IMO, TR ought to at least cough up $29 for a paid version. What is $29 for some professional courtesy, right?

      • OneArmedScissor
      • 7 years ago

      Why even use Winzip at all when there are plenty of things that are actually free like 7-Zip?

        • gamoniac
        • 7 years ago

        I think they use WinZip 16.5 mainly to test the OpenCL acceleration part of the Trinity APU. 7-Zip does not yet support OpenCL, I believe. I guess doing so does give WinZip some free marketing, which I’m sure WinZip wouldn’t mind.
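
        For the curious, a minimal sketch of what that OpenCL support has to do before any acceleration happens: probe for an OpenCL-capable GPU, and fall back to the plain CPU codepath otherwise. This is illustrative only, not WinZip’s actual code; it uses standard OpenCL 1.1 calls (link with -lOpenCL).

        [code<]
        /* Illustrative sketch: probe for an OpenCL GPU, else fall back to CPU. */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void) {
            cl_platform_id platform;
            cl_device_id device;
            cl_uint count = 0;

            if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
                printf("No OpenCL platform; using plain CPU codepath.\n");
                return 0;
            }
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, &count)
                    != CL_SUCCESS || count == 0) {
                printf("No OpenCL GPU; using plain CPU codepath.\n");
                return 0;
            }
            char name[256];
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("OpenCL GPU found: %s. Accelerated path available.\n", name);
            return 0;
        }
        [/code<]

        On Trinity, the IGP would show up as a GPU device here alongside the x86 cores.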

      • maxxcool
      • 7 years ago

      Given the level of ads and installware BS they bundle, I’m shocked they tested it at all.

    • HisDivineOrder
    • 7 years ago

    Looking forward to when they shrink this thing down with a modern fab. Then again, I think Haswell is going to end this debate once and for all by fully bringing Intel up to par in graphics while completely destroying AMD in CPU.

    I hate that AMD’s fallen to begging reviewers not to look at benchmarks and that they aren’t actively trying to get ahead of this curve. However, it’s easier to drive in someone else’s wake, I guess. Especially when that someone else is popping out fab shrinks every other year and you can’t even manage a fab shrink every five years reliably…

    • Shenaya
    • 7 years ago
      • Palek
      • 7 years ago

      Anybody willing to click that link, bearing in mind that Shenaya only just registered today?

        • khands
        • 7 years ago

        I gave it a go, single page, no benchmarks, talks more about laptop APU voltage and new instruction sets than anything else and even that is spartan.

          • NeelyCam
          • 7 years ago

          There’s also a pretty picture of an “amd-llano” laptop

          • ronch
          • 7 years ago

          I think he’s sent here to help drive page hits…

    • Jigar
    • 7 years ago

    AMD’s Bulldozer architecture has started showing signs of improvement…

    • Sam125
    • 7 years ago

    Thanks for the review, TR. Anandtech’s was very thorough and TR’s was quick and to the point.

    • pogsnet
    • 7 years ago
      • ronch
      • 7 years ago

      [quote<]what more i can say?[/quote<] Shut up and buy! // just kidding

    • just brew it!
    • 7 years ago

    So when do we get to see desktop chips with the Piledriver core? Only info I’ve seen says “sometime in 2012” (granted, I haven’t looked too hard).

    The Piledriver desktop release might even be a good time to buy a high-end [i<]Bulldozer[/i<] CPU, since the price will likely drop a fair bit, finally making them an attractive option in terms of price/performance! (I'm still sticking with my Phenom II systems for the time being...)

    • plonk420
    • 7 years ago

    Was this Game of Thrones ep Main Profile or High Profile H.264? And if High Profile, Level 4.0 or 4.1?

    • LoneWolf15
    • 7 years ago

    “The system looks and behaves almost like a retail product, but it isn’t one. It’s etched with AMD’s corporate logo instead of a vendor’s badge, and it lacks the fit and finish of a commercial system.”

    Sure looks like most of a Dell Vostro 13″ to me. Not sure who is currently producing those for Dell, but the chassis is unmistakable.

      • Voldenuit
      • 7 years ago

      Looks more like a Compal to me.

        • LoneWolf15
        • 7 years ago

        Who might well make the current Vostro line for Dell.

        [url<]http://www.newtechnology.co.in/wp-content/uploads/2011/03/Dell-Vostro-3350.jpg[/url<] [url<]http://www.spatc.net/media/catalog/product/cache/1/small_image/x350/040ec09b1e35df139433887a97daa66f/d/e/dell_vostro_3350_00.jpg[/url<]

    • link626
    • 7 years ago

    Considering that a 14″ IB quad i7 with a GT 640 only costs $800 now, there’s no way I’d pay $700 for an A10.

    Also, I wish someone would overclock this with AMD PSCheck.
    I’d like to see how much headroom this A10 has.

    • maxxcool
    • 7 years ago

    Nice to see the fake quad-core CPU performing so well for all the fanbois to see…. thank you TR, as always very precise and easy on the eyes to read.

    50% of a quad core is a dual core… that’s Bulldozer and Piledriver.

      • odizzido
      • 7 years ago

      It has four integer cores and two floating point cores. It’s a six core processor!

      Sarcasm aside, if Windows did see Bulldozer as a six-core processor and just assigned threads to cores based on the math needed, that would be nice. Maybe it does this already? I haven’t looked at Bulldozer plus the Win7 update much, since I’m not looking to upgrade.

        • just brew it!
        • 7 years ago

        “Percentage of workload that requires floating point math” is not a metric the CPU exposes to the OS scheduler.
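
        What the OS does see is topology. As a hypothetical illustration (Linux-specific; what gets reported for Bulldozer modules depends on the kernel version), the scheduler-eye view of which logical CPUs are siblings can be read straight from sysfs:

        [code<]
        /* Sketch: print which logical CPUs share a core/module, per sysfs.
           The paths are standard Linux; the output depends on the kernel. */
        #include <stdio.h>

        int main(void) {
            for (int cpu = 0; cpu < 8; cpu++) {
                char path[128], buf[64];
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                         cpu);
                FILE *f = fopen(path, "r");
                if (!f) break;                 /* ran out of CPUs */
                if (fgets(buf, sizeof(buf), f))
                    printf("cpu%d siblings: %s", cpu, buf);  /* buf ends in \n */
                fclose(f);
            }
            return 0;
        }
        [/code<]

        Nothing in there says how much FP work a thread will do; the scheduler can only spread threads across modules and hope.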

        • Celess
        • 7 years ago

        It’s not that simple, just to rehash the never-ending troll war. 🙂

        The FP “unit” can do more than one instruction at a time from two different threads, just like the two “integer” units do together, and just like the old “separate” FP “units” did. One of several hardwired deficiencies is that it can’t do two 256-bit FMAC instructions at a time, but it can still do two 128-bit ones. The float unit does share a scheduler where before it wouldn’t in the same way, but that’s more an issue of semantics than of why Bulldozer was slower. It’s more complex than the trolls make it. If it were really half the FP hardware, the scores would be way, way lower in the tests than they are. It’s far from a fake 4-core.

          • Rza79
          • 7 years ago

          The limitation of the Bulldozer architecture doesn’t lie in the execution units but in the front end, which is shared. The fetch/decode unit can only issue 4 instructions per clock. As a result, an AMD module can only issue the same number of instructions per clock as an Intel CPU. That’s why I never bought the 8-core crap. Basically, an 8-core Bulldozer peaks at fewer instructions per clock than a 6-core Phenom. AMD just took a different approach to multi-threading but wants to generate sales by inflating the core count.
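
          The back-of-envelope arithmetic behind that claim, assuming the published decode widths (4-wide per Bulldozer module, 3-wide per K10 core), goes like this:

          [code<]
          /* Peak x86 decode throughput per clock from published widths.
             This is a theoretical ceiling, not sustained IPC. */
          #include <stdio.h>

          int main(void) {
              int bd_modules = 4, bd_decode = 4;   /* FX-8150: shared front end per module */
              int k10_cores  = 6, k10_decode = 3;  /* Phenom II X6: decoder per core */

              printf("8-core Bulldozer peak: %d inst/clk\n", bd_modules * bd_decode); /* 16 */
              printf("6-core Phenom II peak: %d inst/clk\n", k10_cores * k10_decode); /* 18 */
              return 0;
          }
          [/code<]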

            • Celess
            • 7 years ago

            Can’t really size it up the same way you would a video card. Unless each x86 instruction is fast path, and a whole lot of other things happen just right, it’s rare to block there relative to other things.

            Things like the predictor thrashing the cache or flushing long pipes hurt way more, and tons of other things. This, like the oversimplification of the FP unit, isn’t really the biggest issue.

            • maxxcool
            • 7 years ago

            What he said ^^

          • maxxcool
          • 7 years ago

          You’re right, and the benchmarks are spot on. In FPU-intensive apps we see the /airquote/ quad-core CPU /end airquote/ pull 60% of the performance compared to an Intel native die. Clock speed aside, it’s about 1/2 the integer and FPU potential.

          Now I like the idea of a 256-bit FPU that can run two 128-bit operations at the same time… but it only seems to net 10% +/- 5%. And they tried a different but somewhat similar approach to integer, and it is killing them.

          What I would like to see is kinda what Tom’s Hardware did with the new improved Piledriver: run a pure 3 GHz benchmark with all CPUs hard-locked at 3 GHz and then run every test known to man. It would still not be in AMD’s favor, but it would shed more light, I think.

        • geekl33tgamer
        • 7 years ago

        Since the patch, my FX-8120 went from being reported to Windows as an 8-core CPU to a CPU with 4 cores, 8 logical threads.

        I guess it just tries to copy HT now… Still no performance increase, though.

          • BaronMatrix
          • 7 years ago

          Wait, so you’re saying after the patch, DevMgr says 4 cores and TaskMgr says 8 cores?

          • BobbinThreadbare
          • 7 years ago

          It only makes a difference on processes that use 2-4 threads.

        • maxxcool
        • 7 years ago

        The patch was a bust. It would make you sad, and kittens would die, to hope against hope on it or the future Win8 scheduler. A mere 5% overall improvement, same 4-issue limitation 🙁

      • jdaven
      • 7 years ago

      I bet your world flipped upside down the day a computer could have two processors inside without needing a gigantic dual socket server board.

      • flip-mode
      • 7 years ago

      So Trinity “dual” is faster than Llano quad. Very impressive.

      • dpaus
      • 7 years ago

      By your logic, the 80386 – without any built-in FPU – was only a ‘half-core’ chip.

        • Rza79
        • 7 years ago

        Well it was, wasn’t it? You needed to add a 387 FPU unit to make it a ‘full one’.

          • phileasfogg
          • 7 years ago

          Either that or the Weitek FPUs – the Weitek 3167 and 4167 were majestic chips for their time.

    • jdaven
    • 7 years ago

    Bulldozer, Piledriver, Steamroller, Excavator. These names are really the order of construction equipment that AMD can afford over time when it finally comes time to demolish their headquarters. Just four easy payments per year.

    I kid, I kid.

      • ronch
      • 7 years ago

      But paying for your Intel processor sure won’t be easy when that happens.

    • swaaye
    • 7 years ago

    It’s a step forward for budget stuff, a nice improvement over Llano, and a better choice than the AMD Exxx or Celeron/Pentium notebooks.

    Another thought though – was there ever a Llano notebook that was good quality and really liked? AMD notebooks always seem to be middling or worse in that respect, because they are competing at the bottom.

      • Voldenuit
      • 7 years ago

      Sony had a Llano line and there were the Thinkpad Edge series.

      I’d like to see a 17 or 25W Trinity in a Thinkpad X series – now that would be sweet. AMD should make a Trinity SKU with ‘FireGL’ professional workstation graphics drivers and certification.

      • just brew it!
      • 7 years ago

      We’ve got a Llano based Thinkpad at the office that’s pretty decent.

    • DeadOfKnight
    • 7 years ago

    [quote<]All of these IGPs are relatively wimpy graphics solutions[/quote<] Yet they are still faster than consoles...

      • SPOOFE
      • 7 years ago

      Oh yes, perfect examples of the Holy Gaming Experience that is the PC, these integrated graphics procs.

        • DeadOfKnight
        • 7 years ago

        The point is that the baseline for PC graphics has gone up dramatically within the last couple of years, so much so that any mainstream PC built with these new processors can play games at settings comparable to modern-day gaming consoles. That’s just the baseline, and that’s a huge step forward. Back in the day, if you didn’t have a discrete GPU, you simply couldn’t play games at all.

        Actually, these aren’t just PCs, they are laptops, which makes it even more amazing.

          • SPOOFE
          • 7 years ago

          Yes, PC hardware encompasses a humongous range of hardware capabilities, and yes, PC hardware advances over time. The “baseline” has moved incrementally into “less awful” territory.

          The point is not very good; observing that it takes the lowest of low-end PC graphics capabilities six or seven years to slowly crawl into the realm of not-completely-useless is not a very significant observation. The comparison to consoles is absolutely ridiculous.

            • shank15217
            • 7 years ago

            Why is it useless? Because this console hardware isn’t receiving games and updates, right? The Trinity GPU is hardly slow; it basically kills off most $100-or-less discrete chips in one go, and you haven’t even seen the desktop variety.

            • SPOOFE
            • 7 years ago

            Trinity is great, as long as it’s being compared to similar products of its class… such as low-end graphics cards, as you did (because that’s what the Trinity GPU [i<]is[/i<]). I think comparing it to a completely different category of product is ridiculous. You might as well compare it to golf clubs.

            • DeadOfKnight
            • 7 years ago

            It’s not a slow crawl over that much time; it was hardly moving at all until last year.

            • SPOOFE
            • 7 years ago

            There’s been a steady stream of development in the IGP world for many years now. True, most developments were mere accelerations of 2D functions like video or such, but the ability to play games on these cheapies has been slowly expanding for much longer than you seem to assert.

            • DeadOfKnight
            • 7 years ago

            Yes, but only now has it become a viable solution for the casual 3D gamer.

            It’s not just old games, either; DX11 games are easily playable at 720p+.

            If that’s not significant enough, just wait: ULV chips will be doing it at 17W!

            • SPOOFE
            • 7 years ago

            [quote<]Yes, but only now has it become a viable solution for the casual 3D gamer.[/quote<]

            Says you. But I played Torchlight just fine, maxed out, on crappy Intel C2D integrated graphics from something like three or four years ago. I've played other games of various release dates and peak quality on the same (and on crappier!) hardware.

            So it comes down to what YOU say is "casual 3D gaming". Who died and made you king? Casual 3D gamers have been at it on crappy integrated graphics since forever.

            • DeadOfKnight
            • 7 years ago

            We’re moving away from crappy hardware altogether; that’s my point. I don’t even know what your argument is with my post; it seems like you’re just arguing for the sake of argument.

            We have integrated graphics that have caught up with mainstream gaming. Sure, it may be 6-year-old mainstream gaming, but that’s better than we’ve ever had for 3D games, and I still see many arguments that the current generation of consoles is more than good enough. I don’t agree with that sentiment, but you have to give it to the chipmakers for raising the bar to that standard.

            This is a good thing not just because it brings more performance, but also because it completely demolishes the market for crappy discrete graphics solutions, which never should have existed in the first place. Such wasteful and completely useless products were only ever put out to deceive cash-strapped consumers.

            This also helps out the PC gaming industry. With the baseline up to this standard the PC becomes a much better platform for developers. They are already tweaking their games to run well on these solutions.

            • SPOOFE
            • 7 years ago

            [quote<]I don't even know what your argument is with my post[/quote<]

            Because you're reading all sorts of nonsense into very simple things that I'm saying, because you're itching for a fight or something. I said that your observation above "is not a very significant observation." [i<]Like you just said[/i<], "We're moving away from crappy hardware altogether; that's my point." Well, big fat duh, Sherlock. That's exactly how the tech industry - and, really, so-o-o-o many other industries and aspects of human existence - has behaved since forever.

            So you make an observation that pertains to almost everything... and now your panties are in a knot, spewing a four-paragraph response for whatever reason, because I told you straight up - no unnecessary verbiage spewed about - that your observation was not terribly significant. That's it.

            I didn't even read the rest of those letters you threw about as if there were some grand philosophical debate to be had. There isn't; there's just you, bloviating very simple responses into some mutant monster of a debate that's happening 99% inside your head, and me, forced to resort to so much unnecessary Pounding You In The Head that it really makes me wonder if you're just seeking the sensation of "victory" by making people avoid interacting with you.

            • DeadOfKnight
            • 7 years ago

            Heh, I’m not the one with the belligerent tone. I was talking shop; I really don’t care if someone is wrong. Let them stay wrong. Doesn’t bother me. I was just defending because you were attacking.

            • Bensam123
            • 7 years ago

            The lowest of the low… That’s the baseline consoles function at.

            This whole argument doesn’t even take into account what a 680 or 7970 can accomplish.

            • SPOOFE
            • 7 years ago

            [quote<]The lowest of the low... That's the baseline consoles function at.[/quote<]

            By that same criteria, every console sold is among the fastest of its class.

            [quote<]This whole argument doesn't even take into account what a 680 or 7970 can accomplish.[/quote<]

            Nor does it take into account one's horoscope, the number of pigeons currently pooping in Central Park, or the exact photon output of Betelgeuse, because none of those things are salient, either.

            • Bensam123
            • 7 years ago

            lol… I think you’re being purposefully obtuse.

            • SPOOFE
            • 7 years ago

            Okay. I think it’s ridiculous to compare part of a brand-new CPU to an entire console entertainment system that’s clearly nearing official EOL. I think it’s even more ridiculous to include brand-new high-end graphics cards in the mix. What is the point of the comparison? What is gleaned? That newer hardware advances over old hardware? You needed to compare Trinity to an Xbox to a 680 to learn this?

            Like I said: Ridiculous. Although if you think there’s any viable use to such endeavors, I welcome the explanation.

            • Bensam123
            • 7 years ago

            It would be ridiculous if there were newer systems to compare it to, but there aren’t, and the launch of such systems, which will probably be based on two-year-old hardware, is still 1-2 years off.

            It isn’t a fair comparison, but that’s because there isn’t a fair comparison to be made. That’s one of the downfalls of consoles, and the point the OP was making. Especially with this generation, which they’re milking for everything it’s worth.

            • DeadOfKnight
            • 7 years ago

            I think it’s a fair assumption that this next generation of consoles isn’t going to impress us hardware enthusiasts. There’s potential for the 3D craze to make its way to consoles in a big way. Best case scenario, every game runs at 1080p with medium-high detail settings and 4xAA. Extremely doubtful.

            • SPOOFE
            • 7 years ago

            “Fair”? Who gives two donkeys about fair?

            I mentioned that it’s not significant.

            [quote<]That's one of the downfalls of consoles and the point the OP was making.[/quote<]

            ... So? Significance? Commentary on something [i<]everybody doesn't already know[/i<]? Do you people even read your own posts? Vomit much, afterwards?

            • Bensam123
            • 7 years ago

            I don’t know, I was hoping it would sort of click for you if I went through all the steps with you.

            • Voldenuit
            • 7 years ago

            I don’t think it’s that ridiculous at all, since consoles dictate the performance target for cross-platform (read: nearly all PC games these days).

            That’s ignoring that console titles benefit from a fixed configuration and more optimization effort/resources/budgets, of course. But they’re definitely useful as a baseline and/or a comparison.

            • SPOOFE
            • 7 years ago

            [quote<]since consoles dictate the performance target for cross-platform (read: nearly all PC games these days).[/quote<]

            BULL. The entire justification for this myth is some sort of notion that "ported games SHOULD run better on my system". But since there's no way to ever make a controlled comparison, it's inherently impossible to actually TEST THE ASSERTION, EVER.

            Console ports to PC regularly get features or tweaks that the original releases lacked... [i<]just not always[/i<]. And some have glitches. And some run poorly. But EVERY single one of those traits has ALWAYS existed in the world of gaming. Some games do suck. Some games are glitchy. Some don't have features you want. This existed in the '70s, in the '80s, in the '90s, and ever since the clock hit the 2000s.

            The source of this myth's perpetuation is the whiny, self-indulgent, self-absorbed, self-centered "core attitude" that permeates the gaming audience, the very same attitude that makes it impossible for coherent feedback to reach devs and publishers, the very same attitude that makes them think they can get away with crap. It's self-defeating and, holy crap, how have more people not figured this out by now?

            • Bensam123
            • 7 years ago

            Consolization took place, sir. Everything went downhill, and only now have a handful of games caught up to Crysis graphically.

            Skyrim, BF3, and Crysis 2 (go figure) are the big three in the last five years that have done this. All three of the above are also the sole games that offered PC-exclusive content and updates.

            I appreciate you attacking all the supporters of such a standing with ad hominems, but I don’t believe it helps you make a point. It just sorta seems like you’re getting pissy because no one’s believing you. You do need more stereotypes and hyperbole, though. I think that would help more.

            Not to poke holes in swiss cheese here, but how many people do you know of putting SSDs in their consoles… Why do you suppose that is? Consoles don’t come with SSDs… They aren’t a piece of hardware you can even buy for them legitimately. I wonder what would happen if you could change out the graphics card and processor too… More memory… I feel like we’re edging on something here, I can’t quite put it into words… You’ll have to feel it out for me.

        • Bensam123
        • 7 years ago

        lol… it’s amazing how such a tiny tidbit of truth can hold such a huge bite.

          • SPOOFE
          • 7 years ago

          That’s not a bite, that’s just your spittle on my shirt. 🙂

      • ronch
      • 7 years ago

      You can thank AMD for that. I guess IGPs only became respectable starting with the 780G. Then after that the APUs came along. Nvidia was out of the IGP race, and Intel only stepped up its efforts because of AMD.

    • WaltC
    • 7 years ago

    [quote<]Well, isn't that interesting? As poorly as Ivy and the HD 4000 did in Skyrim, they're actually faster and smoother than the A10-4600M and Radeon HD 7660G across the board here—and by a fair margin, too. Perhaps Intel's driver team has done some optimization work for Unreal Engine 3-based titles. Either that, or some of the integrated Radeon's performance has been left untapped. Considering Trinity barely edges out Llano here, the latter seems more likely.[/quote<]

    What really interested me with Batman was how you used FXAA instead of the 4x FSAA you used in the Skyrim comparison, and you *turned off* every single IQ feature the game offered, save dynamic shadows. Alternatively, you might've experimented with lowering the resolution and raising game details by turning on game features, like D3D 11 support, for instance--just to see what, if any, difference it made. I guess what you meant to say here is that as long as you don't mind the game looking like horse manure, the HD 4000 is your ticket. I'll wager that if you turned on all or some of those features to make the game look better, the HD 4000 would quickly drop behind, and significantly so. Sheesh.

    This is a far cry from TR's Llano comparison with Sandy Bridge IGP, IIRC (maybe I'm wrong, maybe it wasn't TR). IIRC, that comparison delved into the graphics end of things far deeper than you've bothered doing with Trinity. (I apologize if I have TR mixed up with another site.)

    Anyway, this looks to be a no-brainer comparison--if you are someone who uses a notebook to word-process, browse the Internet, use Excel, and run 3D games--then Trinity is the product of choice. But it's likely to be Ivy Bridge if you are someone who loves running synthetic CPU benchmarks in order to get a "top score," and isn't much enamored of software like 3D games...;)

    I mean, let's face it, everyone buys DX11 games so that they can [url=http://images.anandtech.com/doci/5831/Screen%20Shot%202012-05-14%20at%2010.39.30%20PM.png<]turn off D3D 11[/url<] support, right? And "ambient occlusion"--ugh. You're right, who needs it? Awful feature--why'd they support it in the first place? Ingrates. Shader model 5.0? Trash.

    AMD claims this is a DX11 GPU, so it is incumbent on you to test it as such, as well as the HD 4000, even if the results come out as Trinity 10 fps, HD 4000 0 fps. IMHO, of course. It's the kind of specific information a review should provide.

      • SPOOFE
      • 7 years ago

      [quote<]I'll wager that if you turned on all or some of those features to make the game look better, the HD 4000 would quickly drop behind, and significantly so. Sheesh.[/quote<]

      So what? They'd both be unplayable at that point. Hooray for AMD, their IGP is only slightly less awful than Intel's at high resolutions or graphics settings!

      Way to miss the point, Walt.

    • halbhh2
    • 7 years ago

    “If you value desktop application performance above all else, then Trinity probably isn’t for you. If you care about graphics and gaming, well, then Trinity may hold some interest. ”

    Er… well, from the results, which I looked over carefully, and caring about graphics, browsing, HD movies, and a bit of gaming, I’d say “Trinity is the *clear* choice, superior” instead.

    I like my year-old Sandy bridge laptop, but it looks like the next one will be Trinity.

      • SPOOFE
      • 7 years ago

      Ha. The “clear” choice is a discrete GPU. The “CHEAP” choice is Trinity.

    • flip-mode
    • 7 years ago

    I’m not feeling the love.

    Piledriver improves on Orochi, sure, but I’m not really sure we should be impressed by that. LOL, um, Thuban was an improvement (like, a fairly substantial one) over Orochi, remember? I suppose the power consumption is actually quite impressive, but CPU performance certainly is not. GPU performance on a laptop is not something I personally care about. I can’t speak for the broader market, but I hope for AMD’s sake that the broader market cares a lot more about it than I do. My wife needs a new laptop. Unless the price difference between Ivy Bridge and Trinity solutions is too much to overlook, I’m aiming for an Ivy Bridge.

    But, keep up with the incremental improvements, AMD. That really adds up over time.

      • deruberhanyok
      • 7 years ago

      I’m running an old laptop right now – Pentium M 1.7GHz – that I dropped 2GB of RAM and a first-generation OCZ Vertex into. It’s plenty fast for my day-to-day use (with the exception of the horrid onboard Intel IGPs of 6 years ago) with Windows 7. Admittedly, a large part of that was from the SSD upgrade, but the point I’m trying to make is that we had adequate day-to-day performance from CPUs many years ago.

      I think, if your use is the typical “checking email, facebooking, watching some hulus” then the A10’s CPU performance is more than enough – as it is with just about any processor out these days. Their excellent IGP and UVD hardware provides the horsepower needed for 1080p video playback or the occasional gaming, without sacrificing battery life (which I feel is the most important metric if you’re talking about portable systems).

      Ultimately, though, I’m saying what you just said: price is king. So I still get to say you’re on point. 🙂

    • Chrispy_
    • 7 years ago

    This is why TR is always worth waiting for. I read Anand’s review yesterday and was a little depressed that the HD 4000 gets so close in so many games.

    What the numbers don’t tell you is that the HD4000 is far less playable – and only an “inside the second” analysis of actual frame latencies shows the seemingly Trinity-beating IGP results as a false positive.

    It’s disappointing that Trinity isn’t showing the advertised “up to 54% faster” over Llano, but it’s still a solid enough choice that I will be picking one up instead of a mobile Ivy. Let’s hope the vendors don’t make a complete hash of the launch models.

    • ronch
    • 7 years ago

    I’ve noticed that ever since Bulldozer came out, there’s been no fanfare on AMD’s website when new chips launch. Same today with Trinity. It’s so unlike when Barcelona or Deneb came out. Then again, AMD’s website sucks nowadays, with lots of inconsistencies and unprofessional-looking pages. Like they just randomly grabbed some guy in the office to do it.

      • dpaus
      • 7 years ago

      [quote<]Like they just randomly grabbed some guy in the office to do it[/quote<] Given that they sacked most of their marketing team when Rory Read took over, that's probably pretty close to the truth.

        • ermo
        • 7 years ago
        • pogsnet
        • 7 years ago
        • ronch
        • 7 years ago

        No, the AMD website sucked even BEFORE the layoffs on Read’s watch. Missing pages, wrong grammar, confusing/wrong/incomplete information (like, I think I remember a Black Edition processor indicated as not being unlocked… duh!), etc. A far cry from the best websites I’ve ever browsed.

          • Arclight
          • 7 years ago

          Meh, it’s all BS anyhow. The only important info you can get from AMD’s or Intel’s websites is code numbers and a few useful specs that you don’t always find in a review, like Tj max.

            • ronch
            • 7 years ago

            Yeah, except AMD’s spec tables can sometimes be weird, like they specify only the amount of L2 per core. Well, they could’ve made it something like ‘512KB per core’ instead of just ‘512KB’, because that’s sure as heck confusing.

          • just brew it!
          • 7 years ago

          They also do something odd on some of the menus that Chrome has problems with…

            • ronch
            • 7 years ago

            Bottom line – it sucks. I hope someone at AMD tells Rory about it.

        • ronch
        • 7 years ago

        ermo forgot to type his comment… or maybe your words left him speechless.

          • ermo
          • 7 years ago

          I was going to say that you were still right, but belatedly realized that dpaus never contradicted you.

          By that time, I couldn’t remove my post, hence I simply left it blank. 🙂

            • ronch
            • 7 years ago

            There’s no Cancel button?

            • ermo
            • 7 years ago

            Yes, but there’s no delete button once you’ve submitted your reply.

            You can only delete the content of the post.

    • ronch
    • 7 years ago

    Really wish my Turion X2 laptop had lasted a bit longer, for Trinity. Well, it broke down after years of service and I had no choice but to grab a Lenovo with a Core i5-2450M in it last Feb. Not a bad choice, but I admit Dual Graphics is not as glorious as it sounds, as I’ve found out. I’d much rather have something like Trinity.

    Oh well. On to Vishera. Can’t wait to see how it does compared to Bulldozer.

      • willmore
      • 7 years ago

      Similar here. My old Toshiba A135 died and I held out as long as I could (this was over a year ago, and I was holding out for Llano). I didn’t make it and ended up with a ‘craptop’ with a Pentium B940 in it. Maybe it’ll do me a favor and die of natural causes sometime this summer. 🙂

    • slaimus
    • 7 years ago

    With some tweaks like switching to GCN shaders, it would be a heck of a console chip.

    • jdaven
    • 7 years ago

    Taking a look at the die layout, I can see that the GPU takes up half of the chip. It would be cool if AMD could make their APUs more modular, or all-CPU. This would allow OEMs to make different config choices:

    1/4 CPU (2C/1C Int/FP) + 3/4 512 SPs – “Gaming APU”
    1/2 CPU (4C/2C Int/FP) + 1/2 384 SPs – “Everyday APU”
    3/4 CPU (6C/3C Int/FP) + 1/4 256 SPs – “Media APU”
    100% CPU (8C/4C Int/FP) – “Heavy-threaded CPU”

    All of these could have the same power envelope and clocks. You pick the APU/CPU based on your workload.

      • dpaus
      • 7 years ago

      Even better if it could dynamically scale across those configurations, maxing out each within a given power budget.

        • willmore
        • 7 years ago

        I think we’ll see more hope of this happening with the Bobcat type of chip. For one, the CPU core is synthesized, not hand-laid-out, so it could be tweaked more easily–and it’s made on a process and at a company that does contract fab work.

        • NeelyCam
        • 7 years ago

        That would require the chip to have max cores and max SPs = big and expensive

          • dpaus
          • 7 years ago

          Still less than a high-end CPU [b<][i<]and[/i<][/b<] a discrete mid-range/high-end GPU, though. If you need the performance and the flexibility, I think the value proposition could be made.

      • Bensam123
      • 7 years ago

      Sounds like daughter cards…

      Conveniently, those turned into graphics cards which function like daughter cards.

    • derFunkenstein
    • 7 years ago

    Prior to this review (and prior to Anand’s that I read yesterday) I didn’t hold out any hope for Trinity to be a decent replacement for Llano. These reviews have kind of changed my mind and I’m vaguely interested to see the desktop version.

    Also curious: what will AMD Dual Graphics be with the mobile/desktop versions? Do we need to wait for low-end 7000-series GPUs to be released? Will the low-end discrete graphics be VLIW4, VLIW5, or GCN-based? Does the desktop version of Trinity require a new socket or will A75/FM1 still be viable? Lots of questions remain.

      • NeelyCam
      • 7 years ago

      As almost always is the case with new chips vs. prev.gen, if one has a Llano system, no need to upgrade. Same for SB/IB.

    • riviera74
    • 7 years ago

    I only have one question: when can we get our hands on a notebook with a Trinity APU?

      • FuturePastNow
      • 7 years ago

      I’ll add a second question: will the A10-4600M actually be available in 13″ laptops like that demo unit? Because I remember companies only selling the A8-3500M in 16″ behemoths.

    • smilingcrow
    • 7 years ago

    “We liked the mobile Llano variant well enough to consider it a viable alternative to Intel’s dual-core Sandy Bridge processors—perhaps even a superior choice for most folks, given the gap in graphics capabilities.”

    Which begs the question: do most folks really play graphics-intensive games on their laptops? I think not, but I am curious to know what percentage do. OpenCL is a long way from being significant for mainstream (non-techie) users, so gaming performance is surely what sells these; it would be good to know just how large the market is.

    Good to see AMD with another strong challenger in the mobile space after so many years in the wilderness. It’s sure taken a long time for them to integrate ATI’s tech into their CPUs, but good to see it has finally paid off.

      • Voldenuit
      • 7 years ago

      [quote<]do most folks really play graphics-intensive games on their laptops?[/quote<]

      /Anecdotal/

      My wife played Oblivion on her ancient 13" C2D laptop with a GeForce Mobility (some ancient low-to-mid-end unit). When Skyrim came out, the machine wasn't powerful enough, and she migrated to our desktop.

      So, to answer your question: some people do.

      /Anecdotal/

        • dpaus
        • 7 years ago

        /speculative/

        And now, even more people will – just because they can.

        /speculative/

          • willmore
          • 7 years ago

          Yep, people won’t game on a laptop if the game won’t run well (or at all). Sort of a chicken-and-the-egg kind of problem.

          The fact that there are laptops designed for gaming (and have been for a long time) should tell you that some people do game on laptops. Combine that with the fact that those who don’t game on them probably have laptops they *can’t* game on, and you can guess what will happen when more laptops are capable of gaming.

          “If you build it, they will come.”

          • smilingcrow
          • 7 years ago

          “And now, even more people will – just because they can.”

          The difference here is that it’s now possible at a lower price point, so it will be interesting to see how that affects gaming on low-end laptops. Even if it doesn’t lead to much of an increase in the number of people gaming, it might have a significant impact in that people will play a wider range of more demanding games, which is itself significant.

        • Mr Bill
        • 7 years ago

        I play WoW on my A8-3500M Llano laptop.

      • My Johnson
      • 7 years ago

      Minecraft.
      Google Earth.
      Some 3D Flash stuff within a web page.

      • JustAnEngineer
      • 7 years ago

      I’ve played Mass Effect 3, Skyrim, Guild Wars 2 (beta), Counter-Strike: Source, Civilization V and Borderlands on my laptop this week.

      • smilingcrow
      • 7 years ago

        Thanks, but I’m not interested in personal anecdotes; I was wondering if there was a survey that looked at this.

        • heinsj24
        • 7 years ago

        A trip to Steam’s Hardware and Software survey would answer that question. Gaming at the resolution of 1366 x 768 accounts for 16% of users.

          • smilingcrow
          • 7 years ago

          I asked what percentage of laptop users play graphically demanding games. Steam just has data on gamers, not on everybody, so their data can’t answer the question I asked.

            • SPOOFE
            • 7 years ago

            Maybe you oughtta go look it up yourself, then.

      • ermo
      • 7 years ago

      Students?

      EDIT: -1 for suggesting that Trinity is a value solution with good-enough CPU performance, good battery life, and class-leading IGP performance, adequate for playing games at medium settings at a resolution of 1366×768?

      If that is not aimed squarely at students, what is?

      Given that it is likely going to be competitively priced, something like this in a 13″ or 14″ form factor is IMO the ideal student laptop. And it can’t be a coincidence that it is reaching the market just prior to the back-to-school season.

        • smilingcrow
        • 7 years ago

        Agreed, I think students on a budget will love this if they game, and if the price is right, even if they don’t.

        AMD’s mobile APUs are easily the cream of their products for me, and going forward I think they are important. As well as hopefully making them good money, it also gives them a halo product, which they’ve lacked for ages. There are a lot of rumours about Haswell being a game-changer for Intel, so AMD needs to keep the focus on mobile APUs, I feel. Although Intel is unlikely to significantly close the gap with AMD in driver quality anytime soon, if their hardware gets much more competitive in terms of FPS, that will take the shine off AMD’s lead.

        It’s always good to see an area where there is a lot of competition, and AMD v Intel is a big pull. 🙂

        If I were running Intel I would be focusing a lot on improving graphics performance in laptops and overall performance for tablets/phones. It’s easy to say what I would do if running Intel, as they have so much R&D money, but for AMD it would be a hard call as they seem so cash-starved. But that just goes to show how well they are doing and also how badly Intel is doing in this area. I’ve heard it said that Intel has difficulty taking graphics seriously, which seems ironic considering they started out as a memory company.

      • willyolio
      • 7 years ago

      If my laptop were capable of running StarCraft II, I’d play it. Trinity seems to handle SC2 quite well.

      • odizzido
      • 7 years ago

      I know a lot of people who don’t like games but play/watch The Sims.

    • kamikaziechameleon
    • 7 years ago

    This is encouraging. Assuming AMD can price it competitively, they’ll have the low-end gamer market cornered.

      • ronch
      • 7 years ago

      More importantly, AMD should have ample supply, else most folks will just be reading about Trinity.

    • Peldor
    • 7 years ago

    Couple of comments:

    There’s not much point in battery life tests with different screens. It might be more interesting to do power consumption tests while running an external monitor. That would give a better assessment of how the different CPUs keep power consumption in check under various loads.

    Do we know yet what the i5 and i3 Ivy Bridge mobile lineup has for graphics? All of the Trinity models listed are cut down by 1/3 or more in clockspeed or resources from the top A10 model. If Intel keeps the HD4000 on a dual-core i3/i5, that’s going to be a potent competitor for the lower spec Trinity models.

    Edit: Just found this piece: [url<]http://news.softpedia.com/news/Intel-Core-i5-3210M-Specifications-Dual-Core-Ivy-Bridge-269807.shtml[/url<]

    Core i5-3210M: 35W, 2.5 GHz / 3.1 GHz Turbo, HD 4000 at 650 MHz / 1100 MHz max.

    That's down from a 1250 MHz max clock on the quad-core reviewed here, but it should still narrow the gap in games relative to the lower Trinity parts.

      • NeelyCam
      • 7 years ago

      Much agreed.

      Actually, I found it kind of silly to compare the Trinity “quad”-core to a Sandy Bridge quad-core. The CPU tests clearly show that SB and Trinity cores are in different classes altogether. Comparing to a dual-core SB would’ve been a better apples-to-apples comparison. Same thing with battery life – what’s the point of comparing what is essentially a dual-core with SMT to a quad-core?

        • ermo
        • 7 years ago

        [quote<]what's the point of comparing essentially what is a dual-core with SMT to a quad-core [i<]with SMT?[/i<][/quote<]

        FYP, and I agree.

      • BobbinThreadbare
      • 7 years ago

      Couldn’t they have hooked up external monitors and turned off the laptop monitors, and then not had to worry about them?

      Yes, this wouldn’t give “real world” numbers, but it would tell you how much power the systems are drawing.

        • Peldor
        • 7 years ago

        Yes that’s what I was suggesting.

        • pogsnet
        • 7 years ago
      • Bensam123
      • 7 years ago

      I would’ve actually chosen to compare the 4600M to an Intel model at the same price point. Everything else really doesn’t matter, as that’s what it all comes down to.

    • shank15217
    • 7 years ago

    What’s the point in having a 256-bit bus for the GPU if the memory bus width is only 128-bit? I think the GPU virtualization feature is very interesting for compute applications. It would be very cool if VMs got direct access to the GPU through SR-IOV virtual function support; this might make Trinity and subsequent processors in the APU line a tightly coupled HPC solution as well.

      • just brew it!
      • 7 years ago

      The GPU bus may be running at a lower clock speed than the DRAM bus. Also (not clear on this) it may be able to access data in the CPU’s cache, which is faster than DRAM.

    • willmore
    • 7 years ago

    Minor nit. First page, third paragraph:
    “parity on this front had long been eluded AMD”

    Either remove ‘been’ or change ‘eluded’ to ‘eluding’.

    • deruberhanyok
    • 7 years ago

    Thanks for the writeup, TR!

    I get why you guys decided to compare them to the 45W processors, but I feel it doesn’t really mean much. If you’re looking for a $700 laptop, the likes of which might have the A10 APU in it, comparing it to a processor that would likely appear in a laptop that costs an extra $400 (or more) doesn’t really mean anything.

    I’m also wondering how they’d compare to different models of the current Sandy Bridge, since so many of them are still in the market – maybe an i5 something-or-other with a similar power envelope. Specifically in regards to battery life, since I think the performance tradeoffs between the two will be pretty clear.

    Any chance we could see a short follow-up with a few other comparisons? Perhaps one of the 2 core, 4 thread 35W Sandy Bridge processors – i7-2620M or i5-2520M, something like that?

    • mekpro
    • 7 years ago

    I wish notebook manufacturers made high-quality Trinity-based notebooks. I know it contradicts the strength in price point, but when it comes to notebook purchasing, the CPU (and even price) is only part of the equation. Many buyers (including myself) are more concerned with the build quality of the notebook.

    The manufacturers, sadly, usually release their AMD-based models with poor build quality, cheap displays, and ugly looks. This was true for Llano, and I’m afraid it still holds true for Trinity, which may make it less appealing than Ivy’s ultrabook lineup.

    • rechicero
    • 7 years ago

    In a nutshell:

    The pricier CPUs, with a 29% bigger thermal envelope and twice the memory, are faster in CPU/memory-intensive* tasks.

    The cheaper and thermally frugal CPUs, with smaller screens, half the memory, no discrete GPU (not used, but I assume connected and in some kind of idle state), and a smaller thermal envelope, have more battery life.

    Did you really need tests for that?

    Don’t you really feel like you should’ve compared Trinity with Llano and called it a day? Throwing in a couple of Intel rigs just for the sake of it looks odd to me.

    Try a 35W Intel chip with 4 Gigs, same screen and without a discrete GPU (and ideally with the same HDD).

    * Although you said “we don’t expect memory capacity to be a constraint in any of our tests”, in the first image processing test, you tell us “This task can require lots of memory”.

    Even if there are no performance differences, those extra gigs surely will eat some watts in the battery run tests, won’t they?

    If you replaced the HDD in the AMD system,

    why didn’t you replace the memory in the systems with 8 gigs, or vice versa?
    why didn’t you run the battery tests with an external monitor?
    why didn’t you unplug, if possible, the discrete GPU?
    why didn’t you use the same memory speeds (like, say, 1333 MHz 9-9-9-24 for every system)?

    You’ve done great things in the past. The “not only FPS” approach for GPU testing is just brilliant and you’re the site I like the most. But this review… Let’s say it’s not the best you’ve done :-(.

      • kc77
      • 7 years ago

      Good grief. I actually think this is the best Trinity review out there.

      A) There aren’t nearly as many Intel chips in the results as in some of the other reviews, where they really don’t do much but muddy the waters (I think rather intentionally). In this review there are two, so you can at least get an understanding of where Trinity stands.

      B) This is the only review that gives you not only latency tests but image quality results as well, which is a very big deal if you are comparing an SB laptop with Trinity. Aside from the lower performance when gaming, there’s a pretty big deficit in the realm of image quality with SB. This is something that no one else has provided.

      Overall I think it’s a pretty good review that provides information many other reviews released earlier didn’t.

        • rechicero
        • 7 years ago

        It is. Just the “not just FPS” approach means anything involving a GPU will be better here. Always. And then you reminded me of the Image Quality tests…

        After saying that:

        A) Yeah, I like a more focused approach. I don’t want more CPUs, but I do want relevant ones: with similar prices, TDPs, screens, memory… These Intel rigs are just too different, from any point of view you can think of: price, TDP, screen, memory setup, battery…

        B) I agree, completely. When testing GPUs (or IGPs) there is only one site for me: TechReport.

        They offered a great amount of useful GPU info, but I have to estimate what the CPU and battery tests mean. And now I ask you: apart from “better battery than Llano”, did you learn something from the battery tests? Is Trinity better, equal, or worse than Intel CPUs in battery life according to these tests? How big is the delta here? And in perf-per-watt?

        We’re talking about mobile CPUs here. Battery life and perf-per-watt surely have some importance.

          • NeelyCam
          • 7 years ago

          I agree. GPU-wise, this is the best review that I’ve seen, but otherwise this isn’t in the same level as some of the other TR reviews. More laptops (including Intel dual-cores) to compare Trinity to would’ve been nice. Also, one of my favorites, task energy, is missing. As I already said, I thought the battery life tests didn’t really make that much sense.

          This is not criticism – these are just observations and opinions. I know TR guys are busybusy and need to sleep some times too, and I understand the pressure to get the review out the door before the ‘patrons’ get anxious.

          I just hope there will be more TR Trinity reviews in the future – maybe when the 17W parts come out – that include task energy, a wider array of other laptops, etc.

          • grantmeaname
          • 7 years ago

          Perf-watt doesn’t have that much relevance unless you’re doing your h.264 transcoding on the go. Laptops spend almost all of their time at idle, and they really aren’t working that hard even when ‘in use’ for web browsing, email, or playing music.

    • TaBoVilla
    • 7 years ago

    awesome review guys!

    • jdaven
    • 7 years ago

    I am most interested in the 25W Trinity SKU. It is only capped by CPU and GPU speeds. Full SPs and CPU cores at near ultrathin power consumption for the win.

    • dpaus
    • 7 years ago

    [quote<]Trinity can drive as many as four (displays) at once over HDMI, DVI, and DisplayPort.... and this chip supports DisplayPort 1.2 operation at up to 5.4 Gbps, including the daisy-chaining of multiple monitors on a single link.[/quote<]

    I could've stopped reading right there and been satisfied that this is the chip I want in my next notebook. But I read on, and was satisfied - not impressed, but satisfied - with the progress AMD has made with this architecture since Bulldozer. I will buy a Trinity-based 'ultrathin' (and I'll take a 35W version, thank you, not the 17W, battery life be damned), but I'll admit up front that I'll dump it in a heartbeat if, next year, AMD comes out with a 35W Steamroller APU on a 28nm process for improved performance and efficiency, and invests 50% of the power savings into CPU performance improvements and the rest into putting in the baddest-ass Southern Islands GPU they can. [u<]That's[/u<] an ultrathin I could get excited about.

    • bcronce
    • 7 years ago

    Conclusions: Think Seasonic Gold-badged PSU without the price. AWESOME.

    Now to wait for a Piledriver server CPU.

    • jensend
    • 7 years ago

    Measuring memory latency in cycles may be an interesting comparison of architectural design decisions, and it’s convenient that one set of values can cover all CPUs with a given microarchitecture regardless of clockspeed.

    However, it makes no sense to use that as a performance benchmark, and doing so leads to confusion.

    The Piledriver cache isn’t slower than Stars’s in any meaningful sense except perhaps at the 32KB stride length. The two microarchitectures are simply designed for radically different clockspeeds.

    Why not do a graph of latencies in nanoseconds?

      • willmore
      • 7 years ago

      Well said.

      With CPUs varying their clock speed at will, it’s very hard to make meaningful measurements or to draw meaningful conclusions from them. For example, the L1 is always at core speed, right? So the number of clocks to it is fixed and independent of clock speed. What about an L2? Is it at core speed? Does its speed vary? For L3 and main memory, the core and the memory operate at different speeds. When the core changes speed, what does that do to the latency? Anyone who’s had to move data between different clock domains will see the problem right away.

      If we could fix the core at a frequency and run the test, and then change the frequency and rerun, we might start to see something that can lead to a useful insight, but as it is–with the core speed changing at will–the results don’t tell us much.

      Converting clocks into time when you don’t know the cycle time (the inverse of clock frequency) isn’t possible to do reliably. After all, the cores in these chips can vary in frequency by over a factor of 2. Even my cruddy little SNB-generation Pentium B940 laptop can vary between 800MHz and 2000MHz.
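
      A minimal sketch of the cycles-to-nanoseconds conversion being debated here, with made-up cycle counts and clock speeds; as noted above, the numbers only mean something if the core was actually pinned to a fixed frequency during the test:

          # Hypothetical latencies in core cycles; real values would come from
          # a run with the core clock locked.
          latencies_cycles = {"L1": 4, "L2": 21, "memory": 190}

          def cycles_to_ns(cycles, clock_ghz):
              # 1 GHz is one cycle per nanosecond, so ns = cycles / GHz.
              return cycles / clock_ghz

          for level, cyc in latencies_cycles.items():
              # Compare a Piledriver-like clock against a slower Stars-like one.
              print(f"{level}: {cycles_to_ns(cyc, 2.3):.1f} ns at 2.3 GHz vs "
                    f"{cycles_to_ns(cyc, 1.5):.1f} ns at 1.5 GHz")

      The same cycle count is over 50% slower in wall-clock terms at the lower clock, which is jensend’s point about plotting nanoseconds rather than cycles.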

      • ptsant
      • 7 years ago

      [quote<]The Piledriver cache isn't slower than Stars's in any meaningful sense except perhaps at the 32KB stride length. The two microarchitectures are simply designed for radically different clockspeeds.[/quote<]

      Another extremely important metric would be the probability of a cache hit vs. miss. Intel has very, very efficient algorithms for auto-prefetching and branch prediction. A "fast" cache is useless if it misses often. Thankfully, Piledriver seems much better than Bulldozer in this regard, judging from the good CPU performance at moderate GHz speeds.
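
      That point is the classic average-memory-access-time relation; a tiny illustration with invented hit rates and latencies:

          def amat_ns(hit_ns, miss_penalty_ns, hit_rate):
              # Average memory access time: hits and misses weighted by probability.
              return hit_rate * hit_ns + (1 - hit_rate) * (hit_ns + miss_penalty_ns)

          # Invented numbers: a 2 ns cache missing 10% of the time loses to a
          # 3 ns cache missing only 2% of the time, if a miss costs a further 80 ns.
          print(amat_ns(2, 80, 0.90))  # ≈10.0 ns
          print(amat_ns(3, 80, 0.98))  # ≈4.6 ns

      In other words, a slightly slower cache backed by better prefetching and prediction can win comfortably.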

      • ermo
      • 7 years ago

      [quote<]Why not do a graph of latencies in nanoseconds?[/quote<]

      Indeed. OTOH, TR has released a flurry of reviews these past weeks. My guess is that 'time constraints' do play a part in this omission, just like 'logistical constraints' robbed us of a useful comparison with a Core i5 w/ a discrete GPU with a shader count similar to the A10's. Ah well, we can't have it all, I suppose. All in all, a good review in my opinion.

    • Tristan
    • 7 years ago

    Bulldozer architecture is a nightmare. AMD should drop it as soon as possible. There is no way to fix it.
    Only a core with high IPC / medium GHz can be really energy efficient.

      • NeelyCam
      • 7 years ago

      -12? Fanbois hate facts.

        • ronch
        • 7 years ago

        Fanbois also hate trolls.

          • NeelyCam
          • 7 years ago

          Sure – most people hate trolls. But Tristan is arguably correct, and I don’t think it warranted -12

            • OneArmedScissor
            • 7 years ago

            His comment was that it can’t be more energy efficient than the concept behind previous architectures… and yet it’s a significant improvement. It’s both faster and runs longer.

            What part of what he said is correct, then? The rest is just his opinion.

            • NeelyCam
            • 7 years ago

            I said [i<]arguably[/i<] correct. His first three sentences are his opinions, but there is plenty of stuff to support those opinions. How accurate the last sentence is depends on one's measure of the word "really", so that's a matter of opinion as well - again, he could easily support his 'opinion'.

            And thumbing down someone because you don't like their opinion is a lame fanboi move. I stand by my [i<]opinion[/i<] that he didn't deserve what is now -22.

            • bwcbiz
            • 7 years ago

            If it’s “arguably” correct, then it’s an opinion, not a fact. And in that case the -12 (now -29) is just fanboys expressing their opinion that his statements
            1) Bulldozer architecture is a nightmare.
            2) AMD should drop it as soon as possible.
            3) There is no way to fix it.
            are misinformed. This isn’t just disagreeing. They’re saying that the opinions in those 3 statements are not backed up by the facts as they know them. In which case a thumbs down is deserved.

            Personally, I agree with #1. Bulldozer has some resource bottlenecks that kill its performance. Up until the results here, I agreed with 2 and 3 as well. But if Piledriver can attain the same 15-20% improvements over Bulldozer in the desktop space, that would put it close to par with Sandy Bridge performance. That’s where it should be, since they are both 32 nm. Unfortunately for AMD, they are competing against Ivy Bridge, not Sandy Bridge.

            Achieving parity with Sandy Bridge at 32 nm, even if it’s a year late, pretty much disproves all of Tristan’s statements about the architecture, though. It’s AMD’s engineering process and budget that are putting it behind Intel. Their design/production cycle is just a tad too slow to compete, so they end up rushing out designs like Bulldozer that are poorly optimized and fail.

            • NeelyCam
            • 7 years ago

            Pretty much everything we talk about is based on opinions. Only small details can be considered ‘facts’, and as soon as the discussion moves to what is “better”, it all becomes opinions.

            But I don’t downthumb people saying that they think Trinity is great because the CPU is good enough and GPU is awesome, or when people say Atom is crap because it’s crap. Downthumbing because of opinions is silly.

            • ronch
            • 7 years ago

            So you never thumbed anyone down here in the forums? Ever? Hmm.

            • NeelyCam
            • 7 years ago

            I downthumb when people are wrong, or when their trolling isn’t funny.

            • ronch
            • 7 years ago

            But sometimes ‘wrong’ is a matter of opinion.

            • NeelyCam
            • 7 years ago

            Then it’s not “wrong” – it’s just a different opinion. I don’t downthumb those.

            • rrr
            • 7 years ago

            If Trinity really sucked hard in the tests, I’m quite sure he’d be upvoted a lot. Except it doesn’t.

            • NeelyCam
            • 7 years ago

            “sucking” is a matter of opinion. Looking at the CPU benchmarks, one could easily argue that Trinity CPU pretty much “sucks” compared to Sandy/Ivy

            • rrr
            • 7 years ago

            Compared to i7? No kidding. It sucks even worse when compared to 3960X or HPC clusters. Trololololo.

            • NeelyCam
            • 7 years ago

            C’mon… You do know that other sites compared Trinity to dual-core SBs, right? Like here:

            [url<]http://www.anandtech.com/show/5831/amd-trinity-review-a10-4600m-a-new-hope/5[/url<]

            Those Anandtech plots paint a pretty clear picture. Unless of course you think it's unfair that a "quad-core" is compared to a dual-core with SMT...? And you call me a troll...

            • Arag0n
            • 7 years ago

            And Trinity ends up within a 25% margin at worst…. Now go troll somewhere else. 25% less CPU power is not easy to notice, and the GPU will be far ahead of whatever you can find in a Sandy Bridge laptop with Intel HD 3000… and those are the kinds of laptops that will become inexpensive soon. Quad- or dual-core i5s from the SB era. That’s what Trinity is fighting, not the top-of-the-line Core i7…

            • NeelyCam
            • 7 years ago

            No – Trinity is fighting IB dual-cores with HD4000. Intel can easily bring the prices down if they wanted to – keep in mind that even a quad-core IB is much smaller than a dual-core+SMT Trinity.

            • Arag0n
            • 7 years ago

            Price, my friend, is the key point. SB will be the cheap Intel option for dual-core i5s for sure… sure, IB would be the competition in a perfect world, but AMD is competing with last year’s technology from Intel, the kind people tend to buy because it goes on sale to make room for the new one.

            • NeelyCam
            • 7 years ago

            Once the SB inventories are gone, IB will take over. Only during this short transition period would Trinity have a chance to be cost competitive ([i<]if[/i<] you can find one anywhere), but of course inventory selloffs make SB cheaper than normal. And soon it's tiny IB dual-cores everywhere, and AMD will struggle to sell Trinity and still make a profit.

            • clone
            • 7 years ago

            Once the SB inventories are gone… did Intel stop producing them?

            While the amount of sand in a CPU affects margins, Intel sets the pricing, and what Intel can do and what it will do are nowhere near in sync.

            Intel will go for the money. It’s not an all-or-nothing game, and keeping AMD around helps Intel in so very many ways that putting them under is far too detrimental.

            Governments don’t like monopolies. Competing interests can take advantage of monopolies in the political and legal realm, but with AMD around, Intel can keep governments out by claiming competition… If AMD died, leaving Intel as a complete monopoly, governments would go for the manufacturing advantage first: they would force Intel to produce for everyone, which, while profitable on the manufacturing side, would inevitably hurt all of their most lucrative divisions… Having AMD collect 15% of the market is a small price to pay compared to Intel being broken up and forced to become a commodity.

            • bwcbiz
            • 7 years ago

            I wish sites would compare chips at the same price point, rather than trying to match cores. From a consumer point of view, we don’t care about the # of cores except for bragging rights. What really counts is performance per dollar. Supposedly the A10 4500 will be priced between the Core i7s and i5s, so comparing it to a quad-core Intel chip is appropriate if it’s also bracketed by a dual-core i5 with Hyper-Threading.

            • NeelyCam
            • 7 years ago

            It’s really difficult with laptop chips because those prices aren’t really public (and whatever “list” prices exist are completely meaningless). With desktop parts it’s easier to pick certain price points, but it looks like almost nobody cares about desktop chips anymore.

            • ronch
            • 7 years ago

            Better edit your post. It’s now at -33 as I type this.

            • Sam125
            • 7 years ago

            lol nah, you really are a troll.

        • willyolio
        • 7 years ago

        Thank you for proving your own point, Neely. Did you read the article?

          • NeelyCam
          • 7 years ago

          Yes, I read the article.

            • jensend
            • 7 years ago

            If so, then how did you, like Tristan, fail to see that the article showed a processor with lower IPC and higher clockspeeds having [i<]considerably better performance and energy efficiency than[/i<] its higher IPC / moderate clockspeed predecessor? [i<]Those[/i<] are the facts, and it's not "AMD fanbois" who are taking a disliking to them.

            • NeelyCam
            • 7 years ago

            Your logic fails. You take two points, extrapolate and make a conclusion, without considering other data points.

            Both Llano and Trinity have considerably lower performance and energy efficiency than Intel’s SB/IB, both of which are high-IPC solutions. BD was a clear example of how badly a low-IPC speedster works in terms of energy efficiency. Comparing Trinity to Llano without considering Intel solutions is like comparing a Toyota Camry to a Honda Civic without considering a BMW 335d.

            Trinity benefited from all sorts of power optimization tweaks (like the resonant clocking network). Llano (or Phenom II) could’ve benefited from those same tweaks. But neither SB nor IB has resonant clock networks and [i<]still[/i<] they beat Trinity in performance/watt.

            • flip-mode
            • 7 years ago

            Probably more like comparing Honda Civic to Honda Accord without considering BMW 335d. And the answer is: car analogies suck. Car analogies almost always fail when made in the context of CPUs.

            It’s perfectly legit to compare Trinity to Llano without considering Ivy. It’s just got limitations, that’s all. The limitation is the fact that the consumer isn’t going to ignore Ivy.

            Regardless, I continue to have little faith in the direction AMD has taken with Bulldozer. I’m not confident that AMD can do the high clockspeed strategy. Trinity hits 3.2 GHz at turbo – we’re not talking about very high clock speeds there, so even AMD’s high clock speed strategy isn’t getting very high clock speeds.

            Time will tell. AMD will either take Bulldozer to surprising heights and get pats on the back or they won’t and everyone will say “yep, that’s how we thought that would go”.

            • NeelyCam
            • 7 years ago

            [quote<]It's perfectly legit to compare Trinity to Llano without considering Ivy. It's just got limitations, that's all. The limitation is the fact that the consumer isn't going to ignore Ivy.[/quote<]

            Sure - in some cases comparing Trinity to Llano and nothing else is perfectly fine. But jensend did that and proudly proclaimed that the low-IPC "Netburst" approach is the sh*t. All anyone needs to do is point to SB/IB to show that the low-IPC approach is sh*t.

            • willyolio
            • 7 years ago

            sure, and we can just tweak trinity over to 22nm tri-gate tech, too.

            • NeelyCam
            • 7 years ago

            SB beat Trinity in power efficiency without resonant clock mesh. Try again.

            • Rza79
            • 7 years ago

            You say this like you know for a FACT that Intel isn’t using some form of resonant clock mesh. Intel has always been very sketchy with information about the technology that goes into their CPUs. Be sure that Intel is employing many technologies in their CPUs to minimize leakage that AMD isn’t applying to theirs. Millions of design decisions go into a CPU. Just because you have three samples here from only two companies doesn’t mean you can draw conclusions based on their higher-level architecture. I’m not saying it might not be right in this case, but you should refrain from drawing conclusions for all the world’s CPU makers.

            • NeelyCam
            • 7 years ago

            If Intel had used it, I’m sure they would’ve said something when AMD had press releases about it. At least Anand or David Kanter would’ve mentioned it, or one of their ISSCC papers would’ve said it. I seriously doubt Intel is using resonant clock mesh.

            • Rza79
            • 7 years ago

            Having doubt doesn’t make it a fact.

            • NeelyCam
            • 7 years ago

            No it doesn’t. But do you have any reason to think Intel used a resonant clock mesh?

            I’d estimate it’s 95% likely that they didn’t, because if they did, it [i<]most likely[/i<] would've been mentioned in the ISSCC paper. Is 95% close enough for you to stop arguing?

            • Bensam123
            • 7 years ago

            Stopped the hate with +1.

            • Peldor
            • 7 years ago

            Not unless ‘we’ get Intel to make them. Everyone else is looking at 2x nm to be planar.

        • ClickClick5
        • 7 years ago

        That’s the thing about the internet. Fanbois everywhere.

        If this was an Intel post, people would be called an AMD fanboi.
        Since it is AMD, people say the chip is awful.
        If this was Nvidia, people would be saying that all negative claims are 100% false, even if true.
        If this was ATI (AMD), people would be saying that they suck, wait for Nvidia’s next chip.
        If this was VIA, that one guy would be praising them.

        Eh…

        • maxxcool
        • 7 years ago

        Yeah they do, and they love sand in their ears to boot. I have been screaming every time there is a review that these ARE NOT TRUE QUAD-CORE PARTS IN THE SLIGHTEST, and I get down-voted to hell. But you know what, at least some of us are smart enough to know this and not plod along like stupid troll-fan-boi-sheeple.

        Let them buy their crippled FAKE quad cores; the benches here today clearly show that 50% of the time… half the cores on that quad-core / eight-core CPU are just wasting space and energy. Because even with “these improvements” they still can’t beat the K10.5 Stars cores 1:1… not even close.

        • Bensam123
        • 7 years ago

        Not sure you’re on the winning end of an argument here, Neely… You’ve been reading into too much hyperbole.

        Most definitely BD is worse than SB, and now IB, but there is a lot of potential in the architecture. Essentially they took the best parts of BD in Trinity and made them better. It isn’t a win-win for performance, but what they did to improve upon the existing architecture shows through. They have good graphics on top of it to boot.

        I can only imagine what the next revision of BD desktop processors will bring. People like jumping on the ragging bandwagon and saying how horrible something is for the fun of it, often taking it completely out of real-world context.

        Scott does a pretty good job in the article of weighing the pros and cons of the BD architecture in Trinity. I applaud him for such an unbiased opinion and objective look at it.

      • ronch
      • 7 years ago

      Ok. If designing an entirely new architecture from scratch when you find out your new arch sucks were as easy as making a paper plane, then yes, what you said would be possible. Unfortunately, designing processors is a nightmare, as I’m sure you are aware. And no, don’t write off Bulldozer yet. I’m sure AMD knew what to avoid with Bulldozer and did its best not to make it a photocopy of the Pentium 4. It’s yet to see its heyday.

        • ermo
        • 7 years ago

        [b<]@ronch:[/b<] I know you're cheering for the green team (as am I), but I think the rational observer will also have to concede that [i<]at this point in time[/i<], Bulldozer in its Trinity incarnation is still just mediocre.

        Sure, as the Anand review put it, in terms of what could be clearly improved on compared to the initial Zambezi BD incarnation, Trinity effectively represents harvesting the low-hanging fruit. But that doesn't change the fact that its CPU performance lags behind that of hyperthreaded Core i5 and i7 systems. So if you can afford a laptop with a dedicated GPU, SB/IB is clearly the better choice in terms of out-and-out performance.

        Would I personally get/recommend others to get a Trinity laptop? Yeah, and I'd go for one of the A10 variants, not the 17W [s<]ultrabook[/s<]sleekbook one. And since I need bigger screen elements than most people due to an eye condition, a 15" 1366x768 display is right up my alley, but I can appreciate that most people would prefer a resolution of at least 1440x900 or 1600x900 in that form factor. I could live with that if all Windows apps were coded with WPF, but sadly they aren't, so using the built-in +25% scaling doesn't always give good results. Surprisingly enough, the situation in Loonix land is actually a fair bit better in this respect, as both GTK+ and Qt offer good font and widget scaling.

        [b<]@Scott/Cyril:[/b<] Any word on Trinity dual-GPU scaling?

          • ronch
          • 7 years ago

          Nope, never said Bulldozer is a dream come true. It has its flaws. And as much as I wanna grab an FX right now, I just can’t bring myself to do it. Maybe in time, BD will get there, but Intel is a moving target. We shall see. Nothing is absolutely certain in the game of processors.

      • Unknown-Error
      • 7 years ago

      AMD will drop Bulldozer from Steamroller onwards

        • ermo
        • 7 years ago

        Speculation or rumour?

          • Unknown-Error
          • 7 years ago

          Bit of both. From VR-Zone – [url=http://vr-zone.com/articles/amd-to-survive-and-thrive-still-/15564.html<]AMD to survive and thrive, still?[/url<]

          [quote<]... after Piledriver, [b<]there will be substantial changes in both cores and system architecture from Steamroller onwards, that should help make AMD competitive closer to the top.[/b<] I was told that delaying the socket migration beyond the AM3+, C32 and G34 to new socket is a good move, since AMD can design more aggressive, rather than stop gap, sockets for future platforms with better features like more memory and HyperTransport channels, as well as integrated PCIe v3, for greater future scalability. [u<]For the first time, some execs do acknowledge that Bulldozer approach may not have been the best one at the time, and things need to change.[/u<] I was told that there is some good frequency scalability in the Piledriver core which should help gain some per-core performance ground.[/quote<]

          PS: From X-bit labs – [url=http://www.xbitlabs.com/news/cpu/display/20120416165428_AMD_Expects_Significant_Performance_Improvements_with_Steamroller_Microprocessors.html<]AMD Expects Significant Performance Improvements with Steamroller Microprocessors.[/url<]

            • ermo
            • 7 years ago

            Excellent, thank you. [i<]*thumbs up*[/i<]

      • echo_seven
      • 7 years ago

      I dunno; looking at the AnandTech review, and some hints in this review, it looks more like AMD couldn’t field a competitive Bulldozer because of a shortage of engineering resources to fine-tune it.

      Which really is quite sad.

        • ermo
        • 7 years ago

        Agreed.

        But then again, the way AMD was apparently managed, you can’t blame the engineers for looking elsewhere?

          • NeelyCam
          • 7 years ago

          The best people have other options, and when things get messed up, they are the first to leave.

      • Arag0n
      • 7 years ago

      You just saw a review of Piledriver against the last-generation Phenom/Athlon architecture, using the same fabrication process, running around 20 to 40% faster and at the same time delivering 40% more battery life at idle. What else do you need to believe that AMD finally got its new architecture ahead of its old one?

        • NeelyCam
        • 7 years ago

        Idle power is almost entirely dependent on power gating and power management. It has almost nothing to do with the architecture.

          • flip-mode
          • 7 years ago

          Neely, consider this:
          [quote<]Only a core with high IPC / medium GHz can be really energy efficient.[/quote<]

          Technically speaking, that is not a fact. Facts are things that can be tested for falsifiability, and I don't see how such a test can be made here. I imagine there are an infinite number of architectural tweaks and process tweaks that can be made, and I don't see how you can test infinitely.

          More practically speaking, Trinity improved both power efficiency and performance over Llano. That's the most valid comparison to make, because both chips are manufactured on exactly the same process. Comparing even to Intel's 32 nm chips technically throws an irreconcilable variable into the equation.

          In all honesty, I am quite amazed at the amount of polish AMD was able to apply to the Bulldozer turd. I would never have expected them to be able to get IPC up to Llano's level, much less beat Llano on power consumption. Just saying the terms "power efficient" and "Bulldozer" in the same sentence sounds impossible, and yet Trinity disproves that.

          So perhaps you're trying a bit too hard here. I think Tristan's post doesn't deserve the benefit of the doubt that you are giving it.

            • NeelyCam
            • 7 years ago

            [i<]Technically speaking[/i<], me making the statement "Fanbois hate facts" doesn't mean that I think "Only a core with high IPC / medium GHz can be really energy efficient." is a fact.

            You may think that Tristan's post didn't deserve the benefit of the doubt - that's fine. But my point is that I don't think it deserves the -39 downthumb score it has at the moment.

            • flip-mode
            • 7 years ago

            Ah, well, that’s probably true. [spoiler<]you might deserve much of the credit for it getting so much attention[/spoiler<]

          • Arag0n
          • 7 years ago

          Do you realize that laptops stay idle more than 80% of the time, because CPU-intensive periods are spikes in normal usage, right? The most common task with intensive CPU usage over a long period is gaming or video, and at least for video we do know Trinity has better battery life than Llano and even SB or IB. Really, what you are doing here is trolling. AMD did a great job with Trinity, an unexpected one, and it’s a move that will benefit everyone, just like everyone benefits from NVIDIA releasing a competitive GPU with the 680/670, forcing AMD to cut prices on their 79X0 series.

          Please, if you need anything else to prove that Trinity has improved efficiency over Llano, and that with both at 32nm the architecture is the main source of the benefit, look at the benchmarks: Trinity is always ahead, and remember that both are in the same power envelope, 35W. Same TDP, higher marks, same process node. What else do you need to accept it?

          • just brew it!
          • 7 years ago

          But the ability to do these things effectively requires cooperation from the chip itself, i.e. the hardware architecture needs to support it. So saying it has “almost nothing” to do with the architecture seems a bit odd to me.

      • dpaus
      • 7 years ago

      It’s obvious that TR’s ‘Top Comments’ sidebar needs a ‘Bottom Comments’ companion.

        • NeelyCam
        • 7 years ago

        YES!! I’ve been waiting for that a long time

      • Krogoth
      • 7 years ago

      What are you smoking?

      The only problem with Bulldozer is that it was never built to be a desktop chip. It was meant to handle workloads where “having more threads” is king. Bulldozer does well in this regard, but it pays the price with single-threaded performance inferior to Intel solutions. Desktops only use two or four threads at most. The rest of Bulldozer’s potential goes to waste. It didn’t help that AMD’s marketing team decided to call Bulldozer an “octal-core” CPU, when it is really a quad-core CPU that can behave like a pseudo “octal-core” CPU when it handles integer calculations.

    • raghu78
    • 7 years ago

    x264 video transcoding using Handbrake (GPU-accelerated with OpenCL)

    [url<]http://www.anandtech.com/show/5835/testing-opencl-accelerated-handbrakex264-with-amds-trinity-apu[/url<]

    This is a good sign. All those people who thought AMD is not an option for video transcoding are going to be surprised, I think. Now we just need the OpenCL Handbrake software to reach release status. Way to go, AMD.

      • phez
      • 7 years ago

      Could you elaborate on what the good sign is? SB has no OpenCL acceleration but is still faster.

        • raghu78
        • 7 years ago

        The Sandy Bridge comparison is with a quad-core i7-2630QM, which is 45W and much more costly. Did you notice the Core i5-2410M, which is the true competition, being left far behind? Also, the i5-2410M went from 8.45 to 10.94, compared to the A10-4600M, which went from 6.98 to 15.01. So it did improve, but not as much as AMD’s Trinity. A top-of-the-line Ivy quad-core is 29% faster, but the more relevant comparison would be an Ivy dual-core i5. haha. Even ignoring that, please read this from Anandtech:

        “While video transcoding is significantly slower on Trinity compared to Intel’s Sandy Bridge on the traditional x86 path, the OpenCL version of Handbrake narrows the gap considerably. A quad-core Sandy Bridge goes from being 73% faster down to 7% faster than Trinity.”

        “This truly is the holy grail for what AMD is hoping to deliver with heterogeneous compute in the short term. The Sandy Bridge comparison is particularly telling. What once was a significant performance advantage for Intel, shrinks to something unnoticeable. If AMD could achieve similar gains in other key applications, I think more users would be just fine in ignoring the CPU deficit and would treat Trinity as a balanced alternative to Intel. The Ivy Bridge gap is still more significant but it’s also a much more expensive chip, and likely won’t appear at the same price points as AMD’s A10 for a while.”

        Dude, you need some reading comprehension lessons. lol
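
        For what it’s worth, the relative gains in those quoted Handbrake numbers are easy to check; a quick sketch, treating the figures as frames per second:

            # Anandtech's quoted Handbrake figures, taken as fps (higher is better):
            # (x86-only, OpenCL-accelerated)
            results = {"Core i5-2410M": (8.45, 10.94), "A10-4600M": (6.98, 15.01)}

            for chip, (x86_only, opencl) in results.items():
                gain = opencl / x86_only - 1
                print(f"{chip}: {x86_only} -> {opencl}, +{gain:.0%} from OpenCL")
            # Core i5-2410M gains about +29%; the A10-4600M gains about +115%.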

      • smilingcrow
      • 7 years ago

      The MediaEspresso results from the link you gave show a dual-core SB i5 taking 25 seconds versus 74 seconds for Trinity using OpenCL.
      So things aren’t clear cut; it depends on the software that you use and other factors such as file size, quality, and format support.
      It is still early days, and for video encoding it appears that the specialised hardware Intel and, more recently, nVidia use is currently more important than OpenCL and CUDA.

        • raghu78
        • 7 years ago

        From anandtech

        “The open source community thus far hasn’t been very interested in supporting Intel’s proprietary technologies. As a result, Quick Sync remains unused by the applications we want to use for video transcoding.”

        Handbrake is a very popular open-source video transcoding app. It’s quite important because it’s open-standards based and free. You cannot wish away the results which AMD has achieved in Handbrake.
        AMD is going to push aggressively down the OpenCL route. Their strategy is HSA APUs with OpenCL to exploit the vast power of their GPUs. And they are making good progress.

          • smilingcrow
          • 7 years ago

          I read the article and understand the implications. As I said it’s early days so we will have to see how it plays out. Software compatibility is a major issue so we’ll just have to see how things develop on that front.

          At this point, if you want the best performance for transcoding, you need to take the Intel route and use paid-for software. So it’s the usual scenario of Intel offering the best performance at a higher price and AMD offering better value, with the bonus of best-in-class gaming performance. 🙂

            • BobbinThreadbare
            • 7 years ago

            That depends on how you define performance.

            If you define it as giving the best picture quality, then no, QuickSync is not the best.

            • smilingcrow
            • 7 years ago

            Earlier I related it to file size, quality, format support. It’s up to each individual to weigh up the various factors to determine what offers the best solution for their needs.
            From your experience what solutions offer the best picture quality and how noticeable are the differences?

            • BobbinThreadbare
            • 7 years ago

            Well, I don’t have an Intel processor, but from doing research on the subject, good old software encoding on a CPU is still the only way to go. OpenCL might change this; I haven’t looked into it too much. But if you can create your own profile in Handbrake and create files that fit what you want, then it’s useful to me. If you can’t, it’s not. Thus even though QuickSync is faster at what it does, it doesn’t do what I want.

            As for how noticeable, that depends so much on the viewer that I don’t care to comment. I can see the differences, which is all that matters to me.

            • stdRaichu
            • 7 years ago

            I have a 2600K that I use for software encoding via avisynth/x264, and to my eyes the image quality is streets ahead. The 720p rip of Game of Thrones I’ve just done (H.264 4.1 -> H.264 4.1) is able to achieve visual parity at about 2500kb/s. With the settings I use, the second pass goes through at about 25-30fps.

            Quicksync (on the gf’s laptop) is much faster, but image quality isn’t as good. You get noticeable fuzziness around things like water or trees, and quantisation visible in the sky.

            As far as hardware/hardware-assisted encoding goes, quicksync is by far the best I’ve seen, but the fact that it’s fixed function hardware means you can’t really tweak the encode process at all, and that one-size-fits-all approach means you’ll seldom reach optimum bitrate for a given image quality (for example, most cartoons/animations will benefit from having 5-10 b-frames, as opposed to most movies that are better off with ~3).

            As to how “noticeable” this all is, that depends. Most people I know haven’t trained themselves to spot artefacts and just enjoy the show 😉 But seeing as when I buy a DVD/Blu-ray it gets ripped, encoded, and put in storage, I want them to be the best quality available so I don’t have to dig them out and re-encode them again in five years’ time.

        • sschaem
        • 7 years ago

        AMD claims they have this engine in their 7 series; it’s unclear why no software uses it. Broken, unfinished drivers?

          • ermo
          • 7 years ago

          More likely a cost benefit analysis on the part of the developers.

          You could go so far as to point out that getting OpenCL code paths into software that solves problems that lend themselves to a massively parallel approach is of strategic importance to AMD’s desktop/client strategy and thus should happen sooner rather than later.

          The onus is clearly on AMD to dedicate sufficient (engineering) resources to make it attractive and cheap for ISVs to jump on the bandwagon. And yes, I would imagine that supplying working drivers and well documented SDKs is clearly a prerequisite for this.

          As an aside, having access to cheap, OpenCL capable laptops is likely to be quite a hit with C.S. students going forward.

    • Alchemist07
    • 7 years ago

    “Both VCE and QuickSync appear to halve transcoding times… except the latter looks to be considerably faster. We didn’t see much of a difference in output image quality between the two, but the output files had drastically different sizes. QuickSync spat out a 69MB video, while VCE got the trailer down to 38MB. (Our source file was 189MB.) Using QuickSync in high-quality mode extended the Core i7-3760QM’s encoding time to about 10 seconds, but the resulting file was even larger—around 100MB. The output of the software encoder, for reference, weighed in at 171MB.”

    It would have been cool if you had posted screenshots from the movie. QuickSync has been said to have bad image quality; it would be interesting to see if VCE is the same.
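
    Those file sizes are really statements about average bitrate; a rough sketch of the arithmetic, with the trailer’s duration as a pure assumption since the quote doesn’t give it:

        # Back-of-envelope average bitrates from the quoted output sizes.
        DURATION_S = 150  # hypothetical clip length; not stated in the article

        for encoder, size_mb in [("source", 189), ("QuickSync", 69),
                                 ("VCE", 38), ("software x264", 171)]:
            mbps = size_mb * 8 / DURATION_S  # MB -> megabits, spread over the clip
            print(f"{encoder}: {size_mb} MB ≈ {mbps:.1f} Mbps average")

    Whatever the true duration, the ratios hold: VCE’s output averages roughly half the bitrate of QuickSync’s, so image-quality screenshots would indeed be needed to judge whether that’s a win or a loss.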

    • kalelovil
    • 7 years ago

    It would have been a good idea to disable half of the cores on the Intel quad-cores to simulate a fairer comparison.

    • Alchemist07
    • 7 years ago

    Would it have really been that difficult to get hold of a 35W i5 for comparison? Nice review otherwise, but it’s a bit silly considering that the Intel CPUs have much more power to play with and are priced much higher.

    I’m disappointed to see this is a TR review, I suggest you get hold of a laptop with such a CPU and update the article!

    • sweatshopking
    • 7 years ago

    Loving the new graphs, and this chip looks good. Just what I’ve been hoping for: competent graphics, and a decent enough CPU with battery life that lasts more than a few minutes!

      • ronch
      • 7 years ago

      [quote<].. and decent enough cpu..[/quote<] Wasn't that what most folks also said when Llano came out? AMD is really good at making 'decent enough' CPUs...

        • just brew it!
        • 7 years ago

        As long as they price ’em accordingly, “decent enough” is fine with me.

          • ronch
          • 7 years ago

          So goes the saying, “There are no bad products, just bad prices.”

    • Bensam123
    • 7 years ago

    While I do think onboard GPUs are an interesting development, I don’t think they have any reputable place. If someone wants a laptop that plays games they'll get one with a beefier integrated solution, compared to one that has better on CPU graphics. I would be more interested in a CPU that better augments the graphics capability of a dedicated graphics processor, rather than one that’s strapped onto the CPU.

    Nvidia’s Optimus and AMD’s PowerXpress are more along the lines of what I would like to see highlighted as far as laptop solutions go. The GPU onboard the CPU should concentrate almost completely on drawing the bare minimum amount of power, and when some graphics processing capability is actually needed, the discrete GPU is turned on for the duration of the task. That is the optimal solution, IMO. They can improve and tweak the onboard GPU all they want, but that’s a task better left to an actual GPU.

    These onboard GPUs are actually starting to edge into entry level graphics cards, which is sorta scary. :l

      • rrr
      • 7 years ago

      How is this scary? OEM cards are notoriously shady business, with numerous rebrands and even same models with different specs. If anything, unifying that stuff is a great thing.

        • Bensam123
        • 7 years ago

        It’s a CPU, not a GPU.

      • OneArmedScissor
      • 7 years ago

      [quote<]These onboard GPUs are actually starting to edge into entry level graphics cards, which is sorta scary. :l[/quote<]

      Yeah, but compared to old ones that aren't going to be replaced. Look at the new round of graphics cards with 28nm GPUs. The smallest, lowest-end chip is Radeon 7700 territory now.

      If you're concerned about integrated graphics wasting power, don't be. Trinity is on roughly the same node, yet its GPU is like taking the 7700 and cutting it in half. It's tied to the CPU's power management circuits, so it's not going to guzzle power.

      Not much really changed here. In every past generation, there were $50 graphics cards with the exact same GPU as what motherboards had built in, from both Nvidia and AMD. The only difference now is that so many CPUs have that already built in, so they stopped bothering with new cards below $100.

      • dpaus
      • 7 years ago

        [quote<]If someone wants a laptop that plays games they'll get one with a beefier integrated solution, compared to one that has better on CPU graphics[/quote<]

        By that logic, the market for games on tablets and smartphones should be zero. Apparently, it's something more than zero. Or so I hear, anyway....

        • smilingcrow
        • 7 years ago

        I fail to see the logic in your argument as the OP was talking about one platform and you are extrapolating from there to suggest something that is only tangentially related.
        The OP just wants better GPU performance so might well be happy to use tablets/phones for gaming provided the GPU is up to scratch.

        I think the OP’s post misses the point of Trinity.

          • dpaus
          • 7 years ago

          I don’t think this is a platform-specific issue; the underlying fact is that people will play ‘games’ on all kinds of devices, and only a small fraction of that overall market is ‘gamers’ who want to play the latest and/or ‘high-end’ games on laptops. As both the review and other commenters have noted, the graphics in Trinity (or, at least, in the A10) are plenty ‘good enough’ for most games, including titles that were ‘high-end’ only a few years ago – especially if they’re being played on the laptop’s screen (which, while hopefully not 1366×768, is unlikely to be any more than 1920×1080).

          I think AMD’s done a commendable job of balancing ‘good enough’ CPU and ‘good enough’ GPU in 35W. Of course, if you really want/need a discrete GPU, they are available and supported by the A10. But the simple fact that the A10 can do a decent job on most games means that people will use it to play games – just like they do with tablets and smartphones, which have only a fraction of its power.

            • smilingcrow
            • 7 years ago

            The OP is presumably talking about more than casual gaming, hence their interest in a discrete GPU. This implies they are interested in having a high-end gaming experience, which I presume Trinity doesn’t give, as good as it is in its class.
            Hence I don’t see what this has to do with gaming on a tablet/phone! If the OP has a similar desire for higher-end gaming on the other platforms, he will look for higher-performing GPUs on those platforms also.

            I see what you are getting at, and it is in context with where Trinity is positioned; it’s just that I think Bensam123 is talking about something else.

        • Bensam123
        • 7 years ago

        I don’t believe these chips will be used in smartphones. That said, even tablets may be a close call (unless you’re talking about laptop style tablets).

        It’s entirely possible to accommodate both an integrated GPU on the CPU and a dedicated GPU that switches on and off based on the workload. I think I confused some people by not differentiating more clearly between the GPU on the CPU and a dedicated GPU.

        The whole point of my post was that I believe a dedicated solution, with an ultra-low-power, bare-bones GPU on the CPU die, is the best possible solution, instead of CPU manufacturers wasting time and resources trying to beef up graphics on their CPUs to begin with. They should instead work towards better handing off workloads to a dedicated GPU when the time comes, and only handle static or non-resource-intensive work on the CPU’s GPU.

      • just brew it!
      • 7 years ago

      Pushing decently performing graphics into lower cost and/or lower power systems is a worthwhile endeavor. Sure, it isn’t going to excite most gamers; but that’s not really their target market.

    • ET3D
    • 7 years ago

    I enjoyed reading the review, and I agree, a 17W review is what I want to see.

    • Rza79
    • 7 years ago

    Not including a mobile i5 (read: dual-core Sandy) seriously handicaps this article. While the comparison to the i7 models is still an interesting one, it is also a useless one. AMD’s A10 and Intel’s i7 are not in the same price range nor the same power envelope.
    Also as much as AMD wants to tell the world, a module to me is still one CPU, making Trinity a multithreaded dual-core.

      • Arclight
      • 7 years ago

      [quote<]Also as much as AMD wants to tell the world, a module to me is still one CPU, making Trinity a multithreaded dual-core.[/quote<]

      Or maybe it's a handicapped quad-core 🙂

    • raghu78
    • 7 years ago

    Scott and Cyril
    That’s a good review; I find your reviews more balanced. I would have liked a few Core i5s to give a real comparison of how Intel and AMD perform with a similar number of threads (4 threads). There are people who have trashed Trinity in their reviews; I feel it all depends on the point of view. As you say, for average users who do basic productivity like MS Office, browsing, video playback, and light gaming, Trinity is a good overall package with good battery life. For people who want maximum x86 performance there is only one option: Intel. AMD needs to get a much faster CPU out in Steamroller to compete with Intel in the USD 700+ space. Right now they are a value play. AMD needs to make sure A10 laptops sell for USD 600–650. They need to avoid getting close to low-end Core i7-2630QM-based models starting at USD 800. They also need to focus on GPU acceleration through OpenCL. More apps need to make use of GPU acceleration, and to much greater performance effect. Then they would be in a good situation.

      • dragosmp
      • 7 years ago

      I would only add that those who thrashed Trinity also used 3DMark and FPS as their only criteria to evaluate graphics performance, which is a very fast albeit shortsighted way to review.

      Looking for a new laptop for very light gaming, the questions are: is the HD 3000 acceptable, is the HD 4000 really “that” good, and is there any hope of extracting performance from K10? This article answered all of them.

      For a given laptop chassis/screen, if I didn’t have a fixed rig I would have picked an i7 Ivy Bridge for the faster CPU. Since I have a fast desktop now, I lean towards the A10, which (amazingly, considering the FXs) has adequate CPU cores. I only hope someone will manufacture a decent laptop w/ an A10 inside. The 25/17W parts should be interesting.

    • Alexko
    • 7 years ago

    Thanks, that’s probably the best review I’ve seen so far. Congratulations.

    It could use a 35W Intel CPU, though.

    • yogibbear
    • 7 years ago

    Why can I not see the images and instead I just see thin lines????

      • Myrmecophagavir
      • 7 years ago

      I also see this in IE9, but it works in Chrome.

    • Arclight
    • 7 years ago

    Hmm, I had hoped for more. I’m not impressed, aka Krogoth’ed.

    • nico1982
    • 7 years ago

    Can’t say I like the test setup.
    Trinity looks bad in CPU performance against Intel quads, as expected, while its GPU trounces them, again as expected. Battery life is very good, but it’s no news that i7 quads, however frugal, will consume more power. There’s a stone guest, and it’s a dual-core Sandy Bridge with switchable graphics: it would help us understand how nice of a package Trinity is (if at all).

    • soryuuha
    • 7 years ago

    Hi Scott & Cyril

    Thanks for the review. Please enlighten me on Anand’s battery test of Trinity… why do some i7/i5 parts have waaay better battery life when playing H.264 content, despite having similar battery capacity?

    [url<]http://images.anandtech.com/graphs/graph5831/46670.png[/url<]

      • derFunkenstein
      • 7 years ago

      The i7’s in this review are all quads with 45W ratings. The i7’s at the top of Anand’s charts are [url=http://ark.intel.com/products/54618/Intel-Core-i7-2637M-Processor-(4M-Cache-1_70-GHz)<]17W[/url<] low-voltage dual-core models.

    • Unknown-Error
    • 7 years ago

    First of all, a big thank you to Scott & Cyril, the dynamic duo :P. It is a solid performance by AMD. Unfortunately, this is what Bulldozer [b<]should[/b<] have been. If you look at the reviews with the SNB Core i5 24xxM/25xxM, the A10 is pretty decent, including the CPU performance. But as usual, AMD is a year late. The critical time will be when the IVB Core i3/i5 are released. Still, decent job, AMD, but no time to pat yourself on the back. Please get your act together with Steamroller. If VR-Zone is right, we should see some real change from Steamroller onward.

    • esterhasz
    • 7 years ago

    Lovely review; the conclusion especially is full of nuance. Kudos, very enjoyable read.

    I am very impressed with the SunSpider test, which – for the segment Trinity is aiming at – may actually be the most relevant benchmark. This also gives hope that the 17W part with one module at 2.6GHz will actually be quite alright for daily computing. I hope that you will be able to add Brazos or Atom to the mix when that part comes along, to provide a further reference point.

    Edit: it would be supremely helpful to have core/thread counts and turbo states in the table on page 4; I must admit that Intel’s naming scheme in particular has become simply too confusing for me…

    • Anonymous Coward
    • 7 years ago

    Not a bad show by AMD.

    I like what you’re doing with the graphics performance graphs, and also the new percentile graph. But isn’t it possible to present frame latencies vs. time instead of frame number? (Perhaps you would have to approximate the time. Anyway, you could stretch them to the same length and then we’d see how the hard spots line up.)
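
    Approximating the time axis asked for here is straightforward, since each frame’s timestamp is just the running sum of the frame times before it; a sketch with made-up frame times:

        from itertools import accumulate

        # Hypothetical per-frame render times in milliseconds.
        frame_ms = [16, 17, 45, 16, 18, 33, 16]

        # Timestamp of each frame = cumulative sum of frame times, so latency
        # can be plotted against elapsed time instead of frame number.
        for t, ms in zip(accumulate(frame_ms), frame_ms):
            print(f"t = {t / 1000:.3f} s: frame time {ms} ms")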

    • willyolio
    • 7 years ago

    Yay! I’m waiting for an AMD ultrabook that can double as a light-duty gaming laptop. They’ve got themselves a real winner here.

    i hope the manufacturers don’t cheap out on the chassis materials just because AMD is supposed to be more “value-oriented.”

      • raghu78
      • 7 years ago

      I think the HP 15.6-inch Sleekbook at USD 599 with AMD Trinity will be a good option for you.

        • sweatshopking
        • 7 years ago

        yeah, but why is it 15.6 inches? i want 13-14!!!

          • OneArmedScissor
          • 7 years ago

          I found that weird, too, but HP spams AMD laptops of all shapes and sizes. It’s undoubtedly coming once there are enough of these chips to go around.

          • willyolio
          • 7 years ago

          yeah, 13″ or so is the right size for me.

          • halbhh2
          • 7 years ago

          Both should be abundant.

      • drfish
      • 7 years ago

      Agreed! Even though [url=https://techreport.com/forums/viewtopic.php?f=13&t=81456<]I went a different direction[/url<], I'm still VERY curious to see what OEMs do with the 17W part. Put one in a 10" carbon-fiber tablet similar to the Transformer Prime and I'd probably be looking at my first Windows 8 machine... *drool*

        • sweatshopking
        • 7 years ago

          Don’t be crazy. Around here, Windows 8 sucks.

          • drfish
          • 7 years ago

          LOL

          Drooling over the hardware, not the software. 😉 I genuinely think it’ll be just swell on a tablet though.

          • dpaus
          • 7 years ago

          Where ‘around here’ = Earth??

            • Thatguy
            • 7 years ago

            You made a funny! 😀
