AMD’s A10-5800K and A8-5600K ‘Trinity’ APUs reviewed

AMD’s Trinity chip is making a debut, but it’s not exactly a fresh face. We reviewed the mobile version of Trinity back in May and had mostly positive things to say about it. The second generation of AMD’s do-everything, converged APU offered solid progress over the first-generation “Llano” chip on many fronts. Not too long after Trinity’s mobile release, desktop versions of it started shipping exclusively in systems from large PC makers. Those wishing to build their own systems based on the chip, or to buy them from smaller PC vendors, had to wait. AMD took its time ushering this chip into broader sales channels, but the time is finally upon us. Trinity is now available as a retail product, as are motherboards based on the new Socket FM2 platform.

Trinitarian doctrine

Since Trinity is a known quantity, we won’t recount its architecture in great detail. You can read our review of the mobile version for that info. The basics are fairly straightforward, though. Trinity is, in many ways, a direct answer to Intel’s Ivy Bridge processors. The two CPUs incorporate many of the same functions, including things like PCI Express connectivity and graphics that were formerly delegated to support chips. The name of the game is integration, because integration saves power, reduces costs, and shrinks the footprint of a system. The latest PC processors are beginning to look very similar to the system-on-a-chip (or “SoC”) products that power smart phones and tablets.


Labeled picture of the Trinity die. Source: AMD.

Although Trinity is built on the same 32-nm SOI process technology as its predecessor, Llano, it offers architectural upgrades all around. The four older Phenom-era CPU cores have been replaced by dual “Piledriver” modules. Each module has two integer cores, a beefy-but-shared floating-point unit, and 2MB of shared L2 cache. Piledriver is the code name for an updated version of AMD’s still-new Bulldozer microarchitecture, with improvements to its per-clock instruction throughput and voltage-frequency response. Trinity brings Piledriver to the desktop for the first time.

AMD has refreshed the Radeon graphics on this chip, as well, moving from the older VLIW5-style shader core used in the Radeon HD 5000 series to the VLIW4 shaders from the Radeon HD 6900 series. This isn’t the latest GCN architecture from today’s Radeons, but it supports a full DirectX 11 feature set and should be an incremental improvement in efficiency.

The companion video acceleration block, however, is ripped right out of the Radeon HD 7000 series, and there have been updates to the display and memory controllers, as well. In fact, the only item of note that isn’t really up to date is the PCIe connectivity, which remains at Gen2. Third-gen PCIe offers twice the data rate.

One thing that doesn’t fit easily into the diagram above is the more refined integration of these different pieces. Trinity has considerably fewer visible seams compared to Llano. Among the major improvements is power management. Trinity can dial back the clock speed and voltage of its graphics component in response to CPU-heavy workloads. Llano could rein in the CPU when graphics-heavy workloads required it, but not vice-versa.

Plenty o’ flavors

Model | Modules/integer cores | Base core clock | Max Turbo clock | Total L2 cache | IGP ALUs | IGP clock | TDP | Price
A10-5800K | 2/4 | 3.8 GHz | 4.2 GHz | 4 MB | 384 | 800 MHz | 100 W | $122
A10-5700 | 2/4 | 3.4 GHz | 4.0 GHz | 4 MB | 384 | 760 MHz | 65 W | $122
A8-5600K | 2/4 | 3.6 GHz | 3.9 GHz | 4 MB | 256 | 760 MHz | 100 W | $101
A8-5500 | 2/4 | 3.2 GHz | 3.7 GHz | 4 MB | 256 | 760 MHz | 65 W | $101
A6-5400K | 1/2 | 3.6 GHz | 3.8 GHz | 1 MB | 192 | 760 MHz | 65 W | $67
A4-5300 | 1/2 | 3.4 GHz | 3.6 GHz | 1 MB | 128 | 724 MHz | 65 W | $53
Athlon X4 750K | 2/4 | 3.4 GHz | 4.0 GHz | 4 MB | — | — | 100 W | $81
Athlon X4 740 | 2/4 | 3.2 GHz | 3.7 GHz | 4 MB | — | — | 65 W | $71

The table above shows the full lineup of Trinity desktop processors. Note the 65W and 100W power envelopes, just the same as the prior-gen Llano products. With the move to 22-nm process tech, Intel reduced its desktop power envelopes; even the most expensive Core i7 has a peak power rating of just 77W. AMD has supplied us with two chips to review: the A8-5600K and the A10-5800K. Both are K-series parts with unlocked multipliers for easy overclocking, but they unfortunately both have 100W TDP limits. We suspect the 65W versions may be more appealing to many folks.

Trinity APU pricing doesn’t rise above the $122 mark. AMD has kept the price tags modest, a tacit acknowledgement of the performance picture. By contrast, the Ivy-based Core i7-3770K sells for $332, well over twice the price of the A10-5800K.

For what it’s worth, we’ve neglected to list the complex suite of Radeon model numbers attached to the integrated graphics. The A10 series, for instance, has Radeon HD 7660D graphics, and the A8 series has Radeon HD 7560D graphics. What you need to know, really, are the ALU counts and clock speeds shown above. Oh, and it’s worth mentioning that most of these APUs support AMD’s Dual Graphics feature. That is, they can pair a low-end Radeon graphics card with the IGP in a CrossFire-style multi-GPU config. That’s not our favorite option given the added complexity, the asymmetry between GPUs, and the potential for multi-GPU micro-stuttering—but it is an option for those who want it.

A new platform: Socket FM2

The changes to Trinity are sweeping enough that they require a new CPU socket. Thus, Llano’s Socket FM1 gives way to the new Socket FM2.

Physically, Socket FM2 looks very similar to multiple generations of desktop sockets from AMD, but the pin layout is different to prevent the insertion of an incompatible processor by all but the most determined.


Block diagram of the platform. Source: AMD.

The basic platform layout is depicted in the diagram above. Trinity requires only a single support chip for I/O, but AMD offers several variants of that product. The entry-level version is the A55, which has enough features for a basic PC. The A75 enables a few extras, including USB 3.0 support, six SATA 6Gbps ports, and some overclocking features. Top o’ the line is the A85X, with eight SATA 6Gbps ports, even more overclocking options, and support for dual discrete GPUs in CrossFire configurations. Having a trio of chipsets for a CPU lineup that spans the rather limited gamut from $53 to $122 seems like overkill to us. AMD must have been planning for better days.

Perhaps those days will come eventually. AMD expects Socket FM2 to stick around for a while, at least long enough to support the generation of APUs after Trinity. Presumably, that means the APU code-named “Kaveri,” which should have 2-4 Steamroller cores and Radeon graphics based on the current GCN architecture.

Motherboard makers have introduced a robust slate of Socket FM2-compatible offerings to play host to Trinity, including MSI’s snappily named FM2-A85XA-G65 mobo pictured above, which served in our testbed. This is a relatively high-end board, with dual PCIe slots for CrossFire and a gaggle of SATA ports. Around back, it serves up four display outputs, with everything from VGA to DVI, HDMI, and DisplayPort.

The competition

Pictured above is the Core i3-3225, a dual-core, quad-threaded Ivy Bridge chip clocked at 3.3GHz with a 3MB L3 cache. The “5” at the end of the model number means something important, believe it or not: this chip has Intel’s full-fat HD 4000 graphics implementation, not a cut-down variant. The list price for the i3-3225 is $134, making it arguably the A10-5800K’s closest competitor. As a low-end part, the i3-3225 is missing certain amenities like Turbo Boost and, somewhat freakishly, support for the AES-NI instructions that accelerate encryption. (Intel’s product segmentation is way, way too complicated.)

The one place where this Core i3 and the A10-5800K diverge most obviously is on power: the Core i3-3225 has a TDP, or max power rating, of just 55W. The 5800K’s power envelope is nearly twice the size at 100W, which gives it more headroom to push on both CPU and graphics performance.

We also have some competition lined up for the A8-5600K in the form of the Pentium G2120. The G2120 lists for only $86, so it’s a bit cheaper than the A8-5600K, but we think it’s the closest competitor in Intel’s lineup. The Pentium G2120 is also a seriously gimpy chip. Although it’s based on 22-nm Ivy Bridge silicon and has two cores running at 3.1GHz, the G2120 lacks support for a whole lexicon of marketing names and acronyms, including AVX, Turbo, Hyper-Threading, AES-NI, HD 4000 graphics, and QuickSync. Sometimes, it simply refuses to do math, until you ask again nicely. Even so, the G2120 fits into the same 55W power envelope as the Core i3-3225, so it has a huge handicap versus the 100W A8-5600K.

We’ll see how the new Trinity-based APUs compare to these chips and a huge host of others on the following pages, in what has to be the most data-rich review we’ve ever produced. Apologies in advance for the overload.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.

The test systems were configured like so:

Processor | Phenom II X4 850, Phenom II X4 980, Phenom II X6 1100T, AMD FX-4170, AMD FX-6200, AMD FX-8150 | Pentium G2120, Core i3-3225, Core i5-2400, Core i5-2500K, Core i7-2600K, Core i5-3470, Core i5-3570K, Core i7-3770K | Core i7-3960X, Core i7-3820
Motherboard | Asus Crosshair V Formula | MSI Z77A-GD65 | Intel DX79SI
North bridge | 990FX | Z77 Express | X79 Express
South bridge | SB950 | — | —
Memory size | 8 GB (2 DIMMs) | 8 GB (2 DIMMs) | 16 GB (4 DIMMs)
Memory type | AMD Entertainment Edition DDR3 SDRAM | Corsair Vengeance DDR3 SDRAM | Corsair Vengeance DDR3 SDRAM
Memory speed | 1600 MT/s | 1600 MT/s | 1600 MT/s
Memory timings | 9-9-9-24 1T | 9-9-9-24 1T | 9-9-9-24 1T
Chipset drivers | AMD chipset 12.3 | INF update 9.3.0.1020, iRST 11.1.0.1006 | INF update 9.2.3.1022, RSTe 3.0.0.3020
Audio | Integrated SB950/ALC889 with Realtek 6.0.1.6602 drivers | Integrated Z77/ALC898 with Realtek 6.0.1.6602 drivers | Integrated X79/ALC892 with Realtek 6.0.1.6602 drivers
IGP drivers | — | 15.26.12.64.2761 | —

Processor | AMD A8-3850 | AMD A8-5600K, AMD A10-5800K | Core i5-655K, Core i5-760, Core i7-875K
Motherboard | Gigabyte A75M-UD2H | MSI FM2-A85XA-G65 | Asus P7P55D-E Pro
North bridge | A75 FCH | A85 FCH | P55 PCH
Memory size | 8 GB (2 DIMMs) | 8 GB (2 DIMMs) | 8 GB (2 DIMMs)
Memory type | Corsair Vengeance DDR3 SDRAM | AMD Entertainment Edition DDR3 SDRAM | Corsair Vengeance DDR3 SDRAM
Memory speed | 1600 MT/s | 1600 MT/s | 1333 MT/s
Memory timings | 9-9-9-24 1T | 9-9-9-24 1T | 8-8-8-20 1T
Chipset drivers | AMD chipset 12.3 | AMD chipset 12.8 | INF update 9.3.0.1020, iRST 11.1.0.1006
Audio | Integrated A75/ALC889 with Realtek 6.0.1.6602 drivers | Integrated A75/ALC889 with Realtek 6.0.1.6602 drivers | Integrated P55/VIA VT1828S with Microsoft drivers
IGP drivers | Catalyst 12.8 | Catalyst 12.8 | —

They all shared the following common elements:

Hard drive | Kingston HyperX SH100S3B 120GB SSD
Discrete graphics | XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 12.3 drivers
OS | Windows 7 Ultimate x64 Edition, Service Pack 1 (AMD systems only: KB2646060, KB2645594 hotfixes)
Power supply | Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled. We did disable these power management features to measure cache latencies, but otherwise, it was unnecessary to do so.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

IGP performance – Skyrim

We’ll start by looking at integrated graphics performance. For these tests, we decided to focus on the higher-end A10 and Core i3 processors, since they have the faster IGPs and are more likely to be compelling offerings. We’ve also taken a look at the impact of memory speed on IGP performance, since memory bandwidth can be a pretty notable constraint. Our default test config used 1600MHz memory, and we also tested the A10 and Core i3 with 1866MHz memory at pretty tight timings: 9-10-9-27 1T. We’d hoped to test even higher memory frequencies, but neither platform took well to 2133MHz memory, even with relatively conservative timings and extra voltage.

We tested performance while taking a stroll around the town of Whiterun in Skyrim. You can see the image quality settings we used above, which are about as spartan as possible in Skyrim.


Frame time (ms) | FPS rate
8.3 | 120
16.7 | 60
20 | 50
25 | 40
33.3 | 30
50 | 20

Our gaming tests are very different from what you’re likely to see elsewhere. We’ve captured the time required to render every single frame from each of our five test runs, because we believe FPS averages tend to mask the short slowdowns that can break the sense of fluid animation. For more information on how we test and why, please see this article.

You can click the buttons beneath the plots above to see results for the different types of processors. Since we’re plotting frame times, lower numbers are better, and big spikes upward are bad—they represent delays in frame delivery. If you’re new to the idea of latency-focused game testing, the table above may help. It shows frame times and how they correspond to FPS rates. Just a look at the raw plots above will tell you much of what you need to know about how these CPUs perform. The Core i3-3225 produces fewer frames at generally higher latencies than the A10, and its frame time spikes tend to be more dramatic.

Although FPS averages can be deceiving, in this case, the relatively high average numbers tend to be backed up by our alternative method, the 99th percentile frame time. (This metric just says that 99% of all frames were rendered in x milliseconds or less.) The overall latency picture for all of the IGPs isn’t bad. Except for the last 1% of frames, all of these solutions produce a constant flow of updates at a rate of over 30 FPS. Skyrim doesn’t look pretty at these settings, but it will run smoothly enough on any of these IGPs.

The A10 is measurably faster than the Core i3-3225, and you can feel the difference while playing. The difference between the A10 and its Llano predecessor, the A8-3850, is much subtler, only a couple of milliseconds in the 99th percentile metric. Even slighter is the impact of faster memory on the A10.

The 99th percentile frame time is just one point along a curve, and we can have a look at the broader curve to give us a better sense of the overall latency picture. As you can see, the A10 produces much lower frame latencies generally than the Core i3.

The 99th percentile frame time attempts to capture a sense of the overall latency picture while ruling out the outliers. We can also focus on the worst-case frame times, which makes sense, since we want to avoid those hiccups and pauses while playing. Our method of quantifying “badness” is adding up all of the time spent working on frames beyond a given threshold—usually, we set the mark at 50 milliseconds, which equates to 20 FPS. We figure if frame rates drop below about that mark, the illusion of motion is at risk. Also, 50 milliseconds is equal to three vertical refresh intervals on a 60Hz display. If you’re waiting longer than that for the next frame, there’s likely some pain there.
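For the curious, here’s roughly how those two metrics fall out of the raw frame time data. This is just an illustrative sketch in Python, not our actual analysis tooling; it uses a simple nearest-rank percentile and counts only the portion of each slow frame that exceeds the threshold toward the “badness” total.

```python
# Illustrative sketch: compute the 99th percentile frame time and the total
# time spent beyond a threshold from a list of per-frame render times (ms).

def latency_metrics(frame_times_ms, threshold_ms=50.0):
    ordered = sorted(frame_times_ms)
    # 99th percentile (nearest-rank): 99% of frames finish in this time or less.
    idx = max(0, int(round(0.99 * len(ordered))) - 1)
    p99 = ordered[idx]
    # "Badness": only the excess beyond the threshold counts for each slow frame.
    time_beyond = sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)
    return p99, time_beyond

# Example: a mostly smooth 60 FPS run with two hitches.
times = [16.7] * 100 + [75.0, 120.0]
print(latency_metrics(times))  # -> (75.0, 95.0)
```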

As you might expect given the other numbers above, most of these solutions don’t spend much time beyond our threshold. They really can run Skyrim pretty well at these (kinda lousy) image quality settings. Interestingly enough, the Core i3 benefits quite a bit from the move to 1866MHz memory; its time spent beyond our threshold drops to zero from nearly a third of a second before.

IGP performance – Batman: Arkham City

We tested Arkham City while grappling and gliding across the rooftops of Gotham in a bit of Bat-parkour. We’re moving rapidly through a big swath of the city, so the game engine has to stream in more detail periodically. You can see the impact in the frame time plots: every CPU shows occasional spikes throughout the test run.

Again, we’ve had to reduce image quality settings to their lowest possible level in order to accommodate these relatively pokey integrated graphics processors.


Even with all of the frame time spikes, the numbers above look reasonably good for the most part. The FPS average and 99th percentile frame times pretty much mirror each other, which is usually a sign of health, and the latency curves are all similar in shape, with no big spikes upward until we reach the last few percentage points worth of frames.

All of the numbers point to the same thing, too, which is a clear playability advantage for the A10-5800K over the Core i3-3225.

IGP performance – Battlefield 3


Uh oh. Those plots for the Core i3 configs look ugly and prickly. Let’s see what it means.

Looking at the FPS average, you might think the Core i3-3225 isn’t far behind the Llano-based A8-3850, but the 99th percentile frame time tells a different story.

A look at the latency curve illustrates the problem. The Core i3 has particular trouble with the last 10-12% of frames rendered, where latencies shoot up dramatically.

Given the shape of the latency curve, this result isn’t surprising. The Trinity-based A10 and the Llano-based A8 waste very little time working on frames beyond our 50-ms threshold, but the Core i3 spends just over—or just under, with the faster memory config—one second of our 60-second test run working on long-latency frames. That 32 FPS average might tempt you to think the Core i3 is reasonably competent, but in this case, it isn’t.

IGP performance – Crysis 2

But will it run Crysis? We fired up Crysis 2 on a lark to see if it could run on any of these IGPs. As one of the most graphically intensive games around, we really didn’t expect much. Turns out that it did indeed run, even on the Intel IGP. Credit Intel for getting a Crysis game to run on its IGP, even if it isn’t terribly fast. There was a day not long ago when running a game like this on an Intel graphics solution was a sure recipe for failure.


Hmm. The FPS average and 99th percentile results don’t match at all. What’s the story? Well, it’s pretty easy to see how the AMD results are riddled with spikes throughout, even though the plots show a relatively decent core of low-latency frames. That core translates into a healthy-looking FPS average, but not all is well.

The curves tell the story. The AMD IGPs struggle with about 4-5% of the frames in the scene—and we know from the plots those problem frames are interspersed throughout the test session. As a result, the A10-5800K’s curve meets the Core i3-3225’s at around the 98th or 99th percentile, even though the A10 is faster otherwise.

Playing Crysis 2 on any of these IGPs kind of stinks, though in different ways. All of the IGPs burn quite a bit of time beyond our threshold.

Interestingly enough, the two least “bad” configs here are the IGPs paired with 1866MHz memory. That illustrates how important a bottleneck memory bandwidth is for integrated graphics. This constraint is likely to be more of a problem going forward, as transistor budgets for integrated graphics grow, especially if mainstream systems stick with the same dual-channel DDR3 memory standard.

IGP performance – Civilization V

We have one more gaming test to include before moving on to bigger and better things. This test is a simple scripted one that spits out an FPS average, because there are only so many hours in the day for testing.

Yikes. We’re running Civ V at just about the lowest possible image quality settings, and although it doesn’t crash, it’s pretty much hopeless on the Intel HD 4000 IGP. The A10-5800K handles it reasonably well, it would seem, with an average of 43 FPS.

Converged applications: LuxMark

One of AMD’s goals for APUs going forward is to use the parallel computing power of the integrated graphics processor to assist the CPU cores where possible. Although GPU computing has taken off in specialized sectors like scientific computing and HPC, we are still in the early days of GPU computing for consumer applications. AMD has been making strides in persuading developers to use OpenCL to accelerate certain classes of applications, though, and it has supplied reviewers with a handful of programs to demonstrate the potential there.

These “accelerated” programs fall into several groups. Some of them are just video transcoders that make use of the dedicated encoding hardware built into new CPUs, features like Intel’s QuickSync and AMD’s HD Media Accelerator. We’ve recently taken a look at the hardware video encoding options on the PC, so you can read about them if you wish. However, the more interesting programs in our book don’t just use dedicated custom logic; they employ real GPU computing, likely through the OpenCL API, to handle tasks previously reserved for the CPU cores.

We tried out accelerated versions of The GIMP image processor and WinZip compression in our review of Trinity’s mobile variant, but the program we find most interesting to date is LuxMark, which uses OpenCL to tackle ray-traced rendering. Ray-tracing is a classic “embarrassingly parallel” application, so it’s a good test case to demonstrate the potential of data-parallel compute hardware. Also, we’ve already incorporated LuxMark into our wider CPU suite, which includes a huge selection of chips, so we have ample context for the performance numbers it spits out.

LuxMark should do a nice job of harnessing the capabilities of new CPUs. Since OpenCL code is by nature parallelized and relies on a real-time compiler, it adapts easily to new instructions. For instance, Intel and AMD both offer installable client drivers (ICDs) for OpenCL on x86 processors, and both claim to support AVX. The AMD APP driver even supports Bulldozer’s distinctive instructions, FMA4 and XOP.
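To make that point concrete, here’s a tiny, hypothetical example of an OpenCL host program—written with the pyopencl bindings, not taken from LuxMark—showing how the kernel source is handed to whatever installable client driver is present and compiled at run time for that device.

```python
# Minimal OpenCL host sketch: the same kernel source gets JIT-compiled by
# whichever driver (CPU or GPU) is installed, which is how OpenCL code picks
# up new instruction sets without being rebuilt. Requires pyopencl and numpy.
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void scale(__global const float *src, __global float *dst, float k) {
    int i = get_global_id(0);
    dst[i] = src[i] * k;
}
"""

ctx = cl.create_some_context()                 # pick an available OpenCL device
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, kernel_src).build()  # compiled here, at run time

a = np.arange(16, dtype=np.float32)
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program.scale(queue, a.shape, None, src_buf, dst_buf, np.float32(2.0))
result = np.empty_like(a)
cl.enqueue_copy(queue, result, dst_buf)
print(result)   # each element doubled, computed by the OpenCL device
```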

We’ll start with CPU-only results from a broad swath of processors. These results come from the AMD APP driver for OpenCL, since it tends to be faster on both Intel and AMD CPUs, funnily enough.

Using their CPU cores alone, the new Trinity APUs are only a smidgen faster than the chip they replace, the Llano-based A8-3850. Why? One reason is that the two “Piledriver” modules in Trinity have only one shared FPU each. Each of Llano’s four cores has its own dedicated FPU, so although Trinity benefits from the extra-wide vector math enabled by its support for AVX instructions, it’s not much faster than Llano.

Intel’s Core i3-3225 is only a dual-core processor, but it has two FPUs and can track and execute four threads via Hyper-Threading, so the architectural similarities to Trinity are closer than you might think. The Core i3’s FPUs support AVX, as well, and they achieve higher throughput than Trinity’s, even though they don’t use the fused multiply-add instruction. (FMA support is slated for Intel’s next-gen Haswell chip.)

Without AVX or Hyper-Threading, the Pentium G2120 finishes dead last, well behind the A8-5600K.

Moving the workload over to the IGPs uniformly produces lower performance than the same processors achieve with only their CPU cores. The IGP in AMD’s Trinity is substantially faster than Intel’s HD 4000 graphics, but neither CPU’s IGP can match its x86 cores.

If we invoke both the CPU cores and the IGPs at the same time, we see higher overall performance than with just one type of computing unit engaged—and the A10’s combined throughput is ever so slightly higher than the Core i3-3225’s. There’s a hint of potential here; combined performance is roughly equal to the AMD FX-6200’s, a chip with three Bulldozer modules.

To give you a better sense of the prospects for mixed-mode computing, let’s have a look at a much more capable GPU, the Radeon HD 7950, when driven by the various processors we’ve tested.

Now that’s more like it. Moving some workloads over to a fast enough GPU can really pay off. The Radeon HD 7950 achieves more than twice the throughput of the Core i7-3770K’s quad CPU cores, regardless of which processor is driving it. (The 7950 is somewhat faster when combined with Intel processors, likely because of their higher single-threaded performance.)

Of course, this GPU has its own fast, dedicated memory subsystem, so we’re not just adding a whole truckload of FLOPS; we’re adding bandwidth in support of those FLOPS. The discrete card also has its own rather substantial power envelope. Extracting additional performance out of the beefier IGPs of the future may run up against socket limitations that a discrete card doesn’t face. That’s especially true for applications that map well to GPUs and IGPs, since they tend to be very bandwidth- and power-intensive.

Here’s what happens when we invoke the CPU cores and the Radeon HD 7950 together. Somewhat surprisingly, performance drops for most configurations, except for the recent Intel processors that can track eight threads or more. Apparently, the lower-end CPUs would be better off spending their time just acting in support of the discrete Radeon.

Power consumption and efficiency

Our workload for this test was encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later. This encoding job is a two-pass process. The first pass is lightly multithreaded and will give us the chance to see how power consumption looks when mechanisms like Turbo and core power gating are in use. The second pass is more widely multithreaded.

We’ve tested all of the CPUs in our default configuration, which includes a discrete Radeon card. We’ve also popped out the discrete card to get a look at power consumption for the A10, Core i3, and A8-3850.

These plots of power use during our test period give you a sense of what to expect. The wide gap between the max power ratings of the AMD APUs (100W) and the competing Intel parts (55W) is unmistakably reflected in the power-use readings we took at the wall socket.

When idling at the Windows desktop, the Trinity chips rival their Intel competition for power efficiency. Without a discrete card installed, the A10-5800K sips power at idle. That 24W number is a testament to this chip’s mobile roots.

AMD’s desktop APUs leave behind those mobile roots in dramatic fashion when presented with some work to do. Our A10-equipped system draws 152W at the wall socket, about 50% more than a similarly equipped system based on the fastest Ivy Bridge, the Core i7-3770K. The Core i3-3225 system’s peak power draw is well under half the A10 system’s.

We can quantify efficiency by looking at the amount of power used, in kilojoules, during the entirety of our test period, both when the chips are busy and at idle. By that measure, the A10-5800K system is less power efficient overall than our Llano-based A8-3850 system. Removing the discrete graphics card helps, but not nearly enough: the Core i3-3225 system with a discrete Radeon still consumes less energy over the test period than the A10 system does without a video card installed.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
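For clarity, here’s a rough sketch of how those energy figures can be derived from a log of wall-socket power readings. The one-second sample interval and the power trace below are made up for illustration; they aren’t our measured data.

```python
# Hypothetical example: integrate power samples (watts) into energy (kilojoules).

def energy_kj(power_watts, interval_s=1.0):
    return sum(power_watts) * interval_s / 1000.0

# Fake trace: 30 s idle, 120 s encoding with x264, 30 s idle again.
samples = [45.0] * 30 + [150.0] * 120 + [45.0] * 30
encode_window = samples[30:150]

print(energy_kj(samples))         # whole-period energy: 20.7 kJ
print(energy_kj(encode_window))   # task energy for the encode alone: 18.0 kJ
```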

The Trinity systems combine relatively high power draw and fairly lengthy rendering times, so their energy efficiency is among the worst of the CPUs we’ve tested. There’s no getting around this fact. On the desktop, these chips with their 100W TDPs are a far cry from their mobile counterparts, yet they’re not fast enough to conserve energy by finishing the job quickly.

The Elder Scrolls V: Skyrim

Now it’s time to pop in a graphics card and look at gaming performance. We’ve raised the display resolution and image quality settings substantially, but the CPU should still be the primary performance limiter. Again, we’re using our latency-focused game testing methods. If you’re unfamiliar with what we’re doing, you might want to check out our recent CPU gaming performance article, which has a subset of the data here and explains our methods reasonably well.


The scope of our ambition is laid bare, as we present frame-by-frame results for 22 different CPUs. I’ll admit, we have gone entirely overboard here. My only defense is that people keep asking for more data! No, I don’t know what’s wrong with them, either.

The FPS average and 99th percentile results mirror each other handsomely. However, what they show isn’t good for AMD. This is quite the reversal of what happens when you’re running games on the IGPs. The Pentium G2120, an $86 processor, performs better in this test than any CPU AMD has ever produced. And, yes, ye olde Phenom II X4 980 remains AMD’s fastest gaming chip, at least in this test case.

In the past, we’ve attributed the struggles of the newer AMD chips in this test to their relatively weak per-thread performance, and we still think that’s the case with Trinity. Notice how the dual-module chips like the A10-5800K and the FX-4170 outperform the quad-module FX-8150. The chips with fewer modules reach slightly higher clock speeds, giving them an edge in lightly threaded performance.

We had hoped Piledriver’s modest IPC improvements would make a noticeable impact, but that doesn’t seem to be the case. Compare the Bulldozer-based FX-4170 to the A10-5800K. The FX-4170 runs at 4.2GHz with a 4.3GHz Turbo peak, while the A10-5800K runs at 3.8/4.2GHz. Despite a difference in Turbo frequencies of just 100MHz, the FX-4170 remains faster than the 5800K. The A10 does achieve similar performance in a 100W TDP, while the FX-4170’s power envelope is 125W, so that’s progress—just not progress in per-clock throughput.


The latency curves capture the trouble with the newer AMD CPUs—it’s that spike upward for the last 5% of frames. Flip between the plots, and you’ll see that the Phenom II X4 980’s curve looks much nicer than the newer chips’.

Ahh, our old measure of “badness” keeps us grounded once again. Although AMD is slower than Intel in this test scenario, none of the chips perform horribly. Virtually no time is spent beyond our customary 50-ms threshold, and even 33 ms isn’t much of a challenge, so we’ve ratcheted our threshold down to 16.7 milliseconds—the equivalent of 60 FPS. Some of the fastest processors come very close to delivering a steady stream of frames at 60 FPS or better. The Trinity-based APUs can’t match that—and in fact are among the weakest CPUs here—but we’re asking them to meet a very tough standard.

Batman: Arkham City


In spite of the spiky nature of the frame time plots for Arkham City, the FPS averages and 99th percentile graphs again roughly track together. One exception is the Pentium G2120. Its FPS average tops all but one AMD processor, but the Pentium drops down the ranks in the more latency-sensitive measurement. Nevertheless, the general story told here isn’t terribly different from what we saw in Skyrim.


One ray of light for AMD is the relative performance of the Trinity chips versus the A8-3850. The A10 spends less than half the time on long-latency frames that the A8-3850 does. Trouble is, the Core i3-3225 spends less than a quarter of the time the A10 does beyond our threshold.

Battlefield 3


Judging by the FPS average, you’d think the various CPUs would all be equally adequate, pretty much. But have a look at the 99th percentile results and—whoops. The Pentium G2120 really struggles, and you can see it happening if you look at the frame time plot above. It’s riddled with spikes above 30 and 40 milliseconds.

The reason, most likely, is that the Pentium G2120 is the only CPU here that can track only two threads, one for each physical core. Apparently, one reason BF3 runs so well on the other processors is excellent multithreading. Even the slowest quad-core part (the A8-5600K, in this case) performs admirably, as do the dual-core Intel chips with Hyper-Threading. Have a look at the latency curves below, and you’ll see that they all look about the same, except for the G2120’s radical turn northward.


So yeah, the FPS average tells us the difference between the Pentium G2120 and the A8-5600K is a single frame per second: 81 FPS versus 82. To take another swing at a deceased equine, the difference between the two is much larger than the FPS average suggests.

Crysis 2


Notice the spike at the beginning of the test run; it happens on each and every CPU. You can feel the hitch while playing. Apparently, the game is loading some data for the area we’re about to enter. Faster CPUs tend to reduce the size of the spike.

Doh! The FPS and 99th percentile results don’t track again. Is he going to give us another lecture about frame latencies?

Nah. You get the idea. The Pentium G2120 again pays the price for being the only dual-threaded contestant.

Another thing worth noting is how closely packed the various CPUs are at the 99th percentile. At that one point, at least, there’s little practical difference between the fastest Core i7 and the two Trinity APUs.


Ooh! Ooh! Look at the curve for the A10 versus the FX-4170. (The A10 largely overlaps with the FX-8150.) The A10 delivers lower latencies from the 50th to the 80th percentiles or thereabouts. Could be a Piledriver IPC improvement spotted in the wild, perhaps. Hush, kids, and enjoy the view. Also, I’m still geeking out over the fine differences between the curves for various speed grades of Intel processors.

All of the CPUs are pretty competent, if you boil it down to our indicator of badness. The exception, of course, is the Pentium G2120. Perhaps we didn’t ask nicely enough.

Multitasking: Gaming while transcoding video

A number of readers over the years have suggested that some sort of real-time multitasking test would be a nice benchmark for multi-core CPUs. That goal has proven to be rather elusive, but we think our new game testing methods may allow us to pull it off. What we did is play some Skyrim, with a 60-second tour around Whiterun, using the same settings as our earlier gaming test. In the background, we had Windows Live Movie Maker transcoding a video from MPEG2 to H.264. Here’s a look at the quality of our Skyrim experience while encoding.



So, who had the Pentium G2120 being the whipping boy here? Good call, although it wasn’t hard to see coming. Disappointingly, the Trinity chips turn out to be slower than their older sibling, the A8-3850, in our latency-oriented metrics—and yes, folks, the Core i3-3225 is again quite a bit faster than any of ’em.

Civilization V

Civ V will run this benchmark in two ways, either while using the graphics card to draw everything on the screen, just as it would during a game, or entirely in software, without bothering with rendering, as a pure CPU performance test.

Either way you run it, the Trinity APUs are near the back of the pack, only ahead of their Llano predecessor.

Productivity

Compiling code in GCC

Another persistent request from our readers has been the addition of some sort of code-compiling benchmark. With the help of our resident developer, Bruno Ferreira, we’ve finally put together just such a test. Qtbench tests the time required to compile the QT SDK using the GCC compiler. Here is Bruno’s note about how he put it together:

QT SDK 2010.05 – Windows, compiled via the included MinGW port of GCC 4.4.0.

Even though apparently at the time the Linux version had properly working and supported multithreaded compilation, the Windows version had to be somewhat hacked to achieve the same functionality, due to some batch file snafus.

After a working multithreaded compile was obtained (with the number of simultaneous jobs configurable), it was time to get the compile time down from 45m+ to a manageable level. This required severe hacking of the makefiles in order to strip the build down to a more streamlined version that preferably would still compile before hell froze over.

Then some more fiddling was required in order for the test to be flexible about the paths where it was located. Which led to yet more Makefile mangling (the poor thing).

The number of jobs dispatched by the Qtbench script is configurable, and the compiler does some multithreading of its own, so we did some calibration testing to determine the optimal number of jobs for each CPU.
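If you’re curious what that calibration looks like, here’s a hypothetical sketch of the idea: time the stripped-down build at several simultaneous-job counts and keep the fastest. The make invocation and the clean step are placeholders, not the actual Qtbench script.

```python
# Hypothetical calibration loop: find the job count that compiles fastest.
import subprocess
import time

def time_build(jobs):
    subprocess.run(["mingw32-make", "clean"], check=True, capture_output=True)
    start = time.perf_counter()
    subprocess.run(["mingw32-make", f"-j{jobs}"], check=True, capture_output=True)
    return time.perf_counter() - start

best = min(range(1, 9), key=time_build)   # try 1 through 8 simultaneous jobs
print(f"Optimal job count: {best}")
```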

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so the encoding of the AES algorithm, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.

7-Zip file compression and decompression

SunSpider JavaScript performance

Now that we’ve moved from games to productivity applications, Trinity has an opportunity to win a few victories over its Ivy Bridge-based rivals. The biggest win comes in the TrueCrypt AES test, where the processors with support for AES-NI fare much better than those without. Although Ivy Bridge can support AES-NI, that capability is fused off in the Core i3 and the Pentium.

In many cases, once again, the Trinity chips aren’t any faster than the A8-3850, depressingly enough. However, the A10-5800K cranks through SunSpider much sooner than the A8-3850 and, if you’re looking for evidence of Piledriver improvements, it’s quicker than the FX-4170, as well. This may or may not be an IPC improvement. It’s possible the A10 is just spending more time resident at its peak Turbo speed, thanks to Piledriver’s power improvements.

Image processing

The Panorama Factory photo stitching

The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis: particle image velocimetry, real-time object tracking, a bar-code search, and label recognition and rotation. For the sake of brevity, we’ve included a single overall score for those real-world tests.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.

Windows Live Movie Maker 14 video encoding

For this test, we used Windows Live Movie Maker to transcode a 30-minute TV show, recorded in 720p .wtv format on my Windows 7 Media Center system, into a 320×240 WMV-format video appropriate for mobile devices.

The Core i3 and A10 split our image processing and video encoding tests pretty evenly. The A8-5600K fares better against the Pentium G2120, capturing the lead in everything but picCOLOR.

3D rendering

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

POV-Ray rendering

These rendering applications aren’t OpenCL-accelerated like our LuxMark test, but they do give the FPU a good workout. Overall, the Core i3 and A10 are pretty evenly matched. The Pentium G2120 can’t really hang with the A8-5600K, though.

Scientific computing

MyriMatch proteomics

Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He has provided us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of protein. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.
In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
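For those who like to see such things spelled out, here’s a schematic sketch of that job-queue arrangement. It isn’t MyriMatch source code—just an illustration of the scheme described above, with the actual spectrum-matching work stubbed out.

```python
# Schematic sketch of MyriMatch-style work distribution: split the protein
# database into (threads x 10) jobs and let idle workers pull the next job.
import queue
import threading

def run_jobs(proteins, n_threads=4, jobs_per_thread=10):
    n_jobs = n_threads * jobs_per_thread
    chunk = max(1, len(proteins) // n_jobs)  # ~168 sequences per job for 6714 proteins
    jobs = queue.Queue()
    for i in range(0, len(proteins), chunk):
        jobs.put(proteins[i:i + chunk])

    def worker():
        while True:
            try:
                job = jobs.get_nowait()
            except queue.Empty:
                return                        # queue drained; thread exits
            for protein in job:
                pass                          # score this protein's peptides here

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```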

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. I should mention that performance scaling in MyriMatch tends to be limited by several factors, including memory bandwidth, as David explains:

Inefficiencies in scaling occur from a variety of sources. First, each thread is comparing to a common collection of tandem mass spectra in memory. Although most peptides will be compared to different spectra within the collection, sometimes multiple threads attempt to compare to the same spectra simultaneously, necessitating a mutex mechanism for each spectrum. Second, the number of spectra in memory far exceeds the capacity of processor caches, and so the memory controller gets a fair workout during execution.

STARS Euler3d computational fluid dynamics

Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here.

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark test case is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.
The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. Charles tells me these CFD solvers are very floating-point intensive, but they’re oftentimes limited primarily by memory bandwidth. He has modified the benchmark for us in order to enable control over the number of threads used. Here’s how our contenders handled the test with optimal thread counts for each processor.

Yeah, you’re not likely to use any of these low-end processors to do this sort of work, unless you’re a poor grad student or something. (Keep emailing me with your tests, guys.) Let’s see if we can wrap this monster up.

Conclusions

Ok, look, I have a project scope problem. Including frame-by-frame data across multiple games for 22 different processors, along with everything else, probably wasn’t the wisest move. We may need to re-think our approach.

To help you deal with the data overload, we’ve boiled our results down to a few simple scatter plots showing price versus performance. We’ve averaged performance across several sets of tests using a geometric mean. The first plot is based on our overall CPU suite, including the discrete gaming tests but not IGP gaming. The second plot isolates just the discrete gaming tests, and the third one covers only the IGP gaming tests. For the gaming plots, we’ve converted our 99th percentile frame time values into their FPS equivalents. As ever, the best position on each plot is the top left-hand corner, where prices are low and performance is high.
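For the record, the two transformations involved are simple. The sketch below illustrates them; it assumes each test’s result has already been normalized so that higher is better, and it isn’t our actual spreadsheet.

```python
# Geometric mean across normalized test scores, plus the frame-time-to-FPS
# conversion used for the gaming plots.
import math

def geometric_mean(scores):
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

def frame_time_to_fps(ms):
    return 1000.0 / ms   # e.g. a 16.7-ms 99th percentile frame time ~= 60 FPS

print(geometric_mean([1.00, 0.85, 1.30]))  # overall performance index
print(frame_time_to_fps(25.0))             # -> 40.0 FPS equivalent
```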


If you’ve been following the x86 processor wars lately, you might be surprised to see that the A10-5800K beats the Core i3-3225 in overall performance, while the A8-5600K ties it. (You probably aren’t surprised to see the pitiful Pentium G2120 buried at the bottom left of the scatter plot.) Trinity’s strength here comes from its four integer cores (versus two for the Intel competition) and the fact that our CPU performance suite is very nicely multithreaded, as a rule. Trinity’s per-thread performance is still a significant weakness, but AMD has priced the A10-5800K and A8-5600K appropriately, given their performance. Just don’t forget that Trinity is matching the Core i3-3225’s benchmark numbers by carving out nearly double the power envelope for itself.

Click over to the discrete gaming scatter, and you’ll see something of a remix of our recent CPU gaming article. To those folks who requested the inclusion of a Core i3 the next time around: you were right. The Core i3-3225 is one heckuva budget gaming chip, faster overall than any CPU in AMD’s lineup. The Trinity-based APUs aren’t terrible for gaming, but their pokey per-thread performance can impact the smoothness of frame delivery. In some cases, like our single-player Battlefield 3 test, the impact may be minimal. In others, you’re likely to feel it. The A10-5800K just matches the gaming performance of the Core i5-655K, a two-generations-old dual-core Intel part—and it does so in nearly twice the power envelope of the much faster Core i3-3225. If you plan to plug a discrete graphics card into your system, you’d do well to go with a recent Intel CPU instead.

Just make sure that CPU isn’t the Pentium G2120, whose woeful performance in a couple tests proved to us that you probably do need 3-4 cores, or at least threads—or, you know, maybe AVX and higher self-esteem—to run today’s games optimally.

Pop over to the IGP gaming scatter plot and the story’s a simple one. The A10’s faster graphics extend AMD’s lead. Of course, to keep beating the drum, the A10 achieves its higher performance by using nearly twice the power envelope of the Core i3-3225, so its dominance should come as no surprise. Considering its power envelope handicap, Ivy’s IGP isn’t nearly as far behind Trinity’s as one might think. I’d still trust AMD to provide better drivers for its Radeon graphics solutions than Intel does for its IGPs, but the HD 4000 did manage all of the big-budget titles we threw at it without any crashes or obvious visual artifacts. Intel is clearly paying more attention to software support than it has in the past.

The games we used are indeed AAA titles, and they required relatively low resolutions and quality levels just to achieve playable frame rates. That’s what you get with IGP gaming these days. There are tons of indie and casual games whose requirements are much lower; they’ll run well on just about any modern IGP. But in such cases, the performance differences between the Intel and AMD integrated graphics solutions are less likely to matter.

We still think serious gamers will want to add discrete graphics cards to their systems. Expandability is one of the hallmarks of a desktop PC, after all. Even a relatively affordable card like the Radeon HD 7750, which sells for as little as $89 at Newegg, will run today’s games competently at two-megapixel resolutions and decent quality levels. We’re talking about a vastly superior experience to the one these integrated graphics processors can muster. What’s more, the Radeon HD 7750’s TDP is just 55W. Add that to the 55W of the Core i3-3225, and you’ve only exceeded the A10-5800K’s max power rating by 10W. In terms of power draw, noise, and cooling demands, the two options will be practically the same.

Forgive me if this sounds like our Llano review on replay, but it’s hard to see where these Trinity APUs fit into the desktop PC landscape. Their 100W TDP disqualifies them from all-in-one systems, small-form-factor enclosures, and home theater PCs. Not being able to slide into those types of systems, where discrete graphics cards aren’t practical, largely negates AMD’s advantage in integrated graphics performance. The 65W version of the A10 or A8 would be workable in a larger HTPC case, but you’ll be giving up performance in order to fit into that envelope—and it’ll still draw 10W more at peak than the Core i3-3225. Perhaps in that scenario, if you planned to do some gaming at 720p, Trinity’s faster graphics would be the deciding factor.

The only remaining landing place for these 100W APUs is a budget desktop PC—but again, the Core i3-3225 will give you better single-threaded performance and lower power draw than the A10-5800K for just a few bucks more. I’m having a hard time envisioning a system guide build where the A10 makes more sense than the Core i3. What’s the concept? A very budget desktop PC in which the user does a small amount of gaming with non-casual titles? That needle is hard to thread.

At least there is a narrow opening for the A8-5600K at $100, since the Pentium G2120 is such a willing victim. If you just want a web-surf-itron or a Facebook-inator, a basic PC that’s snappier than the desperately cheap Atom- and Brazos-based stuff, the A8-5600K could well be the CPU for you.

What I’d really like to see AMD do next is release some desktop Trinity variants geared for lower power envelopes, like 35W and 45W. The Socket FM2 platform’s power draw at idle is impressively low. And in the world of slim enclosures, Mini-ITX motherboards, and iMac-style all-in-ones, Trinity would be playing to its strengths, not its weaknesses.

Follow me on Twitter for even more words, in small packets.

Comments closed
    • gsg88
    • 7 years ago

    Hi,

    I own an AMD A8 5600k and i want to ask you what video card to choose from these:

    [url<]http://www.msi.com/product/vga/R5770-Hawk.html[/url<]
    [url<]http://www.asus.com/Graphics_Cards/AMD_Series/EAH6770_DC_SL_2DI1GD5/[/url<]
    [url<]http://www.msi.com/product/vga/R6670-MD1GD5.html#?div=Specification[/url<] dual fan
    [url<]http://ro.asus.com/Graphics_Cards/AMD_Series/EAH6670DIS1GD5/[/url<]

    I want to know if i can use all of them in dual graphics mod, AMD recomends HD6570 and HD6670 for this processor

    THX!

    • Morris
    • 7 years ago

    GREAT pcitures of the hardware! Excellent graphs. Too much unimportant data IMO because it doesn’t tell what the user experience really is like.

    As we see however, AMD’s Trinity desktop APUs are excellent performers and also a very good value. These should please many folks looking for a nice performing, modestly priced desktop system.

    It’s all good.

    • halbhh2
    • 7 years ago

    I think this article could answer some questions for some people thinking about really economical upgrades of older computers, where they can reuse their OS.

    But there is an interesting 2nd angle after all….

    See, about 80-85% of computers, while they are powered on in S0 state (full on) are doing just exactly…nothing at all about 95% of the time. Another 4% of the time they are doing very light stuff, like loading a web page or running a scan, etc.

    There are a *lot* of computers for which this description is their life.

    For this kind of use, for about 80-85% of people, the A8, A10 can make sense, because it has the slight glamor of actually being able to play a modern game on a smallish monitor. Like a car that has a speedometer that suggests it could indeed go over 90 if you pressed the pedal long enough.

    I agree very much with the idea that lower power A10s for the desktop could be interesting in a year or so. Consider this experience:

    I visited a office to troubleshoot and switched over to a working computer at one point to look something up, and was at first disgusted and then somewhat intrigued when I realized it was a cheap E-350 box. I found that it is slow in ways that I can notice….but not slow to surf the web, or not really slow to surf. The interesting moment came when I saw the power brick laying on the floor. That was the PSU. Was it 18 watts? It was absurdly low wattage.

    • Celess
    • 7 years ago

    Theres no way to easily click back to the article. Content title needs to be a link back to article on the comments pages in the new TR version. This article is good example with tons of comment references to specific stats in the subject article but no way to get there without finding the article by starting over on the site. Theres no link back at the bottom either, should be one there. Probalby help SEO too.

      • flip-mode
      • 7 years ago

      It’s not obvious, but if you click the picture next to the article title it links to the article.

    • WaltC
    • 7 years ago

    I am waiting on socket AM3+ standalone Piledriver which, supposedly, will ship around the 22nd or so of October (this month).

    • destroy.all.monsters
    • 7 years ago

    This isn't a chip family meant for enthusiasts, and I question some of the testing as a result. I don't think the average person not running a server farm is all that concerned about wattage (and plenty of enthusiasts say "oh it's hot – moar fans!", at least on many forums). HTPCs have been built with far hotter chips and far hotter video cards. I wonder if we've crossed a certain threshold where almost everything is good enough for the average person.

    We're so far away from truly broken architectures like Prescott (which, despite all the grief it got, served many mainstream users very well) that I think you just have to decide whether you prefer single-threaded performance (and artificial product segmentation) with Intel or better graphics with AMD, or it becomes about brand loyalty. I don't see wattage, particularly under load, as a real consideration – and I don't know anyone running a home system who does.

    A final consideration is to realize that no product is perfect, and the likelihood of AMD ever tackling Intel head-on again (as in the glory days of the Athlon) is virtually nil. To continue to beat the drum that their products "aren't good enough" makes it very likely that there will be no competition to be had at all. Be annoyed at the PR shenanigans all you like – Intel isn't battling for its existence – and Intel's R&D expenditures dwarf AMD's many times over.

    This chip looks like just the ticket for a good many people who don't need add-in cards – which is a pretty significant portion of the population. It's because AMD has pushed the envelope on what an IGP can do that light-to-medium gaming is on the table at all.

      • diable
      • 7 years ago

      Sorry but AMD’s chips “aren’t good enough” anymore. The AMD of 10 years ago is not the AMD of today and no amount of pity is going to change that.

    • Spunjji
    • 7 years ago

    The only thing I can really complain about here is the lack of testing Trinity with higher graphics options. Unfortunately, it’s a fairly large complaint. Your graphs prove that the chip can do better than the settings you’re testing with, but I’m left guessing as to just how well it would cope. For me that makes the difference between a budget gaming non-option (i3) and something that would sit nicely under my TV as a budget media/light games system. In a similar vein, some voltage/memory tweaking wouldn’t go amiss either.

    I understand that all of this takes time, though, and you can only do so much. So thanks for what you did do.

    • Shouefref
    • 7 years ago

    As far as I can see, AMD has lost to Intel.
    Of course, the 65W parts haven't been tested yet.

    • albundy
    • 7 years ago

    great review! you gave me a great reason to stay with my Phenom II even longer, lol! Based on the 980's performance, I am still good for another few years until DDR4 and SATA Express hit mainstream, then I will probably upgrade.

    • ronch
    • 7 years ago

    Just a thought. Why would someone who knows how to downclock choose the 65W variants (5700 and 5500)? The 5800 and 5600 are priced similarly to these and come with unlocked multipliers, and will probably go down to 65W once underclocked. Take the 5800K, for example. You can buy it now for just as much as the 5700, clock it at 3.4GHz, and probably get 65W. But you STILL have the option of clocking it back up, or overclocking it if you want to. And as for the concern that they may still draw more than 65W when downclocked, remember that the unlocked parts are probably better samples with better power efficiency, or they probably wouldn't have been Black Edition parts. Of course, your mileage may vary, but it's better than being stuck forever at stock clocks.

      • jimbo75
      • 7 years ago
        • ronch
        • 7 years ago

        But if what I said is true, it would make the non-K variants obsolete.

    • OneArmedScissor
    • 7 years ago

    What on erf is going on with the Pentium’s lag in games?

    Did Intel go back to using lower clock speeds for the cache / memory controller / busses in certain brands?

    • rechicero
    • 7 years ago

    This is odd. I usually love TR's reviews, but this one… I didn't like it so much.

    I don't like to see a GPU review using 720p and low-everything settings in non-FPS games, then seeing the worst of the pack sitting at 50-ish fps. Those games don't need that many FPS, and surely you should have cranked up the quality until at least the worst IGP was at the limit of playable frame rates.

    I don't like to see a CPU review with zero (?) overclocking tests. And zero (?) downvolting tests. Especially when one of the targets of the CPU reviewed is HTPC. And I don't like having to read in the comments which CPUs are overclockable (and downvoltable) and which ones aren't (or having to look it up on the Internet). Overclocking and downvolting should be a must for any budget or HTPC CPU review.

    From a price competition point of view, if the A10 costs $130 and there is an Ivy Bridge i3 that costs exactly the same (I'm using Newegg as reference, as it's your sponsor), why didn't you use the i3-3220? And yes, the 3225 has the better GPU, but it doesn't seem fair to use a more expensive CPU when there is one at exactly the same price point. I'd say that's misleading. I could talk about how the most expensive FM2 mobo is $30 cheaper than the LGA1155 one you used, but that's probably a lost cause…

    I really love your reviews, but this one seems too… standard. Automatic.

    I don't know where the limits of the IGP are in non-FPS games: I'm pretty sure you can crank up the resolution or quality in those games on the AMD IGP, but you don't tell me by how much.

    I don't know how I can play with these CPUs to extract more performance for less money, or minimize watts with downvolting. And this time I don't have the slightest idea of how they would perform. Or if they would, at all.

    Disclaimer: I couldn't find the overclock/downvolt parts, but maybe they are there, somewhere… In that case, maybe you could add them to the description of the pages? That'd be useful.

      • rechicero
      • 7 years ago

      And I'd like to add some scenarios (completely hypothetical, as I don't have the data):

      This guy doesn't care about watts, he just wants the best budget CPU for gaming. He reads this review and thinks: well, I can save like $45 with the AMD (CPU+mobo, using the most expensive FM2 mobo on Newegg), but…

      I don't care about FPS games; I like RPG and RTS games. But I want something more than 720p at minimum quality. Can this IGP do it? How much better? He won't know.

      I want the best absolute budget CPU for gaming (I'd play FPS games and I'd add a discrete card). The A10 seems to be around 40 in the 99th-percentile discrete gaming scatter plot. The i3-3225 is around 46-47 (just guessing from the plot). That's 15-17% more, so the i3-3225 is better. But wait a moment: the A10 is a K edition and I'm buying the most expensive mobo, with all the overclocking options of the A85X chipset. If I can overclock it 15% (and that's not a massive overclock), it would be the same… for $45 less. If I can overclock more than that, I'd have a faster CPU for less money! But I don't even know if I'll be able to squeeze 100 MHz more out of this A10, because nobody tested it. And maybe the Dual Graphics thingie could offer even better performance! But why didn't anyone test any of these topics?

      This other guy wants an HTPC:

      OK, which one offers better visual quality? Mmmm, nothing said about this matter.
      OK, can I downvolt one of these puppies? Would it be useful? How much? Mmm, nothing said about this matter.

      I really think these three scenarios are pretty common. And right now they're looking for an answer, and that answer is not here (or I couldn't find it).

      EDIT: And in the "don't care about wattage, just want the best perf/dollar and I'll overclock" case, with a 17% overclock that levels the performance of the 5800K and the i3-3225: how many hours would pass until the $45 is consumed in electricity bills? That would be interesting to the cheap bastard inside me :-P.
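
      As a rough back-of-the-envelope answer (the extra wattage and electricity price below are made-up assumptions for illustration, not figures from the review):

          # Hours of heavy use before the $45 savings is eaten by the power bill.
          # Both inputs are assumptions for illustration only.
          savings_usd   = 45
          extra_watts   = 40     # assumed extra draw of the overclocked A10 vs. the i3 under load
          price_per_kwh = 0.12   # assumed electricity price in USD

          hours = savings_usd / (extra_watts / 1000 * price_per_kwh)
          print(round(hours))    # ~9375 hours of gaming before the savings is gone

      At idle the gap is much smaller, so for a machine that only games a few hours a week, the $45 would take years to claw back.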

        • sschaem
        • 7 years ago

        I have most of those questions 🙂

        I have to extrapolate from the mobile reviews and scattered test.

        But all data point to a killer 100% fanless HTPC platform.

        I’m not saying, no moving parts, because I might still want a local media cache using a 2TB HDD.

        • xeridea
        • 7 years ago

        Good points.

        Undervolting in particular was something I was wondering about. From what data we have, power usage seems to drop substantially with just a couple hundred MHz less, and undervolting would extend this further, as BD/PD watts seem to shoot up fast once voltage/GHz gets past a certain point. This is part of why Trinity does so well vs. IB in the mobile sector. Look at the 5700K: TDP drops by a third for an ~8% drop in clock speed and a negligible drop on the GPU. The lower-end A4s and A6s probably draw closer to 35-55W, but TDPs tend to be bundled together to simplify cooling planning. Look at the 5700K vs. the 5400K or 5300: half the cores, lower turbo clocks, a substantially slower GPU, same TDP. I would dare say the 5400K and/or 5300 would be fine for small HTPC setups, for a mere ~$60.
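
        A minimal sketch of why a small clock drop plus a voltage drop cuts power so quickly, using the first-order dynamic-power rule of thumb P ∝ V²·f (the 1.36V stock figure is the one quoted elsewhere in this thread; the 1.20V undervolt is purely an assumption):

            # First-order dynamic power model: P ~ C * V^2 * f (leakage ignored).
            def rel_power(v_volts, f_ghz, v0=1.36, f0=3.8):
                """Power relative to the assumed stock point (v0 volts, f0 GHz)."""
                return (v_volts / v0) ** 2 * (f_ghz / f0)

            print(rel_power(1.36, 3.5))  # ~0.92: an 8% clock drop alone buys only ~8%
            print(rel_power(1.20, 3.5))  # ~0.72: add a modest undervolt and ~28% is gone

        So the clock cut by itself doesn't explain the 100W-to-65W gap; the voltage drop that comes with it does most of the work.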

      • sschaem
      • 7 years ago

      The GPU overclocks to 1GHz, and Trinity benefits from 2133MHz memory (<$50 for 8GB).

      That's about 20% better gaming performance if you need it.

      i3-3220: $129.99
      A10-5800K: $129.99
      i3-3225: $145.99

      It's puzzling why TR did this…

      Maybe because the i3-3220 would have been a laugh in all the GPU-related benchmarks and a waste of time to benchmark? (The i3-3225 uses the HD 4000, Intel's highest-end GPU to date.)

      Maybe TR will repay the favor and compare Vishera with a 10% cheaper Intel processor when it's released?

        • rechicero
        • 7 years ago

        Yeah, I know I can find some of those answers on other sites. But I just wanted to explain clearly why this one seems so… wanting. Those are, I'd say, pretty obvious questions for a budget CPU and an APU.

        And you can't find an answer on the site I truly think is the best for reviews.

        The thing is, if I go to other reviews, you can expect like 4.4-4.5 GHz from these CPUs, and that means maybe 10% more performance. Not stellar at all. But the GPU overclock is really good. So the answers would be:

        A) Yes, if you don't play FPS games and you overclock the IGP, you can play any other game at pretty good quality for peanuts. If you want the best budget PC capable of good (not the best, but good) visuals in non-FPS games, the 5800K is probably ideal for you.

        B) No, you won't extract the same gaming performance the i3 has. If you want the best budget CPU for discrete FPS gaming, the i3-3220 is the best option (the 3225 would be throwing away $15).

        And, for the HTPC crowd:

        A) AMD offers the best video quality (according to the HQV test).

        B) We're still waiting for something to try.

        So maybe the 5700K is the best option, with some downvolting, I'd add: best IQ, probably passively cooled, more than good enough for anything you can expect in an HTPC, and cheap.

        But I have to go to other sites, and I truly think the best review site should answer those questions. Instead, all I find is "APUs are for laptops".

        About the memory, I don't see the point of paying more for… for what? We're talking about budget CPUs here. Budget means budget!

      • brucethemoose
      • 7 years ago

      I'll answer that for you, based on experience and some guessing.

      Depending on your resolution and FPS/smoothness taste, Skyrim will play somewhere between tweaked versions of medium and high on my A8, so it'll be a good bit above that on an OC'd A10.

      At lower frequencies, Llano is one of the best chips for undervolting ever made. Personally, I have a massive undervolt and a 30% OC at the same time on the Llano in my laptop.

      Trinity, unfortunately, is based on BD and isn't blessed with the same headroom. From what I hear, there's very little room to undervolt mobile Trinity, same as BD. Underclock a bit and you can reduce power consumption a lot, but it isn't blessed like Llano.

      I, too, would want to see an OC review. But it still won't touch an i3 on a cheap mobo paired with a 7750, making this chip almost irrelevant on the desktop… which is why it launched late.

        • rechicero
        • 7 years ago

        I don't know… an i3+7750 means something like $120 more. In a $500 rig, that's 24% more money. Of course it's faster!

        I really think this chip has its place… for some users (a gamer on a budget, especially one who doesn't care about FPS games).

        In fact, the irrelevant chip would be (IMHO) the i3-3225. If you need the extra IGP power, the 5800K is much better and cheaper. And if you don't care about the IGP, the i3-3220 offers exactly the same performance for less money. I just can't think of a scenario where that chip makes sense.

        • rechicero
        • 7 years ago

        In fact, you can pair the 5800K with a 7770 for the same money… and that combo would probably be faster than the i3+7750 one for gaming. Not irrelevant at all…

        • kc77
        • 7 years ago

        "But it still won't touch an i3 on a cheap mobo paired with a 7750, making this chip almost irrelevant in the desktop... which is why it launched late."

        That's a really bad argument to make, and I don't know why people are making it. You would have thought all of us forgot A) who this chip is for and B) who sells the most graphics chips. On that second point, Intel sells the most graphics chips. How is that possible, you say? Because of iGPUs, and that's exactly what this SKU is going after. Intel sells more graphics chips BECAUSE HALF OF THE MARKET DOESN'T BUY DISCRETE.

        Besides that, your argument is that if you spend more than X you get Y. That's the case for all computer hardware on the planet: if you spend more, you generally get more performance. Therefore we look at the market this particular SKU is geared towards. This SKU is geared towards Intel's i3 lineup without discrete graphics, and that's what it should be judged on, not where the end user lands if they move to another price bracket. On top of all that, NO ONE saw fit to test CrossFire, so we don't even know if that statement holds true when gaming.

    • JuniperLE
    • 7 years ago

    thanks for the test, pretty good read…
    also the Pentium dual core vs i3 shows how amazingly well HT is working for gaming!

      • Airmantharp
      • 7 years ago

      It should work great on a dual-core processor, but can be detrimental on a quad-core.

        • JuniperLE
        • 7 years ago

        well, that’s because most games don’t really use more than 4 threads… but HT works really well for the i7 in many applications, and on a game optimized for more cores it should also work pretty well…

        SMT just works.

          • Jason181
          • 7 years ago

          The reason it just works is pretty simple, and highlights pretty much the only thing I’d change about the article. Intel’s FPUs are much more robust. Thus, adding another thread to them actually improves performance quite a bit because there are actually resources there to take advantage of.

          AMD’s FPUs are not strong, and it shows. Two threads are feeding one FPU (SMT by any other name), but the resources to take advantage of that other thread are just not there.

          It was easy to predict whether Trinity would be the curb-stomper or curb-stompee just based on whether it was an integer-based or floating point-based test. If AMD were a little more up-front about the “cores,” I wouldn’t have a problem with it. But they’re not; they call them “cores” even though they’re really not a complete core because they share the FPU.

          The only other factor was whether the application used two threads, or more than two threads (it only really mattered if the application was also integer-based).

          There is a place for Trinity, but buyers have to be aware that the performance is going to be very uneven (great in some applications and abysmal in comparison in others).

    • FatherXmas
    • 7 years ago

    Let’s say you are going off to college and your parents are willing to buy you a new desktop computer for $600 from either Dell or from a big box store. Now of course you would like to be able to play games on it but that’s something your parents really don’t want to hear and you don’t want to tell them.

    So your choices are an Intel Ivy Bridge or an AMD A10. Both systems have woefully small power supplies that would limit your choices for an add-in video card, and most people aren't tech-savvy enough to feel confident about a PSU transplant to go with a better video card. Not to mention you are a starving college student, so you don't have a lot of spare cash around to upgrade your system anyway.

    In that case the AMD A10 is a “better” buy for you. You get better gaming graphics than the Ivy Bridge system and for browsing, writing reports and such, you really aren’t going to notice any difference in performance.

    Of course, once you do drop a halfway decent video card into the system (assuming you took care of the power issues) and the A10 can't keep the card properly fed, then sure, Ivy Bridge surges into the lead. Honestly, I would like to see a review of the G2120 and why it performs so badly in the discrete GPU tests. I can't believe it's just the lack of Hyper-Threading, Turbo Boost, and AVX instructions.

    As for the power issue: I honestly believe the harping about the power envelope just reads as a bunch of sour grapes over the staggered reviews that AMD asked for, and it was the one thing they could point out, because price/performance for CPU tasks and the IGP were better than the Ivy Bridge i3-3225. And when has AMD ever had a power advantage over Intel since the Pentium 4 days? It would be big news if it did. Might as well say the sun is bright.

    Sorry, I expect better from you guys.

    #SaveCoH

      • mikato
      • 7 years ago

      I like your thinking and agree with most of that, up until you said it was sour grapes. Come on, I doubt it. I am really interested to see that AMD does have a power advantage in an area, though, like you said. That does seem like a big deal to me as well.

        • FatherXmas
        • 7 years ago

        But show me another review where they made such a big stink about power when CPU performance was as good or better for a similar price? They repeated it three times on the conclusions page, right after admitting it did something as well as the i3-3225. Would they have pointed out multiple times in its review that the i3-3225's IGP scores are poor, if the A10 had come out first?

        Discrete GPU gaming, yeah, Intel has won big time since Sandy Bridge. Let's compare apples to apples here: it's roughly a $125 CPU, designed and priced to go up against Intel's dual-core-with-HT CPUs, so let's not compare it to the SB or IB CPUs unless they're within, say, +20% of the cost of the A10. I'm tired of seeing reviews, both CPU and GPU, where the chart includes products priced 2-3x the cost of the product being reviewed. Gee, of course in those cases the product being tested always ends up near the bottom of the charts. At least here they've included a few CPUs that are similarly priced, where other sites simply compared it to an i7-3770K (surprise, it lost badly on every benchmark). I'm tired of seeing Miata vs. 911 reviews.

        I'm the first to admit that the A10 is a very niche product. However, it does raise the bar significantly on big-box-store desktops when it comes to gaming capability. Soon parents and grandparents will have machines that can play games that aren't PopCap or Flash based, a boon for those who visit on holidays when the hosts go to bed at 9 and you stay up till midnight.

        And more PCs that can play more games is only a boon for PC gaming.

        #SaveCoH

          • Jason181
          • 7 years ago

          TechReport has highlighted the power angle in pretty much all of their recent CPU reviews. Go back and look.

    • swaaye
    • 7 years ago

    For me the only interesting platform for this chip is in AMD Ultrabook-a-likes.

    • ronch
    • 7 years ago

    I've noticed that AMD structured Trinity's pricing scheme such that, at the same price, you choose between a black-box Trinity and a white-box Trinity. Those that come in black boxes have unlocked multipliers, higher clocks, and higher TDPs than same-price white-box SKUs. I suppose these are meant for enthusiasts (on a budget). The white-box Trinities are locked and have lower-clocked CPUs, but carry 65W TDPs compared to same-price black-box SKUs. They're most likely targeted at HTPCs. What separates the Trinities in price (and there are only three price levels, if you tie the two cheapest SKUs together) is the number of shaders in their respective GPUs.

      • ermo
      • 7 years ago

      This begs the question (which others have already posed):

      “What happens when you try to undervolt the black box K editions? Can they run as cool as the white box non-K editions?”

      I’d be surprised if K editions did not run just as well or better than the white box versions, which means that enthusiasts should really always get a K edition, even if they just plan to HTPC it?

      But — and this is a big but — why should I go with a Trinity over, say, an E-450 for an HTPC, when the E-450 has lower idle power draw and can still play 1080p flawlessly as far as I know?

        • ronch
        • 7 years ago

        "why should I go with a Trinity over, say, an E-450 for an HTPC, when the E-450 has lower idle power draw and can still play 1080p flawlessly as far as I know?"

        Trinity somehow still allows you to have gaming aspirations. With the E-450, you're limited to PopCap games. You could also say that Trinity chips are more future-proof and snappier to use. I have an Atom N450-based Lenovo netbook, and boy, is it slow! I reckon the E-450 isn't much better. It really depends what your priorities are, I suppose.

        Edit - about whether to get the K or non-K variants, I actually posted about that somewhere here. I believe in always getting the K variants. Just check out that post.

    • ronch
    • 7 years ago

    In terms of performance, both for the CPU and the GPU, we already had a rough idea of where Trinity would stand even before the desktop version's official release. In terms of performance, Trinity is OK with me. In terms of price, the product stands well against the i3 Ivy models, which is OK with me as well. What I find interesting, and in need of further study, is power consumption.

    Looking at the power consumption graphs, Trinity jumps from 24W when idle to a whopping 113W at load. That's a 4.7x increase over idle and still 2.36x more than the i3-3225 (no dGPU) at load! I thought the Piledriver design was supposed to curb power draw?

    Another thing: I'd like to know how much power an i5/i7+HD 7950 sucks in when gaming compared to a system that only employs the A10-5800K APU (no dGPU). The power consumption tests at both idle and load were performed by encoding video, so the dGPU in both cases is probably close to idling, with minimal power draw. But how will the graphs look when gaming? Will an i5/i7+HD 7950 combo still consume less power than a lone A10-5800K that puts out lower frame rates? Granted, the Intel setups will cost a lot more, but we're talking strictly about power efficiency here, and obviously, with an HD 7950 in the mix, AMD also affects how the Intel system does, because the GPU is an AMD part.

      • [+Duracell-]
      • 7 years ago

      Well, the only way to tell whether power draw was really addressed is to compare Bulldozer to Piledriver parts, and that likely won't happen until the Piledriver-based FX parts come out.

      The A10-5800K is a 100W TDP part, so it’s sort of expected to see that jump when going full-bore.

      • sschaem
      • 7 years ago

      The 22nm i3-3220 runs at 0.8V @ 3.3GHz??
      http://www.xbitlabs.com/articles/cpu/display/core-i3-ivy-bridge_2.html#sect2

      While the 32nm A10 is pumped to 1.368V @ 3.8GHz.

      You might be able to run the A10 at 0.8V, but the clock will most likely top out around 3GHz. For reference, the A10-4600M is 0.8V at 2.8GHz stock. So you would see a drop of ~50W in power consumption. Just speculation based on the mobile part... We need a review site to experiment with the K part to see how far it can be pushed for HTPC usage.

      • Airmantharp
      • 7 years ago

      I’m (honestly) trying to figure out why you want to use an HD7950 in a power consumption focused scenario. Better price/performance for sure, but behind on performance/watt compared to a GTX670.

    • sschaem
    • 7 years ago

    I kind of wish that the Dual Graphics feature was explored.

    A10-5800k + 7850 (<$100)
    vs
    i3-3220 + 7850

    In some reviews I've seen a 30% boost in discrete gaming performance.

    It's too bad PhysX doesn't run on AMD GPUs, because the IGP would have been a great accelerator.

      • willmore
      • 7 years ago

      Interesting theory – whether the IGP's contribution would offset the CPU's loss. I'm guessing it would vary so much from game to game as to make any solid decision impossible.

      • DragonDaddyBear
      • 7 years ago

      I could be wrong, but the GPU in Trinity is a VLIW4 GPU, right? So wouldn't you need another VLIW4 GPU to pair with it? If so, the best would be a 7600-series card.

        • sschaem
        • 7 years ago

        I don't think it's required, but then I don't know how AMD manages unbalanced GPUs.
        A 6670 or 6570 would be the closest match.

        I haven't seen a review yet that explores Dual Graphics in detail.

        My hope is that at some stage games can use the IGP for non-rendering GPU tasks.
        Can you imagine if you could run PhysX on the IGP in games like Borderlands 2?

          • Rza79
          • 7 years ago

          Here you go:
          http://www.computerbase.de/artikel/grafikkarten/2012/test-trinity-vs.-ivy-bridge-im-gpu-test/8/

          As long as the monitor is connected to the discrete video card (and not the motherboard), there's a healthy performance improvement. And yes, micro-stuttering is a huge issue:
          http://www.computerbase.de/artikel/grafikkarten/2012/test-trinity-vs.-ivy-bridge-im-gpu-test/9/

          It seems that the dGPU renders two frames and then the iGPU renders one (slow) frame. There's also no dedicated CrossFire connection, which (I guess) adds to the latency.

            • Arag0n
            • 7 years ago

            I'm going to guess that in this kind of asymmetric CrossFire, two frames are rendered on the dGPU and one on the iGPU. That means the games must use triple buffering. The iGPU should process its frame while the dGPU is processing the other two, so the end result is that the dGPU has three frames' worth of time to process two frames and the iGPU has three frames' worth of time to process a single one.

            I'm not sure how they calculate the latency, but given the time differences, the dGPU has half the time per frame that the iGPU has, so it should show half the frame time, and that's exactly what I see in the graph you showed. In other words, you get one frame for free every two frames when using async CrossFire.

            You can see both in the graphs: 24ms for the dGPU and 48ms for the iGPU, or 30fps for the GPU alone versus 45fps with async CrossFire.

            The latency issue is probably that frame rate or frame time is reported according to processing time rather than the real time between frames. Still, some games seem not to use triple buffering (same frame rate with and without) or get limited by the CPU (less than ~150% of the single-GPU performance).

            Edit: Forgot to add, that's the way triple buffering works. The CPU prepares three frames ahead without drawing, storing all the drawing instructions in a queue, and the GPUs process them one by one. That means there are always three frames available for the iGPU and dGPU, so the fact that the iGPU takes double the time shouldn't increase the cadence between frames.

            You could implement it as a queue of one for the iGPU and a queue of two for the dGPU, roughly like this (a sketch, not AMD's actual driver logic):

                #include <queue>

                struct Frame { int id; };

                struct Device {
                    std::queue<Frame> pending;
                    unsigned capacity;                 // 2 for the dGPU, 1 for the iGPU

                    bool add(const Frame& f) {
                        if (pending.size() >= capacity) return false;
                        pending.push(f);
                        return true;
                    }
                };

                Device dGPU{{}, 2};
                Device iGPU{{}, 1};
                std::queue<Frame> stalled;             // frames waiting for a free slot

                // Try the dGPU first (it drains twice as fast), then the iGPU;
                // if both queues are full, park the frame until a slot opens up.
                void addFrameToQueue(const Frame& frame) {
                    if (!dGPU.add(frame) && !iGPU.add(frame))
                        stalled.push(frame);
                }

                // Called when a device retires a frame and has room again.
                void onFrameProcessed(Device& sender) {
                    if (!stalled.empty()) {
                        sender.add(stalled.front());
                        stalled.pop();
                    }
                }

            • Arag0n
            • 7 years ago

            From the review you linked, the interesting point is that async CrossFire seems totally useless for computing; GPU-only or iGPU-only is the way to go.

            And another really interesting point: the only benchmark they have where Intel CPUs can be faster via the CPU path than the GPUs is LuxMark, the centerpiece and the single compute benchmark TechReport used to gauge the computing power of Trinity's GPU…

            http://www.computerbase.de/artikel/grafikkarten/2012/test-trinity-vs.-ivy-bridge-im-gpu-test/10/

            Really, TechReport is getting so fishy about transparency and all, but why do they put Trinity under such bad lights to test it? Why did they test Llano with 1333MHz memory despite the fact that it was specified to work with 1866MHz memory? Why do they keep using CPU-intensive games to test the iGPU? Why don't they use more than a single benchmark or application to profile the computing power of the iGPU on Trinity? Why did they compare mobile Trinity against the top-of-the-line Core i7-3720QM and Core i7-2670QM?

            https://techreport.com/review/22932/amd-a10-4600m-trinity-apu-reviewed/6

            I'm not going to go as far as to say that those AMD reviews are rigged, but I'm going to think that Scott dislikes AMD so much that he doesn't understand the product and doesn't really pay attention to it while reviewing. He just does the standard set of benchmarks designed to highlight CPU performance, so no wonder AMD will never look good in TechReport reviews.

        • derFunkenstein
        • 7 years ago

        You get the same options now that Llano had, despite being different architectures.

        http://www.amd.com/us/products/technologies/dual-graphics/pages/dual-graphics.aspx#3

          • TO11MTM
          • 7 years ago

          Wow… I think I’ve seen PC Chips/ECS have better compatibility lists than that.

            • derFunkenstein
            • 7 years ago

            Well at some point you get to where the discrete card is spending more time waiting on the iGPU to take its turn in the AFR setup than it would have taken the discrete card to do all the lifting on its own.

      • Pettytheft
      • 7 years ago

      You don't need a GPU to do PhysX. It runs just fine on CPUs. We just need the industry to move in another direction, or Nvidia to stop handicapping it.

        • destroy.all.monsters
        • 7 years ago

        It would behoove them to open up PhysX, but I don't think they will. It's still a bullet point on their cards that no one else has (for whatever diminishing share of the gaming public cares about it).

          • Waco
          • 7 years ago

          Which is ridiculous, considering most titles that use it slog along on a single core or two.

          Borderlands 2, for example, drops in CPU usage with CPU PhysX running because the SINGLE THREAD of physics calculations holds up the entire engine.

          Nvidia would gain absolutely massive acceptance and praise if they actually took their PhysX library and made it useful on x86 cores. They could still brag about it with GPU acceleration, because it would let you have the same eye candy with a lot less CPU horsepower, and they'd stop pissing off people who spend three seconds figuring out why PhysX is so damn slow when running on a standard CPU.

            • destroy.all.monsters
            • 7 years ago

            I had heard a year or two ago that Nvidia was working to improve its x86 PhysX performance, so I'm surprised it's so dismal in Borderlands 2.

            I guess in Jen-Hsun's world, vendor lock-in trumps consumer goodwill.

            OT: I keep wondering when, or if, OpenCL will in some way make it irrelevant for gaming on the PC side. It seems Havok has stagnated as well.

    • willmore
    • 7 years ago

    I'm going to harp on Intel's product segmentation, so be warned.

    I use TrueCrypt for all of my drives. For mechanical drives, the performance my ageing Q6600 @ 3.2GHz gives me is fine, but I picked up a couple of SSDs in the recent sales and I'm now running into the limits of this CPU. Reading and writing to an SSD is limited by the speed at which the CPU can decrypt/encrypt data.

    Now, it's coming up on time to upgrade my system. Any new motherboard will have SATA3 ports (which all of my SSDs support), so the limit that SATA2 could have put on my SSD performance (if the CPU hadn't hit its limit first) will be gone. So, I'm going from needing a CPU that can do AES at ~100-150MB/s to one that needs to be able to do it at >600MB/s (even 1.2GB/s if I'm copying from one SSD to another, though that will be so infrequent as to not matter much).
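
    To put a rough number on that bottleneck (the per-core AES rates below are assumptions for illustration, not benchmarks of any specific chip):

        # Effective throughput of an encrypted volume is capped by whichever is
        # slower: the drive itself or the CPU's AES rate. Numbers are illustrative.
        def encrypted_throughput(drive_mb_s, cpu_aes_mb_s):
            return min(drive_mb_s, cpu_aes_mb_s)

        ssd_sata3 = 550                               # typical SATA3 SSD, sequential MB/s
        print(encrypted_throughput(ssd_sata3, 150))   # ~150 MB/s: CPU-bound without AES-NI
        print(encrypted_throughput(ssd_sata3, 2000))  # ~550 MB/s: drive-bound with AES-NI

    In other words, without AES-NI the CPU, not the SSD, sets the ceiling.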

    Here's where the product segmentation bashing comes in. I can't even consider an Intel chip below the i5 family because they fused off AES-NI. So, for me, that raises the bar on which AMD chips I can consider, as the Intel competition will be much more expensive than if I could have just used an i3 part.

    Now, it’s an open question if an AMD part in that price range will beat or match a similarly priced Intel part. But (!) the IVB part will not be upgradable. SB and IVB share a socket and the next Intel chip will have a new one. AMD has said that FM2 will be here for a few years. So, this gives a huge advantage to AMD. Heck, I could pick up a very low end AMD part now and upgrade if/when the next generation comes out to a higher end part.

    If Intel didn’t do this silly product segmentation or if AMD had better CPU performance, this would be a simple decision.

    Any insight, anyone?

      • mikato
      • 7 years ago

      Some SSDs do encryption through their controller now (whether or not you are setting a password). No CPU usage. You could pick one of those up and set a password. You’d better have a backup though because data recovery won’t be an option with some failure in the SSD or lost password.

        • willmore
        • 7 years ago

        Given the quality (or lack thereof) of the firmware for some SSDs, I'm not sure I trust the drive to do the job right. TC does it right and I trust them. There's already been one brand of SSD that messed up encryption. IIRC, they said it did AES-256 while it really did AES-128. That's pretty serious, as some of the drives were used in government and business settings that are *required* to use AES-256. Oops.

      • flip-mode
      • 7 years ago

      i5 3570K. It will last well longer than any typical drop-in upgrade window. Hey, you’re still on a Q6600 – you’re not striking me as a frequent upgrader.

        • willmore
        • 7 years ago

        Good point, but the FM2 socket would let me upgrade twice on the same MB. Once now at birth and once at EOL three years or so from now.

        And, yeah, you can safely say I’m not a frequent upgrader. 🙂

        I think your CPU choice is good. That’s probably the Intel entry into the competition. I probably won’t push my 240mmx120mm radiator with the TDP of that guy. The Q6600 is using 125W at peak according to the MB monitor program and the water cooling setup doesn’t go over 50C on the die with that load.

          • flip-mode
          • 7 years ago

          Food for thought: it is possible that three years from now any chip you can plug into FM2 may only be as fast as the i5-3570 is right now. That’s if we’re lucky and AMD starts making successive generations faster instead of slower like they did from Deneb / Thuban to Bulldozer / Piledriver.

      • Deanjo
      • 7 years ago

      "AMD has said that FM2 will be here for a few years. So, this gives a huge advantage to AMD. Heck, I could pick up a very low end AMD part now and upgrade if/when the next generation comes out to a higher end part."

      There should be a big asterisk on that statement. AMD has also made claims of long support in the past and fallen short, as with FM1 and 939.

        • chuckula
        • 7 years ago

        FM1: True
        939: It was around for a while… do you mean socket 754 instead? That one was a real fiasco but it occurred during the “golden age” post Prescott and Pre-Core 2 when AMD could get away with that stuff.

          • willmore
          • 7 years ago

          Its children lived on in many generations of laptop CPUs.

    • puppetworx
    • 7 years ago

    "A very budget desktop PC in which the user does a small amount of gaming with non-casual titles? That needle is hard to thread."

    I think that's actually quite a common situation in a lot of lower-income families where the PC is still shared by the whole family - but it's not exactly a growth market. It's more likely that these will get thrown into a cheap PC for someone's kid. I'm very disappointed, and I can't help but think that another die shrink is desperately needed to keep in the running.

    • sschaem
    • 7 years ago

    “What I’d really like to see AMD do next is release some desktop Trinity variants geared for lower power envelopes, like 35W and 45W.”

    Isn't the 5800K unlocked for exactly that reason? You can turn it into 'anything'?

    e.g., I've seen a review where they dropped the voltage by 20% at stock clocks (the chip seems factory overvolted).
    Seems to me that you could run this chip without any active cooling.

    http://www.fanlesstech.com/2012/10/amd-celebrates-trinity-in-silence.html

      • cynan
      • 7 years ago

      I have to agree.

      If I was putting together an HTPC and didn’t care about enthusiast level gaming performance, Trinity would be at the top of my list. While 100W is a little high for an HTPC, there’s no reason you can’t undervolt a little. The fact remains, if your main concern is entry cost and HTPC performance, an undervolted 5800k seems a bit more compelling than an i3 (particularly since many seem to prefer the video rendering and general image quality of AMD GPUs over Intel’s).

      On the other hand, I don’t think Damage is wrong in making the point that, as a 100W part, Trinity isn’t exactly optimal for HTPC use.

      However, as a <60W part – something that seems pretty feasible with these chips – it becomes much more compelling. I almost wonder if AMD wouldn't be better off releasing an undervolted/underclocked Trinity HTPC edition – maybe in conjunction with a motherboard line that offers some sort of video input, in addition to the standard digital out. They need all the marketing angles they can get.

    • derFunkenstein
    • 7 years ago

    I’m thinking this might be a really appropriate setup for my wife, to replace her laptop that’s dying of old age. It’s an internet and media consumption box, and it plays a bit of Diablo III. An A10-5700 would do all of that at a relatively low power/thermal requirement, assuming I can find one in stock. Not on Newegg or Amazon or Tiger Direct that I can find. Just the 5800K. That would work ok, too, but at a higher power/thermal requirement. If it wasn’t for the Diablo III requirement it’d be Intel, but the IGP does it at a price I can live with and a performance level she’ll accept. I already have the case, PSU, peripherals, and drives. Just need to add the “guts” (FM2 mATX board, 8GB of RAM, and the CPU) for around $230-240.

      • Airmantharp
      • 7 years ago

      I think your wife is the use case for this CPU :). Though I will say that Intel's graphics will run Diablo III more than acceptably; at the same price point, I'd rather have the extra IGP grunt too.

        • derFunkenstein
        • 7 years ago

        Yeah, she certainly seems to be. The laptop she's coming from has a Core 2 Duo at 2.5GHz and a GeForce Go 9600M GT. Diablo III plays fine at low settings and 1280x720, but I figure this would be a big upgrade. She doesn't carry the laptop anywhere, but the battery is toast, the fan has turned kind of whiny, and starting this week she's using an external keyboard because a couple of keys are just dead. I could put $100 into it and replace the keyboard, fan, and battery, but I'm loath to do that on a 4+ year-old system and not get an upgrade. I'm fine, however, with giving her something that'll run everything faster. And she's fine with faster for sure.

    • cobalt
    • 7 years ago

    Thanks, great review.

    And yes, at least some of us really do love all the extra data. You’ve made it very easy to digest that kind of quantity. (The little buttons on some of the charts are a great help, by the way.)

    • willmore
    • 7 years ago

    "That is, they can pair a low-end Radeon graphics card with the IGP in a CrossFire-style multi-GPU config."

    Can we start calling this dual-wielding? Or maybe just Gunzerker mode. Way cooler than CrossFire. Please?

      • mikato
      • 7 years ago

      Akimbo mode

    • Chrispy_
    • 7 years ago

    I could link threads from the forums where I predicted this exact "same as Bulldozer" CPU performance (hardly a prediction, since there's loads of news about the minimal changes to Piledriver's IPC), but instead I would like to draw attention to this quote about Socket FM2:

    "the pin layout is different to prevent the insertion of an incompatible processor by all but the most determined."

    Maybe it's a slow day here, but the mental image of a trollface/4chan/reddit/fanboi/herp-derp trying to mash a processor into a socket along with the phrase FUUUUUUUUUUUUUUUUUUUUUUuuuuuu made me chuckle; "most determined" indeed 😛

    • anotherengineer
    • 7 years ago

    great review as always.

    One thing though.

    ” I’m having a hard time envisioning a system guide build where the A10 makes more sense than the Core i3. What’s the concept? A very budget desktop PC in which the user does a small amount of gaming with non-casual titles? That needle is hard to thread.”

    Both of my parents are on 5-6+ year-old Socket 939 systems; the smallest Trinity would be a great upgrade for them, and the cheapest one is 60 bucks, which is about half the price of an i3.

    IMO that is a great budget APU.

    I do agree about the power consumption, but it comes down to diminishing returns once again: even if it were 20W more all the time, most phantom power draw is more than this, and at a few hours a day it would be negligible on the power bill (for a home user).

    • flip-mode
    • 7 years ago

    And the answer is: i3-3225 or i5-3470.

    I am not willing to give up CPU performance for an IGP, and really, I feel I’d be doing a disservice to friends and family if I recommended they do so.

    Scott, thanks much for an excellent article. If you’re looking for any opinions:

    1. I think you leaned too heavily on load power consumption and not enough on idle power consumption. Idle power consumption is going to be what dominates over the computer's lifetime; for all mundane tasks, the computer will still essentially stay at idle (see the rough sketch after point 2). And for HTPC applications – is 113 watts max total system power really that bad? I don't know, that's an honest question. And what if this HTPC is never used for gaming – what's the A10-5800's max loaded power consumption then? I think on the basis of idle power consumption, the A10-5800 does deserve some applause.

    2. Single threaded performance is still so important that – in my opinion – it’s worth adding another button to the concluding price/performance chart to break out geometric mean single threaded performance. Even my “multi-threaded” apps are primarily single threaded until I do a specifically multi-threaded task. Take CAD apps, for instance – only the rendering is multi-threaded, but as you go about building the model and interacting with it, all of that is single-threaded. Multi-threaded is extremely important, especially for handling background processes and such, but single threaded probably still remains king in terms of what types of loads typically dominate a CPU’s time.
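
    A rough sketch of the idle-vs-load point, using the 24W idle and 113W load system figures quoted in this thread (the usage pattern is an assumption for illustration):

        # Yearly energy split for a mostly idle desktop.
        hours_on_per_day = 8
        idle_fraction    = 0.90          # assumed share of powered-on time spent near idle
        idle_w, load_w   = 24, 113       # rough A10-5800K system figures from the review graphs

        hours_per_year = 365 * hours_on_per_day
        idle_kwh = idle_w * hours_per_year * idle_fraction / 1000
        load_kwh = load_w * hours_per_year * (1 - idle_fraction) / 1000
        print(round(idle_kwh), round(load_kwh))   # ~63 kWh idle vs. ~33 kWh load per year

    Under those assumptions, idle accounts for roughly two thirds of the total, which is why the idle numbers deserve the attention.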

      • grantmeaname
      • 7 years ago

      If you wanted an HTPC processor you’d be a hundred times better off with the A10-5700K, which is the same price and 35W less power consumption.

        • flip-mode
        • 7 years ago

        Indeed, the A10-5700 looks like the most handsome Trinity. I think TR should even standardize on that one: the A10-5800 gives you barely any more performance, but the TDP is quite a bit higher. The question is how much that lower TDP translates into in actual power consumption savings.

          • Celess
          • 7 years ago

          Link back to the power consumption page in the article:
          https://techreport.com/review/23662/amd-a10-5800k-and-a8-5600k-trinity-apus-reviewed/7

          Unfortunately there's no A10-5700 among the test numbers. Would be nice to see real numbers on this one. I agree it looks like a sweet spot.

      • anotherengineer
      • 7 years ago

      Power consumption for you, flip:
      http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_11.html#sect0

      And: "I am not willing to give up CPU performance for an IGP, and really, I feel I'd be doing a disservice to friends and family if I recommended they do so."

      And if the PC is 5 years old? I would recommend even the smallest Trinity if my parents/friends wanted a new budget PC over their "old beater". My dad is still using a 2.2GHz single-core S939 with 1GB of RAM on XP, a 120GB IDE HDD, and an X800 Radeon.

        • flip-mode
        • 7 years ago

        I feel different. I’d tell my dad to “do the smart thing” (those are the words I’d use) and cough up another $60.

        Your Xbit Labs link showing power consumption during HD video playback is exactly what I was talking about – the A10-5800 (and family) have pretty fantastic power consumption during HD video playback and at idle. AMD has fixed a major issue by bringing idle power consumption down significantly, although Ivy Bridge still wins overall – its idle power consumption is equally fantastic, while its load power consumption is dramatically lower than the equivalent AMD part's.

          • anotherengineer
          • 7 years ago

          “”do the smart thing””

          While I agree, older people are cheaper and the “smart thing” to them is spending less money.

          Basically the only questions they have are:
          - will it start when I push the button
          - can I check email
          - can I watch YouTube

          Hence why he is still using his old 2.2GHz single-core S939, and probably will until it dies.

          edit- I would put that 60 bucks towards an SSD

            • flip-mode
            • 7 years ago

            At the end of the day it's the customer's decision, but that doesn't change my recommendation. If price is the one and only criterion – which is the picture that you're painting – then the answer is to get the cheapest Dell / HP system available. If I'm going to go to the trouble of building something for someone, I'm going to push them towards a well-balanced system – SSD, CPU, GPU, RAM, case, PSU, mobo. If that person keeps asking me to shave some dollars off each component – eff that, give Dell a call. That's what Dell / HP are for.

        • Lazier_Said
        • 7 years ago

        If your only criterion is "better than my relic circa 2004" then of course Trinity is a great step up.

        Unfortunately for AMD, they aren’t competing with 2004 anymore.

          • clone
          • 7 years ago

          HTPC users and casual users who don't care, refuse to care, or aren't interested in spending more will be happy with Trinity.

          Sadly for AMD, while there are a great many of those users, they by definition aren't spending much, and AMD won't be making much, given that many of them won't care enough to look past the Intel brand name to notice there is a company called AMD.

            • Jason181
            • 7 years ago

            Users who don't care aren't going to be upgrading at all. It's hard to make a profit on people who don't want to buy. It has nothing to do with Intel vs. AMD. HTPC users do and will care, but why would they spend the same money on a slower CPU? The GPU isn't really going to matter at all when you're using a home theater PC, but what will matter is speed and heat (or lack thereof).

            Now if you want a pc you can use both as HTPC and for light gaming, that’s different. But a pure HTPC is actually way better off with the i3.

            I’d say enthusiasts are more likely to consider the brand name of the processor than the market you’re describing.

            • clone
            • 7 years ago

            Even those who don't care will upgrade / replace when they have to… More to the point, they are the Dell and HP buyers of the world, and that success or failure will depend on Intel's and AMD's dealings with Dell and HP.

            While the i3 is certainly an adequate piece for an HTPC, it's not a compelling one, especially on the OEM side, like Trinity could become (arguably already is).

            Why would Dell or HP spend more money on the CPU at the expense of versatility and cost? A balanced system isn't about the CPU, it's about the system and its performance overall, especially given that Trinity does most HTPC tasks better while producing less heat at idle and acceptable amounts under load. Worse still for the i3, if you look at the video encoding tests, overall it's beaten by Trinity, because video apps in general – as Nvidia has been hammering for the past five years – use GPU extensions when available.

            A "pure" HTPC is only better off with an i3 if the budget allows for it, but then that would ignore heat, power, and pricing concerns, because getting the same versatility would require an add-in card, raising heat, power, and case requirements.

            Trinity's success or failure will be on the OEM side of things, not the enthusiast side; while it has value to enthusiasts, it's limited to the functional / pragmatic side, which is sort of "anti-enthusiast"… lol.

            Flip-mode has a point regarding single-threaded performance, but it's a weak point in a world where ARM CPUs are streaming video augmented by GPUs, and where tablets, smartphones, and notebooks are handling HTPC-like tasks with no effort whatsoever. A single-thread-centric focus is an antiquated concept, especially in HTPC. Gaming is where it still has a strong position, but then gaming favors Trinity.

            note: edited to add a whole whack of stuff.

            • flip-mode
            • 7 years ago

            Trinity is not a bad CPU in the great scheme of things, but – generally speaking – I can’t see the sense of it in a desktop instead of an Ivy Bridge. It’s important to note the term ‘generally speaking’ because it’s pretty easy to conjure some scenarios where Trinity – A10-5700 in particular – is attractive. The scenarios that come to mind are if you have someone who is making a new computer purchase with light gaming in mind but is also doing some extreme penny pinching, or if you have someone that does low-res or light gaming but nothing CPU intensive, or if you have someone that is simultaneously pinching pennies and yet still needing some very specific CPU features such as virtualization support.

            You're fighting very hard to ensure these outlier cases are spoken for, and you're even fighting to have them cast as more the norm than the outliers that they really are. It's just not a common case for someone to be best served by the specific mix of CPU and GPU that Trinity brings. My guess is that the great majority of the time, people will either be served well enough by Ivy's IGP or else they'll want a real midrange or better video card.

            • clone
            • 7 years ago

            The desktop PC, in all its variations, is now the outlier to me; the market is ever more capable notebooks, tablets, and smartphones, and they are killing it because the vast majority of "CPU-intensive apps" aren't intensive anymore.

            In truth, "CPU-intensive app" is a straw-man argument, AFAIC, when applied to the mainstream market, equal to anyone arguing a need for a GTX 680 or AMD HD 7970.

            Surfing the web, live streaming, Facebook, e-mail, YouTube, video encoding, online chat – none of it has been demanding since 2005, even when run alongside one another while also gaming.

            This is why Trinity is interesting, if not compelling, for those on a budget under $800 for a complete system.

            To be clear, I wouldn't buy Trinity, and I know you wouldn't either, but you and I are both outliers / enthusiasts. I wouldn't buy Trinity because the upgrade path is closed; to me Intel is the only option for a new build at the moment. You, I suspect, wouldn't buy one because you've got a price in mind that you feel PCs should cost, and you consider anything less to be a sacrifice regardless of what it offers.

            that’s not meant to be offensive, it’s just how it appears.

            • flip-mode
            • 7 years ago

            And yet, Trinity sucks and has no relevance compared to Ivy, or even Sandy.

            • kc77
            • 7 years ago

            Only if your grandmother reads Cinebench scores instead of reading her email.

            • clone
            • 7 years ago

            lol.

            note edited: to change the post from what it was to “lol” 🙂

            • flip-mode
            • 7 years ago

            Like clone said, the desktop is an outlier, so grandma’s using a tablet. Trinity sucks there too.

            • clone
            • 7 years ago

            I wouldn't so much say tablets, because they're new and a little overpriced ATM, but notebooks being used as desktops… oh yeah, I see that more often than not, my parents included in that group.

            Trinity is good in that segment.

            • kc77
            • 7 years ago

            The desktop is an outlier? It's about 45% of the market. The other half is notebooks and portables. I would hardly call 45% an outlier. People are starting to lose all logic when discussing this. All of a sudden we are CPU-constrained (when we haven't been for the past five years), and GPUs all of a sudden don't matter… until you have enough money for discrete.

            This is crazy talk.

            • clone
            • 7 years ago

            WHOA… hang on.

            U.S. desktop PC sales aren't 45% of the market. They were 27% as of 2011 and are expected to drop to 23% in 2012; notebooks accounted for 44% of the market, mini PCs and netbooks accounted for 17%, and tablets as of 2011 were 13%.

            None of that accounts for smartphone sales, which outsell them all combined, dropping desktop presence down to 12%, nor does it reflect the trend of a global shift away from the desktop, with even lowly tablets expected to outsell desktops by 2013. Now consider how many of those desktops go to business vs. home; additionally, 75% of those sales are OEM, leaving an ever-shrinking portion being custom built.

            The desktop is fast becoming an outlier.

            http://www.inquisitr.com/76157/tablets-to-overtake-desktop-sales-by-2015-laptops-will-still-reign/

            • kc77
            • 7 years ago

            My numbers were from a report that split up emerging desktops and workstations. I'll see if I can find it again.

            EDIT: Whoops, I looked at that graph wrong. My bad, flip. I try to be honest where possible.

            http://www.guardian.co.uk/technology/2012/apr/25/tablet-pc-market-analysis

            It's a good read nonetheless. It makes the case that desktops aren't going to go away anytime soon, if ever.

            • clone
            • 7 years ago

            That’s a tablet article talking about business trends… the premise seems to be focused on tablet sales stealing PC sales, and it finishes with a caveat that the desktop will still be needed for intensive tasks.

            I agree with most of the article, but when it comes to the home desktop, which the article doesn’t discuss, I believe the desktop PC’s decline will eventually be the result of notebooks and the overall “consolification” that is happening in desktop PCs now.

            Will there always be a desktop PC market? Sure, I wouldn’t discount that, but really, by the time it dips below 5%, does it matter? It’s arguably down to 11% now and it’s getting largely ignored.

            Game developers created the mainstream demand for the home desktop and they’ve been ignoring it for years; Microsoft, the largest player, is walking away from it with Win8 chasing tablets; and applications in general don’t push the desktop and haven’t since 2005.

            My point was, and the information seems to be backing it up better than I’d expected, that the entire desktop PC market is fast becoming a fringe market.

            • kc77
            • 7 years ago

            I think it explicitly says that desktops WON’T disappear. You will always need workstations. Tablets and phones (like the iPhone) usually need to sync with something.

            In the case of notebooks, Trinity looked the same as on the desktop: mediocre CPU performance but really strong GPU performance. You’ll still have the same argument to make even there. In terms of phones, well… let’s just say Intel hasn’t really broken through into that market yet. So that point is moot.

            • clone
            • 7 years ago

            Never said they would disappear.

            I’ve also never once said Trinity was the best at much of anything, but it is a good choice for those I’ve already mentioned, which was always as far as I was going with a Trinity recommendation.

            Regarding smartphones, they are getting more and more desktop functionality, which, combined with a notebook in the home, makes the decision to own a bulky, impractical desktop PC with all of its associated peripherals and space requirements… pointless… which is why the desktop PC is rapidly moving to the fringe.

            • clone
            • 7 years ago

            thanks for that 🙂

            “CPU perf are da abtholutists besteth.”

            note, edited to include: “CPU are da abtholutists besteth.”…. 🙂

            • Jason181
            • 7 years ago

            A lot of mainstream users care about performance, and only play the types of games that will be the same on either gpu (flash games, card games, pretty much the whole casual market). If you told them they’d get better performance for about the same money, and as a bonus it will use less power and generate less heat, they’re going to look at you like you’re an idiot for even asking them which one they want.

            People really do want performance (and yes, it will perform better in a LOT of mainstream applications, some by very large margins). It’s clear that you’re fighting a losing battle here.

            • clone
            • 7 years ago

            So now you are claiming mainstream users like to game… which I agree with, but that they don’t care about gaming performance… lol. I’ll be taking that into consideration in the future when you post.

            When comparing two processors that don’t use much power or generate much heat, I’m absolutely certain no one cares, save an enthusiast who can’t get past synthetic graphs, and that people consider anyone pushing that triviality an idiot.

            I’ve never once said people don’t want performance, but you’ve been pushing this straw man argument that Trinity is incapable of running mainstream applications. It’s a lie, it’s a straw man, it’s vapor, as graph after graph shows repeatedly that Trinity is no i5 killer but that it does surpass the i3 in a large number of ways.

            I can’t lose this discussion because the review results make your claims trivial at best, and worse still, you’ve embraced power consumption and heat as your foundation while ignoring the need for an add-in card, with its additional power and heat, just for the i3 to keep up with Trinity in graphics.

            • flip-mode
            • 7 years ago

            clone, WHATEVER HAPPENS, you NEED to make sure to have the LAST WORD, because the person who has the last word in this discussion will determine Trinity’s fate on the market. So you must get in the last word. If you don’t, Trinity will fail on its merits, but if you do, your fanboyism will propel Trinity to success.

            • clone
            • 7 years ago

            why are you even talking, you cry, you whine, you piss, you moan, but being so miserable you still can’t stop, just can’t help yourself…. I’m not holding your hands Flip-Mode, god knows the quality of your posts would improve if I was.

            I talk, I make logical arguments, I don’t cry, I don’t moan, I’m good with it, if it’s so bad Flip go cry on your own and not in public.

            The funny part is I’ve already stated quite openly that I wouldn’t buy Trinity, nor would I be likely to recommend it unless customers were firm on their needs. I compiled a list of reasons why the i3 is better, but then there is also a list of reasons why Trinity is superior.

            instead of showing some character Flip and acknowledging those valid reasons you whine, you cry, you piss, you moan, you complain and then you make stupid comments that lack objectivity in an effort to pigeon hole any reason why Trinity might be of value….. and then faced with logical responses you whine and you cry and you scream about fanboy’s and how it’s all not fair and poor flip-mode, everyone but flip-mode is a loser because he’s whining and crying and pissing and moaning, so unhappy….. sniff.

            note edited: to remove a couple of “lol”s because they weren’t funny enough 🙂 …. and a couple of other times to alter the post 🙂 ….. I used to think it was ridiculous that people cried about edits, considered them total idiots for doing so…. still do actually, but to be honest the comedic value is growing on me.

            • flip-mode
            • 7 years ago

            If I’m going to let you have the last word you are going to need to post something more edifying and dignified than that. Just put in a fanboy’s last good word for Trinity, using some of that logic and reason you claim to possess. Otherwise, Trinity will be left to fail on its merits, and fanboys will be ashamed.

            • clone
            • 7 years ago

            you still can’t stop, just can’t help yourself…. I’m not holding your hands Flip-Mode, god knows the quality of your posts would improve if I was.

            in a thread where I say I personally wouldn’t buy Trinity
            where I would have a hard time recommending Trinity
            in a thread where I stated the reasons why i3 is superior to Trinity
            in a thread where my list is much smaller as to where Trinity has the advantage
            in a thread where I say Trinity might play well to OEM’s but likely not with enthusiasts
            that it’s desktop sales likely won’t matter
            in that thread you make the stupid comment that I’m a fanboy.

            I talk, I make logical arguments, I don’t cry, I don’t moan, I’m good with it, if it’s so bad Flip go cry on your own and not in public.

            instead of showing some character Flip and acknowledging those valid reasons you whine, you cry, you piss, you moan, you complain and then you make stupid comments that lack objectivity in an effort to pigeon hole any reason why Trinity might be of value….. and then faced with logical responses you whine and you cry and you scream about fanboy’s and how it’s all not fair and poor flip-mode, everyone but flip-mode is a loser because he’s whining and crying and pissing and moaning, so unhappy.

            note edited: I didn’t compose this post I just grabbed the contents of 2 others and combined them, it’s gone full circle and only the silly remains.

            • flip-mode
            • 7 years ago

            It seems I’ll have to settle. At least I’ve brought you around to acknowledging Trinity’s suckage.

            note: not edited: this post was not edited to say “note: not edited: this post was not….” uh oh, infinite loop.

            Let’s do this again some time!

            • clone
            • 7 years ago

            My speaking objectively on the comparative strengths and weaknesses of a product in previous posts has nothing to do with you… it’s called being objective, having character and vision, being informed… I already had those; they are things you’ve lacked.

            You came in childish; may as well leave childish, I guess.

            cheers.

            • Jason181
            • 7 years ago

            No, a pure htpc is for home theater. We long ago surpassed the necessary graphics horsepower for home theater use. Now it’s a matter of heat and noise. You’re way better off with a faster, cooler cpu for the extra $5; it would be ridiculous to order a complete system and then save enough for a meal at Taco Bell at the expense of performance. Eventually you wouldn’t even be saving that; you’d end up ahead due to power consumption buying the i3.

            • clone
            • 7 years ago

            How expensive is Taco Bell now?… Jesus, for the i3 to catch Trinity in games requires another $50 on video; then power consumption, heat, and the case will likely become a problem… throw in some additional fans to keep it all under control, and damn if Trinity hasn’t grown from interesting to compelling in the blink of a thought.

          • anotherengineer
          • 7 years ago

          I think AMD and Intel both compete with 2004 and every day before today.

          I mean, I don’t think anyone would get rid of a 3-month-old Ivy for a 1-week-old Ivy. However, anyone running anything older than a few years could be eyeing an upgrade, whether it be a new Intel or a new AMD.

            • flip-mode
            • 7 years ago

            Sadly not. My X4 955 launched April 2009 and there’s not an AMD CPU out there that is a meaningful upgrade for me.

      • sschaem
      • 7 years ago

      The i3 is not faster; it’s often slower. Why would you get that and pay more?

      Plus, as time goes on, the i3 will be hindered by its weak GPU.

        • flip-mode
        • 7 years ago

        i3 is like a boss at single threaded stuff. Per point #2 in my original post, TR’s benches are weighted towards highly threaded scenarios. That’s the only reason that the “8 core” FX-8150 approaches the 4 core i5-2500, and likewise the A10 to the i3.

          • TO11MTM
          • 7 years ago

          I tend to agree with this; it takes -very- highly threaded workloads at the office for my i7 920 to beat my colleague’s low-end gen 1 i5 (don’t know the model offhand, oops).

          I would actually be able to do my job better (as far as heavy computing on a low thread count goes) with an i3-3xxx series.

      • clone
      • 7 years ago

      What are you or your friends doing that so desperately needs every last bit of raw, unbridled, rip-roaring i3 CPU performance vs. losing the ability to ever play any current or newer games?

      Is surfing the web, live streaming, Facebook, e-mail, YouTube, or online chat vastly more demanding in your area?

      Just wondering.

        • flip-mode
        • 7 years ago

        Specious question is specious.

          • clone
          • 7 years ago

          childlike response is childish.

          p.s. it’s a reasonable question that you seem afraid to answer for what are likely obvious reasons.

            • flip-mode
            • 7 years ago

            It’s a stupid question from a well known AMD fanboy. The answer is in the first post – at least one part of it – see “CAD” under point #2. There’s gaming as well. There’s also the matter of principle: when I buy a CPU at a given price point I’m going to get the fastest CPU selling at that price point. I’m not going to sacrifice CPU performance because of an IGP. If the IGP kicks ass then that’s a great bonus, but it’s a CPU first and foremost – you can still run another IGP, but you can’t plug in another discrete CPU, so get the fastest CPU you can at the given price.

            • clone
            • 7 years ago

            oh my god the tears.

            CAD…. that is your answer….. computer-aided design….. a mainstream application to be sure; why, I’m using it right now. In fact, I was talking on the phone to a friend and he said his grandmother is using CAD to help with her baking.

            If you can only speak for 3% of the market, that’s fine, but don’t presume to speak for the other 97%, which is why my honest, genuine, non-contentious question was valid, is still valid, and is not notably stupid like both of your responses.

            Wipe away the tears, take a deep breath, and put on some big-boy pants; attacking a reasonable question by screaming like a child doesn’t aid you in any way.

            • flip-mode
            • 7 years ago

            LOL, didn’t you read the rest of the post? It doesn’t matter, Clone, nothing I say is going to have you acknowledging that there are any good reasons to pick the i3-3225 over an A10. That’s why it’s a specious question – you’re asking the question but there’s no answer you’d agree to – also known as TROLLING. Edit: I didn’t give you any of those thumbdowns either.

            • clone
            • 7 years ago

            “I feel I’d be doing a disservice to friends and family if I recommended they do so.” (regarding going Trinity over i3)… you also demanded your dad spend more regardless of his planned computer use.

            This is the comment I responded to; the rest doesn’t apply. You attack instead of answering.

            2nd quote: “It doesn’t matter, Clone, nothing I say is going to have you acknowledging that there are any good reasons to pick the i3-3225 over an A10.”….. and now you run away.

            budget over $500…. good reason to buy i3-3225, budget gaming system…. great reason to buy i3-3225, future proofing…. fantastic reason to buy i3-3225, upgrade path ….. outstanding reason to go i3-3225, …… dedicated HTPC….. not always the best choice.

            Reasons to go Trinity…. dedicated HTPC, budget under $500, a system for the uninterested (the dedicated web surfer / chat / casual video-streaming user)….. Trinity is a pretty solid CPU for those who think tablets rule, smartphones are enough, and notebooks are more than is required, which is a decent chunk of the web, despite your absurd position that “CPU are da abtholutists besteth.”

            I asked a simple question, a valid question open for plenty of answers, instead you choose to cry.

            regarding thumbs up/down…. lol, I don’t do politics, I speak honestly which doesn’t require approval.

            note edit, added “instead”…. lol. so pedestrian.

            • Jason181
            • 7 years ago

            It’s clear you’re an AMD apologist. The problem you’re having here is that people know what they’re talking about, so you can’t snow us. I have no allegiance; if AMD has the better price/performance ratio in the performance arena that I’m interested in, I’m there. It just happens that for the last few years there wasn’t even a choice in the matter. I wanted high performance and AMD just isn’t delivering, so consideration of value goes out the window when choosing which company to buy from.

            The funny thing is that it looks like you believe you might actually change peoples’ minds here, but we are informed and made our judgments based on facts, and not some nebulous group of people who don’t really care about performance except when it comes to graphics.

            When the best that can be said in the review’s conclusion is that it’s an alternative if you don’t want the desperately slow Atom, you know it’s not a great deal.

            • clone
            • 7 years ago

            LOL, in a thread where I say I wouldn’t buy a Trinity, where I would have a hard time recommending Trinity, in a thread where I stated the reasons why the i3 is superior to Trinity while also showing a much smaller list of reasons where Trinity holds the advantage, in a thread where I say Trinity might play well to OEMs but likely won’t do well with enthusiasts, and that its desktop sales likely won’t matter….. in that thread you make the stupid comment that I’m an “AMD apologist”……. apparently not a full-on AMD fanboy, which would have been hilarious, but an “AMD apologist.”

            that is f’ing awesome.

            Trinity is a decent low-end CPU…… you can complain all you want; that you compare it to an Intel Atom kinda makes you a bit of a “fanboy”… sort of?

        • Jason181
        • 7 years ago

        If you’re using discrete graphics, you give up the ability to play games on the Trinity, imho. If you use single-threaded applications, which don’t use anywhere near the “raw… i3 performance,” you could see a 30% improvement using an i3.

        Pretty much all the examples of applications you give play to the strength of Trinity’s integer cores, but they are going to be much slower on Trinity because they typically aren’t multithreaded. They will take measurably longer to open on a Trinity CPU, and the graphics horsepower won’t matter a bit. But… that’s far from all people do with their computers. You’re essentially setting up a straw man by implying that those are the only applications people use.

        So for any application that is single-threaded, or that uses more than two threads and the FPU, you get a hotter, significantly slower processor with less overclocking headroom. It’s just a terrible choice if you don’t plan on using the IGP for games, but then people who are using the IGP for gaming probably aren’t too concerned about performance in the first place.

          • clone
          • 7 years ago

          I’m not setting up a “straw man”; they are just a bunch of mainstream apps that came to mind. I’m not, nor will I ever be, compiling a list of every mainstream application in the world just to avoid the accusation… silly at best. I agree they certainly aren’t the only ones, but I don’t care; the point is and was that mainstream apps in general are not CPU intensive.

          Will they run slower on Trinity when compared in a synthetic test? Sure… but noticeably slower in the real world? Of course not.

          Web connection, HDD performance, and RAM quantity are vastly more important than CPU performance in most cases.

          As to the last point, you have two CPUs that don’t generate much heat; one will generate more than the other, but regardless, both will run cool enough for HTPC use. The distinguishing difference between the two is that the cheaper one can play games the other cannot manage nearly so well, if at all.

          Trinity is not a perfect CPU/GPU, but for someone on a budget it is definitely interesting, if not compelling. As I mentioned previously, Trinity will likely find its home in Dell and HP systems; I don’t expect enthusiast sales to be substantial.

            • Jason181
            • 7 years ago

            They run slower in the real world. Most people will notice differences of 10% or more, and when you’re buying you have the luxury of comparing, or asking someone who knows before making your decision. You’re foisting your apparent lack of concern for performance on a large group of people, many of whom are going to look for the best performance for the dollar. That lies squarely with the i3. Even most of your arguments are based on performance (albeit gpu). You can’t have it both ways; either performance matters or it doesn’t. Just because you might not notice a 20% improvement doesn’t mean a lot of people don’t.

            • clone
            • 7 years ago

            Why don’t you reread the article before you post again, because Trinity isn’t nearly as slow nor as power hungry as you are trying to claim, which is why I can have it both ways.

            Trinity is 50-100% faster than the i3 in games….. that’s fact; that’s the i3 being unplayable vs. Trinity being playable in newer and the latest games at 720p….. end of story.

            On the CPU side it’s almost a draw: Trinity wins a lot of tests, ties a lot, and loses a few, but only by a small margin, and, worse for the i3, the ones Trinity loses are synthetic, not real world.

            I want you to show this massive list of mainstream applications you claim cannot be run on Trinity… and if you can’t compile a massive list, then just stop, because you have no credibility.

    • eofpi
    • 7 years ago

    On the Skyrim IGP graphs, the frame time graphs don’t match the time beyond 50ms graph. All 5 configurations clearly spike above 50ms, yet only two chips are shown in the latter as spending any time beyond 50ms.

    Is this an error, or am I just missing something?

      • Damage
      • 7 years ago

      This time around, the guys helped me do the graphing, since it was way too much work for one person. Unfortunately, they pulled the frametime plots from the first of the five test runs, which is always a little slower since caches aren’t warm yet, textures have to be loaded and such. As a result, the data you’re seeing is correct for that run, but not representative of all five and not the median number we report later.

      In the future, we will make it standard practice to pull from a later, more representative test session for those plots.
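      For the curious, the fix described above amounts to plotting a representative run rather than the cold first one. Below is a minimal, hypothetical Python sketch of one way to pick such a run; it illustrates the idea only, is not TR's actual graphing pipeline, and the run data is made up.

```python
# Illustrative sketch only: choose the test run whose mean frame time is the
# median of all runs, so the per-frame plot reflects typical behavior rather
# than the cold-cache first run. Hypothetical data; not TR's actual tooling.

def representative_run(runs):
    """runs: list of frame-time lists (in ms), one per test run."""
    ranked = sorted(runs, key=lambda r: sum(r) / len(r))
    return ranked[len(ranked) // 2]  # the median run by mean frame time

runs = [
    [40.1, 38.0, 36.2, 35.5, 35.0],  # first run: slower, caches cold, textures loading
    [33.0, 32.4, 34.1, 33.2, 32.8],
    [32.5, 33.0, 33.3, 32.9, 33.1],
    [34.0, 33.5, 32.7, 33.4, 34.2],
    [33.1, 34.0, 33.6, 32.6, 33.0],
]

print(representative_run(runs))
```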

    • chuckula
    • 7 years ago

    This review was well worth the wait, especially for the inside the second benchmarks.

    It’s also not much of a surprise in the outcome:
    1. Want to game using only the IGP? Trinity wins.
    2. Want to game using a discrete card? i3 wins.

      • Chrispy_
      • 7 years ago

      Actually, the inside the second benchmarks showed me something that I hadn’t seen before:

      [b<]i3 chips significantly outperforming their Pentium equivalents in games, even when using a discrete GPU.[/b<]

      I used to recommend that people on a tight budget get a Pentium G840 and a cheap discrete card like the 6570 (480 shaders @ 650MHz), because that offered a much better experience than an i3 alone for the same money. Now, that option is clearly starting to suffer, because the two extra threads from an i3’s Hyper-Threading are making a difference.

      I’m still loath to recommend a Trinity for anything other than an HTPC, because GPUs depreciate so fast that they are usually the first thing replaced. Upgrading a Trinity with a discrete GPU would leave a sour taste in my mouth, knowing that the CPU component is truly bottom-of-the-barrel performance and would bottleneck any GPU upgrade that had been purchased.

        • Bauxite
        • 7 years ago

        Yep, pentium/celeron + discrete is a terrible suggestion to save $, but it hides its problems inside decent average fps, so people keep spouting it as gospel on various forums.

    • codedivine
    • 7 years ago

    Nice review. Appreciate the inclusion of Qtbench!

      • chuckula
      • 7 years ago

      +++ For the Qtbench! Compiling is a non-gaming activity that lots of people do in real life and that actually uses multiple CPU cores, so it is a great addition.

        • NeelyCam
        • 7 years ago

        [quote<]Compiling is a non-gaming activity that lots of people do in real life[/quote<] "Lots of people"? I only know one, and he's using far more powerful systems..

          • Airmantharp
          • 7 years ago

          Lots of people in school using cheap systems… 🙂

          (I’m not one of them, I know better than to get into coding)

    • esterhasz
    • 7 years ago

    I find that the problem for these chips is not so much the immediate competition in the same price range, but a) the promise of a significant upgrade path on socket 1155 (the fact that I can put a quad-core Ivy into an 1155 socket makes the motherboard more valuable even if it is initially more expensive) and b) the relative proximity of the i5-3470.

    On the second point, the scatter plot misses two things (by choice and design). First, processor cost is in most cases part of a system cost, and in those terms an additional $60-$80 on top of a $500 floor is less forbidding than looking at the CPU price alone. The second cost it doesn’t show is assembly and install time, which should be identical for both systems. Granted, this is recreational for many people, but if you are short on time you may want to quantify that investment (e.g. 10 hours for a clean build at $50/hour). That means that for $60-$80 (+$80 for a discrete GPU) on top of a $1000 investment you get a much faster machine, one that will need replacement only much further down the road and could even be upgraded to an i7 at some point.

    The quantities I used here are of course contestable, but the point I want to make is that, from a larger perspective not accounting for legitimate but ultimately irrelevant niches, these chips only make economic sense in assembled systems at Walmart, where every dollar counts. Or in a console.

    • Rza79
    • 7 years ago

    [quote<]Forgive me if this sounds like our Llano review on replay, but it's hard to see where these Trinity APUs fit into the desktop PC landscape.[/quote<]

    Imagine you want to build a computer on a budget of $400-$450 for someone. Then Trinity really makes sense to me, especially if that someone has kids. I think the average Joe would be hard pressed to notice a 10-20% deficit in 'single thread' performance. If the application opens in a timely manner, they're happy. But even the average Joe's kids will be upset if a game is way too slow or doesn't run well. Then that family needs to invest in a video card, which it might not have money for.

    I've considered a Pentium + discrete video card, but I feel you give up too much on the CPU side to save some money, and even then you're limited to an HD 6670 at most. I also feel that the Pentium's lack of AVX support might be hurtful in the future.

    But I do think that the 100W Trinity models are useless. The A10-5700 makes much more sense. Only giving up 10% on the CPU and almost nothing on the GPU tells me that AMD is past the sweet spot with the 100W models. It also seems that AMD has been very conservative with the voltage, since I've seen many websites undervolt it without issues. ([url<]http://www.computerbase.de/artikel/prozessoren/2012/test-trinity-vs.-ivy-bridge-im-cpu-test/13/[/url<])

      • MadManOriginal
      • 7 years ago

      Something strange about the power consumption results:

      Comparing the IGP and discrete-card configurations at load: the i3-3225 draws 18W more with the discrete card, while the A10-5800K draws 39W more. What’s up with that? Is the AMD CPU using the graphics card to assist encoding?

        • Rza79
        • 7 years ago

        Many reasons:
        – Intel is using 22nm and AMD 32nm
        – AMD uses very aggressive clock gating, even more so than Intel, to keep idle consumption very low
        – AMD’s 32nm process uses gate-first manufacturing (in contrast to Intel, which uses gate-last)
        Gate-first manufacturing uses less power at lower clock frequencies (idle), but it turns out that it's not too good at very high frequencies (load). (Globalfoundries is switching to gate-last for their next node.)

        I think the three points I’ve made (plus architectural deficiencies) are the main cause of the big gap between idle and load.

          • MadManOriginal
          • 7 years ago

          I understand perfectly well why the i3 is generally more efficient than the A10. You are not understanding what I’m pointing out.

          “Peak power efficiency x264” graph here: [url<]https://techreport.com/review/23662/amd-a10-5800k-and-a8-5600k-trinity-apus-reviewed/7[/url<]

          i3-3225 (discrete) - i3-3225 (IGP) = 66 - 48 = 18W
          A10-5800K (discrete) - A10-5800K (IGP) = 152 - 113 = 39W

          I would expect that differential to be close to the same, if not exactly the same within measurement error, because all that's added in each case is a discrete card. But somehow the A10 system power draw with the discrete card increases by 21W more than the i3-3225 system increases - it's as if the discrete card in the A10 system draws 21W more. Aside from measurement error, the only thing I can think of is AMD using the discrete card to assist encoding; that would make the power draw look worse but also (presumably) make the encoding faster, except I don't think x264 uses GPU acceleration.

            • chuckula
            • 7 years ago

            Good point but other factors could be in play. For example, both the i3 and A10 have built-in PCIe controllers, but I wouldn’t be surprised to see the i3 have a more power efficient controller (although I’m not saying that is responsible for 100% of the difference, just a portion of it). Another point could be that the i3 is more efficient at completely killing the IGP when using a discrete card while the A10 still gives some power to some of the functional units in its IGP even when using a discrete card.

            • flip-mode
            • 7 years ago

            Maybe Trinity can’t turn the IGP off completely but Ivy can?

            • Damage
            • 7 years ago

            More likely has to do with the power use of PCIe on the chip and how fine-grained the power gating is for it.

            • flip-mode
            • 7 years ago

            Who in the world voted you down for that? LOL. Fixed.

            • Rza79
            • 7 years ago

            Sorry reading comprehension 101 failure

          • NeelyCam
          • 7 years ago

          [quote<]Gate-first manufacturing uses less power at lower clock frequencies (idle), but it turns out that it's not too good at very high frequencies (load).[/quote<]

          Do you have anything to back that up? I don't see a reason for this - if anything, higher variation in gate-first transistors should make the leakage worse.

          Also, SPCR shows that Sandy Bridge (32nm gate-last) idles at lower power than either Trinity or Ivy Bridge.

          [url<]http://www.silentpcreview.com/article1259-page3.html[/url<]

          Compared to Trinity, Ivy Bridge uses less power at load because 22nm transistors are more efficient at load, but more power at idle because 22nm leaks more (or because the Ivy Bridge design doesn't allow a low enough supply voltage at idle to cut the power down well... or maybe Intel screwed something up).

            • Deanjo
            • 7 years ago

            [quote<]Also, SPCR shows that Sandy Bridge (32nm gate-last) idles at lower power than either Trinity or Ivy Bridge. [url<]http://www.silentpcreview.com/article1259-page3.html[/url<][/quote<]

            Again you are using a horrible example trying to pit SB vs IB. It is an apples-to-oranges comparison unless you have those two on the same motherboard. All that extra connectivity on the IB board that they used draws more power. Ideally you would also be using SB and IB chips of roughly the same specs, such as the i3-2125 vs the i3-3225.

            • NeelyCam
            • 7 years ago

            I agree – it’s just that I haven’t found a better comparison yet. If you know of one, please let me know. (I wonder if somebody has tested both SB and IB on an Intel H67 board.)

            The point remains, though – I don’t think Trinity idling lower than IB has anything to do with gate-last vs gate-first.

            • Deanjo
            • 7 years ago

            Well, PC Mag did cover this a few months back with their evaluation of IB.

            [url<]http://www.pcmag.com/article2/0,2817,2405317,00.asp[/url<] [quote<]5. Ivy Bridge uses less power. With die shrinks typically also come a reduction in the amount of power that processor needs to operate. That's certainly true in the case of Ivy Bridge. [b<]As long as we were testing the Core i7-2700K and the Core i7-3770K with otherwise exactly the same hardware setup,[/b<] we decided to take some power readings using an Extech Datalogger. [b<]Though the full systems idled at almost the same electricity draw (about 71 watts)[/b<], there was a stark difference when we maxed out all four of the processors' cores:[b<] The Core i7-2700K system needed 166.5 watts, but the Core i7-3770K drew only 136.3—a remarkable change.[/b<] [/quote<] SilentPC's methods of benchmarking are completely flawed and leave a lot to be desired.

            • NeelyCam
            • 7 years ago

            [quote<]SilentPC's methods of benchmarking are completely flawed and leave a lot to be desired.[/quote<]

            Why are you so pissed off today? AAPL price dipping?

            Saying that SPCR reviews are "completely flawed" isn't fair. SilentPC is one of the few sites that benchmark without high-power HDDs, GPUs, and a 1200W PSU. And I haven't seen others de-embed PSU efficiency out of the results (with the exception of Xbit).

            Yes, having the same hardware would certainly help. But from that PCMag review you have absolutely no idea how much the idle power of the CPU is. You can only try to conclude that the i7-2700K and i7-3770K idle at roughly the same level. With those results you can't compare i7s to Trinity at all (as they are different platforms). SPCR's results give you a better chance to compare against Trinity.

            [quote<]The Core i7-2700K system needed 166.5 watts, but the Core i7-3770K drew only 136.3—a remarkable change.[/quote<]

            This should be rather obvious. But it's still pretty meaningless - task energy is what matters. If looking at power numbers blindly is how you look at efficiency, your method is completely flawed and leaves a lot to be desired.

            • Deanjo
            • 7 years ago

            [quote<]Saying that SPCR reviews are "completely flawed" isn't fair.[/quote<]

            You are right, I should have said your interpretation of SPCR's results regarding IB is completely flawed, as you are looking at the complete system current draw on two completely different platforms and concluding that one CPU is more efficient than the other. This is not the case at all, however, as the only thing you can conclude is that one complete system configuration is more efficient than the other, with no isolation as to what is causing the difference.

            [quote<]But from that PCMag review you have absolutely no idea how much the idle power of the CPU is.[/quote<]

            Guess you missed this:

            [quote<]Though the full systems idled at almost the same electricity draw (about 71 watts)[/quote<]

            Given that they used the EXACT same hardware and merely swapped the CPU, that indicates IB is just as efficient at idle as SB, making your statement of

            [quote<]Also, SPCR shows that Sandy Bridge (32nm gate-last) idles at lower power than either Trinity or Ivy Bridge.[/quote<]

            false. It shows that the SB complete system was more efficient. To top it off, you are trying to compare an i7 IB to an i5 SB, and they are even running at different clock speeds. You cannot, by any stretch of the imagination, conclude that one CPU architecture is more efficient than the others based on the data given there.

            I also can't understand how you can even throw Trinity in there, as there is not a Trinity-based platform even in the mix. The A8-3850 is a Llano part. You really have no supporting evidence from that article to back any of that claim. With IB/SB vs Trinity we have no choice but to compare complete systems, since the last time AMD and Intel shared a common platform was the Socket 7 days.

            • NeelyCam
            • 7 years ago

            Oh, poor Deanjo.. how should I try to explain this to you…?

            [quote<]Guess you missed this [quote<]Though the full systems idled at almost the same electricity draw (about 71 watts)[/quote<][/quote<]

            No I didn't, but I guess you missed this: "You can only try to conclude that the i7-2700K and i7-3770K idle at roughly the same level." That's a direct response to the sentence you quoted.

            Note that the PCMag article is pretty crappy in many ways... even the hardware wasn't mentioned. See the SPCR article - they show the supply voltages for SB and IB. Were power management features enabled/disabled for the mobo on PCMag? Who knows. Did the mobo lower the idle supply voltage as much as possible? No idea. None of this was mentioned. Different boards/BIOSes use different voltages for different CPUs (32nm CPU nominal voltage != 22nm CPU nominal voltage).

            [quote<]To top it off, you are trying to compare an i7 IB to an i5 SB, and they are even running at different clock speeds.[/quote<]

            You know that, at idle, their max clocks or multithreading really don't matter. Or you should know that.

            When I said you have no idea what the idle power of the CPU is, I meant that the numbers are useless if one tries to compare these CPUs to, say, Trinity. There is too much external crap burning power (none of which is mentioned in the PCMag article). Yes - Trinity wasn't in that review, but Llano was, and Llano was compared to Trinity in another review. Complicated, but what else can I do? To truly compare idle efficiencies of both AMD and Intel CPUs, all I'm left with is looking for reviews that show each CPU with the lowest possible system power draw. The rest of the system's efficiency certainly plays a role, but please suggest a better way.

            Anyway, let me once again repeat my main point: Trinity's lower idle power vs. IB isn't because of gate-first. The SB results from SPCR pretty much prove that. Somehow you just decided to start attacking me on an unrelated aspect, using bad reviews to "prove" your point.

            • Deanjo
            • 7 years ago

            [quote<]Anyway, let me once again repeat my main point: Trinity's lower idle power vs. IB isn't because of gate-first. The SB results from SPCR pretty much prove that.[/quote<]

            The only thing you can take away from SPCR's review is that system A's particular configuration uses less power than system B's. It does nothing to isolate whether a particular architecture is more efficient than the others. Mere connectivity alone can make a huge difference in power consumption, even if no devices are hooked up to it.

            You want another article that backs up PC Mag? Check out Anand:

            [url<]http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/20[/url<]

            Now if you start adding computing usage into the efficiency picture, Trinity gets slaughtered.

            • NeelyCam
            • 7 years ago

            Another link to an article that shows hugely high idle power for the system. You [i<]still[/i<] have no idea how much power those chips consume when idling.

            And are you still under the impression that Ivy Bridge's power-efficiency benefits over Sandy Bridge are entirely based on architecture?

            • Rza79
            • 7 years ago

            I can’t find the original technical article that I remember reading, but I found this:
            [url<]http://www.bit-tech.net/news/hardware/2011/01/20/ibm-and-globalfoundries-go-gate-last-for-20/1[/url<]

            So you’re right, I should have read this again, because it seems my memory was vague. Gate-first only provides a (supposed) density advantage.

      • DragonDaddyBear
      • 7 years ago

      I agree, especially if you don’t play the latest games. My cousin is building a computer and plays mostly Source engine games on a lower-res LCD. This is perfect for him. The lower-TDP A10 APU would be just fine for him.

      Also, the single-threaded performance is between the IVB Core i3 and Pentium, so a dedicated card in the future wouldn’t be sooooo bad. And if the hybrid graphics work OK, then you have an avenue to upgrade the GPU later. So long as your FPS is 30+ minimum and latency isn’t too bad (as shown on TR), I don’t see the APU being a severe bottleneck for a few years.

      I’m going with Rza79 on this one.

    • Hattig
    • 7 years ago

    Thanks for the review. AMD still need to get a grip on their power consumption and CPU performance per thread.

    Would be nice to see undervolting results, as well as overclocking results – but even then, the i3 will overclock better surely.

      • Deanjo
      • 7 years ago

      [quote<]the i3 will overclock better surely.[/quote<] I think you mean [quote<]the i3 will undervolt better surely.[/quote<] and yes the i3-3225 undervolts quite nicely.

        • derFunkenstein
        • 7 years ago

        Yeah, the i3 doesn’t overclock at all. Kind of a weird statement for him to make.

          • Hattig
          • 7 years ago

          Ah, well that changes things then. At least you can overclock the Trinity – and both can be undervolted.

            • flip-mode
            • 7 years ago

            I’ve heard that Trinity OC sucks.

            • clone
            • 7 years ago

            Depends on which CPU you get….. the A10 likely doesn’t have much headroom, but lower-end units should be worthwhile, especially since, as mentioned, there’s no overclocking with the i3 at all.

            • mikato
            • 7 years ago

            If there was ever a time for a good ole “your mom”…

            • derFunkenstein
            • 7 years ago

            You don’t have to hear it; you can see it on the internets. The IGP OCs nicely from what I can tell (Anandtech took their 5800k from 800MHz to 1083MHz on the GPU core) but the CPU tops out around 4.4-4.5GHz.

            • derFunkenstein
            • 7 years ago

            Yeah Ivy and Sandy bridge both have some sort of crazy un-overclockable bus, and a completely locked upper multiplier.

      • clone
      • 7 years ago

      i3 doesn’t overclock at all.

    • dragosmp
    • 7 years ago

    Thanks for the review; the 22-way CPU comparison should prove useful for the next few months when deciding what to buy and advising others.

    An idea for which systems may be better off with an APU: any system that goes for cheap. You’ve shown (or that’s what I understood) that an Intel i3 processor is slower on average than the corresponding APU, and as you go down the ladder (Pentium G850/A6 and Celeron/A4) it’s probably even more true (tbc). So for a $300 PC the A8-5xxx is the answer, power consumption be damned.

    • Bensam123
    • 7 years ago

    [b<]Multitasking: Gaming while transcoding video[/b<]

    I'm going to highlight this first as I consider it quite important. Consider using a streaming setup instead of simply encoding a video in the background. If you use an application like Xsplit, which captures the video, encodes it with x264, and then sends it off to a streaming service, it would capture everything you want to convey with this category and more! There are other applications that do the same thing, such as FFSplit and Adobe Flash Media Live Encoder, but most people use Xsplit. It's also possible to pair it with a different type of capture software, such as Dxtory, to do the same thing only better. Streaming most definitely produces microstuttering to a certain extent, and some systems are a lot worse than others. It's something quite worth covering, as no other websites do anything like this!

    Not to be biased, but I can definitely see live streaming becoming quite a bit bigger in the future (for gamers especially). You could almost say this is the future for gamers of every shape and size. Being able to watch your friends when you want to, for fun, could become very mainstream, even being integrated into Facebook (with YouTube Live also coming into play). User-created content on demand.

    Bear with me, I have quite a few thoughts:

    Where are the overclocking results? It may sound counterintuitive, but this could be very lucrative for budget chips, as they may have surprising potential. Sadly the best pick of the bunch isn't unlocked - the A6-5500. I'm guessing that one would overclock like a boss and almost make it pointless to buy any of the more expensive derivatives.

    I don't understand where AMD is going with things either. Intel and AMD hopped on the whole IGP thing like a boss, but it really doesn't make any sense whatsoever once you surpass the 'it streams 1080p videos' bar, and they keep improving their integrated graphics as well. Until they find some niche for these IGPs to fall into, they're an answer to a problem that doesn't exist. Adding to it, these processors seem rather confused too. They are definitely stepping into normal desktop processor territory, but they don't deliver: none of the above processors will replace an 8150, and at the lower end they start poking at the sides of the cheaper BDs. What's the point? Perhaps Lucid will come up with a problem the IGPs will answer to, like a multithreaded scheduler for graphics.

    "In fact, the only item of note that isn't really up to date is the PCIe connectivity, which remains at Gen2. Third-gen PCIe offers twice the data rate."

    Weird, I expected more commentary on this... like WHY, HOW?!? The only reason I can see is so it doesn't step on the 8150's balls, but other than that...?

    Why does the Phenom test system use different memory? Why not use the same memory in all tests? I understand they run at the same speed and same timings, but keeping all else equal...

    Consider adding average FPS to the labels on the left of the time-spent-beyond-X graphs, so people can easily reference the differences, or an overlay of some type. A number for the change in ranking may also be appropriate, showing how far a part moves down the list when switching between average FPS and time spent beyond X.

    "Virtually no time is spent beyond our customary 50-ms threshold, and even 33 ms isn't much of a challenge, so we've ratcheted our threshold down to 16.7 milliseconds—the equivalent of 60 FPS."

    I was thinking about this the other day: while 16.7-ms results are inclusive of 50-ms results, 50-ms results are exclusive of 16.7-ms results. Meaning, when a graphics card spikes beyond 50 ms it's much more noticeable than 16.7 ms. Not sure how many people realize this. So any results that show up as a 50-ms spike are much more perceptible than those showing up in the 16.7-ms graph, even if there are more of them in the 16.7-ms graph (variance / standard deviation would also take this into account, although not nearly as descriptively). A graph showing the distribution would accomplish this quite well too.

    Not to bring up the whole frame-time thing again, but I still don't understand why this is an important metric for highlighting micro-stutters. You've explained a few times after my last big incoherent rant of confusion that frame time is 99% of all frames (such as in the 660 review I didn't have time to comment on), with the 1% of the worst frames being removed. Wouldn't the worst frames be the ones you want to highlight? Those show off worst-case scenarios where the CPU/GPU breaks immersion. I originally thought frame time was a measure of variance, but it doesn't look that way. In contrast, time spent beyond X is a better metric for highlighting which pieces of hardware break immersion, where frames take a lot longer to render and subsequently ruin the overall experience (standard deviation takes this into account, as does variance to a certain extent).

      • Arclight
      • 7 years ago

      TL;RA

      • Ringofett
      • 7 years ago

      I disagree on one point; I do think there’s a decent niche these belong in: any low-end PC at all, basically, and all HTPCs. I’d considered this chip for both, but after TR’s energy-efficiency analysis (which is the #1 reason TR is my go-to site for CPU reviews), the chip went from the top of my list for such builds, where other sites’ reviews had put it, to eliminated from the running.

      Those load power draws… wow. It’s like GlobalFoundries is playing a joke on them, hiding some heating elements in there that switch on under load, just to spite them. For a system that actually gets used and then powered down, the load draw might totally negate its excellent idle power.

      Maybe if I wanted an HTPC that doubled as a space heater for cold winter evenings, I’d be interested. But for now, if my folks’ C2D craps out, or I finally build a dedicated HTPC to replace the dm1z E-350 drafted into service, it’ll probably be getting an i3.

        • raddude9
        • 7 years ago

        Are you sure you’re reading the load power draws correctly? Because there are two sets of numbers in the graphs, using the IGP and using external graphics, and an HTPC build with a Trinity chip has no need of an external graphics card. Also, don’t you think that low “idle” power usage is quite important in an HTPC? You also seem to have overlooked the 65W versions of these chips; hopefully we’ll see some reviews of those soon, as their TDP rating implies that their load power usage will be a lot lower than that of the 100W versions reviewed here.
        I’m planning a new HTPC build myself, and at the moment the 65W A10-5700K is at the top of my list.

          • Deanjo
          • 7 years ago

          Ya, he is reading the plots right. With only a difference of two watts at idle between Trinity and the i3, those extra few watts of savings are quickly made up for during any type of load on the system, such as playing back media, transcoding, etc.

            • raddude9
            • 7 years ago

            Nope, I disagree.

            For most usage scenarios, media playback for instance, the power draw is still going to be very reasonable:
            [url<]http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_11.html#sect0[/url<] The moderate load of HD media playback only bumps the power draw by 13W or so, pretty much the same as the i3-3225. And seeing as most HTPCs are going to alternate between media playback and being idle, I'd say that using an i3 is not going to save you any significant amount of money.

          • Ringofett
          • 7 years ago

          I read them right; I mentioned a usage scenario of doing what I want to do with the given PC, then powering it down, so multi-hour stretches of idling at the lowest p-state should be ruled out.

          At the VERY best, it’s merely an acceptable choice. I’m not big on rewarding failure, so I said I’d go i3.

          You raise a good point with the 65w model, though. Still probably not enough, but it’d presumably be binned and ripe for undervolting. But in contrast, nobody needs to look for an excuse to rationalize using an i3.

          Then again… Might be able to undervolt an i3 too? (Not sure)

          But I definitely read the charts right. That load power draw lays bare the fact AMD’s architecture is way behind Intel’s, as well as its manufacturing process. Damage says right under the relevant chart I believe that there’s no way of getting around it, the chip has some of the highest load power draws out there.

            • raddude9
            • 7 years ago

            Ok, you may have read the charts correctly, but I don’t think you realize the implications of that low idle power draw. Firstly, you do know that the load power means constant full power, yeah? For typical HTPC use, full-HD video playback would be one of the main criteria for picking a particular chip, and even with full HD, these chips will be idling most of the time. I found this review:
            [url<]http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_11.html#sect0[/url<]

            It shows that when doing something semi-intensive like playing back HD video, the 5800K uses pretty much the same power as the i3-3225. I think the TR review was a bit skewed; they only looked at idle power and load power, which completely ignores the most common usage of PCs, i.e. moderate loads from office apps, video playback, and web surfing.

        • Bensam123
        • 7 years ago

        Scott also covered this. The IGP is important, but for an HTPC build only up to the point of running 1080p video. Once you start getting to the point of playing games, you could purchase a video card that is infinitely better than these IGPs for next to nothing.

        Power consumption is neat and everything, but I don’t think it plays a very big role in any of this. People are just over-hyping it because energy efficiency is the new fad. All of these chips drop into low power states, and newer video cards do as well, so the only time they’ll be producing a lot of heat (and noise) is under heavy load, which only happens with some hardcore gaming action. Usually if that is happening you’ll be unable to hear it anyway due to speakers or headphones, so it doesn’t even matter.

        If you’re talking about the money you’ll save, that’s peanuts when you actually take into account how long the system will actually be under load. Even at the biggest envelope, 100W is a very small space heater… we’re starting to walk into the range of light bulbs here.

      • flip-mode
      • 7 years ago

      Bruno needs to tweak TR’s code such that posts over a certain length only display the first 20 lines.

        • Bensam123
        • 7 years ago

        That doesn’t encourage certain behavior… >>;

      • cobalt
      • 7 years ago

      [quote<]I still don't understand why this is an important metric for highlighting micro-stutters. You've explained a few times after my last big incoherent rant of confusion that frame time is 99% of all frames (such as in the 660 review I didn't have time to comment on), with the 1% of the worst frames being removed. Wouldn't the worst frames be the ones you want to highlight?[/quote<]

      The 99th-percentile frame time is, I think, exactly what you're asking for. The reason it's not the 100th percentile is that you can probably ignore statistical outliers; if there's a micro-stuttering problem, it will show up in more than 1% of frames.

      Concrete example: suppose a game spends 990 frames at 30 FPS, 10 frames at 20 FPS, and had some system interruption that caused 2 frames to be rendered at a terrible 10 FPS. You want to ignore those 2 frames; they are outliers, so rare they won't cause problems, and they may not even be there the next time you run. A straight FPS average would come out right around 30 FPS --- barely dented by the slow frames. But the 99th-percentile frame time would be 20 FPS -- you've captured those bad 1% of frames, and thus the micro-stuttering, quite well, but haven't let the extremely rare events affect your results.

      Edit: oops, stupid math error, fixed now.

      Edit 2: I realize my example might have been better with more frames. Think of it as 99,000 frames at 30 FPS, 1,000 frames at 20 FPS, and still only 2 frames at 10 FPS.
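      To make that arithmetic concrete, here is a minimal Python sketch of the example above. The frame counts are the hypothetical ones from this comment, not measured data, and the simple index-based percentile is a stand-in rather than TR's actual analysis scripts.

```python
# Illustrative sketch only (not TR's scripts): average FPS vs. the
# 99th-percentile frame time for the hypothetical distribution above.

# 990 frames at 30 FPS (~33.3 ms), 10 at 20 FPS (50 ms), 2 at 10 FPS (100 ms)
frame_times_ms = [1000 / 30] * 990 + [1000 / 20] * 10 + [1000 / 10] * 2

total_time_s = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_time_s     # ~29.7 FPS: barely dented by the slow frames

sorted_times = sorted(frame_times_ms)
idx = int(0.99 * len(sorted_times)) - 1          # simple index-based 99th percentile
p99_ms = sorted_times[idx]                       # 50 ms, i.e. 20 FPS: it catches the slow 1%

worst_ms = sorted_times[-1]                      # 100 ms: the rare outlier the metric ignores

print(f"average FPS:                {avg_fps:.1f}")
print(f"99th-percentile frame time: {p99_ms:.1f} ms (~{1000 / p99_ms:.0f} FPS)")
print(f"worst single frame:         {worst_ms:.0f} ms (~{1000 / worst_ms:.0f} FPS)")
```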

        • Bensam123
        • 7 years ago

        Outliers are what we want when we’re talking about microstuttering, though. Microstutters essentially are outliers. That’s what we want to see. What the 99th percentile does is skew the results, so it’s more of an average than even average FPS is. Time spent beyond X highlights this a lot better and does completely the opposite of what 99th-percentile frame time does. It’s not “99th-percentile time spent beyond X.”

        Unless I’m mistaken, tests are still run three times, which usually rules out outliers caused by other things happening (such as disk access). Ideally, if these were each run 10, 100, or 10,000 times, you could safely say those outliers are actually part of the normal distribution even if they seem anomalous; they’re part of standard operation for that device. Throwing out data is a no-no when it comes to statistics.

          • cobalt
          • 7 years ago

          But what the 100th percentile would give you is the minimum framerate. If you played for hours at a constant 60 FPS, and one single frame out of a million took 100 ms, we’d get 10 FPS as the result, even though I think you’d have a hard time arguing that the single 100-ms frame affected your experience — that’s not what microstuttering is.

          Edit: Here’s another way to think about it that might help: the 99th-percentile frame time is, literally, the median of the worst 2% of frames.

          Edit 2: Also, regarding the way he averages his five playthroughs: I’m not sure, but I think he keeps and analyzes only one whole playthrough. I assume he just keeps the one that gave the median average framerate of the five; I don’t know how else you’d get those per-frame plots. So I don’t think that process gets rid of outlier frames within a single playthrough; it just makes sure the playthrough he uses is representative of the five.

            • Bensam123
            • 7 years ago

            You don’t look at the minimum number; you look at a range inside a set of quartiles… That’s what time-spent-beyond-X does. This isn’t one number, it’s a distribution.

            No, it’s not the median of the worst 2% of frames. How can 99% of all frames, besides the 1% of the worst, be the median of the 2% of the worst, which isn’t even excluded?

            Median is not average; mean would be the average.

            Mode cannot represent a distribution, as mode does not take into account any of the values of the rest of the distribution, only its position relative to them. It’s entirely possible to average the results of a distribution at certain points, but since the points aren’t actually defined in TR’s testing, you are correct in saying they can’t be averaged. It’s still possible to look at one entire distribution and take a set of percentiles out of it (TR is already doing this).

      • Bauxite
      • 7 years ago

      I'm still waiting for the 7xxx GCN drivers to do this; we were supposed to get "nearly free" real-time encoding of the final 3D output. Whatever happened to that?! It would be the ideal streamcasting tool.

      Also, it's supposed to happen after [u<]all[/u<] GPU effects* are done, a nice bonus that current same-PC software capture (Fraps et al.) cannot do. (*Framebuffer stuff like blur, etc., even if those effects are overused and annoying in some games.)

        • Bensam123
        • 7 years ago

        I haven’t had issues with screen caps or direct DX capture not showing certain effects… DXtory and Xsplit seem to capture all of it, although this definitely sounds neat.

      • Jason181
      • 7 years ago

      PCIe 3.0 is not really even necessary for high-end graphics cards, much less the graphics cards that people buying these CPUs are likely to purchase.

      The line graphs show the spikes; it sounds like you’re just looking at the bar charts. The same goes for the frametime distribution; look at the line charts.

      Try running a few benchmarks; you'll see that the minimum framerate varies from run to run (sometimes a LOT), so to counteract this, the outliers are eliminated by discarding the worst 1%. Some of that's just due to a computer doing what it does; it schedules tasks and sometimes it doesn't schedule them well, so hitches caused by those sorts of things aren't really representative of the [i<]video card's[/i<] performance so much as system performance.

      The line graphs are probably the best that can be done without some sort of interactive chart (which I'm sure would be possible, but it might become a problem from both a server-load and a usability standpoint). They really address most of your concerns.

      The illusion of motion (or immersion) is very subjective; you'll never be able to please everyone. On those tests where they've lowered the frame-time threshold, they're doing so because otherwise the graph wouldn't tell you much. In games where framerates are generally high, using a 50 ms threshold might mean seeing a graph of all zeros; that's actually [i<]less[/i<] informative than lowering the threshold, so I'd trust that they're doing it for a good reason, and you can always take a closer look at the line graphs if you're really curious.
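      To make the threshold point concrete, here's a minimal sketch of a "time spent beyond X ms" calculation (my own reading of the metric, summing only the portion of each frame's time above the threshold; the trace is made up):

      def time_beyond(frame_times_ms, threshold_ms):
          # Sum only the slice of each frame's time that exceeds the threshold.
          return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

      # A fast, mostly smooth trace: 10,000 frames near 16.7 ms with a few hitches.
      trace = [16.7] * 9990 + [25.0] * 8 + [40.0] * 2

      print(time_beyond(trace, 50.0))  # 0.0 -- a 50 ms cutoff says nothing about this trace
      print(time_beyond(trace, 16.7))  # ~113 ms -- a tighter cutoff exposes the hitches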

        • Bensam123
        • 7 years ago

        It's not about what is necessary or 'good enough'; it's about what is subpar. Whatever happened to future expansion, too? People seem to completely disregard forward thinking when making claims like this.

        Bar charts aren't used to summarize results. I'm not suggesting looking at JUST one point, but rather a distribution of points. That would take into account quirks, and if it were added up, as in "time spent beyond X", it could be averaged. So more than one run could be done, and the results would gain reliability with the number of runs.

        You don't need to be able to please everyone, but the worst possible cases still exist, and 99th-percentile frame time doesn't take them into account. If you use standard deviation or variance as I suggested, it's used alongside averages. Standard deviation and variance could be averaged as well.
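        For what it's worth, here's a minimal sketch of what that suggestion might look like in practice (my own illustration with made-up traces, not anything TR publishes):

        from statistics import mean, stdev

        # Hypothetical frame-time traces (ms) from three runs of the same test.
        runs = [
            [16.7] * 995 + [33.3] * 5,
            [16.7] * 990 + [33.3] * 10,
            [16.7] * 998 + [33.3] * 2,
        ]

        # Report mean frame time with its standard deviation per run, then average both.
        per_run = [(mean(r), stdev(r)) for r in runs]
        avg_mean = mean(m for m, _ in per_run)
        avg_stdev = mean(s for _, s in per_run)
        print(f"mean frame time: {avg_mean:.2f} ms, std dev: {avg_stdev:.2f} ms")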

    • Sam125
    • 7 years ago

    Okay review, although I think there could definitely be a part two with a more intelligently set-up testbed. If a person wants to add a discrete card somewhere down the line, going with an A10 and a hybrid CrossFire setup is a much more intelligent upgrade path, as it uses both the IGP and the additional GPU for better overall gaming performance. That, along with testing DDR3-2400 in Trinity setups, would be interesting. Being buried in data is not. :p

    Also, although I've stated this in other TR articles about Trinity, the A10-5700 is really the only APU worth looking at for HTPC and silent-PC applications, and it would definitely be the APU to get in those instances!

    • MadManOriginal
    • 7 years ago

    *begs for IGP testing with older game titles*

      • willyolio
      • 7 years ago

      Why older ones? There are still plenty of current games that don't require top-end cards.

      torchlight 2, starcraft 2, TF2, even borderlands 2…

      huh. must be something about the “2” games.

        • Arclight
        • 7 years ago

        2 0 1 2
        The number 2, written 2 times. There you can even count from 0 to 2 (0 1 2). Must be a conspiracy.

        • superjawes
        • 7 years ago

        TF2 is a “current” game? I thought it was released in 2007 and is still using Source from 2004…

        Although since TF2 is still played, it would be a great candidate for IGP testing. Could be fun to build a tiny machine that can handle source games like that.

    • Arclight
    • 7 years ago

    If only the Pentium [s<]2120[/s<] Core i3-3225 could be overclocked...

    [quote<]Ooh! Ooh! Look at the curve for the A10 versus the FX-4170. (The A10 largely overlaps with the FX-8150.) The A10 delivers lower latencies from the 50th to the 80th percentiles or thereabouts. Could be a Piledriver IPC improvement spotted in the wild, perhaps. Hush, kids, and enjoy the view. Also, I'm still geeking out over the fine differences between the curves for various speed grades of Intel processors.[/quote<]

    Hehe

    Edit: BTW, when can we see a review of the 5700? Seems to me that chip has the best specs for what Trinity is meant to do, and its GPU is only slightly underclocked compared to the 5800K (I'm assuming it can be OCed to match the 5800K's iGPU). Coupled with a little undervolting love, the power consumption should be quite good (I presume).

      • JuniperLE
      • 7 years ago

      wrong, delete please

    • shank15217
    • 7 years ago

    According to X-bit labs, you need to use a 2T command rate for 2133MHz and 2400MHz speeds; you may wanna try that again. It seems that Trinity really benefits from faster memory.

    [url<]http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_8.html#sect0[/url<]

      • Damage
      • 7 years ago

      I did use 2T; wasn’t stable.

        • shank15217
        • 7 years ago

        I guess there are probably other factors involved then, maybe the motherboard and memory vendor. Good to keep in mind for any purchases.

      • Arclight
      • 7 years ago

      Yeah bro but aren’t RAM modules expensive at that frequency? Why would someone looking to save pennies on the APU spend like a baller on RAM?

        • shank15217
        • 7 years ago

        Actually, 2400MHz RAM is like $15 more for a dual 4GB kit.

          • Arclight
          • 7 years ago

          Not in my country they’re not.

          Edit: I just looked it up, and an 8 GB kit (2 x 4 GB) at 2400MHz costs about as much as a 5800K...

            • Spunjji
            • 7 years ago

            Are you in the UK? DDR3 2400 is indeed upsettingly expensive. However, 2133 8GB kits compare favourably at around £46 to the £39 or so of the 1866 kits. It’s not inconceivable to think that a little tweaking could turn that into a 2400 kit at the end of the day.

            Doesn’t help the target market much (super-budget), but it’s food for thought.

      • Arag0n
      • 7 years ago

      Well, at least they didn't test with 1333MHz memory like they did with Llano... Scott really fails to understand that there are plenty of people out there who want a PC that can play games without costing much money. Trinity is pretty good at 768p and 720p. Most people will likely compromise graphics quality in order to play.

      Another thing I find funny is that I can play Civilization V at much higher settings than they report, on an HD 5470M that is likely much slower than Trinity's HD 7660D, so I guess Scott is missing something here. I've found that performance is not always tied to texture quality and the like, and games can gain plenty of image quality without much performance penalty. Just turning everything to low doesn't really show the difference between Trinity and HD 4000. We seem to forget that geometry and texture filtering are separate issues; maybe you need to turn down model quality to fewer polygons, but you can keep texture quality without a hit.

        • Spunjji
        • 7 years ago

        I have to say I agree with this completely. The tests all seem to have been calibrated so that games run just-about-passably on the Core i3, which then provides the basis of comparison. That's lovely in some respects because it gives apples-to-apples, but the problem is that one of these apples is only really suitable for the compost heap when it comes to gaming, and gearing the tests to that could be hampering the AMD chip's potential.

        It would be really, really nice to have at least one game tested to demonstrate how much extra image quality you can squeeze from the Trinity chips in comparison to HD4000. I know for a fact that some games show hugely diminishing returns on dropping image quality on certain hardware configurations.

        The upshot is you might well find that turning up texture quality, shader quality and suchlike hardly hits the Trinity GPU at all in terms of frame-rate for some games. But that’s all hypothetical because we don’t have the tests. 🙁

        • Jason181
        • 7 years ago

        Don't you remember that they then tested with faster RAM and it had little impact on performance? At the time, 1333 was standard, and IIRC it was the highest supported speed.

        No one wants to play games as a slideshow, either. I think it's unreasonable to complain about using low settings on an IGP. If people were that concerned about image quality and polygon counts, they wouldn't be using an IGP in the first place.

        I think maybe what you're missing is that, as demonstrated, memory bandwidth is the constraining factor with current IGPs. Increasing texture size is a great way to tank performance, because the IGP must stream those textures from main memory, which is horribly, horribly slow compared to a discrete GPU's memory. All you'd be doing is testing the memory subsystem, and it would say less about the IGP's performance, not more.
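        As a rough illustration of that bandwidth gap, here are back-of-the-envelope peak numbers (the memory speeds are assumptions for the sake of the example):

        def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits, channels=1):
            # Theoretical peak: transfers per second * bytes per transfer * channels.
            return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

        # An IGP sharing dual-channel DDR3-1866 (64-bit channels) with the CPU:
        print(peak_bandwidth_gbs(1866, 64, channels=2))  # ~29.9 GB/s, and it's shared

        # A modest discrete card with 128-bit GDDR5 at an assumed 4500 MT/s:
        print(peak_bandwidth_gbs(4500, 128))             # ~72 GB/s, all for the GPU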

    • Unknown-Error
    • 7 years ago

    Thanx for the review! (and I am the 1st to say thanx :p)

    [quote<]What I'd really like to see AMD do next is release some desktop Trinity variants geared for lower power envelopes, like 35W and 45W.[/quote<] Spot on there, Scott. 100W just doesn't appeal to me. More interesting would be the 65W A10-5700 or even the 65W Athlon X4 740 at $70+, but I have not seen them in stock.

      • Bauxite
      • 7 years ago

      Tigerdirect has the 65W models for the same price as the 100W equivalents, which is fair, as AMD's MSRP is the same on both.

      They apparently didn't repeat the Llano availability mistake. (To this day, it's damn hard to find a 3800 or 3820 at a reasonable price, and what you do find is probably a system pull.)

        • sschaem
        • 7 years ago

        Do you guys even realize that the K means unlocked?

        You can set a K chip's TDP to 35W if you care to (or 140+W).

        This is not an i3, where you are locked to factory settings; the unlocked model lets you turn your CPU into any model variation.

        Please, TR, please do some of your readers a favor and do an HTPC review where you experiment with an unlocked CPU.

        For starters, the 5800K seems to undervolt by 20% at stock clocks. Piledriver is supposed to be a super-low-voltage design, so voltage could be dropped well below 1V. That's actually where AMD shows its energy efficiency; check the server market for numbers.

        ...God, I shouldn't be the one telling people these things. Where in the world is AMD's marketing?

          • Deanjo
          • 7 years ago

          [quote<]This is not an i3, where you are locked to factory settings; the unlocked model lets you turn your CPU into any model variation.[/quote<] The i3 you can easily undervolt.

            • sschaem
            • 7 years ago

            Any link to satisfy my curiosity? 🙂

            I think the potential is there, but limited. Plus, the i3 is already super power-efficient under load.

            For an HTPC, you want the lowest idle platform power; the i3 and A10 are in the same league, around 20W. You also want the lowest platform power when playing back video; I think there's maybe a 10W difference in favor of the i3. Nice, but not a deal-maker.

            But for occasional gaming, the A10 has the edge (a massive one) if you don't plan to use a discrete card, and if you're that power-conscious, you probably won't. And if you do, I think dual graphics might be a better solution.

            All in all, I don't see the appeal of the A10-5700 or the i3 for any type of HTPC configuration.

            I'm kind of speccing out my next HTPC, and so far no argument or benchmark makes any Intel solution attractive at any price. I might be able to wait for Haswell, but I kind of like AMD stating that they will still support FM2 in 2015. If I can get a cheap 14nm APU upgrade in 2016, I'll be 100% satisfied.

            • NeelyCam
            • 7 years ago

            [quote<]For an HTPC, you want the lowest idle platform power[/quote<] I disagree. For an HTPC, you want the most efficient one under load, so you can cool it quietly while actually doing something with it. You could argue that idling at low power is important because that's what the HTPC is doing most of the time, but the difference in the cost of electricity is rather unimportant once idle power hits that <30W range.

            • sschaem
            • 7 years ago

            I agree; when you get down to 20W of entire-platform idle power, you are 'there'.

            Also, an extra ~10W during video playback (if you go with an HTPC case that requires an active fan) is not that much.

            I don't have the numbers, but I'd be willing to bet that playing video on an i3 or an A10 would have the fan running at its lowest setting in both cases.

      • brucethemoose
      • 7 years ago

      Why not just grab a 3870K or 5800K and undervolt/underclock it?

      If you’re serious about power efficiency though, it’s hard to beat an Ivy i3 with a 7750 or IGP, depending on your needs.
