AMD’s A10-6800K and A10-6700 ‘Richland’ APUs reviewed

AMD has had a rough time keeping up with Intel in the CPU market for the past couple of years. The firm’s Bulldozer CPU architecture hasn’t worked out as planned, and AMD has been forced to limit the scope of its ambition. Rather than competing with Intel across the entire market, it’s had to choose its battles, carefully positioning its products and seeking any possible seams in Intel’s armor. That strategy has succeeded in places, producing some potentially solid values for end users in the process.

When we reviewed the desktop version of AMD’s Trinity APUs last fall, the A10-5800K and A8-5600K, we found that they were a near-miss on the value front. The price was right compared to the competition. The CPU performance was good in some applications—those that use multiple threads well, like image processing tools or video encoders. And the integrated Radeon graphics simply outclassed what Intel had to offer in its Core i3.

But AMD was asking folks to accept some compromises in other areas. Single-threaded performance was relatively pokey, which could lead to reduced responsiveness in everyday tasks and momentary hiccups while gaming. More worryingly, the A10-5800K and A8-5600K chips consumed quite a bit more power than the competing Core i3—nearly twice as much at peak, 100W versus 55W. More power draw means more noise and heat, higher energy bills, and the need for larger PC enclosures.

In the end, that mix of advantages and drawbacks didn’t make a lot of sense to us. We just couldn’t envision many home-built PCs or pre-built systems where accepting a 100W power envelope to get somewhat better integrated graphics was a winning combination.

Just half a year later, AMD has refined the formula for its A-series processors under the umbrella of a new code name: Richland. Has the firm made enough progress to earn an unqualified win over the Core i3? We’ve done entirely too much testing in order to find out.

Trinity becomes Richland

Products based on Richland technology are distinct from the prior-generation Trinity parts, but not in the way you’d expect. Richland and Trinity share the same 32-nm silicon, with four integer cores, two floating-point units, and integrated Radeon graphics. On the desktop, they fit into the same Socket FM2-style motherboards.

Callout of the Trinity/Richland die. Source: AMD.

The differences have to do with the power management capabilities programmed into the chip’s firmware. Richland adds three big things that AMD simply didn’t have time to include in the first generation of products based on this chip. All of them have to do with dynamic behavior, how the APU’s onboard power management microcontroller directs the CPU and graphics cores to scale their clock speeds in response to different workloads.

First, AMD has spent more time in its labs characterizing what these chips can do—what clock frequencies they can tolerate and how much voltage they need to get there. Thus, Richland-based products have more operating points between their base and peak Turbo Core clock speeds. This finer-grained control should translate into higher efficiency, better performance, or both.

Second, Trinity contained an embedded network of temperature sensors across the chip, but those sensors weren’t used in determining how far Turbo Core could push on clock speeds. Instead, Trinity estimated its own power use by looking at activity counters. Because conditions vary in the real world, this method of estimating power use, and thus temperature, must rely on some fairly conservative assumptions. Richland’s rebuilt power management algorithm, called Hybrid Boost, takes direct input from the chip’s temperature sensors. Armed with better intelligence, Hybrid Boost can be more aggressive about pursuing higher clock speeds, giving Richland chips more frequency headroom.

Finally, AMD has added some smarts to Richland’s firmware that attempts to determine when one of the two major components of the chip, either the CPU or the integrated graphics processor (IGP), is the primary bottleneck in the current workload. For instance, if the IGP is the main holdup, Richland can rein in the CPU cores and shift more of its power budget to graphics in order to achieve better overall performance.

As you might imagine, all of this power-saving wizardry should have the most tangible benefits in laptops. Still, Richland delivers some modest improvements on the desktop, as well. CPU base and Turbo frequencies have risen by 200-300MHz, and IGP speeds are between 44 and 84MHz higher, both within the same power envelope. The new A-series lineup looks like so:

Model      Modules/integer cores  Base clock  Max Turbo  Total L2 cache  IGP ALUs  IGP clock  TDP    Price
A10-6800K  2/4                    4.1 GHz     4.4 GHz    4 MB            384       844 MHz    100 W  $142
A10-6700   2/4                    3.7 GHz     4.3 GHz    4 MB            384       844 MHz    65 W   $142
A8-6600K   2/4                    3.9 GHz     4.2 GHz    4 MB            256       844 MHz    100 W  $112
A8-6500    2/4                    3.5 GHz     4.1 GHz    4 MB            256       800 MHz    65 W   $112
A6-6400K   1/2                    3.9 GHz     4.1 GHz    1 MB            192       800 MHz    65 W   $69

The A10-6800K is the new flagship, and it sets the template. Compared to the A10-5800K introduced last fall, the 6800K has a 300MHz faster base clock and a 200MHz higher Turbo peak. The IGP clock adds another 44MHz, too, while the max power (TDP) remains steady at 100W.

The model numbers that end in K, like 6800K, indicate unlocked parts whose multipliers can be raised at will for easy overclocking. Unlike Intel, AMD doesn’t delete features on its unlocked CPUs, so the 6800K still has all of its virtualization capabilities and advanced instruction support intact. Also, alone among the new Richland parts, the 6800K officially adds support for DDR3-2133 memory.

That said, perhaps the most eye-opening improvement over Trinity comes in the form of the A10-6700. Compared to the A10-5800K, the 6700 has similar CPU clocks (100MHz lower base, 100MHz higher Turbo max) and a 44MHz faster IGP speed—and it manages that in a 65W power envelope, 35W less than the 5800K. That’s true progress, folks. Better yet, AMD has supplied us with an A10-6700 for testing, so we can see exactly how it matches up against the Core i3.

Special FX

We’ve also taken this opportunity to test some chips we’ve so far neglected: the lower-end models in AMD’s FX lineup. Like Richland, their CPU cores are based on the “Piledriver” microarchitecture. Beyond that, the two chips diverge. All of the FX models are based on a chip code-named Vishera, which natively has eight integer cores and, unlike the APUs, an 8MB L3 cache. Vishera doesn’t have integrated graphics or PCI Express, so it must rely on external chips (a discrete GPU and the 990FX chipset) to provide those capabilities. Naturally, then, the FX processors make use of a different CPU socket, dubbed Socket AM3+.

The fastest Vishera-based offering is the FX-8350, which we reviewed last year. AMD has since added a couple of cheaper options based on Vishera chips with portions disabled, the FX-6350 and FX-4350.

Model    Modules/integer cores  Base clock  Max Turbo  L3 cache  TDP    Price
FX-8350  4/8                    4.0 GHz     4.2 GHz    8 MB      125 W  $195
FX-6350  3/6                    3.9 GHz     4.2 GHz    8 MB      125 W  $132
FX-4350  2/4                    4.2 GHz     4.3 GHz    8 MB      125 W  $122

These CPUs have similar clock speeds with varying core counts. From a computer-nerd standpoint, it’ll be interesting to see how these differences affect performance. Also from a computer-nerd standpoint, it’s a little disappointing to see that even the quad-core FX-4350 requires 125W to do its thing.

Happily, like the K-series Richland chips, all of these FX processors have unlocked multipliers without having any features disabled. They’re also fairly cheap. The six-core FX-6350 costs less than the quad-core A10-6800K.

You know, it wasn’t supposed to be this way. AMD’s CPU lineup is stacked closely together at fairly modest prices due to competitive pressure. Almost assuredly, the plan was for Socket FM2-based APUs to compete against Intel’s Ivy Bridge and Haswell quad-core CPUs with integrated graphics. The FX series, which is derived from server-class Opteron tech, would then go up against Intel’s high-end platform based on the X79 chipset, which is based on Xeon server tech. Instead, the eight-core FX-8350 sells for under $200, and everything else must cost less than that.

For now, the closest competition for the A10-6800K and A10-6700 is Intel’s Core i3-3225. That’s a 22-nm chip with dual cores, four hardware threads, a 3MB L3 cache, and Intel HD 4000 integrated graphics. The i3-3225 requires a bit less power than even the A10-6700, with a 55W TDP, and has a slightly lower list price of $134. That price also sets the Core i3 against the FX-6350, albeit in a much smaller power envelope. The Core i3-3225 is based on Intel’s older Ivy Bridge architecture; Haswell hasn’t quite made it into this price range yet, although it’s sure to get there eventually. When it does, Haswell should bring better graphics performance with it. For now, though, AMD has a bit of an opening. Let’s see how well Richland takes advantage of it.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.
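For the curious, the median-of-runs approach can be sketched in a few lines. The scores below are hypothetical, just to show why we prefer the median over a mean:

```python
import statistics

# Three hypothetical runs of the same benchmark, in seconds
runs = [41.2, 40.8, 44.9]

# The median shrugs off an outlier like the slow third run,
# where a mean would be dragged upward by it.
score = statistics.median(runs)
print(score)  # 41.2
```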

The test systems were configured like so:

Socket AM3+ system (AMD FX-4350, FX-6350, FX-8350):

  Motherboard: Asus Crosshair V Formula
  North bridge: 990FX
  South bridge: SB950
  Memory: 16 GB (2 DIMMs) AMD Performance Series DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: AMD chipset 13.4
  Audio: Integrated SB950/ALC889 with Realtek 6.0.1.6873 drivers
  OpenCL ICD: AMD APP 1124.2

Socket FM2 system (AMD A10-5800K, A10-6700, A10-6800K):

  Motherboard: MSI FM2-A85XA-G65
  Chipset: A85 FCH
  Memory: 16 GB (2 DIMMs) AMD Performance Edition DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: AMD chipset 13.4
  Audio: Integrated A85/ALC892 with Realtek 6.0.1.6873 drivers
  OpenCL ICD: AMD APP 1124.2
  IGP drivers: Catalyst 13.5 beta 2 (Trinity), Catalyst 13.101 RC1 (Richland)

Z77-based system (Core i3-3225, Core i5-3470, Core i7-2600K, Core i7-3770K):

  Motherboard: Asus P8Z77-V Pro
  Chipset: Z77 Express
  Memory: 16 GB (2 DIMMs) Corsair Vengeance Pro DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: INF update 9.4.0.1017, iRST 12.5.0.1066
  Audio: Integrated Z77/ALC892 with Realtek 6.0.1.6873 drivers
  OpenCL ICD: Intel SDK for OpenCL 2013
  IGP drivers: Intel 9.18.10.3177

Z87-based system (Core i7-4770K):

  Motherboard: Asus Z87-Pro
  Chipset: Z87 Express
  Memory: 16 GB (2 DIMMs) Corsair Vengeance Pro DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: INF update 9.4.0.1017, iRST 12.5.0.1066
  Audio: Integrated Z87/ALC1150 with Realtek 6.0.1.6873 drivers
  OpenCL ICD: Intel SDK for OpenCL 2013
  IGP drivers: Intel 9.18.10.3177

X79-based system (Core i7-3970X):

  Motherboard: Asus P9X79 Deluxe
  Chipset: X79 Express
  Memory: 16 GB (4 DIMMs) Corsair Vengeance DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: INF update 9.4.0.1017, iRST 12.5.0.1066
  Audio: Integrated X79/ALC898 with Realtek 6.0.1.6873 drivers
  OpenCL ICD: Intel SDK for OpenCL 2013

They all shared the following common elements:

  Hard drive: Kingston HyperX SH103S3 240GB SSD
  Discrete graphics: XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 13.5 beta 2 drivers
  OS: Windows 8 Pro
  Power supply: Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Power consumption and efficiency

The workload for this test is encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later. We’re tracking power consumption over time in the first set of plots below. The main plot comes from our standard test systems with discrete Radeon graphics. The second one shows system-level power consumption with only the integrated graphics processors in use.


The story of progress from Trinity to Richland is told by comparing the green line on the plot for the A10-5800K with the blue-green line for the A10-6700. The A10-6700 consumes less power during the workload and finishes the job sooner, dropping back to idle before the 5800K does.

The Socket FM2 platform has nice, low power use at idle. The fact that a desktop system based on a full-sized ATX motherboard draws only 23-25W at the wall socket is remarkable.

Despite having a TDP rating of 65W, the A10-6700 system draws 48W more power while executing our test workload than our 55W Core i3-3225 system. In fact, it pulls more juice than a whole collection of configs based on Intel desktop CPUs with TDP ratings of 77W and 84W. At least the 6700 is a bit of an advance over AMD’s own A10-5800K.

We can quantify efficiency by looking at the amount of power used, in kilojoules, during the entirety of our test period, when the chips are busy and at idle.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
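To make the arithmetic concrete, here's a minimal sketch of how task energy can be derived from power readings logged at a fixed interval. The figures below are hypothetical, not our measured data:

```python
# Estimate task energy by integrating power over the busy period.
# One hypothetical reading per second from a wall-socket power meter.
sample_interval_s = 1.0
power_samples_w = [95, 110, 112, 108, 111, 96]  # watts while encoding

# Rectangular integration: energy (joules) = sum of power x time
task_energy_j = sum(p * sample_interval_s for p in power_samples_w)
task_energy_kj = task_energy_j / 1000.0

print(task_energy_kj)  # 0.632 kJ for this six-second snippet
```

A chip that finishes the encode sooner accumulates fewer busy-period samples, so it can post a lower task energy even if its peak power draw is higher.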

If you compare the A10-6800K to the 5800K that precedes it, there’s only a sliver of difference between Trinity and Richland. Pit the A10-6700 against the 5800K, though, and the progress is a bit more pronounced. We’re happy to see the growth, but AMD will need to make much larger strides in order to catch up to Intel.

IGP performance: Tomb Raider

Next, we’ll move to gaming performance with integrated graphics. As usual, we’re testing games by measuring each frame of animation produced. The uninitiated can start here for an intro to our methods.
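As a rough sketch of how such metrics work, here's how an FPS average, a 99th-percentile frame time, and time spent beyond a 50-ms threshold can be computed from a list of per-frame render times. The numbers are hypothetical, not data from our test runs:

```python
# Frame-time metrics of the sort used in this review, computed from a
# hypothetical list of per-frame render times in milliseconds.
frame_times_ms = [16.7, 16.9, 17.1, 16.5, 55.0, 16.8, 16.6, 17.0, 16.4, 62.0]

# Traditional FPS average: frames rendered divided by elapsed time
total_time_s = sum(frame_times_ms) / 1000.0
fps_avg = len(frame_times_ms) / total_time_s

# 99th-percentile frame time: 99% of frames completed in this time or less
sorted_times = sorted(frame_times_ms)
rank = int(round(0.99 * len(sorted_times))) - 1
pct99_ms = sorted_times[min(rank, len(sorted_times) - 1)]

# Time spent working on frames beyond the 50-ms "badness" threshold
beyond_50_ms = sum(t - 50.0 for t in frame_times_ms if t > 50.0)

print(fps_avg, pct99_ms, beyond_50_ms)
```

The two spiky frames drag down all three numbers, but the 99th-percentile and time-beyond-threshold figures expose them most directly, which is why those metrics track perceived smoothness better than an FPS average does.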

We’ve included a host of configurations in our IGP tests, building on what we did in our Haswell review. One of the entries, the Core i7-4950HQ, is a bit of a ringer; it uses an external 128MB eDRAM chip as a graphics cache, and it’s not a socketed processor. You can read more about it here. This time around, we have results for the 4950HQ running in a higher TDP mode of 55W, to see how it performs, because it’s an intriguing product. Intel tells us some PC makers will ship systems with this config. This CPU isn’t a direct competitor for the desktop versions of the A10. In fact, it’s priced to compete against a different version of the A10 that would be paired with a low-end discrete GPU.

The A10’s most direct competition is the Core i3-3225, of course. You’ll also want to keep an eye on the Core i7-4770K. The 4770K is a higher-end CPU, but it shares its Intel HD 4600 graphics with cheaper Haswell variants, down to the Core i5-4430 at $189. Eventually, the A10 will almost certainly have to compete directly with a lower-end CPU that includes HD 4600 graphics.

Oh, and we’ve tested several of the integrated graphics solutions with higher-clocked DDR3-2133 memory. The bandwidth available in a CPU socket is a major constraint for graphics performance, as the results will illustrate.



Although there are a few latency spikes in the frame time plots for all of the Radeon-based solutions, few of them amount to much in terms of absolute frame rendering times. (In fact, I’m pretty sure AMD is just batching up a few frames worth of work at once and dispatching it. That doesn’t appear to create any animation hiccups, so it must work with how this game engine advances the timing for its internal simulation of the game world.) The FPS average and our latency-focused metric, the 99th percentile frame time, tend to agree about the overall performance picture.

Compare the A10-6800K to the 5800K, and you’ll see that there’s very little daylight between them. In fact, the move to DDR3-2133 memory makes more of a difference than the 44MHz gap between the two CPUs’ integrated graphics cores does. The A10-6700 doesn’t allow 2133MHz memory clocks, or it might be more impressive here. What it does do is essentially match the 5800K in our standard DDR3-1600 config.

Also, all of the AMD APUs outperform the Core i3-3225 by a very wide margin. The Haswell-based 4770K with HD 4600 graphics offers a much closer contest, though.


Practically speaking, the difference in animation smoothness between the A10 APUs and the Core i3 is bound to be huge. The i3-3225 is kind of a basket case when asked to run this game at these quality levels; over 13 seconds of our 60-second test are spent beyond our 50-ms threshold for frame rendering time. In other words, if you try to play Tomb Raider at these settings on the Core i3-3225’s IGP, you’re gonna have a bad time.

IGP performance: GRID 2

Forgive me for not having a video of this test session recorded. We just tested in the opening race of the game for 60 seconds.




GRID 2 shows us a little more progress from Trinity to Richland, but otherwise, the results are pretty similar to what we saw from Tomb Raider. AMD’s integrated graphics simply have the Core i3’s HD 4000 IGP outclassed. Haswell’s new HD 4600 edges closer to the 6700 and 6800K, but intermittent spikes in rendering times to nearly 80 ms mean that it doesn’t run this game as smoothly as Richland’s built-in Radeon.

IGP performance: Metro: Last Light

Latency-focused game testing is a lot of work, so I’ve thrown in this quick, automated test of Metro in order to give us another example of performance in a recent game. As you can see, there are few surprises given what we’ve already witnessed.

Crysis 3

Now we’ll turn our attention to gaming performance with a discrete graphics card.



Interesting. Although the FPS averages would appear to suggest strong performance from each of the CPUs, with only a small gap between them, the render time plots paint a more complex picture. Frame times spike pretty substantially at certain points in the test run, and when they do, the slower CPUs with fewer cores tend to suffer. The 99th percentile frame time reflects this reality, showing a sizeable gap between the six-core FX-6350 and the quad-core FX-4350. With only two cores and four threads via Hyper-Threading, the Core i3-3225 is on the wrong side of this divide, as are all of the A10 APUs.

The FX-6350’s frame latencies are substantially lower than those of the A10, Core i3, and FX-4350 throughout the last 15-20% of frames rendered. I don’t think we’ve ever seen such clear evidence of a game making good use of more than four cores in a way that matters.

Far Cry 3



Look through all of the results above, and you’ll see a measurable advantage for the Core i3-3225 over the A10 APUs. However, the Core i3’s advantage is tiny. We’re talking two milliseconds in the 99th percentile frame time—and very little time spent working on frames that take longer than 50 milliseconds to render. Truth is, all of these CPUs run this game well.

Tomb Raider



Here’s another case of clear separation between the six-core FX-6350 and the quad-core FX-4350, Trinity, and Richland chips. The Core i3-3225 again has a slight advantage over the A10 APUs, too. All of these CPUs run this game quite competently, but the FX-6350 and the higher-end Intel CPUs sling out frames every 16.7 ms or less (that is, over 60 FPS) virtually no matter what.

Metro: Last Light

One last gaming test shows us a familiar pattern, with the i3-3225 just ahead of the A10 APUs.

Productivity

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the QT SDK using the GCC compiler. Here’s Bruno’s note about how he built it:

QT SDK 2010.05 – Windows, compiled via the included MinGW port of GCC 4.4.0.

Even though apparently at the time the Linux version had properly working and supported multithreaded compilation, the Windows version had to be somewhat hacked to achieve the same functionality, due to some batch file snafus.

After a working multithreaded compile was obtained (with the number of simultaneous jobs configurable), it was time to get the compile time down from 45m+ to a manageable level. This required severe hacking of the makefiles in order to strip the build down to a more streamlined version that preferably would still compile before hell froze over.

Then some more fiddling was required in order for the test to be flexible about the paths where it was located. Which led to yet more Makefile mangling (the poor thing).

The number of jobs dispatched by the Qtbench script is configurable, and the compiler does some multithreading of its own, so we did some calibration testing to determine the optimal number of jobs for each CPU.

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so the encoding of the AES algorithm, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.

7-Zip file compression and decompression

SunSpider JavaScript performance

Video encoding

x264 HD video encoding

We’ve devised a new x264 test, which involves one of the latest builds of the encoder with AVX2 and FMA support. To test, we encoded a one-minute, 1080p .m2ts video using the following options:

--profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr

The source video was obtained from a repository of stock videos on this website. We used the Samsung Earth from Above clip.

Handbrake HD video encoding

Our Handbrake test transcodes a two-and-a-half-minute 1080p H.264 source video into a smaller format defined by the program’s “iPhone & iPod Touch” preset.

Image processing

The Panorama Factory photo stitching

The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE and AVX extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis: particle image velocimetry, real-time object tracking, a bar-code search, and label recognition and rotation. For the sake of brevity, we’ve included a single overall score for those real-world tests.

The AMD APUs perform well in our productivity suite, in part because these applications are nicely multithreaded, so the Trinity and Richland chips’ four integer cores are put to good use. The A10-6700 performs very much like the 5800K, and the 6800K is a bit quicker.

A couple of examples deserve some extra attention.

First, notice the Core i3-3225’s relatively abysmal performance in the TrueCrypt AES encryption test. That happens because, in this particular product, Intel has disabled the tailored instructions built into Ivy Bridge to accelerate AES encryption. AMD hasn’t played those games with its APUs, so they’re six to seven times faster at encrypting data with AES.

Second, the AMD chips make a relatively weak showing in SunSpider. This test is lightly multithreaded and involves a staccato series of actions that takes place inside of a web browser. Intel’s CPU cores achieve much higher throughput in lightly multithreaded workloads like this one, and I think that’s worthy of note. For many day-to-day tasks, the Intel processors will be a little quicker and more responsive.

3D rendering

LuxMark

Because LuxMark uses OpenCL, we can use it to test both GPU and CPU performance—and even to compare performance across different processor types. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP. We’ve used the Intel ICD on the Intel processors and the AMD ICD on the AMD chips, since that was the fastest config in each case.

We’ll start with CPU-only results.

Now we’ll see how a Radeon HD 7950 performs when driven by each of these CPUs.

We can try combining the CPU and GPU computing power by asking both processor types to work on the same problem at once.

Now, let’s pull the discrete GPU out of the test systems and see how their IGPs perform in OpenCL.

Finally, we can use both the CPU cores and the IGPs working in concert.

You can see the promise of converged applications here. Look at the increase from the CPU-only test to the final CPU-plus-IGP result. Chips like Richland have the potential to employ their integrated graphics cores as a sort of super-FPU to tackle parallel computing problems like this one. AMD has talked a lot about that fact, but apparently two can play at this game. Intel’s Core i3 outperforms the AMD APUs in LuxMark.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

POV-Ray rendering

Scientific computing

MyriMatch proteomics

MyriMatch is intended for use in proteomics, or the large-scale study of protein. You can read more about it here.

STARS Euler3d computational fluid dynamics

Euler3D tackles the difficult problem of simulating fluid dynamics. Like MyriMatch, it tends to be very memory-bandwidth intensive. You can read more about it right here.

The Core i3 and the A10 APUs are pretty well matched through the rest of these benchmarks, which concentrate on the sorts of problems one would ideally want to tackle with a faster processor. Once again, take note of the AMD chips’ relatively weak performance in the single-threaded tests, including both Cinebench rendering and Euler3D fluid dynamics. The Core i3-3225 outperforms even the fastest AMD CPU on the market, the FX-8350, in those cases.

Overclocking

Since the 6800K has an unlocked multiplier, setting the multiplier to 48, upping the voltage a bit, and booting the system at 4.8GHz was incredibly simple. What came next was more complicated.
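The arithmetic behind that setting is simple; Socket FM2 chips derive their core clock from a 100MHz reference clock, so (as a quick sketch):

```python
# Core clock = reference clock x multiplier.
# Socket FM2 APUs run a 100MHz reference clock, so an unlocked
# multiplier of 48 yields a 4.8GHz core clock.
reference_clock_mhz = 100
multiplier = 48
core_clock_ghz = reference_clock_mhz * multiplier / 1000
print(core_clock_ghz)  # 4.8
```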

To test stability, I ran Prime95 and monitored temperatures with MSI ControlCenter software. The big tower cooler I installed on the 6800K easily kept temperatures in check; they never got higher than 59° C. However, the MSI software showed that, intermittently, CPU speeds were dropping down to 3.2GHz or so. Hmm.

I thought maybe the MSI monitoring software wasn’t reporting the CPU temperature correctly, but a quick series of finger tests on the CPU cooler and the motherboard VRM heatsinks told me that overheating was unlikely. After puzzling over it for a while, I noticed that the MSI motherboard was giving the 6800K quite a bit more voltage than requested. I had the BIOS set to 1.5V, and ControlCenter reported voltages as high as 1.608V. That’s a lot of juice, and I suspected that the motherboard was doing some throttling to keep the CPU from exceeding a power limit.

Sure enough, reducing the CPU voltage seemed to reduce the amount of throttling. I also set the BIOS option labeled “Core OCP Expander” to “Enhanced,” which seemed to help. Eventually, I found a stable config at 4.8GHz and 1.475V that produced only minor, occasional throttling in ControlCenter with Prime95 running. (Going below 1.475V led to lock-ups and blue screens.) The MSI board still ran the CPU voltage a little hot, reading as much as 1.576V under load, but CPU temperatures remained below 60° C. I figured I could run a few tests and see what my overclocking exploits had gained me.

Frustratingly, when I ran several benchmarks like Cinebench and x264 encoding, my overclocked 6800K appeared to be throttling more than expected. It was sometimes slower than the stock speed. 7-Zip fared a little better, but the performance gains weren’t terribly enthralling.

I suspect the A10-6800K could be overclocked further with a beefy water cooler, and either the right BIOS tweak or swapping in a different motherboard might help eliminate the throttling. Still, for the average guy looking to extract a little extra performance out of a CPU without too much extra expense, the 6800K doesn’t appear to have lots of easily accessible clock frequency headroom. Our sample doesn’t, at least.

Conclusions

Richland is a small step forward for AMD, as our ridiculous wealth of benchmark results has indicated. We can summarize things with a few of our famous value scatter plots, which mash up price with performance in several categories. As always, the better values will be closer to the top left corner of each plot.


The value scatter plots illustrate one of the strange things about Richland. The A10-6800K’s performance is a bit better than the older 5800K’s, and the new A10-6700 matches the 5800K within a smaller power envelope. Yet whichever one of these products you choose, AMD is asking an extra $20 for it. The price increase means these chips now cost more than the Core i3-3225, and it means Richland’s value proposition hasn’t really improved much from Trinity.

What AMD is offering you for $142 is better performance than the Core i3 in the sort of multithreaded applications that make up the bulk of our productivity suite, along with much nicer integrated graphics. The downside of this deal is that the A10 doesn’t perform as well in games when paired with a separate graphics card, in part because Intel’s individual CPU cores tend to be much more potent. Also, either version of the A10 burns quite a bit more power in real-world use than the Core i3-3225, regardless of how close the numbers in TDP specs might be. That will result in more heat and more noise than the competing Intel solution.

That said, I do think the A10-6700 seems like a more rational offering than the other Richland and Trinity parts we’ve encountered. I can see how a system builder—either a hobbyist or a big PC maker—putting together a basic system for a certain sort of user might like its mix of reasonably solid CPU performance, best-in-class integrated graphics, and a 65W power envelope. For the right system in a compact enclosure at a modest price, the A10-6700 could make more sense than the Core i3-3225. That’s more than I was able to say for the Trinity-based A10-5800K, which just didn’t seem to have a natural spot in the market.

Going forward, Intel apparently has plans to introduce Haswell-based Core i3s with HD 4600 graphics some time in the third quarter of this year. That will create a much tougher challenge for AMD’s APUs. Fortunately, before the year is out, AMD is slated to release an all-new replacement for Richland code-named “Kaveri.” That chip will feature the updated “Steamroller” CPU architecture and AMD’s excellent GCN graphics architecture. Who knows how the competitive picture will shape up once Haswell meets Kaveri? I suspect we’ll have an appropriately outsized collection of benchmarks to size things up when the time comes.

Gluttons for punishment can follow me on Twitter.

Comments closed
    • plonk420
    • 6 years ago

    i’d love to replace my power-sipping but anemic E-350 in my HTPC with one of these if i won…

    • Eulji.Mundeok
    • 6 years ago

    The time is coming (soon) when all integrated graphics will be roughly equal in performance (the discrepancy between Radeon and Intel is already much smaller than before, and Radeon’s progress is also slowing). That is because all of them are limited by memory bandwidth. DDR4 will give some room for progress, but memory bandwidth will again be insufficient within a year or two after DDR4. The solution will be integrating GDDR5 memory on motherboards, at which point the integrated GPU becomes a “dedintegrated GPU.”

    • chuckula
    • 6 years ago

    To the TR Gerbil Gods:
    I have a huge request. Given the leaked information about Kaveri, it looks like its IGP has on-paper specs that are similar to the 7750 (512 GCN cores, Kaveri may have slightly lower core clocks than the 7750). Could you do a roundup (or just a recompilation) of benchmarks for the 6800K, Iris Pro, and the 7750 to give us a rough estimation of what we can expect from Kaveri?

    I’m well aware that it won’t replace a real benchmark when the chips come out, but it will do a good job of giving ballpark performance estimates that will shut up fanboys from both sides of the aisle.

    • bwcbiz
    • 6 years ago

    Here’s a question I wish your review had addressed: why is the 6800K consuming so much more power than the 6700 with so little performance benefit? It looks like the chip isn’t making efficient use of its cores. Perhaps it’s turboing the clock rate on cores that are actually unused?

    • TAViX
    • 6 years ago

    Man, those A10 processors are really pure garbage junk to be honest.

      • HisDivineOrder
      • 6 years ago

      AMD’s bad choices with Bulldozer have doomed an entire series of refreshes to high power, low performance per watt (the new metric that matters) and pricing that leaves AMD worse off than when they were just offering Athlons or Phenoms instead. I can remember thinking AMD really needed to get Bulldozer out the door to get some performance and imagine my surprise when it came out worse or the same.

      Here we are years later and still trying to drag this design out from the muck. Perhaps if AMD hadn’t been delaying releases all along the way, we might have gotten to a point where all of this might have been worthwhile, but … nope, delays, delays, more delays.

      So this is what we have from AMD. Now Intel’s a cat batting a half-dead AMD bird around with its paws, grinning happily while the ARM mouse creeps by looking for some cheese. Just then, the cat’s ears perk up and it looks around absently.

      Something’s amiss.

        • Spunjji
        • 6 years ago

        I’m not sure I entirely agree. Bulldozer was a resounding disappointment, but Piledriver has given Trinity a notable boost over Llano in the majority of circumstances with equally excellent idle characteristics on the exact same manufacturing process. I’m not sure how much better they could have done there.

        The conclusion, though… yes, you’re right there. It’s sad to see.

      • anotherengineer
      • 6 years ago

      Hmmm, if you think it’s “pure garbage junk,” I wonder what you would consider an LGA775 P4 Prescott with integrated graphics to be?

        • Jason181
        • 6 years ago

        A decent overclocker in its day, but nobody in their right mind bought a board with Intel integrated graphics on the motherboard if they planned on anything except office work. The difference is you could get a motherboard with integrated graphics by a different manufacturer in concert with the Prescott; something that’s unlikely with A10s.

    • Wirko
    • 6 years ago

    “System-level power consumption with only the integrated graphics processors in use.”

    Scott, was the HD 7950 plugged in when you measured IGP-only consumption, or was it taken out?

    • Chrispy_
    • 6 years ago

    Since the main use for the IGP in these things is gaming, I’m not sure I see the point of *desktop* APUs, given they’re still bandwidth-starved and based on older VLIW4 instead of GCN.

    $142 for an A10-6700
    $20 for the typical extra cost of 2133MHz DDR3 (assuming a typical 8GB) compared to 1600MHz
    $10 for the typical extra cost of entry-level FM2 boards compared to entry-level S1155 boards

    Total “cost” of an A10 is about $170, right?

    $80 (http://www.newegg.com/Product/Product.aspx?Item=N82E16814131461) for an HD 7750. $65 (http://www.newegg.com/Product/Product.aspx?Item=N82E16819116886) for a G2020. Total cost of this is $145, so that extra $25 could be put into:

    - changing to a more expensive low-profile HD 7750 for a slim HTPC
    - upgrading to an HD 7770 GHz Edition for *much* better gaming performance
    - changing to a DDR3 HD 7750 with lower power use; the cost is the same, so spend the $25 on beer :)

    *The HD 7750 will utterly destroy Richland or Trinity’s IGP.* The contest is so unfair that if, say, you and the HD 7750 were having a few drinks at the bar (spending the $25 you’d just saved, I guess) and Richland’s IGP wandered over looking for a fight, you’d have an unconscious IGP out cold on the floor, and the 7750 wouldn’t even have spilled its drink.

    *The G2020 is a pretty weak processor*... but it has higher IPC than the two modules of a 6800K, even given the clock-speed difference; it’s basically a 2.9GHz i3 without the HT. It’s also only a 55W part, and probably draws significantly less because of the lower clocks and because so much of the Ivy Bridge core is inactive (HT, IGP, QuickSync).

    *For gaming* there are very few instances where not having four threads really makes an enormous difference. This one instance (https://techreport.com/review/23750/amd-fx-8350-processor-reviewed/7) is the only one where I’ve seen a really big difference between four threads and two in games, and even then you have to realise that the lowly G2120 is still averaging 81 fps. That makes the difference between a G2020 and Richland’s two Piledriver modules rather a moot point for gaming, since 81 fps is higher than Richland’s IGP could ever dream of. There are certainly a handful of games where having a true quad-core makes a difference, but the vast majority of games run well on even low-end processors, as long as the GPU is decent.
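    Chrispy_’s build-cost arithmetic can be tallied directly. A minimal sketch, using only the dollar figures quoted in the comment (mid-2013 Newegg prices; the RAM and motherboard premiums are the commenter’s own estimates and are disputed downthread):

```python
# Tally of the two hypothetical mid-2013 builds from Chrispy_'s comment.
# All prices are the ones quoted there, not current ones; the RAM and
# motherboard premiums are the commenter's own estimates.
apu_build = {
    "A10-6700": 142,
    "DDR3-2133 premium over 1600MHz (8GB)": 20,
    "entry-level FM2 board premium over S1155": 10,
}
discrete_build = {
    "Pentium G2020": 65,
    "Radeon HD 7750": 80,
}

apu_total = sum(apu_build.values())            # 172 ("about $170")
discrete_total = sum(discrete_build.values())  # 145
leftover = apu_total - discrete_total          # the "extra $25", roughly

print(f"APU build: ${apu_total}, discrete build: ${discrete_total}, "
      f"difference: ${leftover}")
```

    The leftover amount is what the bullet list proposes spending on a better card (or beer).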

      • willmore
      • 6 years ago

      FM2 boards are cheaper than Socket 1155 boards. Also, the G2020 lacks AVX/AVX2.

      Games are trending toward being more threaded and, with the new consoles being architected as they are, the G2020 might be a bit of a dead end.

        • xeridea
        • 6 years ago

        Don’t forget AES, virtualization, and all the other stuff Intel conveniently ripped out of the chips.

          • Prototyped
          • 6 years ago

          . . . none of which the AMD processors have either. (Yeah, the “virtualization” bit Intel ripped out of the Pentium is VT for Directed I/O, i.e. an IOMMU — it has VT-x so you can run Windows XP Mode or VirtualBox or VMware or Hyper-V or Xen or KVM or Parallels or what-have-you accelerated just like the high-end stuff, just no direct PCI hardware assignment to a VM, which nobody uses anyway.)

          AMD’s IOMMU is called AMD-Vi, most directly comparable to Intel’s VT-d.

          http://forums.anandtech.com/showthread.php?p=34621895#post34621895

            • willmore
            • 6 years ago

            AES-NI is in all Bulldozer, Piledriver, and Jaguar chips. For Intel, AES-NI is for i5 and above (except for one crazy SB-era i3 chip): https://en.wikipedia.org/wiki/AES_instruction_set

            All A-Series, FX-series, and Opteron chips (at least recent ones) have IOMMUs. On the Intel side, it’s i5 and above, again except for one crazy SB-era i3 chip: https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware

            So, the only AMD chips that lack AES-NI and an IOMMU are the Bobcat-based chips.

            • Prototyped
            • 6 years ago

            Ah, I wasn’t aware AMD had already licensed AES-NI for Bulldozer and newer.

            The question then arises whether AES-NI acceleration of, say, TrueCrypt or BitLocker actually allows Richland to beat out the AES-NI-less G2020. 🙂 (It very well might; the G2020 might well be beaten by a similarly priced AMD processor on single-threaded workloads.)

            • willmore
            • 6 years ago

            Well, lucky for us, TechReport has reviewed a Kabini laptop. Take a look: https://techreport.com/review/24856/amd-a4-5000-kabini-apu-reviewed/5

            So, we see 927 MB/s for an A4-5000, which is a 1.5GHz quad-core. So there are parts available that are 33% faster.

            There’s the G2120, which, IIRC, is just 200MHz faster than a G2020, but https://techreport.com/review/23750/amd-fx-8350-processor-reviewed/10 says it gets 275 MB/s.

            The Kabini gives us 154.5 MB/s per GHz per core. The G2120 does 44.35. It would take 1.665 GHz x cores of Jaguar to tie a theoretically scaled G2020. The slowest Jaguar, headed for a tablet, is a 1.0GHz dual-core 3.9W part. One can project it to do 309 MB/s.

            So, taking out AES-NI is a significant handicap for a system that needs to do any encryption.
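            willmore’s normalization can be reproduced in a few lines. The throughput figures are the ones quoted from the linked TechReport reviews; scaling linearly with clock and core count is a back-of-the-envelope assumption, not a measured result:

```python
# Reproduces willmore's AES-throughput normalization. Figures are the
# ones quoted from the linked TechReport reviews; scaling linearly with
# clock and core count is a rough assumption.

def per_ghz_per_core(mb_s: float, ghz: float, cores: int) -> float:
    """Normalize a throughput figure to MB/s per GHz per core."""
    return mb_s / (ghz * cores)

kabini = per_ghz_per_core(927, 1.5, 4)  # A4-5000 (Jaguar, has AES-NI)
g2120 = per_ghz_per_core(275, 3.1, 2)   # Pentium G2120 (no AES-NI)

print(f"Kabini: {kabini:.1f} MB/s/GHz/core")  # 154.5
print(f"G2120:  {g2120:.2f} MB/s/GHz/core")   # 44.35

# Scale the G2120 figure down to a theoretical G2020 (2.9GHz, 2 cores),
# then ask how many GHz x cores of Jaguar would tie it:
g2020_est = g2120 * 2.9 * 2
print(f"Jaguar GHz x cores to tie a G2020: {g2020_est / kabini:.3f}")  # 1.665

# Projection for the slowest Jaguar part (a 1.0GHz dual-core tablet chip):
print(f"1.0GHz dual-core Jaguar: ~{kabini * 1.0 * 2:.0f} MB/s")  # ~309
```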

      • flip-mode
      • 6 years ago

      Yup. They’re not powerful enough for desktop gaming in my opinion. You can use them for the casual stuff, but not anything more.

      • xeridea
      • 6 years ago

      There are a ton of games that take advantage of more than two threads. It doesn’t tend to be a factor at lower settings, but it would make a difference with a 7770-class card. My 6850 only saw 70% GPU utilization on my dual-core.

      There are a plethora of tasks that do well with more cores, where the G2020 would easily be outclassed. The 6800k would even match it on single threaded performance.

      Then there are those who need more CPU performance, but do light gaming, or don’t play the absolute latest games (because they don’t want to pay $60+). Popular games such as SC2 would run fine on the 6800k.

      Some may want decent main graphics, plus extra cards for compute-heavy tasks: cryptocurrency mining, protein folding, Photoshop, password cracking, 3D rendering.

      There is the drawback of extra power, but it is all around a capable APU.

        • Chrispy_
        • 6 years ago

        You’re missing my point:

        The G2020 is certainly weaker than an A10-6800K’s two Piledriver modules in games where all four threads are well utilised.
        However, the IGP in the A10 is so much weaker than even a 7750 that it just doesn’t matter.

        I chose a G2020 because I have one and I know how well it runs games: adequately. And I think that in the majority of games it will outperform the Piledriver modules in an A10.

        But okay, let’s just say you *do* want to mostly play games that run much better on quad-cores than dual-cores; you can pick up an Athlon X4 and a 7750 for about the same price, and that’ll still run rings around the A10 when supported by a dGPU.

        I just don’t see the point of paying such a high surcharge over low-end Intel or AMD FX-series chips to get an IGP that is slow, outdated, and hampered by a lack of memory bandwidth. There are situationally superior CPU-only alternatives to an A10 based on Sandy, Ivy, Piledriver, or even K10 that cost anything up to $100 less than an A10, and $100 buys you a lot more graphics card than any IGP can hope to deliver.

          • xeridea
          • 6 years ago

          The Athlon X4 is discontinued; the only ones on Newegg are Llano/Trinity parts with the IGP disabled, and there are only two for sale. Performance-wise, the 6800K will beat them by a long shot. If you wanted, you could get the 6600K, which is only slightly slower CPU-wise, for a good bit less.

          Your statement about the A10 being so much weaker than the 7750 is just flat-out wrong. On paper, it’s about 25% slower, which isn’t too shabby for an IGP. In reality the difference will be bigger due to memory speed, but not huge.

          For intense games at high/ultra details, yes, you would be better off getting a dedicated card. There are a lot of people who don’t require higher-end graphics, though. You speak as if it renders stick figures, when it is totally capable of playing many games, even new ones (Crysis 3, for example: medium settings, 1080p, 55 FPS; Far Cry 3, same settings, a bit higher FPS with solid frame times). For those who don’t care about the absolute best-looking graphics, this is plenty capable.

          Compared to your G2020: it would give about the same performance as the Piledriver chips with a dedicated GPU; look at the review for proof (they are just slightly behind the i3-3225, by about the same margin as between the i3-3225 and the G2020).

            • Chrispy_
            • 6 years ago

            “On paper it is about 25% slower”

            What are you smoking? On paper the IGP has only 60% of the FLOPS throughput and only 47% of the memory bandwidth of even a 7750. Against a 7770 the gap starts to look pretty embarrassing.

            “Your statement about the A10 being so much weaker than the 7750 is just flat out wrong.”

            Uh, what? Anandtech tested the A10 (http://www.anandtech.com/show/7032/amds-richland-vs-intels-haswell-gpu-on-the-desktop-radeon-hd-8670d-hd-4600/2) alongside a lowly DDR3-equipped GT640, and the GT640 *hammered* the IGP. Using TR’s 1600x900 resolution, it was 37% faster at worst (Sleeping Dogs) and 75% faster at best (BF3). The 7750 is so much faster than the GT640 (http://www.anandtech.com/bench/Product/535?vs=612) that I don’t even know where to start. At a rough guess, extrapolating from those two links, I would say the 7750 is about two and a half times faster than the IGP in a 6800K.

            Were you serious, or are you trolling me?
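            For what it’s worth, the paper specs the two of you are arguing from can be laid out in a few lines. A sketch, assuming the widely published launch figures (384 VLIW4 ALUs at 844MHz with dual-channel DDR3-2133 for the 6800K’s HD 8670D; 512 GCN ALUs at 800MHz with 4.5Gbps GDDR5 on a 128-bit bus for the 7750); note that a raw peak-FLOPS ratio flatters VLIW4, whose achievable shader utilization generally trails GCN’s:

```python
# Paper-spec comparison: A10-6800K's HD 8670D IGP vs a discrete HD 7750.
# Peak GFLOPS = ALUs * 2 ops/clock (multiply-add) * clock. This metric
# flatters VLIW4, whose real-world utilization trails GCN's.

def peak_gflops(alus: int, mhz: float) -> float:
    return alus * 2 * mhz / 1000

def ddr3_bw_gb_s(channels: int, mt_s: float) -> float:
    return channels * 8 * mt_s / 1000   # 64-bit (8-byte) channels

def gddr5_bw_gb_s(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

igp_flops = peak_gflops(384, 844)        # ~648 GFLOPS
hd7750_flops = peak_gflops(512, 800)     # ~819 GFLOPS

igp_bw = ddr3_bw_gb_s(2, 2133)           # ~34.1 GB/s, shared with the CPU
hd7750_bw = gddr5_bw_gb_s(128, 4.5)      # 72 GB/s, dedicated

print(f"FLOPS ratio:     {igp_flops / hd7750_flops:.0%}")
print(f"Bandwidth ratio: {igp_bw / hd7750_bw:.0%}")
```

            On raw ALU count times clock the FLOPS ratio comes out nearer 79% than 60%, while the bandwidth ratio does land on the 47% figure; the gap between paper FLOPS and delivered performance is exactly the VLIW4-versus-GCN utilization question being argued here.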

            • xeridea
            • 6 years ago

            Going strictly by shader counts and clock speeds, the A10 has 25% fewer shaders and a 5.5% faster clock. Memory bandwidth is a concern, and I mentioned that.

            I would say from the links it’s more like 50-60% of the speed, not 40%. I know a decent discrete card will almost always beat the IGP, but that doesn’t mean the IGP will be unplayable.

            But you are missing the point entirely. If you are someone who likes extra-pretty games, you are going to be better off getting a discrete card, of course; this is what I always do. But there are millions out there who would get adequate performance from the A10. As shown in this review, it does adequately in many games, even at 1080p.

            • Chrispy_
            • 6 years ago

            GCN shaders are not comparable to VLIW4 shaders; this is why your numbers (and reasoning) are wrong.

            And no, I’m not missing the point. I know the APUs are *adequate* for gamers, but only the ill-informed would settle for “adequate” when they can have “much better” and save themselves $25 at the same time.

            • xeridea
            • 6 years ago

            Because you can’t do it. A 7750 is about $100, and an adequate CPU is at least $65, so that’s spending $20 more, not saving $25. You are also getting a faster CPU. There are a great many people who play games but don’t care about having the greatest graphics; you are completely ignoring these people and assuming everyone runs Crysis 4 at 4K with 16x AA. I have already stated multiple times that heavier gamers may be better off getting at least a 7770 or higher, but you fail to listen.

            Since people like links, let’s say the only game you care about is StarCraft 2: http://www.anandtech.com/show/6332/amd-trinity-a10-5800k-a8-5600k-review-part-1/5

            60+ FPS... on the 5800K. The issue is your extreme tunnel vision in assuming everyone is exactly like you.

            • xeridea
            • 6 years ago

            GCN shaders are comparable to VLIW4 shaders. Performancewise, there isn’t much difference. The GCN advantage is that they are much more flexible, especially for compute tasks.

            • chuckula
            • 6 years ago

            OK… Chrispy_ is completely right, and here’s why: on paper, *Kaveri* has the same number of GCN units as a 7750. Go read it from AMD’s own slides about “Berlin,” which is the exact same chip as Kaveri put into a server: http://wccftech.com/amd-berlin-server-apu-glimpse-upcoming-kaveri-apu-4-steamroller-cores-512-gcn-sps/

            So Kaveri gets 512 GCN stream processors, or whatever AMD calls them. That’s identical to the 7750. So we are getting AMD’s next-generation high-end IGP in the same ballpark as the 7750, although it won’t have dedicated graphics memory like the discrete card.

            So unless you think that Kaveri has zero improvement over the existing Trinity parts, you had better start saying that the existing Trinity parts are nowhere near the level of the 7750. Frankly, Chrispy_ showed mountains of evidence to back up his point without me even needing to point this out.

            • xeridea
            • 6 years ago

            You both miss the point. Obviously a dedicated card should be faster than an IGP, but that doesn’t make a good IGP such as this one worthless like he is trying to say. A Ferrari is several times faster than a Civic, but that doesn’t make the Civic bad. A Hayabusa is 10 times faster than an Altima, and cheaper, so we should all buy a Hayabusa instead. Different products have their place.

            • swaaye
            • 6 years ago

            Its place is in those Walmart budget machines with all possible corners cut. Which is indeed where it is found along with its other AMD APU siblings.

            • xeridea
            • 6 years ago

            HTPC, SFF, people who don’t care to get a separate card, the millions of casual gamers out there with friends who can build them a system, cryptocurrency miners who want a usable main display while their dedicated cards are busy, people who would rather have a quad-core with decent graphics than a dual-core, people who want AES, virtualization, or anything else Intel decides to disable. In the future, with hUMA, it will be faster and easier to use GPU resources on an APU than on a dedicated card for many tasks.

            • Chrispy_
            • 6 years ago

            Bad analogy: In your terms the Ferrari is not only faster than the Civic, it’s also cheaper and more practical, with equal or better gas mileage.

            • xeridea
            • 6 years ago

            Sweet, you figured out how to put words in my mouth.

            • Chrispy_
            • 6 years ago

            Thanks, but at this point I think he’s just trollin’ and we’re just rolling with it.

            • xeridea
            • 6 years ago

            I am not trolling; I am stating scenarios where the APUs would be a good fit, but you fail to expand your tunnel vision. You say there are cheaper options with better performance, but there are not; your figures are off. You say there is no point to them, but you fail to realize there are use cases different from yours. You say the APU is worthless and can’t do anything, but according to the review and other tests, it obviously can.

            • Chrispy_
            • 6 years ago

            Uh, this whole post is about *GAMING*. I said it within the first ten words of my post...

            I’m not stupid, you know; I *can* actually read benchmarks that show how the A10 is much better than the G2020 at several non-gaming tasks (and yes, even gaming tasks, when paired with a suitably powerful discrete GPU, though you’d be mad not to buy a better CPU-only chip in that instance).

            It is *you* who has tunnel vision. You’re arguing about non-gaming performance in response to my original post, which I made about gaming and have repeatedly stated is ABOUT GAMING.

            If you still think that a desktop APU is a better GAMING solution than even a fairly low-end discrete graphics card, please continue to make a fool of yourself.

      • Spunjji
      • 6 years ago

      /edit

      The G2020’s performance starts to bottleneck way too quickly for it to be a sensible consideration. The only real upside is the possibility of later upgrading to a better Intel CPU at which point you’ll be held back by the terrible motherboard you’d have to buy for it to be anywhere near the price of the AMD one. That’s before you even start to think about things like overclocking…

      • swaaye
      • 6 years ago

      Yeah, I don’t see the appeal either. I don’t even find them interesting in notebooks unless it’s a very cheap machine, priced below what’s realistic for a notebook with a discrete GPU. Some of those have single-channel RAM, though (eeek). The CPU is so slow per core (most apps need single-thread speed) and the IGP is just barely adequate.

      Of course the reason these APUs came to be are simplification and the need to get the IGP near the memory controller to facilitate a bandwidth boost. The old northbridge IGPs had so little bandwidth once the memory controller was integrated into the CPU and the IGP had to traverse the system bus to get to RAM. Simplification comes from fewer chipset needs, reducing mobo complexity and enabling even lower costs to build all those cheapie notebooks that are everywhere.

      • smilingcrow
      • 6 years ago

      I agree, and as a casual gamer I looked into APUs versus cheap CPUs + dGPU and came to the conclusion that APUs are only worth it if you absolutely can’t use a dGPU.

      My concern with AMD’s current APUs is that they are power-inefficient, which counts against them, since they generally make the most sense in a smaller form factor, where efficiency matters due to cooling/noise issues.
      I think the next generation will cross the threshold and become more compelling, in my eyes at least. Looking forward to that, and hopefully Intel will provide some competition as well, but that’s a big if.

      • Zizy
      • 6 years ago

      Uh, the point of these APUs is a decent all-round system. Not that powerful a CPU, nor that great a GPU, but a nice balance of both for home use. IMO it does this job better than an i3. The Pentium might be fine given its low price compared to the quad-core AMD APU competitors.

      The problem is actually not the G2020 but the X4 750K. It has the same CPU performance as the A10 parts, and together with a 7750 it comes in at a similar price to a 6700/6800K plus faster RAM. Yeah, power consumption is going to be higher, so the 6700 might still make a better HTPC, but for a normal desktop computer it shouldn’t be an issue.

      So, yeah, I agree with your opinion that these A10s are too expensive for what they offer. The A10-5700/5800K, on the other hand, are priced more nicely; you cannot build a system with a better CPU and GPU for that price.

      • ronch
      • 6 years ago

      Yeah, I get your point. I was faced with a similar dilemma last year (or was it 2011) when my cousin asked me to help her buy a few PC parts. I considered the A8-3850 vs. an Intel Core i3-2100 paired with an HD5570. The HD5570 can outperform the A8-3850’s IGP, and the Core i3-2100 is a better all-around performer than the A8-3850’s CPU cores. Total cost of the AMD machine would’ve been roughly equivalent to (IIRC, actually a bit more expensive than) the i3+HD5570 combo. As much as I wanted to go with the APU because I was curious about Llano, we had to get the Intel option. I would have gone with AMD if I were the one buying, but I just had to choose the better option when someone else’s money was involved.

    • willmore
    • 6 years ago

    I’m quite taken by the performance of Crysis 3 vs. the number of cores. I’m glad to see games being able to take advantage of more threads. This bodes well for the next generation of consoles with their many relatively weak cores. I imagine we’ll see this trend continue.

    • anotherengineer
    • 6 years ago

    Nice review.

    Now when are you going to do an A85X motherboard roundup?

    • ZGradt
    • 6 years ago

    I think the best part about the Trinity / Richland chips is the idle power consumption. I have an always-on Trinity based ESXi server at the house. It replaced an E-350 one that was a little too puny. It handles fileserving, routing, and DVR duties (Windows Media Center).

    It’s fast enough and cheap. The low idle power consumption cuts down a bit on electricity and room temperature. Until the lower end Haswells or higher end Kabinis come along, I can’t think of a better chip for my needs. Even then, Intel will probably cut out a lot of features like on the current i3’s.

    From the review though, I don’t see much reason to pay the premium for Richland.

    • ronch
    • 6 years ago

    Just a thought, guys. Every time I check Newegg or Microcenter for Trinity or Richland APUs, it’s like the boxes AMD puts them in are so CHEAP. These e-tailers probably try to get a nice-looking box sample to photograph and post on their site. Even unboxing videos on YouTube like this one (http://www.youtube.com/watch?v=87Bd83AYOW0) show a box that looks really cheap, thin, and worn out. It’s like AMD held a bid to see who in the entire world could produce these boxes at the lowest possible price. The edges aren’t even well folded. One time I also saw a “new” Llano box at a computer store. The box felt so cheap it was as if it had been left out in the rain and brought in to dry.

    Compare that to Intel’s Ivy Bridge boxes. I don’t know what to call it, but Intel’s boxes have both matte and glossy surfaces, have embossed areas, are thicker, etc. Intel must be paying twice as much for their boxes as AMD does. Heck, even those white boxes for 6- and 4-core FX chips feel cheap. The only nice boxes AMD has are the tin boxes they sell 8-core FX chips in.

    Edit - Oh yeah, in the YouTube video I linked to above, even the sticker on top of the box had A10-680K written on it. This is the second video I’ve seen with this error. How much is AMD paying their people? SO amateurish...

    Edit 2 - I think it’s very important for a company to package its products well. Remember, these things are displayed at computer shops. Many companies (not only computer-related ones) go to great lengths to make their products stand out on the shelves. Looking good on the shelf increases a product’s “wow factor.” So if AMD’s boxes look beaten up and dull on the shelves, how can you expect people to think they’re good products? This isn’t a cheap padlock they’re selling.

      • smilingcrow
      • 6 years ago

      The new 220W TDP range are shipping in Pyrex boxes apparently with a matching AMD branded oven glove set.

        • ronch
        • 6 years ago

        That’s probably the sole reason why they’re selling them for $920.

      • nanoflower
      • 6 years ago

      Ronch, it looks like the A10-680K label on the sticker on top of the box is intentional. Watch the unboxing video that you linked and go to the end, when the chip is shown. The label on the actual CPU is A10-680K, so the box label matches the chip label. Why AMD chose a shortened version of the name I don’t know, but the box and CPU do match.

        • ronch
        • 6 years ago

        Regardless of whether they match or not, that’s not the point. Somewhere out there some guy got it wrong, thought there was an A10-680K, and labeled both the box and the IHS wrong. We all know that’s not what AMD’s charts say, don’t we? There’s never been a 680K on AMD’s charts. Doesn’t AMD do QA on its products? I even watched some Bulldozer unboxing videos when they came out, and in one video there wasn’t an FX bezel sticker inside the box. That’s AMD QA for you. These little things are important and say a lot about their QA process.

        • derFunkenstein
        • 6 years ago

        Non-K CPUs have 4 digits in that place instead. A10-6700 for example.

        http://www.cpu-world.com/info/AMD/AMD_A10-Series.html

        AD5700OKA44HJ < 5700 tray
        AD5700OKHJBOX < 5700 retail box
        AD580KWOA44HJ < 5800K tray
        AD580KWOHJBOX < 5800K retail box
        AD6700OKHLBOX < 6700 retail box
        AD680KWOHLBOX < 6800K retail box

      • Prototyped
      • 6 years ago

      I’d rather have a package on which I can spend two bucks less than have a higher price just so I have a nice box that I’m going to stick in the recycling as soon as I have it.

        • smilingcrow
        • 6 years ago

        As long as the box has good enough single threaded recycling capability I’m happy.

        • ronch
        • 6 years ago

        OK, downvotes don’t cost me money, but my point is, I’m giving some constructive criticism here. AMD is already seen as a cheap brand compared to Intel. Even when I go to computer shops and overhear folks talking about CPUs, they normally say that AMD is “the cheaper variety, but they’re OK.” Making the box just a bit thicker does *not* cost $2. Not even $0.50. I bet they’re made in Asia too, because they usually assemble the chips in China or Malaysia, so we’re not talking US labor costs here.

        Yes, I know most folks just chuck the box after the purchase. That’s not the point. Go to the supermarket and check out the shelves. Why do manufacturers go to such lengths to make their products look nice, packaged? It’s part of marketing. Apple does it. Intel does it. Procter and Gamble does it. Sony does it. Almost every great company that cares about brand perception does it. AMD knows this too; that’s why they use nice tin boxes on their FX 8-core chips. Unfortunately, their APU boxes really look like something that’s ready for the dumpster. If you think it’s OK for AMD’s products to look cheap and shoddy on the shelves, sure, go ahead.

      • Bensam123
      • 6 years ago

      I don’t know… I was pretty impressed when I received my 8350 in a tin can (first time I ever saw anything like it)…

      Perhaps it has to do with price points…

        • ronch
        • 6 years ago

        I was talking about the boxes of AMD’s APU lineup. Llano, Trinity, Richland, and to some extent, their 4-core and 6-core FX products (although their boxes look great compared to APU boxes).

        The tin boxes used by 8-core FX chips are great. That’s not what I’m criticizing here.

    • maxxcool
    • 6 years ago

    Excellent review of Iris… it did very well even though it may be a few $ more…

      • rxc6
      • 6 years ago

      “A few $ more”

      That over there is the understatement of the century… Iris just doesn’t make sense in the desktop so far.

        • maxxcool
        • 6 years ago

        Since it is faster, it does.

    • EJ257
    • 6 years ago

    Why can’t Intel release a Haswell without the IGP at the same time as the rest of the lineup?

      • chuckula
      • 6 years ago

      If you want Haswell without the IGP, just use a regular GPU and disable the IGP…. I do and it works fine.

        • EJ257
        • 6 years ago

        Sure I can do that but that’s not the point. The disabled IGP is now wasted silicon. I would have to wait for Haswell-E basically to get what I want. Just saying they should release both at the same time. Sucks that there is no competition forcing them to.

          • Peldor
          • 6 years ago

          It’s not wasted silicon, it’s a built-in heatsink!

          • Chrispy_
          • 6 years ago

          It’s not wasted silicon – even if you don’t pipe your display through, think of it as a Quicksync encoding module.

            • derFunkenstein
            • 6 years ago

            I think of it as “backup graphics if my discrete card dies”

            • NeelyCam
            • 6 years ago

            I don’t use backups. I live on the edge

            • Peldor
            • 6 years ago

            *push*

            (Sorry, it’s bad habit)

            • NeelyCam
            • 6 years ago

            That’s a massive waste of silicon area for a Quicksync encoding module.

            I’m with EJ. Haswell without the IGP would be great for high-end gaming desktops, assuming the price would correspond to the silicon area (which of course isn’t the case…)

      • smilingcrow
      • 6 years ago

      Due to the cost of producing a separate line without the GPU. When you consider that most systems sold don’t use a discrete GPU, it doesn’t make sense to spend more money to create a mainstream CPU with the GPU removed. It takes extra resources as well as cost.

        • NeelyCam
        • 6 years ago

        Just cut out the GPU. You can even use the same socket (so you could use any mobo) – just leave some of those pins unused. How hard can it be?

        Of course the real answer is that Intel likes selling the GPU to those who don’t need it, and AMD isn’t giving Intel any competition. So, here we are

          • MadManOriginal
          • 6 years ago

          No, the real answer is smilingcrow’s first sentence “Due to the cost of producing a separate line without the GPU.” Come on Neely, you know how semiconductor manufacturing works, making a separate die without the GPU would make both kinds of chips more expensive. Intel did eventually release the ‘P’ suffix CPUs that have the GPU disabled for die harvesting though, so maybe the ‘I don’t want an IGP!’ people just need a little patience.

      • HisDivineOrder
      • 6 years ago

      Because they can sell you Sandy Bridge without the IGP for $500-1k and a lot of people will still buy it. Then when they finally run out of people to do that, they’ll switch to Ivy Bridge tech and there’ll be loads more people waiting eagerly to spend $500-$1k on that.

      • Prototyped
      • 6 years ago

      There are plenty of already launched Haswells with no IGP. They’re called the Xeon E3 series.

      [url<]http://ark.intel.com/search/advanced/?s=t&FamilyText=Intel%C2%AE%20Xeon%C2%AE%20Processor%20E3%20Family&CodeNameText=Haswell&ProcessorGraphics=false[/url<] People often forget that the Xeon E3 series includes not only processors that are very similar to the Core i3/i5/i7 sequence at only a few dollars more, but also varieties such as higher-clocked processors and processors without the IGP. (Xeon E3-12x0 v3 are IGP-less. Xeon E3-12x5 v3 have an IGP. The ones with v3 in the name are Haswell; those with v2 are Ivy Bridge; and those without an annotation are Sandy Bridge.)

        • smilingcrow
        • 6 years ago

        The IGP is there but just disabled, which you can do with the other CPUs anyway when using them with a discrete GPU.
        I really doubt that the difference in power consumption etc. is at all significant when you look at how good Intel is at power gating unused silicon.

          • Prototyped
          • 6 years ago

          I don’t see much of a point either, but there are always going to be people who have the superstition that it makes even a lick of difference.

            • smilingcrow
            • 6 years ago

            I hear you but it makes no sense to address that market demographic unless your product is tin foil hats.

    • flip-mode
    • 6 years ago

    Looks to me like the hero of AMD’s lineup is the FX 6350. It shadows the FX 8350 very closely for $50 less. I’d love to see TR explore what kind of overclock can be had out of its FX 6350.

    Richland on the desktop is a solid “meh”. I’d spend my money on the FX 6350 or else an i5 K-series chip. While the i5 Ks will be better performers, AMD doesn’t play silly feature segmentation games, and the FX 6350 also costs significantly less. An overclocked FX 6350 would probably be a meaningful upgrade from my X4 955. But then there’s AMD’s aging AM3+ platform, which gives me too much pause, and I feel I’d rather pay extra to get a much more up-to-date platform in addition to better CPU performance with something like a 4670K.

      • jossie
      • 6 years ago

      The FX-6350 does end up looking like a beast on the value chart. I’m wondering if future value charts shouldn’t factor in energy costs as well though.

        • flip-mode
        • 6 years ago

        Tricky, very tricky. Most computers sit at idle most of their lives. If you’re going to include power consumption in the value equation, it better be power consumption at idle and not load, or else show them separately.

        Ultimately, considering all the variables, I think it is best to leave power consumption out of the value equation. Different usage scenarios. Different electricity rates. Sounds imprecise.

    • Unknown-Error
    • 6 years ago

    For once, power consumption and performance have improved a little. But overall performance is just “$h!tty”, and with Iris Pro Intel has the IGP crown easily. AMD fanboys should retire or jump ship quickly.

      • derFunkenstein
      • 6 years ago

      With a $600 CPU, Intel has the iGPU crown. Just think about that for a minute.

    • R2P2
    • 6 years ago

    The A10-6800K “appeared” to be throttling when it was overclocked? Isn’t there some kind of utility you could run to chart the clock speeds of each core during a benchmark run? I would think someone would have built one by now, with all the dynamic clocking going on these days.

      • R2P2
      • 6 years ago

      I’ll answer my own question, since nobody else seems to care. The people who make CPU-Z have a program for this: [url<]http://www.cpuid.com/softwares/tmonitor.html[/url<] No idea how much overhead it has.
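      For what it’s worth, a rough Linux-side sketch of the kind of per-core clock logging TMonitor does (an assumption about the approach; TMonitor itself is a closed-source Windows GUI) could just poll /proc/cpuinfo during a benchmark run:

```shell
# Sample each core's reported clock speed a few times, one second apart.
# Reads the "cpu MHz" lines the Linux kernel exposes in /proc/cpuinfo.
for sample in 1 2 3; do
    echo "--- sample $sample ---"
    grep "cpu MHz" /proc/cpuinfo | awk '{ print "core " (NR-1) ": " $4 " MHz" }'
    sleep 1
done
```

      Dumping that to a file with timestamps would give the “supposed to run at X GHz, actually spent most of its time at Y GHz” data, minus whatever overhead the polling itself adds.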

      • Damage
      • 6 years ago

      MSI’s ControlCenter will do that, but again, it means overhead that will likely affect performance. The truth is, throttling was happening to some degree with the OCed 6800K in our tests. That’s clear from the performance results: the OCed chip was *slower* than stock in some tests. Whether or not I plotted the frequency didn’t matter much to me in the grand scheme, since I couldn’t find any BIOS options to tweak or other avenues to stop the throttling. I’m not sure what you want, really, but after putting in a lot of time trying to make it work right, I felt I was out of options.

        • R2P2
        • 6 years ago

        I don’t doubt that throttling was happening, it was just the “appeared” that bothered me, and I was wondering if there was anything that could give you some concrete data, so you could say “The CPU was supposed to be running at X GHz, but it actually spent most of its time at Y GHz”.

    • chuckula
    • 6 years ago

    About what you’d expect from an overclocked Trinity, but at least the power consumption is sane and AMD’s desktop IGPs are still #1 if you exclude the Iris Pro parts. The prices need to drop though since these chips are targeted at the value segment and the performance increases are insufficient to justify the price hikes. I’m sure the prices will drop over the next couple of months.

    As I’ve said many times in the past, Kaveri (especially desktop Kaveri) will have a nice boost in GPU performance. I’ll go out further on a limb and estimate a ~50% boost if AMD keeps the same power envelopes from Trinity. That number could go higher if all the GDDR5 gets integrated, although the latest rumors are indicating against that option for at least the initial batch of Kaveris (it may show up further down the road).

      • ronch
      • 6 years ago

      I bet AMD knows very well that their APU’s GPU portion is memory bandwidth-starved. Look at how GPU performance jumps from DDR3-1866 to -2133. Improving memory bandwidth should be on their to-do list.

        • khands
        • 6 years ago

        I wonder if they’ll do something similar to Intel and put a bunch of eDRAM on the chip. It would look a little like the Xbox One there, too.

      • HisDivineOrder
      • 6 years ago

      Trouble is this is what AMD always has “going for them.” Just one more release, one more launch, one more…

      It’s always going to be that next release that finally brings them up to par. Yet it never happens.

        • Spunjji
        • 6 years ago

        I know you like grinding your axe wherever you think you see a wheel, but chuckula’s talking about a case where AMD are already up to par…

    • vargis14
    • 6 years ago

    I do not know about anyone else, but I am super impressed with the 4950HQ. I really wish they would make an unlocked desktop version of that chip!!!

    Or even a CPU adapter so it can be mounted on a desktop board, like they did way back with the Pentium Ms.
    It would be fantastic for a mini-ITX build or any desktop build. But for sure, the 4950HQ is an impressive CPU.

    Edit: I am sure we will be seeing the 4950HQ in a few AIOs, iMacs along with other x86 AIOs.

      • smilingcrow
      • 6 years ago

      They are releasing separate SKUs for AIO systems in BGA packaging. They should be cheaper and, with higher TDPs, offer higher performance.

      • chuckula
      • 6 years ago

      What I can say about the Iris Pro chips is that they are coming to notebooks in the roughly $1K range (not super cheap but not uber-expensive either). System76, which is a small Linux integrator, has pre-orders available for the [url=https://www.system76.com/laptops/model/galu1<]Galago[/url<] models that include the 4750HQ and start at just under $1000 (14" 1080p screens too).

      • ZGradt
      • 6 years ago

      I’m sure some motherboard maker will offer boards with that chip built in, like they do with Atoms and E-350s now.

    • smilingcrow
    • 6 years ago

    I’m curious to see how Iris 5000/5100 performs if Intel gets more serious about the GPU, as that is closer to what their mainstream chips will have than Iris Pro is. I think that is currently the interesting comparison, as it would show how well Intel’s graphics architecture scales versus AMD’s.

    • Cataclysm_ZA
    • 6 years ago

    A good read there, but the results from overclocking are disappointing. I look forward to a future update with a different motherboard, if possible.

    • ronch
    • 6 years ago

    Perfect for the boss’s office PC. Burn as few watts as possible during the day when the boss just barks orders, leaving his PC to just download torrent files in the background, and enough graphics muscle to handle Feeding Frenzy after everyone has left… (except the pretty secretary, that is… LOL).

      • dpaus
      • 6 years ago

      4:30am?? Sorry, I was already leaving for the airport, after getting home just after 9pm last night. But it’s OK, as I got most of Sunday off last weekend, so I’m rested enough for my next 7-day week.

      On the flight, I’ll watch that torrent I downloaded – after I finish my presentation.

        • ronch
        • 6 years ago

        [quote<]4:30am??[/quote<] Well, that's because the world ain't flat... Have a safe flight, dpaus! Enjoy those torrent movies. 🙂

    • dragosmp
    • 6 years ago

    Great review, thanks for the time you put in. This time you used the 6700 too; we gave you a hard time for not using the 5700 back in the day, so it’s nice to see you managed to get both TDPs this time.

    With this out of the way, I would like to propose something for the OCing tests: go back to low-end OCing. It all began with a cheap CPU, the Celeron 300A, overclocked to get the performance of CPUs five times more expensive. In all your reviews, you OC only the high end to get an even higher end. It might just be more interesting to buy a 6700, OC it, and save a few $ for a better cooler versus buying the 6800K.

    • ET3D
    • 6 years ago

    I’d love to see undervolting become a regularly tested feature, and underclocking when appropriate, which I think it is here: compare an underclocked 6800K with 2133MHz RAM to a 6700. They’re the same price; if the 6800K can be underclocked to achieve better performance (thanks to faster RAM) with the same power draw, that’s a good finding to report.

    Undervolting, I think, should be a standard test. Power use is tested because people care about it, but many CPUs can be undervolted and continue to run well at stock speed. It would be good to see what effect this has on power draw.

      • dragosmp
      • 6 years ago

      Quite a few AMD CPUs I’ve had or seen lived all their lives undervolted and usually overclocked. It may just be that AMD doesn’t validate their Vcore properly and chooses a higher “safe” setting per batch. A Phenom II I’ve used for a while lives 0.2V lower than stock and 300MHz higher, for a net gain of some 50W at the plug (stock TDP of 125W minus 50W?).
      However, I don’t think TR should include undervolting in their reviews; it varies too much. I’m just thinking of the (MSI) board used in this review, which probably throttles based on VRM load instead of the CPU’s; it’s a cheap trick to raise voltage on the CPU in order to reduce current draw (and thus losses on the MOSFETs). Undervolting, as well as OCing, on such a board can’t be representative of what a savvy user might choose to use.

        • ET3D
        • 6 years ago

        Yet overclocking is part of every review, and I’m sure there’s just as much variation in overclocking as there is in undervolting. While overclocking has traditionally been of interest to enthusiasts, I think power draw has become interesting enough that it’s worth making undervolting standard too. Doubly so in the context of CPUs which are likely to find a place in HTPCs.

    • Bensam123
    • 6 years ago

    “You know, it wasn’t supposed to be this way.”

    Truth…

    The FM series chips still look like they have solid spots in the HTPC and casual/low end gaming market. The CPU alone isn’t all that impressive, but when combined with a very competent GPU, it makes for a very nice looking cheap solution.

    I’m actually quite excited for the end of this year when Steamroller chips sneak in. It may be another epic disappointment, but I’m still going to cheer and hope for the underdog. A lot of what AMD has talked about sounds really promising… even if they deliver half of what they’ve talked about, it should still be good.

    Yeah, I’ve heard a lot of people talk about FM motherboards doing weird things… It seems as though some manufacturers haven’t put a lot of effort into their AMD offerings and they’ve suffered. A different motherboard would most definitely lead to different results.

      • vargis14
      • 6 years ago

      Trust me, I have my fingers crossed for Steamroller!! I have been an AMD fan since my first AMD chip, the 900MHz Thunderbird… my last was the 4800+ X2. Then I hopped on the Intel wagon, since AMD did not come close to performing like my 2600K did when I could finally get a new rig.

      Hopefully, after I get at least another two years out of my 2600K, AMD will be competitive enough! I sure am rooting for them!

        • Bensam123
        • 6 years ago

        Yeah, I only recently switched to AMD for a couple really specific reasons (I was Intel even during the AMD 64 era when they were ahead). I hope the next generation gets a leg up, since Haswell was so disappointing.

    • Zizy
    • 6 years ago

    I agree that the 5800K draws too much power, but there was another part, the 5700: same 65W as the 6700, and the same small difference between the 5800K and 5700 as there is between the 6800K and 6700.
    So I wouldn’t say Richland is much of an improvement. I would get the 6700 instead of the 5700 despite the price difference, but if Trinity didn’t convince me, I don’t see why Richland would.

    The unfortunate part of all these APUs is that they still lack some GPU performance to play at 1080p with medium-ish details in most new games, at least at stock clocks.
    Could you please try overclocking the GPU?

    • kc77
    • 6 years ago

    Kudos for the compilation tests. I don’t know why anyone would subject themselves to the pain of doing it in Windows, but kudos nonetheless.

    That being said, it would be nice if these types of tests included the compilation flags (bdver2, native, nocona).

    • ALiLPinkMonster
    • 6 years ago

    GCN on an APU? Sounds like Intel’s IGP performance crown is just a short-lived little burst of luck. I think Kaveri is going to be the first A-series chip that is seriously worth considering for a moderate gamer on a budget. I mean, if they can squeeze the horsepower of a 7850 into a console-bound x86 chip, then surely they can do the same (if not more) on the desktop at an affordable price. If I can build a gaming-capable ITX system without needing to compensate for the height of a discrete card, I’ll be a very happy man.

      • MadManOriginal
      • 6 years ago

      You’ll still need a decent cooler though, but at least there are some down-blowing coolers that aren’t as tall as discrete GPUs.

      I too look forward to seeing how Kaveri does, Richland is really just a stopgap. This might be a pipedream, but if there are motherboards with 2GB of GDDR5 ‘sideport’ memory for the IGP that would be awesome.

      • xeridea
      • 6 years ago

      Yeah, I am excited to see how well they do. An updated x86 architecture and a small process shrink to help with power, as well as GCN. The GPU portion is at the point where it’s really memory bandwidth limited, so that does need to be addressed, but that’s not a terrible problem to have.

      • vargis14
      • 6 years ago

      Yep, something like the PlayStation 4’s chip, but axe the Jaguar cores for faster and bigger CPU cores with 7870-class graphics, and AMD would have a winner. I would even take the 8-16GB of GDDR5 memory and 170+ GB/s of bandwidth… even if it had to be pre-installed onto motherboards just for the GPU.

      I wonder if they could use DDR3 DIMMs along with the GDDR5. Or the GDDR5 memory could come on DIMM sticks, so you could add up to, say, 32GB with four 8GB GDDR5 DIMMs.

        • Spunjji
        • 6 years ago

        You can’t have GDDR5 on a DIMM.

      • swaaye
      • 6 years ago

      There’s like zero chance of a desktop/notebook chip with a GPU like that integrated. It would be huge and cost too much to make sense for the budget builds these CPUs are used for. I expect something like an integrated 77xx (or perhaps slower). They’ll need to get much more memory bandwidth somehow too because they are already very bottlenecked there.

      GCN actually consumes considerably more transistors than VLIW4/5. In other words, it takes more transistors to get the same game performance compared to VLIW. That improved compute capability that we barely use costs a lot.

      • l33t-g4m3r
      • 6 years ago

      Steam Box, here we come! The PC will finally be able to compete against console gaming, because it will be a console. More upgradable, though.

      • maxxcool
      • 6 years ago

      You’re not going to see a huge transistor count on the consumer APU. It would be too large, too hot, and too expensive. I predict a 10-15% lead over the Iris in *THIS* review with GCN, though. But if Intel dedicates more silicon to Iris, that lead will evaporate.

      • Star Brood
      • 6 years ago

      Man, I really hope so. Desktop performance has pretty much been at a standstill since Sandy Bridge launched. Bringing that class of on-die GPUs to the desktop may spur a much-needed market shift.

    • Bubster
    • 6 years ago

    The HD 4600 is surprisingly good: about 30% behind the A10-6800K in games, but equal or better compute performance.

    A small refresh, but a good one. Obviously it can’t touch Intel, and it’s disappointing to see that the 8350 generally performs on par with the 45/55W i7-4950HQ mobile chip.

      • xeridea
      • 6 years ago

      Yes, and against a CPU that costs $350. At similar price points, Intel can’t come close. Anyone spending $350 on a CPU won’t likely be using the IGP for games or compute, since it can be outclassed by… a $50 discrete GPU. For $350 you could get an FX-6300 with a 7850 or GTX 660 and mop the floor.

        • ET3D
        • 6 years ago

        I’d expect a Core i3 with 4600. Intel had their better cores on Core i3 in previous generations, so I imagine it will be the case here too. So it’s a relevant comparison.

          • xeridea
          • 6 years ago

          And by the time that happens, there will be a newer APU, and the cycle will continue.

            • ET3D
            • 6 years ago

            These Core i3 CPU’s will be out in Q3, and I think that Kaveri is only expected at the end of the year.

      • Mr. Eco
      • 6 years ago

      The Intel Core i7-4950HQ is $657; the FX-8350 is $200.
      On the desktop I would always get the FX-8350, leaving approx. $450 for a graphics card.
      In a laptop, if I want to play games, again I would get something with separate graphics, for cheaper.

        • smilingcrow
        • 6 years ago

        I fail to see any point to your comparison:

        1. Desktop v laptop CPU.
        2. 125W TDP (CPU only) v 45W TDP (APU)
        3. Comparing a top spec i7 on pricing without acknowledging that there will be much cheaper desktop versions including i5 SKUs.
        4. In laptops power efficiency is often an issue so just saying get a dGPU is stupid.

    • tbone8ty
    • 6 years ago

    I know you probably get MSI-sponsored mobos, but a lot of people have had better luck overclocking with Asus motherboards, specifically for AMD APUs. I dunno why that is, lol.

    Have you tried IGP overclocking at all?

      • dragosmp
      • 6 years ago

      +1 for using non-MSI boards; they add too many unknowns. And no ECS either!

      • Xamir21
      • 6 years ago

      Computerbase has a more comprehensive review of the GPU-compute performance of the $125-$150 A10 Richlands compared to $350+ Intel Haswells and Ivys.

      [url<]http://www.computerbase.de/artikel/grafikkarten/2013/amds-richland-im-gpu-test/4/[/url<] The lower-priced AMD Richlands win over higher-priced Haswells in many or most of the GPU-compute tests.

      • derFunkenstein
      • 6 years ago

      There’s a thread in the forum right now about wonky temperatures on an MSI AM3+ board, and TR saw what appeared to be throttling at 59C on an MSI FM2 board. I wonder if they’re just not getting something right in the BIOS for AMD’s chips.

        • tbone8ty
        • 6 years ago

        Could be some power feature not disabled, or a future BIOS update needed.

        Richland doesn’t draw too much more power when overclocked to 4.8GHz, which is nice; they got the OC power under control like they did going from Bulldozer to Piledriver.

        It is also nice that the power scheme (Turbo Core 3.0) allows Richland to stay at turbo speeds a lot more frequently.

    • JoshMST
    • 6 years ago

    FX-4350 actually has 8 MB of L3 cache, not the 4 you have listed in the table.

      • Damage
      • 6 years ago

      Ah, yes. Fixed. Thanks.

        • MadManOriginal
        • 6 years ago

        Also, you list an i5-3470K in the chart on page 2.

    • tbone8ty
    • 6 years ago

    A great Trinity refresh on the same node.

    AMD really tweaked more performance out of Trinity.

    Great review; can’t wait for Kaveri!

    • Prion
    • 6 years ago

    Great [i<]first[/i<] impressions of Richland, thanks for the review.
