AMD’s A8-7600 ‘Kaveri’ processor reviewed

For several generations, since Llano, AMD has been slowly but methodically marching toward its vision of accelerated computing, where traditional CPU cores and graphics share space on a chip and work together to process data. This vision was called “fusion” back when the process began, although you won’t hear that term coming from AMD these days. Regardless, AMD’s latest processor, or APU (short for “accelerated processing unit”), is a major milestone on the path toward fused computing—and AMD is taking the wraps off of it today.

Compared to AMD’s current APUs, the chip code-named Kaveri is packed with sweeping changes, including enhanced “Steamroller” CPU cores, updated Radeon graphics, and a first-of-its-kind ability for the onboard CPU and GPU cores to share memory and work together to tackle a problem. Those are just the big-ticket items. Virtually every unit in Kaveri has been enhanced in some fashion.

Same space, another billion transistors

An incredibly vague picture of the Kaveri die. Source: AMD.

The changes in Kaveri start with the transition to a new chip fabrication process that packs more transistors into the same space.

The prior-gen Trinity/Richland APUs were built at GlobalFoundries using a familiar sort of manufacturing process for AMD CPUs, with feature sizes as small as 32 nm and a silicon-on-insulator (SOI) substrate. This 32-nm SOI process is tuned expressly for CPUs and helps enable the clock frequencies above 4GHz that are common in AMD’s desktop processors.

For Kaveri, AMD and GloFo have developed a 28-nm SHP (short for “super-high performance,” presumably) process that trades SOI for traditional bulk silicon. The 28-nm SHP process is tuned differently, to allow for higher transistor densities and somewhat lower peak switching speeds. AMD describes the process as a “happy medium” tuning point, one more accommodating to the GPU portion of Kaveri’s die.

| Code name | Key products | CPU cores/modules | CPU threads | Last-level cache size | Process node (nm) | Estimated transistors (millions) | Die area (mm²) |
|---|---|---|---|---|---|---|---|
| Lynnfield | Core i5, i7 | 4 | 8 | 8 MB | 45 | 774 | 296 |
| Sandy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 32 | 995 | 216 |
| Ivy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 22 | 1200 | 160 |
| Haswell (Quad GT2) | Core i5, i7 | 4 | 8 | 8 MB | 22 | 1400 | 177 |
| Llano | A8, A6, A4 | 4 | 4 | 1 MB x 4 | 32 | 1450 | 228 |
| Trinity/Richland | A10, A8, A6 | 2 | 4 | 2 MB x 2 | 32 | 1303 | 246 |
| Kaveri | A10, A8 | 2 | 4 | 2 MB x 2 | 28 | 2410 | 245 |

Thanks to this new manufacturing process, Kaveri crams about 1.1 billion more transistors—most of them dedicated to graphics—into approximately the same die area as Trinity. However, Kaveri has lower CPU operating speeds, especially in the higher power envelopes typical of most desktop processors.
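Those table figures make the density change easy to quantify; a quick sketch using the estimates above:

```python
# Transistor counts (millions) and die areas (mm^2) from the table above.
chips = {
    "Trinity/Richland": (1303, 246),  # 32-nm SOI
    "Kaveri":           (2410, 245),  # 28-nm SHP
}

density = {name: count / area for name, (count, area) in chips.items()}
ratio = density["Kaveri"] / density["Trinity/Richland"]

for name, d in density.items():
    print(f"{name}: {d:.2f}M transistors per mm^2")
print(f"Kaveri packs roughly {ratio:.2f}x the transistor density")
```

That works out to nearly double the transistors per square millimeter in the same footprint.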

If you’ve been following these things, this story may sound familiar to you. Intel has taken a similar path with its 22-nm fab process, tuning for better low-power operation at the expense of some peak performance. Given that chips like Kaveri and Intel’s Haswell are geared primarily for laptops, this sort of tuning makes sense.

That said, AMD and Intel aren’t exactly aligned in their approaches to highly integrated CPUs. In the last couple of generations, Intel has pushed into ever-lower power envelopes with its Core processors. Haswell Y-series parts can squeeze into power envelopes as low as 6W, and that’s with an on-package “PCH,” or south bridge I/O chip. AMD evidently didn’t see that move coming when it defined the requirements of its new APU. Kaveri operates in a broad range of power targets between 15W and 95W, but it’s most likely not optimal at either end of that range. AMD hasn’t yet announced the mobile versions of Kaveri—today’s introduction applies only to the desktop variants—but the 15W version of Kaveri will presumably have an external south bridge with its own power budget. AMD will have to cover lower power ranges with its Kabini and Temash SoCs, which are decent but cheaper, lower-performance chips.

Steamroller CPU cores

Kaveri has a pair of CPU modules, each with two “tightly coupled” integer cores and a single, shared floating-point unit. In keeping with its recent heavy-machinery theme, AMD calls this next revision of its CPU microarchitecture Steamroller. Kaveri’s Steamroller modules have been tweaked in significant ways to improve performance and power efficiency compared to the previous generations, known as Piledriver and Bulldozer. AMD CTO Mark Papermaster revealed many of the changes on tap for Steamroller over a year ago, but Kaveri is the first silicon to include this generation of AMD’s x86 processor tech.

An alarmingly simplified block diagram of a Steamroller module. Source: AMD.

The CPU modules in the Bulldozer family have never quite lived up to expectations for various reasons. The obvious point of emphasis in Steamroller is keeping the execution engine better fed through tweaks to the microarchitecture’s front end. Most notably, instruction decode is no longer a shared resource. The module has separate, dedicated decoders for each of its two integer cores. Also, the instruction cache is now 50% larger, at 96KB, and is three-way set associative. AMD claims i-cache misses have been reduced by 30% as a result. Furthermore, the branch target buffer has grown in size from 5K to 10K entries, giving the branch predictor more insight into program activity. The benefit is a claimed 20% reduction in branch mispredictions. Tricky x86 instructions that require the use of microcode should run faster in Steamroller, as well, since microcode ROM can be accessed simultaneously by both of the module’s threads.

There are some big numbers attached to those individual front-end improvements. Combined with a larger scheduler window that adds 5-10% more efficiency, the Steamroller execution engine is apparently being kept much busier. On a per-thread basis, AMD says instruction dispatches that use the max width of the machine have risen by 25%. The Steamroller module can retire work at a higher rate, too, thanks to improvements to its back end (including enhancements to the load and store queues).

Of course, improvements in individual areas don’t always translate directly into overall performance gains, since architectural constraints tend to move around depending on the workload. AMD claims Steamroller delivers an overall average gain in retired instructions per clock of about 10% over Piledriver, although that number can rise as high as 20% in certain scenarios. The good news is that Kaveri’s IPC increases should serve to offset the reduction in clock frequency caused by the switch to 28-nm SHP manufacturing, thus keeping CPU performance steady from Trinity and Richland. The bad news is that AMD may be largely treading water in terms of overall CPU performance, while Intel continues to extend its lead.
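To first order, per-thread performance scales with IPC times clock frequency, which makes the treading-water arithmetic easy to sketch (an illustration only; the 10% IPC gain is AMD's claimed average, and the clocks are the boost speeds of the desktop A10-6800K and A10-7850K):

```python
# First-order model: per-thread throughput ~ IPC x clock frequency.
# The 10% IPC gain is AMD's claimed average; the clocks are the boost
# speeds of the A10-6800K (Richland) and A10-7850K (Kaveri).
richland = {"ipc": 1.00, "clock_ghz": 4.4}
kaveri   = {"ipc": 1.10, "clock_ghz": 4.0}

def perf(chip):
    return chip["ipc"] * chip["clock_ghz"]

change = perf(kaveri) / perf(richland) - 1
print(f"Estimated single-thread change: {change:+.1%}")
```

The estimate lands at essentially zero, which is exactly the "offsetting" dynamic described above.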

GCN graphics

Any pain associated with AMD’s ongoing deficit in CPU performance is dulled somewhat by Kaveri’s incorporation of the state-of-the-art GCN graphics architecture. Some 47% of Kaveri’s die space is devoted to graphics, signaling AMD’s commitment not just to gaming, but also to GPU acceleration of general-purpose computing workloads.

A GCN compute unit in a small fraction of its true glory

The move to the Graphics Core Next architecture is a major upgrade over Trinity on both of these fronts, just as it was when the Radeon HD 7000 series supplanted the HD 6000 series. (I’ve outlined the structure of the GCN compute units here.) This is the same generation of graphics technology that AMD built into the chips that power Microsoft’s Xbox One and Sony’s PS4.

More precisely, Kaveri’s compute units are of the same vintage as those in the Hawaii GPU that powers the Radeon R9 290X. This latest revision of GCN includes provisions especially helpful for APUs. The addition of flat system addressing facilitates the sharing of memory between CPU and graphics compute units. Meanwhile, buffering changes should improve the performance of geometry shaders and tessellation in the bandwidth-constrained environs of a CPU socket.

Naturally, Kaveri’s GPU is built on a much smaller scale than the big Hawaii chip. It has only eight compute units, versus 44 on the Radeon R9 290X. Still, those eight CUs endow Kaveri with a total of 512 shader processors and 32 texels per clock of bilinear filtering capacity. The front end can rasterize a single primitive per clock cycle, and two render back-ends give it 16 pixels per clock of ROP throughput. This is a major upgrade from the 384 SPs, 24 tpc of filtering, and 8 ppc of ROP throughput in Trinity—and we haven’t even accounted for the more efficient scheduling and superior GPU computing chops of the GCN architecture.
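Those per-clock rates translate into peak theoretical throughput once multiplied by the GPU clock. A quick sketch at the 720MHz graphics clock of the desktop parts (assuming GCN's usual one fused multiply-add, or two FLOPS, per shader per clock):

```python
# Peak theoretical GPU throughput from the per-clock rates above,
# at the 720MHz graphics clock of the desktop Kaveri parts.
# GCN shaders execute one fused multiply-add (2 FLOPS) per clock.
clock_ghz = 0.720
shader_processors = 512
texels_per_clock = 32
pixels_per_clock = 16

gflops  = shader_processors * 2 * clock_ghz   # single-precision GFLOPS
gtexels = texels_per_clock * clock_ghz        # Gtexels/s of bilinear filtering
gpixels = pixels_per_clock * clock_ghz        # Gpixels/s of ROP throughput

print(f"{gflops:.0f} GFLOPS, {gtexels:.1f} Gtexels/s, {gpixels:.1f} Gpixels/s")
```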

In keeping with Kaveri’s mobile focus, the impact of this wider graphics engine will most likely be felt in lower power bands, where the dual memory channels available inside of a CPU socket are less of a constraint, relatively speaking. We’ve already shown that the previous-gen Richland’s GPU is somewhat bandwidth-constrained in higher power envelopes. If bandwidth remains the primary performance limiter at those power levels, then Kaveri’s wider graphics engine could become starved for work there.

The future is fusion?

What may be Kaveri’s most innovative new technology doesn’t yet benefit current applications. However, it should enable developers to create programs that can use the CPU and GPU cores on a chip together in novel ways. AMD talks about these features under the umbrella of its wide-ranging HSA effort. HSA stands for Heterogeneous Systems Architecture, and it refers to an overarching system architecture for mixed-mode computing (involving CPU cores, GPUs, and possibly DSPs) with its own programming model. AMD’s HSA enablement effort involves building the tools and partnerships to make HSA a viable development platform, both for x86-compatible chips and for SoCs that marry other sorts of CPU cores and graphics engines. The goal is to make it possible to write software that almost effortlessly intermingles the use of CPUs, graphics processors, and other computing engines as needed.

AMD outlined the basic HSA architecture several years ago, and it has been slowly adding features to its chips to make this vision a reality. The first APU, Llano, had a 128-bit Fusion Compute Link that allowed the GPU to access CPU-owned memory in certain cases. This link was an add-on created specifically for mixed-mode computing, since the integrated Radeon had a 512-bit bus of its own. Trinity expanded the FCL to 256 bits wide and changed its path, routing it through an IOMMU and into a unified north bridge between the CPU and graphics cores. Kaveri retains the 512-bit Radeon bus and the 256-bit FCL, and it adds a third 256-bit link from the GPU to the north bridge.

This new link is notable because it provides coherent access to memory. That is, the GPU can read and modify memory locations over this link without worrying about whether the same data is being held or modified in the CPU caches. Much like in a multi-socket server, Kaveri’s hardware ensures that its CPU and GPU cores are properly synchronized and working on correct, up-to-date data. Programmers and compilers need not worry about the hazards created by the GPU reaching into main memory and making a change. Coherent communication is one of the keys to unlocking the GPU’s full participation in heterogeneous computing, and Kaveri is the first chip from AMD to offer this capability.

Kaveri’s coherent FCL pairs up with a couple of other HSA-enabling features to open some new possibilities for programming an APU. Thanks to a feature called hUMA, or heterogeneous uniform memory access, the CPU and GPU can share up to 32GB of memory and access it via a common addressing scheme. hQ, or heterogeneous queuing, allows the GPU to create and dispatch work for itself—or for the CPU. Kaveri’s graphics unit includes eight dedicated asynchronous compute engines (ACEs), independent of the graphics command processor, for scheduling parallel computing work. And Kaveri supports the atomic operations needed for synchronization between the CPU and GPU cores.
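The need for atomics is the same one plain multithreaded CPU code has always had: concurrent read-modify-write operations on shared memory lose updates unless they happen atomically. Here is a CPU-only analogy in Python (not HSA code; Kaveri's contribution is extending equivalent primitives across the CPU/GPU boundary):

```python
# CPU-only analogy for why shared memory requires atomic operations:
# several threads doing read-modify-write on one location must do so
# atomically, or updates get lost. This is not HSA code; Kaveri extends
# equivalent atomic primitives across the CPU/GPU boundary.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:            # an "atomic" read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock held for each update
```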

At the Kaveri press event, AMD HSA honcho Phil Rogers offered several examples of how an HSA-compliant APU could intermix CPU and GPU operations for higher performance using simple, less repetitive code. Kaveri is the first chip capable of running that code natively, making it the first real development platform for HSA. If AMD somehow is able to persuade the rest of the industry to standardize on its vision for heterogeneous programming, that could be an even bigger coup than the adoption of the x86-64 ISA back in the Athlon 64 days.

With that said, the implementation of graphics coherency in Kaveri is just a first step, as the presence of three separate buses coming from the GPU indicates. AMD Client Division CTO Joe Macri forthrightly admitted that the three buses could be merged in a future design. One can imagine how a single link could be more power-efficient. For engineering purposes, he told us, replicating the FCL and making it coherent was the easier path for this project. Also, the coherent FCL presently bypasses the GPU’s L2 cache, unlike the non-coherent link. On the CPU side, the L1 cache’s TLB is available on both buses, but the L2 TLB—located in the IOMMU—can only be accessed by one client at a time. In the event of an L2 TLB miss, the IOMMU will walk the page tables, remaining locked the whole time.

Obviously, these limitations aren’t ideal. Macri explained that the goal in this case was keeping things simple and maintaining architectural correctness. The team didn’t want a bug in HSA-related features to delay the product, especially since HSA is about enabling future applications, not current ones. In keeping with AMD’s recent modus operandi of incremental CPU-GPU fusion, we’d expect these restrictions to be removed from future APUs.

Dedicated accelerators and more

Kaveri is about more than Steamroller and GCN. The dedicated media accelerators on the chip have all been updated, too.

The big addition here is the TrueAudio DSP block that AMD built into the latest Radeons—and apparently into the next-gen game console SoCs, as well. TrueAudio is meant to accelerate effects like 3D positional audio in games, removing that burden from the CPU. Like the HSA features, Kaveri’s TrueAudio block is a bit forward-looking, since we don’t yet have any software that can take advantage of it. However, a number of middleware vendors look to be gearing up to support TrueAudio, so we can probably expect to see games use it before too long.

Kaveri’s video accelerators are both updated versions of the ones featured in Trinity and Richland. The UVD 4 video decoder block hasn’t changed much, but AMD says it has improved error resiliency, so videos will continue playing even when the decoder encounters errors in their source files. The VCE 2 encoder block adds support for the YUV444 color format, specifically in order to provide better text quality when using 60GHz wireless displays. H.265/HEVC isn’t supported in VCE 2. AMD is instead talking about using GPU acceleration via OpenCL to assist with the playback of 4K video content encoded in this fashion.

Oh, and one big-ticket checkbox item has been marked: at last, AMD’s latest APU supports PCI Express 3.0 connectivity for off-chip I/O. This addition could pay dividends in several cases, especially when the APU is paired with a couple of discrete graphics cards in a multi-GPU team.

Power management

Like AMD’s past APUs, Kaveri has sophisticated power management capabilities, with dynamic voltage and frequency scaling (DVFS) as well as boost. I suspect AMD isn’t talking as much about this particular area because it’s saving something for the introduction of the mobile Kaveri parts. Compared to prior generations, the firm says, Kaveri has better monitoring of temperatures and activity counters across the chip, allowing it to pursue higher clock frequencies with boost—and thus push the limits of its prescribed thermal envelope—without reducing chip reliability.

AMD did share some preliminary battery life numbers for the mobile version of Kaveri. Those figures come from a 35W APU installed in a system with a 58Whr battery, and the run times look pretty decent—although that is a pretty beefy battery.

One other bit of good news for mobile versions of Kaveri: AMD says the chip draws only about 25 mW of power in an S3 suspend state. That should mean that it’s possible to let a laptop sleep for hours or even days without substantially draining the battery. We’ll have to see how that works out at a platform and system level, but the APU power number sounds very nice.

A new socket: FM2+

Kaveri comes to the desktop with a new type of socket in tow: Socket FM2+. This new plug type has two more pins than the older FM2 standard, and as a result, Kaveri-based APUs won’t drop into pre-FM2+ motherboards.

Happily, Socket FM2+ mobos will accept older Trinity and Richland-based APUs, so there is a measure of backward compatibility in play here. I think most owners of Socket FM2 systems would probably prefer things the other way around, though, so they could drop a new CPU into an older system as an upgrade.

A trio of desktop Kaveris

| Model | Modules/integer cores | Base core clock | Max Turbo clock | Total L2 cache | Graphics CUs | Graphics clock | TDP | Price |
|---|---|---|---|---|---|---|---|---|
| A10-7850K | 2/4 | 3.7 GHz | 4.0 GHz | 4 MB | 8 | 720 MHz | 95 W | $173 |
| A10-7700K | 2/4 | 3.4 GHz | 3.8 GHz | 4 MB | 6 | 720 MHz | 95 W | $152 |
| A8-7600 | 2/4 | 3.3 GHz | 3.8 GHz | 4 MB | 6 | 720 MHz | 65 W | $119 |
| A8-7600 | 2/4 | 3.1 GHz | 3.3 GHz | 4 MB | 6 | 720 MHz | 45 W | $119 |

Yes, I said there is a trio of desktop Kaveri APUs. Look closely above, and you’ll see that the A8-7600 occupies two lines in the table. That’s because this particular model comes with a configurable TDP. The user can pick one of two operating points for it, a 45W peak or a 65W peak, and the chip will run at different clock speeds based on that setting. I’ve already mentioned that most of Kaveri’s improvements will be more acutely felt in lower power envelopes, so perhaps you won’t be surprised to learn that AMD has elected to supply the A8-7600 to us for review. I can’t really complain. We’ve long said AMD’s 65W APUs are its most attractive offerings.

I do wish the A8-7600 were actually becoming available for purchase today, but AMD quotes a vague “Q1 ’14” release time frame for it. The two A10 parts are the ones hitting stores today.

Naturally, we’ve tested the A8-7600 at both 45W and 65W TDP levels. As a fairly direct competitor to the A8-7600, we have Intel’s Core i3-4330. This dual-core, quad-threaded Haswell runs at 3.5GHz, actually a higher clock than the A8’s. That fact doesn’t bode well for the CPU performance match-up, since Intel’s recent cores tend to be substantially faster clock-for-clock than AMD’s. (Then again, Kaveri has twice as many integer cores.) The i3-4330 lists for $138 and has a TDP rating of 54W, smack-dab between the A8-7600’s two configurable levels. The Core i3 features Intel’s HD Graphics 4600 IGP. Haswell’s beefier GT3 and GT3e graphics configs aren’t available in socketed desktop parts.

As expected, the A10-series Kaveris can’t quite reach the same clock frequencies as Richland parts fabbed on a 32-nm SOI process. The 7850K tops out at 3.7GHz base and 4.0GHz boost speeds, several hundred megahertz below the 4.1/4.4GHz operation of the A10-6800K. Graphics clock speeds are down a bit, too, from 844MHz in the 6800K to 720MHz in the 7850K. Kaveri’s wider graphics should still be a clean win, though, provided that there’s enough memory bandwidth available in the socket.

To that end, AMD has expanded support for DDR3-2133 memory speeds across the entire Kaveri desktop lineup. In the Richland lineup, the A10-6800K is the only part with official support for DDR3-2133. The others top out at DDR3-1866.
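The bandwidth at stake is simple arithmetic: each 64-bit channel moves eight bytes per transfer, so peak throughput is channels × 8 bytes × transfer rate. A quick sketch:

```python
# Peak theoretical bandwidth of a dual-channel DDR3 interface:
# each 64-bit channel moves 8 bytes per transfer.
def peak_bw_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    return channels * bytes_per_transfer * mt_per_s / 1000  # GB/s

for speed in (1866, 2133):
    print(f"DDR3-{speed}: {peak_bw_gbs(speed):.1f} GB/s peak")
```

The jump from DDR3-1866 to DDR3-2133 buys roughly 4 GB/s of additional peak bandwidth for the GPU to feed on.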

Test notes and methods

AMD sent us a complete system with the A8-7600 inside. The machine is based on Xigmatek’s Nebula enclosure, which looks like an overgrown Mini-ITX cube. At 13″ x 10″ x 10″, the case is a little big for a mini build. There’s room inside for full-sized PSUs, larger coolers, and double-wide graphics cards, though.

The case has some nice elements, including chunky aluminum side panels affixed with a nifty, tool-free mechanism. Popping off the walls exposes the guts on two sides.

Inside lies a Gigabyte F2A88XN-WIFI motherboard, an Antec High Current Pro 750W PSU, a Samsung 840 Pro 256GB SSD, and 16GB of AMD’s own Gamer Series DDR3-2133 memory. And the A8-7600, of course. Despite the fact that there’s plenty of headroom inside the case, AMD strapped one of Noctua’s low-profile NH-L9a coolers onto the chip.

With Scott in Las Vegas for CES last week, all of our testing was conducted at TR’s northern outpost. We don’t have access to the test rigs and CPUs in Scott’s lab, so we had to make do with a more limited selection of competitors for the A8-7600.

AMD has positioned the A8-7600 opposite the Core i3-4330. We tested the Core i3 on a Mini-ITX motherboard based on Intel’s Z87 platform. We also tested a couple of 45W APUs based on the last-gen Richland silicon. Both are quad-core models; the A8-6500T is clocked at 2.1/3.1GHz, while the A10-6700T runs at 2.5/3.5GHz. The A10’s higher clock speeds make it the more appropriate foil for the A8-7600, which is clocked at 3.3/3.8GHz in 65W mode and 3.1/3.3GHz in 45W mode.

We tested the A8-7600 in its 45W and 65W modes, both with 2133 MT/s memory. The Core i3 doesn’t officially support memory transfer rates over 1600 MT/s, but our Z87 motherboard does, and it had no problem running a pair of DIMMs with the same frequency as the Kaveri rig. Since all of our testing was conducted using the onboard GPUs, we targeted 2133 MT/s for all the configs.

Richland has an 1866 MT/s default memory speed, and we weren’t able to push the A10-6700T and A8-6500T any higher, perhaps because the T-series parts lack unlocked multipliers. Even when we set a 2133 MT/s transfer rate in the firmware, the system booted at 1866 MT/s or slower. The A10-6700T was happy at 1866 MT/s, but the A8-6500T stubbornly stuck to 1600 MT/s no matter what we tried. The 6500T is supposed to support the higher speed, so a motherboard firmware quirk may be responsible for the issues we encountered.

To fill out the lineup, we added a Core i7-4770K. This is Intel’s fastest Haswell chip, so it’s not a direct competitor for the A8-7600 or any of the AMD APUs we’ve tested. The i7-4770K is meant to provide a familiar frame of reference for the rest of the results.

The timeline for this review was very tight, limiting our ability to test additional configurations. We didn’t even get final drivers from AMD until Friday, so we had to work through the weekend just to get these parts tested. More on Kaveri is coming, though. Scott managed to get his hands on the full-fat, 95W A10-7850K during CES. Look for that chip to make its way through our usual CPU test suite soon.

We ran every test at least three times and reported the median of the scores produced. The test systems were configured like so:

| | AMD A8-7600 | AMD A10-6700T | AMD A8-6500T | Intel Core i3-4330 | Intel Core i7-4770K |
|---|---|---|---|---|---|
| Motherboard | Gigabyte F2A88XN-WIFI | Gigabyte F2A88XN-WIFI | Gigabyte F2A88XN-WIFI | ASRock Z87E-ITX | ASRock Z87E-ITX |
| Platform hub | AMD A88X | AMD A88X | AMD A88X | Intel Z87 Express | Intel Z87 Express |
| Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) |
| Memory type | AMD Gamer Series DDR3 SDRAM | AMD Gamer Series DDR3 SDRAM | AMD Gamer Series DDR3 SDRAM | Corsair Vengeance Pro DDR3 SDRAM | Corsair Vengeance Pro DDR3 SDRAM |
| Memory speed | 2133 MT/s | 1866 MT/s | 1600 MT/s | 2133 MT/s | 2133 MT/s |
| Memory timings | 10-11-11-30 2T | 9-10-11-27 2T | 8-9-10-23 2T | 10-11-11-30 2T | 10-11-11-30 2T |
| Platform drivers | AMD Catalyst 13.30 RC2 | AMD Catalyst 13.30 RC2 | AMD Catalyst 13.30 RC2 | Intel INF 9.4.0.1027, RST 12.8.0.1016 | Intel INF 9.4.0.1027, RST 12.8.0.1016 |
| Audio | Realtek ALC889 with 2.73 drivers | Realtek ALC889 with 2.73 drivers | Realtek ALC889 with 2.73 drivers | Realtek ALC1150 with 2.73 drivers | Realtek ALC1150 with 2.73 drivers |
| Integrated graphics | Radeon R7 | Radeon HD 8650D | Radeon HD 8550D | HD Graphics 4600 | HD Graphics 4600 |
| IGP drivers | AMD Catalyst 13.30 RC2 | AMD Catalyst 13.30 RC2 | AMD Catalyst 13.30 RC2 | Intel 15.33.8.64.3345 | Intel 15.33.8.64.3345 |
| Solid-state drive | Samsung 840 Pro 256GB | Samsung 840 Pro 256GB | Samsung 840 Pro 256GB | Samsung 840 Pro 256GB | Samsung 840 Pro 256GB |
| Power supply | Antec High Current Pro 750W | Antec High Current Pro 750W | Antec High Current Pro 750W | Corsair AX850 850W | Corsair AX850 850W |
| OS | Windows 8.1 Pro | Windows 8.1 Pro | Windows 8.1 Pro | Windows 8.1 Pro | Windows 8.1 Pro |

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1200 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Watts Up Pro digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264. All power testing was done with the Antec High Current Pro 750W PSU.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.
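Turning those wall-socket power samples into an energy figure for a workload is straightforward arithmetic; here is a sketch (the sample values are invented for illustration, not our measurements):

```python
# Sketch: estimating task energy from wall-power samples logged at 1 Hz.
# These readings are invented for illustration, not our measurements.
samples_w = [55, 80, 82, 81, 79, 60]   # watts, one sample per second
interval_s = 1.0

# Trapezoidal integration of power over time yields energy in joules.
energy_j = sum((a + b) / 2 * interval_s
               for a, b in zip(samples_w, samples_w[1:]))
print(f"{energy_j:.1f} J, or {energy_j / 3600:.4f} Wh")
```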

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

Before diving into our gaming and application tests, we’ll take a moment to look at a handful of lower-level metrics, starting with memory subsystem performance. Keep in mind that only the A8-7600 and the Intel CPUs are running their memory at 2133 MT/s. The A10-6700T config has a slower 1866 MT/s memory speed, and the A8-6500T is limited to 1600 MT/s.

The A8-7600 is a fair bit faster than its Richland-based siblings in our Stream memory bandwidth test. That’s to be expected given the Kaveri chip’s higher clock speeds, especially versus the A8-6500T. The A10-6700T is a closer match for the A8-7600, but it can’t keep up, either.

While Kaveri looks fast versus Richland, it lags well behind the Haswell competition. The Core i3-4330 wrings much higher bandwidth from the same memory setup as the A8-7600.

Dialing back the A8-7600’s thermal envelope has only a minimal impact on memory bandwidth, at least in this test. Let’s see what Sandra has to say.

This multithreaded test measures the bandwidth of all caches on all cores concurrently. The different block sizes step us down from the L1 and L2 caches into L3 and main memory. Notice how the A8-7600’s performance starts to fall off after 64KB, when the test spills out of the L1 cache, and after 4MB, when it exceeds the capacity of the L2 cache and pushes into system memory. Neither Kaveri nor Richland has an integrated L3 cache, so the test hits main memory when it runs out of L2.

The A8-7600 has higher cache bandwidth than the Richland chips we tested. The Core i3-4330 delivers substantially higher throughput than the A8-7600 at smaller block sizes, though. Those two chips are closely matched from 128KB through 512KB, but the Core i3 slows down as larger block sizes push into its L3 cache. The A8-7600’s larger L2 cache has an edge until the caches are exhausted and the test becomes bound by the system memory interface.

The Core i7-4770K runs away with this test thanks to a combination of higher clock speeds, greater L1 and L2 cache capacity (via additional cores), and a larger L3 cache. Remember that it’s not a direct competitor to the A8-7600 or any of the other contenders.

Next, we’ll look at Sandra’s cache and memory latency test. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. You can read more about this test right here.
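To see why an in-page random pattern frustrates prefetchers, consider how such a pattern can be built: indices are scrambled within each page-sized window, so no stride is predictable, yet every access stays inside the current page. A sketch of one way to construct it (an assumption-laden illustration, not Sandra's actual implementation):

```python
# Sketch of an "in-page random" access pattern: shuffle indices within
# each page-sized window so no stride is predictable, yet every access
# stays inside the current page. Not Sandra's actual implementation.
import random

def in_page_pattern(n_elems, elems_per_page):
    order = list(range(n_elems))
    for base in range(0, n_elems, elems_per_page):
        window = order[base:base + elems_per_page]
        random.shuffle(window)
        order[base:base + elems_per_page] = window
    return order

pattern = in_page_pattern(16, 4)
# Every shuffled index remains within its original 4-element "page."
assert all(i // 4 == idx // 4 for i, idx in enumerate(pattern))
```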

Again, the results expose the cache configurations of each chip. This test is single-threaded, so the presence of additional CPU cores doesn’t affect the results. The Core i7-4770K has lower access latencies than the i3-4330 only because of the difference in L3 cache size.

All of the AMD chips perform comparably until the 4MB block size. Starting at that point, the A8-7600 configs exhibit higher latencies than the A10-6700T and A8-6500T. The looser timings required by the A8-7600’s 2133 MT/s memory could explain the difference.

Some quick synthetic math tests

AIDA64 has a collection of synthetic CPU benchmarks, some of which take advantage of the new instructions supported by the latest AMD and Intel CPUs. If you’re curious, this page has details on each test. The CPU PhotoWorxx and Hash tests both employ AVX2 and XOP instructions. So do the FPU Julia and Mandel tests, which also support FMA4 code.

The A8-7600 can’t catch its Core i3 competition in the PhotoWorxx test, and it’s way behind in the two FPU tests. The chip outpaces the Intel duallie in the CPU Hash test, though. That test uses the SHA1 algorithm and runs much faster on Kaveri than it does on Richland. Of course, the A8-7600 also has higher CPU and memory clocks than the A10-6700T and A8-6500T. The tight race between those Richland chips suggests memory bandwidth isn’t a major constraint in the CPU Hash test.

Given the different CPU and memory frequencies of our APU configs, it’s difficult to get a sense of Kaveri’s IPC improvements over Richland. We may have to revisit that topic with more targeted testing in the future. Given the timeline for this review, we elected to spend more time testing actual games and applications. Speaking of which, let’s see how Kaveri’s GCN-derived Radeon handles cutting-edge DirectX 11 titles.

Battlefield 4

Unfortunately, the A8-7600 isn’t part of AMD’s Battlefield 4 bundling promo. The chip runs the game rather well, though, as we learned while blasting through a portion of the single-player campaign’s Shanghai mission. As usual, we tested the game by measuring each frame of animation produced. The uninitiated can start here for an intro to our methods.
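The metrics used throughout these results (FPS average, 99th-percentile frame time, and time spent beyond a "badness" threshold) are easy to compute from a log of per-frame render times. A sketch with invented numbers:

```python
# Sketch of the frame-time metrics used in these results: FPS average,
# 99th-percentile frame time, and time spent beyond a "badness"
# threshold. The frame times below are invented for illustration.
frame_times_ms = [16.7] * 97 + [40.0, 55.0, 70.0]

def metrics(times_ms, threshold_ms=50.0):
    total_s = sum(times_ms) / 1000
    avg_fps = len(times_ms) / total_s
    rank = len(times_ms) * 99 // 100              # 99th-percentile rank
    pct99 = sorted(times_ms)[rank - 1]
    beyond = sum(t - threshold_ms for t in times_ms if t > threshold_ms)
    return avg_fps, pct99, beyond

fps, p99, beyond_50 = metrics(frame_times_ms)
print(f"{fps:.1f} FPS avg, {p99:.1f} ms 99th percentile, "
      f"{beyond_50:.1f} ms beyond 50 ms")
```

Note how three slow frames barely dent the FPS average yet dominate the percentile and "badness" numbers, which is why we lean on the latter two.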

We stuck to a 1920×1080 display resolution for all our game testing. Surprisingly, the A8-7600 handled that resolution with medium detail settings.



At these settings, Battlefield 4 runs much better on the A8-7600 than on any of the other configs. All our metrics agree; the Kaveri setups have higher FPS averages, lower 99th percentile frame times, and fewer frames beyond each of our “badness” thresholds. Frame production isn’t silky smooth, as the frame time plot indicates, but it’s a massive improvement over the other solutions.

The data match my subjective impressions. BF4 may not be especially pretty with medium details, but it’s definitely playable on the A8-7600’s integrated graphics, and there isn’t much of a penalty associated with shifting the chip into 45W mode. That said, 28 FPS is a little on the sluggish side for multiplayer gaming. You may want to lower the in-game detail when playing online, where 64-player servers can generate a lot more on-screen mayhem than the average campaign mission.

We had hoped to test BF4 with multiple memory speeds, but the A8-7600 wouldn’t boot with the DIMMs set to 2400 MT/s. We did manage to get Kaveri running with 1866 MT/s memory, though. That setup dropped the 65W config’s FPS average by two frames per second and increased its 99th percentile frame time by 2.5 milliseconds—relatively small changes. Those small deltas suggest that the A10-6700T’s deficit is due to more than just its slower memory interface. The A8-7600’s closest Richland-based competition has much higher frame latencies.

Tomb Raider

The Tomb Raider reboot is next. In this game, we ran through the jungle and pilfered a dead man’s bow and arrow. Most of the detail settings were left at the “normal” defaults.



Once again, all our metrics agree that the A8-7600 offers the best performance of the bunch. My seat-of-the-pants impressions concur. Playing on the A8-7600 feels substantially smoother regardless of the TDP configuration. Tomb Raider is still playable on the other setups, but the experience is definitely compromised.

Although the Core i3-4330 and Core i7-4770K are soundly trounced by the A8-7600, the Intel chips are surprisingly competitive with the A10-6700T. Haswell’s onboard GPU can’t take all the credit, though. Integrated graphics performance is highly dependent on memory bandwidth, and the Intel configs are running faster memory than the Richland setups.

Batman: Arkham Origins

Game benchmarking sounds fun until you realize that it involves repeating the same 60-second sequence over and over again. That usually gets old pretty fast. However, after several days of non-stop testing, I’m still not sick of brawling through the “Panorama” challenge map we used to test Batman: Arkham Origins.



The contest is tighter this time, but the end result is the same. The A8-7600 delivers much more fluid frame delivery than the competition.

Yes, there are still spikes in its frame time plot. And no, Arkham Origins doesn’t look exceptional with so much of its eye candy turned off. But the game’s timing-focused combat makes it easy to feel the performance differences between the A8-7600 and its peers. The Richland-based A10-6700T is noticeably slower than the Kaveri configs.

As we’ve seen throughout our gaming tests, the 45W Kaveri config offers nearly all of the gaming performance of the 65W setup. Our metrics show a consistent delta between the two settings, but I couldn’t discern much of a difference while actually playing each game.

Productivity

JavaScript performance

We tested JavaScript performance using the SunSpider and Kraken benchmarks running in Google Chrome.

The A8-7600 is stuck between Haswell and Richland here. The Core i3-4330 has a considerable lead over the fastest Kaveri config, which in turn has a smaller advantage over the A10-6700T.

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, which also work with Richland and Kaveri. We’ve included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.

In the AES test, the Core i3-4330 slips between the two Kaveri configs. It’s not fast enough to keep up in the Twofish test, though. The A10-6700T isn’t fast enough to keep up with the A8-7600 in either test.

7-Zip file compression and decompression

The first of two compression tests, 7-Zip doesn’t employ any specialized hardware acceleration.

The A8-7600 fares reasonably well here. Its 65W incarnation is only a smidgen behind the Core i3-4330 in the compression test, and it has a comfortable lead over the Intel chip in the decompression test. Capping the chip’s TDP at 45W lowers performance somewhat, but the A8-7600 still performs better than the A10-6700T in the same thermal envelope. With slower CPU and memory frequencies, the A8-6500T continues to bring up the rear.

WinZip file compression and decompression

Unlike 7-Zip, WinZip has built-in OpenCL acceleration. It doesn’t include a benchmark, so we used a stopwatch to time how long it took to compress and decompress 1.5GB of application, MP3, RAW, JPEG, Excel, and text files.
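WinZip's engine isn't scriptable for our purposes, but the stopwatch method itself is easy to reproduce. Here's a rough stand-in using Python's built-in zlib rather than WinZip (the payload below is illustrative, not the review's 1.5GB test set):

```python
import time
import zlib

def time_roundtrip(data, level=6):
    """Time compression and decompression separately, stopwatch-style."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    t_compress = time.perf_counter() - t0

    t0 = time.perf_counter()
    restored = zlib.decompress(packed)
    t_decompress = time.perf_counter() - t0

    assert restored == data  # sanity-check the round trip
    return t_compress, t_decompress
```

Note that compression and decompression are timed separately, since (as the results here show) a chip can be competitive in one phase and well behind in the other.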

Although the A8-7600 compresses our file set in about the same amount of time as the Core i3-4330, the Intel chip is much faster in the decompression test. Interestingly, the Richland-based A10-6700T is way behind in the compression test but barely off the pace in the decompression test.

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the Qt SDK using the GCC compiler. Here’s Bruno’s note about how he built it:

QT SDK 2010.05 – Windows, compiled via the included MinGW port of GCC 4.4.0.

Even though apparently at the time the Linux version had properly working and supported multithreaded compilation, the Windows version had to be somewhat hacked to achieve the same functionality, due to some batch file snafus.

After a working multithreaded compile was obtained (with the number of simultaneous jobs configurable), it was time to get the compile time down from 45m+ to a manageable level. This required severe hacking of the makefiles in order to strip the build down to a more streamlined version that preferably would still compile before hell froze over.

Then some more fiddling was required in order for the test to be flexible about the paths where it was located. Which led to yet more Makefile mangling (the poor thing).

The number of jobs dispatched by the Qtbench script is configurable, and the compiler does some multithreading of its own, so we did some calibration testing to determine the optimal number of jobs for each CPU.
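That calibration step can be sketched like so. `build_cmd` is a placeholder for the actual Qtbench/make invocation, and the candidate job counts are illustrative, not the values used in testing:

```python
import subprocess
import sys
import time

def calibrate_jobs(build_cmd, candidates=(2, 4, 6, 8)):
    """Run the build once per -j value and return the fastest setting."""
    best_jobs, best_time = None, float("inf")
    for jobs in candidates:
        t0 = time.perf_counter()
        subprocess.run(build_cmd + ["-j", str(jobs)], check=True)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_jobs, best_time = jobs, elapsed
    return best_jobs, best_time
```

The optimal job count usually lands near the CPU's hardware thread count, but it's worth measuring, since compiler-level threading and I/O contention can shift the sweet spot.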

Score another one for the Core i3-4330. The A8-7600 takes more than a minute longer to finish our compiling test, and that’s in 65W mode. Lowering the TDP extends the chip’s compiling time by about a minute and a half, putting it even farther behind. At least the A8-7600 has a healthy advantage over the A10-6700T. The gap between the Kaveri chip and its closest Richland competition is large enough to suggest that IPC improvements are partially responsible.

Video encoding

x264 HD video encoding

Our x264 test uses a build of the encoder that supports both AVX2 and FMA instructions. To test, we encoded a one-minute, 1080p .m2ts video using the following options:

--profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr

The source video was obtained from a repository of stock videos on this website. We used the Samsung Earth from Above clip.
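Strung together, the encoder invocation would look something like the sketch below. The input and output filenames are placeholders, not our actual test files:

```python
# Hypothetical assembly of the x264 command line from the options above;
# "input.m2ts" and "out.mkv" are placeholder filenames.
import subprocess

cmd = ["x264", "--profile", "high", "--preset", "medium", "--crf", "18",
       "--video-filter", "resize:1280,720", "--force-cfr",
       "--output", "out.mkv", "input.m2ts"]
# subprocess.run(cmd, check=True)  # uncomment with a real source file
```

The `--crf 18` setting targets near-transparent quality, while the resize filter downscales the 1080p source to 720p during the encode.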

The A8-7600 doesn’t quite catch the Core i3-4330 here. The 65W config comes close, but it’s not fast enough.

Even with its TDP dialed back to 45W, the A8-7600 has a considerable edge over the A10-6700T. Kaveri trumps Richland once more, with the handicapped A8-6500T stuck in last place as usual.

Handbrake HD video encoding

Our Handbrake test transcodes a two-and-a-half-minute 1080p H.264 source video into a smaller format defined by the program’s “iPhone & iPod Touch” preset. The latest official version of the encoder is supposed to support OpenCL, but we couldn’t find a way to enable the feature on any of our test systems. Installing each platform’s OpenCL SDK didn’t help, either.

We had to fall back to an older, OpenCL-specific Handbrake build to let the integrated GPUs assist with the encoding process. That build didn’t get along with the Intel processors, though. The OpenCL option was present when we opened the app, but it disappeared after our source file was selected.

It’s a shame the OpenCL build didn’t work on the Intel CPUs, because the A8-7600 and Core i3-4330 are neck-and-neck in the standard test. I’m curious to see if the two chips would still be evenly matched with GPU acceleration thrown into the mix.

Don’t compare the standard and OpenCL-accelerated encoding times to each other. Those sets of results come from Handbrake builds released months apart, so other factors may contribute to the differences—or the relative lack thereof.

Do, however, note that the A8-7600 leads the A10-6700T by another wide margin.

Image processing

The Panorama Factory photo stitching

The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. We asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

Another test, another example of the A8-7600 failing to catch the Core i3-4330 but managing to stay comfortably ahead of the A10-6700T.

Photoshop CC

Photoshop CC’s smart sharpen filter uses OpenCL for noise reduction. We used a stopwatch to time how long it took each system to sharpen an 18-megapixel RAW image file.

The A8-7600 65W takes nearly 50% longer than the Core i3-4330 to sharpen our test image. Lowering the thermal envelope to 45W adds another two seconds to the filter’s execution time, which puts the Kaveri-based chip behind the A10-6700T. The A10 chip has a 45W TDP, too, but it has a higher peak Turbo speed than the 45W Kaveri setup. That 200MHz advantage is enough for the A10-6700T to steal a small victory over its successor.

Musemage

Like Photoshop, Musemage is an image editing application with OpenCL acceleration. We used the built-in benchmark, which applies a series of filters to an image before producing an overall score.

AMD runs the table here. The A8-7600 nearly doubles the performance of the Core i3-4330, and it’s way ahead of the i7-4770K. All of the APUs, including even the A8-6500T saddled with 1600 MT/s memory, manage to beat the Intel CPUs in this test.

This time around, the A10-6700T’s Turbo advantage over the A8-7600 45W isn’t enough to tip the scales in Richland’s favor. Kaveri’s IPC enhancements and higher memory speed keep the A8-7600 ahead of its 45W predecessor.

3D rendering

LuxMark

Because LuxMark uses OpenCL, we can use it to test both GPU and CPU performance. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer installable client drivers (ICDs) for OpenCL on x86 processors, and both support AVX. The AMD APP driver even supports FMA4 and XOP, the distinctive instructions of Bulldozer and Piledriver. We used the Intel ICD on the Intel processors and the AMD ICD on the AMD chips.

Interesting. Intel has a clear advantage in CPU performance, while AMD has the edge in GPU horsepower. When both components are working together to render the scene, the scales tip in Intel’s favor. The Core i3-4330’s faster CPU cores are just too much for the A8-7600’s integrated Radeon to overcome.

To AMD’s credit, the A8-7600 scores better than the A10-6700T. The difference in CPU performance is relatively small, but the gaps in the GPU and combined tests are huge. Some of Kaveri’s advantage there probably comes courtesy of its faster memory interface. The deltas between the A10-6700T and A8-6500T suggest that the GPU and combined tests are particularly sensitive to memory bandwidth.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs first with a single thread and then with as many threads as the CPU has available, counting hardware threads on CPUs with multiple threads per core.

The A8-7600 goes zero for two in Cinebench. Its single-threaded performance is substantially slower than that of the Core i3-4330, and the multithreaded test doesn’t provide much relief.

The multithreaded test gives Kaveri a chance to beat up on Richland a little, though. In that test, the A8-7600 has a big lead over the A10-6700T. The difference between the two chips is much smaller in the single-threaded test.

Power consumption and efficiency

Our workload for this test is encoding a video with x264, based on a command ripped straight from the x264 benchmark earlier in the review. The first graph below shows system power consumption over the duration of the test.

The A8-7600 completes the encoding workload much more quickly than the Richland-based APUs. It also has much higher peak power consumption during the encoding process, but there’s little difference in idle power draw between the AMD offerings. Meanwhile, the Core i3-4330 has lower idle and peak power consumption than anything in the AMD camp. And it finishes encoding the video file faster than the competition, too.

Note that the 45W and 65W Kaveri configs have identical idle power consumption. The lower TDP limit cuts the system’s peak power consumption by almost exactly 20W, which is what we’d expect.

We can quantify efficiency by looking at the amount of energy used, in kilojoules, during the entirety of our test period, when the chips are busy and at idle.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
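As a concrete example of the arithmetic, task energy is just power integrated over the encode's duration. A minimal sketch with made-up numbers, not our measured data:

```python
def energy_kj(power_watts, interval_s=1.0):
    """Sum power samples (in watts) taken every interval_s seconds,
    returning total energy in kilojoules."""
    return sum(power_watts) * interval_s / 1000.0

# A chip drawing 80 W steadily for 120 s of encoding uses
# 80 * 120 / 1000 = 9.6 kJ of task energy.
```

This is why a faster chip can win on task energy despite higher peak power: a big cut in encode time outweighs a modest rise in watts.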

With a slower encoding time and higher power consumption, the A8-7600 is less energy efficient than the Core i3-4330. Depending on the configuration, it requires 60-70% more energy to complete the same encoding task.

The A8-7600 and A10-6700T are pretty closely matched on the efficiency front. The Richland chip’s task energy is comparable to that of the two Kaveri configs. And, since the A8-7600 finishes the encode and returns to idle faster, it consumes less energy over the full test period.

Conclusions

AMD’s Kaveri APU raises the bar for integrated graphics performance, which is sort of what we expected. What did you think would happen when AMD built a processor infused with GCN-class Radeon hardware?

To be honest, I didn’t expect something that plays Battlefield 4 as well as the A8-7600. Despite sporting a cut-down version of Kaveri’s integrated GPU, the A8-7600 still pumps out playable frame rates at 1080p resolution with medium details. And it’s powerful enough to handle other big-name DirectX 11 titles, too. Some in-game eye candy has to be disabled to get playable frame rates, of course, but that’s true for all integrated graphics implementations. The fact is the A8-7600 delivers a better overall experience with fewer compromises than direct rivals based on AMD’s older Richland chips and Intel’s latest Haswell parts.

Kaveri’s potent onboard Radeon also has benefits beyond gaming. General-purpose computing applications can leverage the graphics hardware to tackle less trivial tasks, and HSA could make things really interesting down the road. Right now, though, the A8-7600 doesn’t have a clear advantage in so-called “accelerated” applications. The results of our OpenCL-accelerated tests were mixed, and they highlight the fact that the GPU is only one part of the processor. Kaveri’s Steamroller cores have to hold up their end of the bargain, too.

To AMD’s credit, Steamroller appears to have higher per-clock performance than the Piledriver cores familiar from Richland. Kaveri’s advantage seems to be especially prominent in multithreaded tests, and it’s nice to see the company making progress on the CPU performance front. There’s more work to be done, though. The Core i3-4330 beat the A8-7600 in the bulk of our non-gaming tests, including most of the multithreaded ones. Intel continues to have advantages in single-threaded performance and power efficiency, as well.

We’ve seen this dynamic with previous APUs, and it’s always made for a tough sell on the desktop. Gamers who actually care about graphics performance are better off with discrete video cards that deliver better visuals and smoother frame delivery, while those who don’t care about gaming are better served by Intel chips with higher per-thread performance and lower power consumption (which typically leads to lower noise levels). APUs occupy this awkward middle ground for so-called casual gamers who want something better than an Intel IGP but not as good as a halfway-decent graphics card. As Jerry Seinfeld would say, “Who are these people?” Seriously, I’ve never met one.

Now, Kaveri may be a questionable proposition for traditional desktops, but it has some appeal everywhere a discrete graphics card isn’t an option. Small-form-factor and all-in-one rigs seem particularly ripe for an APU like the A8-7600, which could bring a dose of graphics grunt to machines that typically offer poor gaming experiences.

Comments closed
    • lord_eamon
    • 6 years ago

    does the Noctua L9a touches the ram?? i checked noctua’s website and it said that this Gigabyte GA-F2A88XN-WIFI motherboard is not compatible with the cpu cooler. i really need to know since i already ordered this mobo and have the cpu cooler from my previous build. it sucks that i just find out about this a while ago and i can’t stop thinking about it. thank you.

    • NeelyCam
    • 6 years ago

    Interesting article from ExtremeTech:

    [url<]http://www.extremetech.com/computing/174980-its-time-for-amd-to-take-a-page-from-intel-and-dump-steamroller[/url<]

    The graph on the first page, comparing Kabini IPC to Richland and Kaveri, is particularly interesting. I didn't realize Kabini has better IPC than Richland. I guess I just assumed it would be lower since the core is tuned for low power consumption.

    The article argues that AMD should do an "Intel" and dump its speedster big core (like Intel's P4) for a more efficient mobile core (Intel's Pentium M).

      • LostCat
      • 6 years ago

      Kind of awkward that they say ‘Higher is more efficient’ on the graph yet ‘a score less than 100% indicates that Kabini is less efficient than its big-core rival, while a score of greater than 100% means Kabini is more efficient.’

      Wait, what?

        • NeelyCam
        • 6 years ago

        Yeah, it took me a while to figure out what the graph really meant… and you need to read the text carefully to understand – the axis labels won’t help

          • LostCat
          • 6 years ago

          Oh geez. Now I feel dumber.

      • Rza79
      • 6 years ago

      One of the ‘noobiest’ articles I’ve ever seen from Extremetech. Totally clueless.

    • Gadgety
    • 6 years ago

    On top of the Techreport page there’s an ad for an AMD A8 Gigabyte Brix system. It doesn’t use the Kaveri A8-7600 chip however. I think if it did, it would be selling like hotcakes. I know I would buy one today if it were. Tests of the A10-7850k shows that the A8-7600 performs very well indeed, and that the scaling of the A10 is approximately 15% performance gain, but there is little to be gained from the 512 graphic stream processors compared to the A8’s 384. AMD’s launch strategy seems totally confused. They launch the A10, available to consumers first presumably because it is a higher margin product. Then they send out the A8 to be reviewed, showing that the A10 isn’t performing that well. Meanwhile the hardware providers are offering the old Richland platforms, because the A8 platform was launched late. Must be hurting AMD.

    • Gadgety
    • 6 years ago

    “APUs occupy this awkward middle ground for so-called casual gamers who want something better than an Intel IGP but not as good as a halfway-decent graphics card. As Jerry Seinfeld would say, “who are these people?” Seriously, I’ve never met one.”

    Because you don’t cater to that segment of users. The so-called casual gamers are families with children who like to game, but aren’t into demanding high fps games. They live in spaces that may be too tight, as are their budgets. The computer will serve the parents’ needs, and the kids’. It will function as an htpc, surfing platform, document handler (waaay fast with Kaveri) and gaming platform. So, now you know.

      • Stickmansam
      • 6 years ago

      Thing is you can get almost as much done with an AMD/Intel CPU + dGPU for the same price. And since they are “casual” HSA won’t matter to them nor will having 4 cores (instead of say an i3) matter as well. So the AMD APU for now is good for a niche that is casual gaming in highly space/thermally constrained. When and if HSA does take off then it will be good for power users as well.

      Thing is people usually want good graphics (not quite yet offered by APU’s) or don’t really care (Intel iGPU/lower end APU’s). There is a segment for people who want to play more demanding games than Intel iGPU can handle but an APU still can.

    • ronch
    • 6 years ago

    Frankly, I find the FX-8350 a lot more compelling. People complain how they don’t need 8 CPU cores and few apps can use 8 cores. Well, ok, so how many apps can harness all the GCN cores anyway?

    • Anovoca
    • 6 years ago

    I was also curious where this chip would fall in-line in the desktop world and I think the answer would be budget builds. That having been said, the side by side comparison of the Kaveri vs an i3, while interesting, offers little to help solve the question of which is the best processor for under $100 for someone with a discrete GPU. Without Celeron or Pentium in the equation the major questions still remains unanswered.

    With Steam OS on the verge and prices leaking out showing even some of the cheaper manufacturers offering systems in the 500+ range, a lot of us looking to only spend 300-400 may look to self-build rigs and have our eyes on the higher end versions of the Kaveri or Pentium family chips.

      • Airmantharp
      • 6 years ago

      Simplified:

      Best inexpensive CPU with integrated graphics: Kaveri
      Best inexpensive CPU with discrete graphics: not Kaveri

    • anotherengineer
    • 6 years ago

    Is this the first FM2+ board with the A88X chipset that TR has tested? If so will you be doing a partial review on the mobo? Like the sata speeds, etc. or is that something for another day??

    • ronch
    • 6 years ago

    There are some people who can’t see the point of these APUs. I’ve always had an affinity for AMD but really, as Scott said, it’s difficult to see who would really buy these. A lot of people buy these, I know, but really, why? Any serious gamer will want more powerful CPU cores propping up a much faster GPU, and casual users wouldn’t care much for the IGP here and would probably much rather have stronger CPU cores. So where do these APUs really fit in? Gamers on a budget who are biased towards AMD? People who think HSA will take off? People who are just curious? At roughly the same price I could pair a Core i3 with, say, a Radeon HD7750 and get a more powerful, more balanced system, along with the option of being able to upgrade to a much more powerful Core i7 later on and not having to worry about disabling the APU’s IGP later on when adding a discrete video card or having to put up with these APUs’ pickiness as to which discrete graphics cards will work with them in Crossfire mode. HSA and all those GPGPU promises are a little hard to swallow too and are akin to how 3DNow! promised much higher performance as long as programmers will go the extra mile and code specifically for it. And even then performance gains weren’t consistent from game to game.

    Look, I’d like to give these APUs a chance, really, but unless I’m a gamer on a budget with some AMD bias, I would probably get an Intel CPU with a low end graphics card. That’s exactly what we did when we were faced with the choice.

      • chuckula
      • 6 years ago

      Like I keep saying: Forget Intel & Nvidia. Throw them out the window. AMD’s own products do a great job of competing with Kaveri.

      Kaveri in a notebook: I’m interested since the IGP is much more vital to a mobile platform. Let’s see what comes out later this year.

        • swaaye
        • 6 years ago

        Why would notebooks with it go differently than with the previous APUs? It is just a slight performance bump in graphics. It is a shame there is no answer to the graphics memory bandwidth problem.

        HSA at best will open up some niche things but my guess is it won’t really go anywhere until new chips have replaced Kaveri anyway.

          • ptsant
          • 6 years ago

          [quote<] Why would notebooks with it go differently than with the previous APUs? It is just a slight performance bump in graphics. It is a shame there is no answer to the graphics memory bandwidth problem. [/quote<]

          I feel, judging from the overclocking potential we have seen, that the whole Kaveri design has been tuned for low power and performs admirably in the sub-65W range. I don't think it will scale as well as Haswell (which is also tuned for low power), but the difference in the same low-power range will be quite impressive. You can already see the 7600 vs the 6500T, for example. This matters in a notebook.

            • ronch
            • 6 years ago

            Or maybe the old 45w APUs are just too slow in the first place, making them too easy to beat.

      • the Lionheart
      • 6 years ago

      1- A serious gamer would indeed want a more powerful CPU, but then again, Kaveri is not a mere CPU and Mantle reduces CPU overhead dramatically to the point where Oxide games were able to push 100,000/s draw calls to the GPU using a heavily under clocked FX 8350 (I think it FX CPU was at ~2 GHz).
      With most PC games NOT likely to be Mantle exclusives, you won’t see 100K draw calls/s in any game as you could never push beyond 5K draw calls with DX/GL, which makes having a high end X86 CPU for games a pointless draw.

      2- Open-CL != HSA.
      HSA, while has been just introduced with Kaveri, is not an AMD-only thing. HSA is an ISA not an API, and it’s supported by industry giants such as ARM, Imagination Technologies, MediaTek, Qualcomm, Samsung, Texas Instruments and AMD. You can see the difference between HSA support and Open-CL acceleration in the OpenOffice LibreCalc test here:

      [url<]http://www.extremetech.com/computing/174632-amd-kaveri-a10-7850k-and-a8-7600-review-was-it-worth-the-wait-for-the-first-true-heterogeneous-chip/5[/url<]

      It seems that you're not familiar with the concepts of shared virtual memory and shared execution, where execution of the same process can move back and forth between CPUs and GPUs by using pointers and simple signaling mechanisms. An HSA processing agent can create tasks and place them in the native execution queues of the other processors without having to go through a context switch, which is an innovation!

      3- There's almost no doubt that Mantle will obtain HSA support. What this translates to in games is offloading of workloads such as physics simulations, audio processing and AI to the GPU and integrated sound processors in Kaveri. The kind of physics effects and AI that we could get in those games, and that could run much better on AMD chips, is the real gaming potential of HSA.

        • ronch
        • 6 years ago

        You practically just repeated what I’ve said: Kaveri needs devs to code specifically for its new features in order to shine. Other industry giants may also have things like HSA, making it a much-supported technology, except those you mentioned can’t really make a big impact in the PC space. In the PC space, it’s Intel who decides which things will gain momentum and which won’t. Need proof? Look at 3DNow!, FMA4, SSE4a, etc. Sure, they say they’ll ‘support’ it… which results in just a handful of titles ending up supporting it, and most probably won’t implement the feature nearly well enough to convince others to follow. Sad but true.

    • hyperspaced
    • 6 years ago

    This review and the whole writing style is clearly biased towards Intel processors. It performs the test by OVERCLOCKING the Intel processor’s memory to 2133MHz, while the Intel specs clearly state that 1600MHz is the standard (base) frequency.
    Just because you CAN, doesn’t mean you SHOULD, when performing comparisons.

    Everyone knows that 28nm is less energy efficient than 22nm. However, not by 80% as you claim. It would be nice to see the energy efficiency and consumption when applying filters on OpenCL-enabled graphics applications e.g. MuseMage or Photoshop CC w/ OpenCL support.

    But, oh, wait, you performed the energy efficiency test on x264 encoding which doesn’t even use the GPU compute units… (not to mention that x264 1st pass doesn’t even parallelize).

    Try to be reasonable and fair next time or you will start losing readers.

      • chuckula
      • 6 years ago

      [quote<]It performs the test by OVERCLOCKING the Intel processor's memory to 2133MHz, while the Intel specs clearly state that 1600MHz is the standard (base) frequency. Just because you CAN, doesn't mean you SHOULD, when performing comparisons.[/quote<]

      Uh... not even going to dignify that with a response considering that if it weren't for the majority of Intel systems out there "cheating" with overclocked RAM, the demand for DDR3-2133 would be so minuscule that the price would be astronomical instead of just inflated. So AMD is benefiting from all that "cheating."

      Oh... as a followup: this is the first time I've ever seen an AMD shill accuse Intel of making it trivially easy to "overclock" even low-end i3 parts! That's the beauty of being a "true" believer: logic, reason, consistency, and sanity can be swept aside when they become inconvenient to the troll of the moment.

      [quote<]Everyone knows that 28nm is less energy efficient than 22nm. However, not by 80% as you claim.[/quote<]

      Uh... 28nm is a spatial dimension. 22nm is another spatial dimension. Neither one has any "energy efficiency," and TR never made any claims about the efficiency of spatial dimensions. Instead, TR did a completely fair comparison of two different microprocessors that are made using two different lithographic processes. I'm sorry the results didn't turn out the way your preconceived biases told you they [b<]should[/b<] turn out. Pray, are you a Dallas Cowboys fan who feels that the Cowboys deserve to win the Superbowl every year because Jerry Jones says they are the best team every August?

      [quote<]But, oh, wait, you performed the energy efficiency test on x264 encoding which doesn't even use the GPU compute units...[/quote<]

      Indeed, so the GPU compute units were idle, and the power consumption test clearly underestimated the maximum power consumption that these Kaveri parts might achieve in the real world. What test do you propose that will show Kaveri in a worse light, since clearly TR's benchmark gave AMD an unfair advantage in power consumption metrics by leaving that 47% of the chip idle?

      So why didn't you complain about that MuseMage benchmark that showed Kaveri in an extremely favorable light? Or would you like TR to just delete the rest of the review, leave that benchmark up, and have a conclusion written by a reliable [s<]party member loyal to Dear Leader Kim Jong Un[/s<] uh.. I mean... "professional AMD PR employee" just to make it 100% fair?

      Oh.. it looks like you registered your account today to copy & paste a form letter complaining that TR didn't regurgitate the preconceived results that you wanted for Kaveri a year before it even launched... how quaint. Why don't you just download the AMD press slides, and then cut off your Internet connection so that you don't have to be subjected to all those evil non-AMD PR points of view?

        • hyperspaced
        • 6 years ago

        — This a product comparison, not an overclocking deathmatch. Both products should have been tested at manufacturer specific frequencies. You can do whatever overclocking you want in YOUR system. Hell, you can even stick a dual tail-pipe in your KIA, for all I care. Doesn’t mean all KIAs are faster.

        — If that extra 47% had been used, the total encoding time would be less. Power would have gone up, but not proportionally; that is to a point where total energy would be less. You know, Energy = Power x Time.

        Talking about GPU power, my i5 3570K (standard issue for the loyal members of Kim Jogn Un party) hits a nice 140W when running 3Dmark. So it’s not like Intel’s GPU is made out of superconducting materials and consumes no energy.

        TR should have given power efficiency figures when running other applications as well (e.g. Photoshop rendering with OpenCL support, x264 video playback etc.).

        And, yes, the current trend in programming is to strive to use ALL available cores be it CPU or GPU. And AMD makes that super efficient with hUMA and hQ.

        (BTW, most of the processors on my PC’s are from Intel.)

          • chuckula
          • 6 years ago

          Here’s how I know you’re up to no good:
          For all your disingenuous dramatization and [made-up] protests about how you run so many Intel PCs, not once in your lengthy diatribe did you mention how those big-bad TR fanboys CRIPPLED AMD’s products in this review!

          That’s right! Note that TR very evilly used those Richland chips using the LOWER SPEED RAM that AMD provided TR expressly for the purpose of the review!

          It’s a scandal!
          It’s an outrage!
          It’s done on purpose by AMD to make sure Kaveri’s delta over AMD’s older products looks bigger!
          Oh wait…

          Where is your crusade for truth and justice now?

          • Waco
          • 6 years ago

          Your 3570K does not use 140 watts running 3DMark. Try trolling somewhere else please. 🙂

        • derFunkenstein
        • 6 years ago

        For “not dignifying that with a response” you sure did type a lot.

          • chuckula
          • 6 years ago

          That wasn’t a response though. It was a rant.

            • derFunkenstein
            • 6 years ago

            touche.

      • OneArmedScissor
      • 6 years ago

      Exposing Intel’s disadvantage doesn’t seem biased towards Intel to me. Had they used 1600 MHz RAM, then it would be an open question if Intel’s GPUs are slower because they don’t have enough memory bandwidth.

      Now we know for certain that they are just plain slower.

      [quote<]Try to be reasonable...[/quote<]

      Try taking your own advice.

        • hyperspaced
        • 6 years ago

        OK, now that we know Intel’s GPU is plain slower even with overclocked 2133MHz RAM, shouldn’t we see how much slower its CPU would be with the manufacturer-specified 1600MHz RAM?

      • maxxcool
      • 6 years ago

      umm. you don’t get it do you?

      “”But, oh, wait, you performed the energy efficiency test on x264 encoding which doesn’t even use the GPU compute units…””

      Since the iGPU was not in use.. the power draw will be EVEN WORSE when the iGPU is engaged.. TR actually did you a favor there by NOT showing how much draw Kaveri has under ‘true load’..

      btw. 28nm versus 22nm has ZERO effect on powerdraw. You might need to go back to school for transistor density and switching characteristics which actually define how much juice a transistor will need to flip and how quickly it can cycle.

      Since 47% of the chip was idle.. I wonder what round 2 testing will show when Damage fires up an OpenCL bench and we see the full chip’s power draw, with almost 50% more silicon in use.

        • hyperspaced
        • 6 years ago

        Well, I would really like to know the school that you attended, because keeping all other variables constant, die shrinking = smaller transistors = less voltage required to switch on/off = reduced power (inverse square)

        TR didn’t do me any favor. Unless they post power efficiency figures for tasks that utilize the GPU cores as well, I am not retracting anything I said.

          • derFunkenstein
          • 6 years ago

          Shill alert. This is an account that signed up about 4 hours ago for the express purpose of commenting – and commenting A LOT – on this article.

            • LostCat
            • 6 years ago

            Not surprised people are excited, but some of this is really boneheaded.

            • derFunkenstein
            • 6 years ago

            [quote<]but some of this is really [s<]boneheaded[/s<] sponsored.[/quote<] FTFY

            • LostCat
            • 6 years ago

            Never prescribe to malice what can reasonable construed as idiocy, wasn’t it? Or did I misquote?

            • derFunkenstein
            • 6 years ago

            Normally I think you’re right, but AMD has a history of “malice”, as you put it.

            • LostCat
            • 6 years ago

            Couldn’t think of a better way to say it. *halo*

            • Klimax
            • 6 years ago

            I’d say crazy AMD super-fan. (I think I have seen some of them; some even have pretend-magazine sites where they push “stories”.)

            If he were really a shill, then an extremely bad one, and whoever paid him should want his money back…

            On the other hand:
            [url<]http://alienbabeltech.com/main/would-you-like-to-be-considered-for-the-amd-advocacy-program-free-hw-available/[/url<]
            (It includes only two comments...)

          • maxxcool
          • 6 years ago

          your tech-illiteracy makes me sad inside… don’t you have the cheat sheet for cutting and pasting from the firm that hired you?

        • NeelyCam
        • 6 years ago

        Um.. I agree that the shill doesn’t really know what (s)he is talking about, but you’re wrong about 28nm vs. 22nm having zero effect on power draw. For a given logic function at a given frequency and voltage, 22nm generally consumes less energy because the capacitive load that needs to charge/discharge is lower.

        Moreover, as also the drive strength is generally better, the circuits can operate at the same frequency at a lower supply voltage, reducing energy usage even more. This is one of the main reasons why Intel’s 22nm chips are more energy efficient than AMD’s 28nm ones.

        Since you mention going to school to learn about switching characteristics, you should know all this already. So I don’t really understand why you said what you did. Maybe I misunderstood what you meant…?
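NeelyCam's point follows from the standard CMOS dynamic-power relation, P ≈ α·C·V²·f: a denser node lowers the switched capacitance, and better drive strength allows a lower supply voltage at the same frequency. A toy calculation (the function and the capacitance/voltage numbers are illustrative assumptions, not measured 28 nm or 22 nm values):

```python
# Dynamic switching power: P = alpha * C * V^2 * f
# (alpha = activity factor, C = switched capacitance,
#  V = supply voltage, f = clock frequency)

def dynamic_power(alpha, cap_farads, volts, hertz):
    return alpha * cap_farads * volts**2 * hertz

# Same logic block at the same 3 GHz clock; assume the smaller node
# cuts switched capacitance ~20% and allows ~10% lower Vdd (made-up numbers).
p_bigger_node  = dynamic_power(0.1, 1.0e-9, 1.00, 3.0e9)
p_smaller_node = dynamic_power(0.1, 0.8e-9, 0.90, 3.0e9)

print(round(p_bigger_node, 3))                   # 0.3 (watts)
print(round(p_smaller_node / p_bigger_node, 3))  # 0.648 -> ~35% less power
```

The V² term is why even a modest supply-voltage reduction has an outsized effect on energy.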

      • sschaem
      • 6 years ago

      I agree on the power chart… Why only test CPU power usage on a modern APU?

      And I would be surprised if people interested in this chip plan to use them as pure software encoder and only care about power usage using the CPU alone.

      To me, HTPC seems the #1 target for this desktop release… yet nothing about its multimedia features.

      Now, AMD is also to blame for not creating much enthusiasm, for instance by not releasing any H.264 HW-encoder software solutions.

      But we know for sure, Kaveri is not the chip to get to build an x264 encoding farm.

      • Unknown-Error
      • 6 years ago

      anyone checked their Shill-o-meter? My one short circuited last night.

    • christos_thski
    • 6 years ago

    What’s the point? No, seriously, what is the point, when you can combine a low end intel CPU with a midrange discrete AMD gpu and have better performance for less money?

    This is a niche product, at best.

    I’d love to see Kaveri CPUs on laptops, where there is no discrete GPU upgrade path and they’d make a welcome upgrade from intel integrated graphics. But -at these pricepoints- they make no sense for the vast majority of desktop users out there. None.

      • FuturePastNow
      • 6 years ago

      Try as I might, the only advantage I can see for AMD’s APUs is in the mobile space, since getting good gaming performance out of a laptop with no discrete GPU will really help.

      For desktops… there are small and thin systems without room for a video card. I could see myself making a HTPC out of one of these. But it would be merely ‘OK’, not especially great.

      • Blink
      • 6 years ago

      HTPC? Tom’s benches show both the 7850K and 7600 over the 6800K at 1080p with medium settings. I have no problem running those. If the 7600 keeps cool, which I assume it does consuming quite a few watts less than the Richland, I’d love to mate one with a cheap PSU. That would give me a rig ready to stream video content off the web and play all the 2-5 year old Steam games I normally play as a father of 3 with a full-time job. Go, go $5-10 AAA titles! I’m not sure that I’m all that alone, either. It may be an untapped market. I’d bet that there are more casual gamers than hardcore ones.

      P.S. Tom’s did run the 7600 with 2133 memory, which runs at a $12-15 premium over 1600. A cost I could certainly swallow if it was noticeably beneficial.

      Edit: Added detail to referenced benches.

      • Xenolith
      • 6 years ago

      Because your setup won’t fit into this case or similar – [url<]http://www.logicsupply.com/components/cases/mini-itx/m350/?gclid=CIXz5tuVhrwCFaE9QgodYwUAaQ[/url<]

        • swaaye
        • 6 years ago

        That would be a niche and the OP did mention niches.

    • shank15217
    • 6 years ago

    I propose a new niche, 45W kaveri litecoin clusters! Go!

      • tipoo
      • 6 years ago

      For dedicated litecoin mining, you’d probably be far better served with the cheapest (though at least dual core) CPU you can find and putting the rest of the cost towards the GPU.

      Though, that would be nice for AMD, that was what sold out all their R9 cards!

    • USAFTW
    • 6 years ago

    Kinda disappointed at the results. The darned thing takes 2410 million transistors and it can’t get half as good a result as the 4770K, which weighs in at only 1400 million. Although it seems that AMD has somehow managed to get close to Intel 22nm transistor density. Just imagine that GPU portion replaced by 2 additional modules and a decent L3 cache.

      • tipoo
      • 6 years ago

      [quote<]Although it seems that AMD has somehow managed to get close to Intel 22nm transistor density.[/quote<]

      I believe this is because GPUs can be inherently more dense by nature than CPUs. Intel dedicates less of their die space to GPU, so the average density goes down. Given that, getting 28nm averages to 22nm average densities isn’t magic.

    • ptsant
    • 6 years ago

    After having digested multiple reviews over teh internetz, I have come to the following conclusions:

    1. IGPs have improved greatly. A few years ago, an Intel or AMD IGP would be OK for tetris or platform games but any FPS or similar would be unthinkable. As an example, the Sandy Bridge reviews tested IGP at 1024×768 with low settings. This is not the case today.

    2. With Kaveri, AMD has proven the HSA concept. I’m not saying they won the war, but several applications show measurable benefit when OpenCL is used (Handbrake, WinZip, Adobe) and the HSA examples (LibreOffice, Corel Aftershot) are meaningful, not just marketing gimmicks. Clearly, it all depends on developer adoption, driver quality and library support, which are not AMD’s strongest points. We’ll see.

    3. Kaveri seems reasonably overclockable, especially on the GPU front. Some people have gotten up to 30-40% overclocks on the GPU. The CPU goes up to 10% quite easily but power/thermals explode. I suppose AMD chose to keep power within reasonable limits instead of pushing the chip to its limit, which is probably a good idea for the HTPC/SFF target group.

    4. In the end, Kaveri is a bet. Performance is not good today, but I would be curious to see HSA/OpenCL benchmarks which show up during the chip’s lifetime.

      • maxxcool
      • 6 years ago

      “2. With Kaveri, AMD has proven the HSA concept. I’m not saying they won the war, but several applications show measurable benefit when OpenCL is used”

      See there is my problem. Why am I going to spend more money and dev cycles adding support for a HSA enabled app when OPENCL already does the same job on windows, android, IOS, linux, BBos10…

        • Klimax
        • 6 years ago

        And then there is also DirectCompute, aka compute shaders. (I think it is present in a similar form in OpenGL.) Those shaders can be executed standalone without any graphics. (It is done by a separate call.)

        Also, to keep it simpler, there is the whole C++ AMP by Microsoft, which is an open specification and IIRC is scheduled to be supported by open-source compilers. (Or it is already.) It uses whatever the platform supports (DC, OpenCL, maybe CUDA, …).

        • ptsant
        • 6 years ago

        [quote<] See there is my problem. Why am I going to spend more money and dev cycles adding support for a HSA enabled app when OPENCL already does the same job on windows, android, IOS, linux, BBos10... [/quote<]

        Have a look at this presentation:
        [url<]http://www.kitguru.net/site-news/interviews/jules/exclusive-kaveri-interview-with-nicolas-thibieroz-of-amd/[/url<]

        HSA seems to require less coding, because memory access is unified and transparent. Furthermore, switching latency is reduced, making it easier to mingle CPU/GPU code. This means that it is applicable in more scenarios, where a mix of the two would be required.

        Anyway, I’m not betting that HSA will catch up. However, we do need something else. Have a look at this presentation:
        [url<]http://ewh.ieee.org/r5/denver/sscs/Presentations/2006_12_Naffziger.pdf[/url<]

        This guy was saying (in 2006) that we were going to hit a frequency and multicore ceiling. He has been proven correct. Intel CPUs do have exceptionally good performance, but the difference between generations is shrinking, despite the process advantage. A new generation used to mean massive performance gains. This is no longer the case, which is why we need something else. Maybe it’s going to be HSA, maybe not.

          • chuckula
          • 6 years ago

          [quote<]HSA seems to require less coding, [/quote<]

          No, by definition HSA requires [b<]more[/b<] coding unless your sole objective is to write software that runs on:
          1. Zero Intel hardware (oh, wait, that’s a good thing though, right?)
          2. Zero systems in the vast, vast majority of AMD hardware ever sold, including every single AMD system that has more than two effective CPU cores, and every APU that AMD has ever sold before January 14, 2014. (oh snap... didn’t think about that part)
          3. No ARM hardware, no tablets, no smartphones, etc. etc.*
          4. Only on Kaveri APUs from AMD.

          * the fact that some ARM licensees have their names splashed on the HSA foundation powerpoint doesn’t change the arrangement of transistors in actual chips that actual people actually use.

            • ptsant
            • 6 years ago

            [quote<] No, by definition HSA requires more coding unless your sole objective is to write software that runs on: [/quote<]

            You are obviously going to write a CPU codepath which covers everything you mention. HSA requires less coding than the equivalent OpenCL solution. If you are going to implement a standard CPU codepath and an alternative, it makes sense to consider CPU + HSA instead of CPU + OpenCL. So, to make it plain, yes:

            CPU [less code]
            CPU + HSA [more code] --> kinda obvious
            CPU + OpenCL [probably even more code]

            Furthermore, as I mentioned, in some cases the OpenCL setup (memory copying)/latency does not even make sense, because you’re spending more on overhead than what you’re gaining. So, in these cases, HSA is the only alternative to plain CPU code.
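ptsant's overhead argument can be put into a toy cost model: a discrete-style OpenCL offload pays a buffer-copy tax on top of kernel time, while a shared-memory (HSA-style) dispatch drops that term, so smaller jobs become worth offloading. The function and all timings below are invented for illustration:

```python
# Toy offload cost model. An offload only wins if
#   t_dispatch + t_copy + t_kernel < t_cpu
# An HSA-style unified address space removes t_copy (and shrinks
# dispatch latency), so smaller jobs clear the bar. Timings invented.

def offload_time(t_dispatch_ms, t_copy_ms, t_kernel_ms):
    return t_dispatch_ms + t_copy_ms + t_kernel_ms

t_cpu      = 10.0                          # plain CPU codepath
t_opencl   = offload_time(1.0, 6.0, 4.0)   # copy-in/copy-out dominates
t_hsa_like = offload_time(0.5, 0.0, 4.0)   # shared memory: no copy term

print(t_opencl > t_cpu)    # True  -> this offload loses to the CPU
print(t_hsa_like < t_cpu)  # True  -> zero-copy offload wins
```

This is only a model; real dispatch and transfer costs depend on the runtime, driver, and workload size.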

            • Klimax
            • 6 years ago

            Well, you are missing the fact that OpenCL is supported by Intel, NVidia… while HSA is AMD-only. One of them will have much better returns, and I don’t think it is HSA. And if you are that worried about the amount of work needed to use GPGPU, then you can use C++ AMP or similar and be done with it.

            • alientorni
            • 6 years ago

            hsa takes advantage of opencl 2.0, i’m not sure if it needs to be specifically developed for hsa. but i think most developers should be grateful to have full cpu+gpu main memory access.

            • Klimax
            • 6 years ago

            Too quick reading, I guess. Well, from the results it doesn’t look like it will benefit AMD much. Their gap in the CPU department is too great for the GPU to offset.

            Well, I’ll take a look at how HSA compares to what is already available directly for CUDA/OpenCL/C++ AMP/OpenMP/… (There are already a number of technologies and platforms to choose from, so HSA is just another one.)

            • maxxcool
            • 6 years ago

            also supported by Mali, powerVR and the third arm gpu i cannot remember the name of..

        • the Lionheart
        • 6 years ago

        OpenCL is an API, not an ISA…

          • ptsant
          • 6 years ago

          True, but if you need high performance you are probably going to write dedicated (non-OpenCL) CPU code.

      • LostCat
      • 6 years ago

      We’ve got a few… though not many yet. And they’re interesting.
      [url<]http://www.extremetech.com/computing/174632-amd-kaveri-a10-7850k-and-a8-7600-review-was-it-worth-the-wait-for-the-first-true-heterogeneous-chip/5[/url<]

        • alientorni
        • 6 years ago

        if that software is really taking advantage and showing the true HSA potential, i want more of that…
        on the next page there’s also a single-thread/multi-thread 7850k vs 6800k comparison. i think that shows the true improvement in kaveri’s steamrollers: maybe it’s not in single-thread ipc, but in multi-thread core management.

    • DarkMikaru
    • 6 years ago

    First of all, great article TR! We really appreciate all the hard work you guys put in to these reviews.
    However, the following statement made me start thinking….

    “who are these people?” Seriously, I’ve never met one.

    In a way, you guys make a very compelling point. Who are these people? Who would intentionally seek out an APU (or any IGP, for that matter) with the purpose of gaming? My answer: they do exist, but those people aren’t educated enough on the subject to know it.

    For example, a friend of mine referred one of her friends to me to build a custom PC for him. Just an entry-level machine capable of playing “The Sims”. That’s it.. not kidding. Here’s his email…

    “Yes, sims is normally all I play. Especially with the new one coming out. I have my systems so that’s all I would really need it for. Not an online gamer like that for the pc, just on Xbox and PlayStation. Price range I would like to keep it at $400.”

    Here is another example: the repaired Dell Inspiron N5110 with Core i5, 6GB RAM & Intel HD graphics that my client’s kids had loaded up with Minecraft, WoW, The Sims, etc… I mean… a 640GB drive crammed full of games. If I could of gotten the thing to boot (it was virused beyond recognition) I’m sure the games couldn’t of run well. My point is, I think these people do exist but don’t really know it themselves. Not really “casual gamers”, but definitely curious about gaming. So I think it would be a pleasant surprise for any user who purchased a $400 desktop / laptop to randomly install a game demo and discover.. wow, this actually plays decently! Cool! That’s a win not only for AMD but for gaming companies and consumers as well.

    Basically, AMD’s APU & Intel’s improved IGP are the gateway drugs gaming companies and consumers are looking for. Myself included. Recently purchased an HP G7 w/ A8-4500M and I am amazed that it can play any game I’ve thrown at it so far. What do you guys think?

      • Shobai
      • 6 years ago

      I hope this comes across as constructive criticism; you’re looking for “could have” rather than “could of” – it’s an easy mistake to make, brought on by the contraction “could’ve”.

        • DarkMikaru
        • 6 years ago

        Thanks Shobai, no worries. I appreciate the insight. A writer I certainly am not. 🙂

      • Spunjji
      • 6 years ago

      Very good point on the “gateway drug” perspective. Long have I known people to get hooked on a relatively resource-light game only to discover that their system needed an upgrade to play the sequel / a better game, just because the integrated graphics were so terrible.

      AMD’s problem here is that Intel are rapidly closing in on “good enough” on the GPU front, even with their somewhat-crippled products. At that point the end user may not care that they’re running in lower detail settings than they might be with an AMD APU; if it works, it works.

      To summarise, I see your argument, but I also see TR’s. It’s a very odd middle ground that this chip is aimed at. It just happens to include me with my budget single-fan-system-telly-games-box-thingy aspirations, but I’ll admit that I have framed my requirement around the product somewhat.

        • DarkMikaru
        • 6 years ago

        I totally agree with you. It’s kinda one of those things where once you get to like something you usually end up trying to learn more about it, right? I too was “that consumer” way back in the day before I got into computers. Back when Maximum PC would include demo CDs of games, and the Intel IGP didn’t have a hell of a chance to play them, you learned quickly what your computer could and could not do.

        And thus we all agree on the point that APUs aim at an odd middle ground. How do you market a product to someone who doesn’t know they need you? I guess that is for the marketing folks to figure out. Anyway… thanks for the comment. I’m a fan of AMD’s APUs and I look forward to seeing what else is in store.

      • MrJP
      • 6 years ago

      I’ve already posted this elsewhere in these comments, but I’ve also been surprised with how much gaming you can do with a laptop with an A8-4500M. I think you’re right that there’s actually a good number of people out there that would be very happy with an APU and just don’t know it.

    • Damage
    • 6 years ago

    Modivated1 has been banned for shilling. Come at me, KenLuskin!

      • derFunkenstein
      • 6 years ago

      DROP THE HAMMER AND DISPENSE SOME INDISCRIMINATE JUSTICE

        • Damage
        • 6 years ago

        derFunk has been banned for caps lock abuse.

          • chuckula
          • 6 years ago

          and ssk wept bitter bitter tears

          • derFunkenstein
          • 6 years ago

          oh wait you’re serious

            • Damage
            • 6 years ago

            I am?

            • derFunkenstein
            • 6 years ago

            I was joking, just kind of ruined it there. :p

            • Damage
            • 6 years ago

            derFunk has been banned for saying I ruined it there.

            • ClickClick5
            • 6 years ago

            As Scott bans everyone in a mad fit of admin power.

            • derFunkenstein
            • 6 years ago

            Damage is band for egregious rocking too hard.

            [url<]https://www.youtube.com/watch?v=bHSVlr9qO8c[/url<] edit: note the location of the cigarette.

            • Deanjo
            • 6 years ago

            Na, it’s more like this.

            [url<]http://www.youtube.com/watch?v=IqP76XWHQI0[/url<]

            • derFunkenstein
            • 6 years ago

            And just like that, Deanjo was banned for being one bad mother–

            /shuts his mouth

            • Deanjo
            • 6 years ago

            Well that would be a bit ‘extreme’.

            • derFunkenstein
            • 6 years ago

            They should merge the two bands…

            …oh wait, they did:

            [url<]https://www.youtube.com/watch?v=BM5Rh2KW4Zc[/url<]

            • NeelyCam
            • 6 years ago

            Eddie is cool, I admit, but the new kids have surpassed him technically. E.g.:

            [url<]https://www.youtube.com/watch?v=bt-RoSzsEKA[/url<]

            • derFunkenstein
            • 6 years ago

            They have, but Eddie is also like 100 years old and has more whiskey than blood in his veins. Plus he nailed Valerie Bertinelli. Ohhhh she was the subject of so many dreams.

          • maxxcool
          • 6 years ago

          🙂 /smirk!/

          • ronch
          • 6 years ago

          He just [b<][u<]***LOVES***[/b<][/u<] pressing his Cherry MX Caps Lock key!!!

      • Airmantharp
      • 6 years ago

      The bans are strong with this thread…

        • maxxcool
        • 6 years ago

        No kidding, there has been some SERIOUS ham-fisted ‘gorilla’ marketing for this release…

          • chuckula
          • 6 years ago

          I disagree. Nobody has tried to sell me a gorilla the whole time!

            • LostCat
            • 6 years ago

            Are you insulting my monkey?

            • LostCat
            • 6 years ago

            You’d think no one gets No One Lives Forever references. sadface

            • maxxcool
            • 6 years ago

            nice… 🙂

          • derFunkenstein
          • 6 years ago

          Gorilla or guerrilla?

      • Meadows
      • 6 years ago

      And you guys had a problem with me. I pale in comparison.

      • Klimax
      • 6 years ago

      That surprised me. I just thought he was overzealous… unexpected is about right.

      • Krogoth
      • 6 years ago

      Duke Nuked feels left out…..

      • the Lionheart
      • 6 years ago

      You need to ban Chuckula, Ronch, nanoflower, Klimax, Bomber and the rest of the Nvidia shills team so we can have a decent comment section.

        • Airmantharp
        • 6 years ago

        You must be new here 🙂

        • Fighterpilot
        • 6 years ago

        You need to understand Chuckula posts here to feed his ego.
        Nothing less than a regular spam of any article will sate his narcissistic personality.
        Couple that with his obnoxious, bullying responses to anyone disagreeing with his point of view and you have the archetypal Internet troll.
        Full of his own importance, certain that every post he makes is a marvel of lucid, well-thought-out argument, and chillingly dedicated to the aim of not only being “frist post” but also leaving everyone with a vague feeling of unease at his abuse of the common courtesies of forum discussions.
        A veritable swarm of sycophants follows his breathless pronouncements, each doing their best to emulate his poisonous, egotistical style… yearning for the dubious honor of a little green “thumbs up” from the troll-in-chief.

          • Airmantharp
          • 6 years ago

          Next to you, Chucky is downright sane…

            • chuckula
            • 6 years ago

            How dare you insult me by implying I’m sane!

            • Airmantharp
            • 6 years ago

            Apologies 🙂

          • Meadows
          • 6 years ago

          Says the red shill.

        • chuckula
        • 6 years ago

        [url<]http://www.youtube.com/watch?v=mnhrz5nDDa4[/url<]

        Ask yourself, Lionheart: would Lion-O try to ban Nvidia shills from a thread that has nothing to do with Nvidia, and where Nvidia was basically never mentioned? Or would he just be AWESOME instead? WWLion-oD?

        • Klimax
        • 6 years ago

        How about getting rid of you? I don’t think your contribution would be much missed…
        (I must say, an accusation of shilling is an interesting ad hominem, which frankly often shows more about the accuser than the accused.)

        • ronch
        • 6 years ago

        Really? When the heck did I root for Nvidia? I like AMD but I try to be objective about it as well instead of sounding like an AMD Marketing Shill like some guy here who used to be called Spigzone.

        (Oh, wasn’t Spiggy banned?)

    • Modivated1
    • 6 years ago

    Here’s interesting news concerning the performance of Kaveri’s 7850k processor. Look at page 5 on this link to see 3 HSA compatible applications and the effects on the workload compared to OpenCL and straight processor performance.

    [url<]http://www.extremetech.com/computing/174632-amd-kaveri-a10-7850k-and-a8-7600-review-was-it-worth-the-wait-for-the-first-true-heterogeneous-chip/5[/url<]

    Then I found this article at HardwareHeaven talking about a 5GHz overclock on air!

    [url<]http://www.hardwareheaven.com/reviews/1918/pg11/amd-a10-7850k-kaveri-apu-review-featuring-gigabyte-g1sniper-a88x-power-use-and-overclocking.html[/url<]

    I wonder what the effect will be when HSA becomes compatible with more real-world applications? With numbers suggesting these huge differences, surely software companies cannot ignore sampling it for the possibilities.

      • OneArmedScissor
      • 6 years ago

      I wonder when my Athlon 64 can run 64 bit everything? :p

      I remember people buying those because romg teh futor!

      Been there, done that. Future adoption of a CPU’s features does not a future proof CPU make. Even if HSA takes off, there will just be HSA 2.0 next year, and the cost of entry will be even cheaper.

      • LostCat
      • 6 years ago

      I expect AMD’s new APIs (Mantle/TrueAudio) will use HSA code when possible.

    • ronch
    • 6 years ago

    I remember Mark Papermaster saying they’d need more than a 15% performance improvement every year, with every new iteration of Bulldozer, and that they’d get it.

    Well, are they getting it?

    • joselillo_25
    • 6 years ago

    kaveri and apu technology at this time are not ready for 1080p gaming. sorry. I will stick with my q6600 and a pci express gpu and pay a bit more in my electrical bill for some time.

    • vargis14
    • 6 years ago

    The A8-7600 has some pretty impressive graphics scores… from what I have seen, the A10-7850K does not improve on them much at all, even though it has well over 120 more shader cores.
    I really wish they would embed some GDDR5 memory on select motherboards that the IGP, and possibly the CPU, could take advantage of, and get rid of the memory bandwidth problems.

    • Zizy
    • 6 years ago

    This is one dense chip, almost as dense as most of comments here 😀

    • Klimax
    • 6 years ago

    Question: Is Musemage using both CPU and GPU on Intel? The difference looks similar to Luxmark’s CPU vs. GPU. Verification would be good. (It is possible that for some reason Musemage doesn’t recognize Intel’s IGP.)

    Anyway, I found a thing odd about AMD’s architecture. Compare:
    [url<]https://techreport.com/r.x/a8-7600/hsa-bus.gif[/url<]
    [url<]http://images.anandtech.com/reviews/cpu/intel/Haswell/Architecture/LLCfreq.jpg[/url<]
    (Lower right; didn’t find a better one.)

    It seems as if AMD has it a bit less efficient than Intel, which seems quite odd considering the usage they want. Or do I misread the drawings?

    And a last note: reportedly, since the Bulldozer launch some hash-based benchmarks have had an AMD-optimized codepath, which could explain the results. Not verified, though. (IIRC it was written about two test suites; no idea about the best way to verify it.)

    Note: if my question gets answered, I’ll link to the answer.

      • chuckula
      • 6 years ago

      In Musemage I wouldn’t be very surprised at all if the OpenCL path was disabled for the Intel chips for whatever reason. OpenCL is cross-platform in a vague theoretical sense but a *TON* of programs that claim OpenCL are really just targeting an AMD GPU architecture in the same way that CUDA targets an Nvidia GPU architecture.

      There are exceptions like Luxmark that clearly uses OpenCL on both AMD and Intel platforms, but not every program is written properly.

        • Klimax
        • 6 years ago

        Nor sanely or unbiasedly… (ScienceMark anybody?)

      • Klimax
      • 6 years ago

      Update: Tested Musemage on Ivy Bridge (NB variant, last drivers pushed by Microsoft).
      It does use the IGP; from cursory analysis it seems that there are some anomalies. Unfortunately, I can’t say what causes them. Note: it looks like it uses predominantly the GPU and not fully both GPU+CPU, which would most likely explain the discrepancy between Luxmark and Musemage.

      See:
      [url<]http://sdrv.ms/1eWJfvE[/url<]

      ETA: Stupid. I have missed one driver update... Will redo the analysis in case it changed something.

      ETA2: No significant change, I think; maybe better utilization of the GPU. I’ll keep this as is.

    • Rza79
    • 6 years ago

    It’s time to bring back SidePort memory. I’m surprised AMD didn’t do it already.
    Imagine 256MB of GDDR5 memory on your motherboard on a 32bit bus for almost no added cost.
    Maybe it didn’t do much in the old days but back then it added just 1.4GB/s. Now it could easily add 22GB/s. It surely would be cheaper than the added cost of PC2133 memory.
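Rza79's 22GB/s figure is consistent with a single 32-bit GDDR5 channel at an effective data rate of about 5.5 GT/s; the general rule is peak bandwidth = (bus width in bytes) × (transfers per second). A quick sketch (the function name and the 5.5 GT/s rate are assumptions for illustration):

```python
# Peak memory bandwidth = bus width (bytes) * effective transfer rate.
# The 5.5 GT/s GDDR5 data rate below is an assumed, plausible figure.

def peak_bandwidth_gbps(bus_bits, gigatransfers_per_s):
    return (bus_bits / 8) * gigatransfers_per_s

print(peak_bandwidth_gbps(32, 5.5))    # 22.0 -> one 32-bit GDDR5 chip
print(peak_bandwidth_gbps(128, 2.133)) # ~34  -> dual-channel DDR3-2133
```

So a single SidePort-style chip would add roughly two-thirds of what an entire dual-channel DDR3-2133 setup provides.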

      • maxxcool
      • 6 years ago

      A dedicated bus for iGPU RAM would reduce the efficiency they are shooting for… the HSA mantra is short paths: memory on die, cache on die, and then main RAM. To get the ‘write once’ benefits they cannot go back to a disparate bus for the GPU. In this I agree with AMD’s design.

        • Klimax
        • 6 years ago

        A better way would be eDRAM, as done by Intel, but that’s expensive. (One of the reasons why Iris Pro is found only in high-end CPUs.)

          • tipoo
          • 6 years ago

          It’s also down to fabs being suited for it. But it would be nice if AMD could offer it. Intel said even 32MB was enough for current workloads, but they went way overkill with 128MB. If AMD could even offer 32MB eDRAM caches to help the bandwidth-starved GPUs…

            • derFunkenstein
            • 6 years ago

            seems to be working for Xbone.

        • Rza79
        • 6 years ago

        SidePort was used as a frame buffer. I don’t think it would interfere with HSA. It’s not like suddenly every game requires HSA (actually, nothing requires HSA yet). It would be pretty easy for the driver to know what it can cache.

        On a different note, I’ve noticed that Kaveri has XDMA support. But most (if not all) reviews ignore this feature.

      • OneArmedScissor
      • 6 years ago

      GDDR5 is a power guzzler, but LPDDR4+ isn’t. APUs are designed for mobility. Remember that even 35w laptop parts are clocked lower, and only need incremental bandwidth improvements. There are plenty more coming down the pipe.

      Beneficial GDDR5 bandwidth would require a dedicated memory controller. Look at the PS4 and Xbox One. They have Jaguar CPUs, but use 100+ watts!

      Even if AMD could dynamically switch it on and off, there’s no way around having everything active while playing a game. That’s too high of a TDP for a laptop.

      So for either a laptop or a desktop, just use a faster graphics card.

        • Rza79
        • 6 years ago

        GDDR5 is by definition not a power guzzler. You think it is because high-end video cards have 12 to 16 GDDR5 memory chips on board (and have 6 to 8 memory controllers).
        One GDDR5 chip and one 32-bit memory controller won’t be a power guzzler. In fact, since SidePort is on the motherboard, it doesn’t have to be in every device. I was thinking of desktop use only.

    • NIKOLAS
    • 6 years ago

    So Cadaveri is yet another underwhelming member of the Bulldozer family.

    All Hail HSA.

    • ET3D
    • 6 years ago

    Pity that there are no power consumption figures for gaming, since that would IMO be a major use for such an APU.

    • Unknown-Error
    • 6 years ago

    Here are some lengthy reviews:

    [url<]http://www.pcper.com/reviews/Processors/AMD-A8-7600-Kaveri-APU-Review-HSA-Arrives/Power-Consumption-and-Conclusions[/url<]
    [url<]http://www.guru3d.com/articles_pages/amd_a8_7600_apu_review,19.html[/url<]
    [url<]http://us.hardware.info/reviews/5156/amd-a10-7850k-kaveri-review-amds-new-apu[/url<]
    [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/65031-amd-kaveri-a10-7850k-a8-7600-review-28.html[/url<]

      • Milo Burke
      • 6 years ago

      There’s a lengthy review here that we trust more.

        • Unknown-Error
        • 6 years ago

        Did I ever say Techreport, in this case Geoff Gasior and Scott Wasson, is any less trustworthy? Those are some of the longer reviews (besides Techreport) I’ve read, so I posted the links. Look at my posting history. I’ve never questioned Techreport’s credibility.

      • maxxcool
      • 6 years ago

      There will be a part 2 when the 7850K in Scott’s pocket makes it to the lab…

        • Unknown-Error
        • 6 years ago

        True. Scott mentioned that AMD sent only the A8 (45W/65W) parts. Literally trying to make things look better with “lower TDP” parts compared to a 95W part that doesn’t perform much better.

    • KenLuskin
    • 6 years ago

    Nice work.. except its MEANINGLESS!

    ALL NEW GAMES will support MANTLE!

    Until you can test Kaveri using NEW games released with MANTLE… your old reviews will be MEANINGLESS!

      • Reuel
      • 6 years ago

      I can’t take you seriously. You surely aren’t this stupid.

      I think Mantle is going to be great, but it won’t be most games, so it is important to get non-Mantle numbers so we can see that performance.

        • Airmantharp
        • 6 years ago

        Don’t waste time with these guys- they don’t last too long :D.

          • LostCat
          • 6 years ago

          I like Mantle and I don’t know what the hell they’re smoking.

        • Klimax
        • 6 years ago

        There is a bigger question: can Mantle do anything on its own, or will it have to rely on other optimizations? (Like accounting for the different CPU-GPU organization.)

      • maxxcool
      • 6 years ago

      failed troll fail ?

      • ronch
      • 6 years ago

      One of my favorite games for the Sega Genesis was Immortal by EA. You’re a wizard descending down many levels of dungeons killing goblins and TROLLS.

      • Concupiscence
      • 6 years ago

      Well, shucks. You’re right! We also shouldn’t benchmark anything until DirectX 11.3 comes out, just to be safe. Or OpenGL 6. Or Fahrenheit. Or Glide 4.

        • Deanjo
        • 6 years ago

        Pffft, hand optimized assembly code or GTFO!!! ;D

          • swaaye
          • 6 years ago

          Binary is what real men use. Be one (or zero) with the machine.

            • Deanjo
            • 6 years ago

            I’m equally turned on and off by that proposition.

      • derFunkenstein
      • 6 years ago

      All new games? Really?

    • smilingcrow
    • 6 years ago

    Read the Anandtech A10-7850K (95W) review first and was amazed at how poor it was. Then I read the A8-7600 review here, recalled what Anandtech said about the platform being designed around a 45W TDP, and it all made sense.
    Bodes well for the mobile versions, hopefully, as at 45W it already looks very good.
    Clearly designed for mid-to-lower-power designs, which makes sense in 2014 but will disappoint those hoping for serious desktop performance.

      • mikato
      • 6 years ago

      As long as they can get some darn OEMs to give it a shot!

    • UnfriendlyFire
    • 6 years ago

    I wonder how well the A8-7600 would do if it were ported directly to high-end gaming laptops.

    A lot of gaming laptops’ cooling systems are designed to handle a 75W to 100W mobile GPU plus a 35W to 45W CPU.

    Think about it: one less major component on the laptop’s mobo. That space saving could go toward a bigger battery, better cooling, or a lower price.

      • Airmantharp
      • 6 years ago

      Mostly, it’d be too slow. It’d be cool, but the performance of a lower-end CPU coupled to a memory-bandwidth-restricted small GPU would be far from ‘enlightening’.

      The reality is that Kaveri is targeted more at mobility than gaming. It will provide better gaming performance in a much smaller/lighter/cooler package than a discrete setup, and that’s very cool, but the transistors just aren’t there for ‘high-end’ gaming.

      • Reuel
      • 6 years ago

      Gaming laptops have big fans. It seems a waste to put a 35w CPU/GPU into a system that can cool 100w.

    • ptsant
    • 6 years ago

    Looks like a decent little processor for HTPC/SFF. It will shine in cheap notebooks when the mobile version comes out. It’s a pity the process didn’t work as planned and the frequency had to be reduced.

    • Bomber
    • 6 years ago

    Okay, so I didn’t read all the comments, so apologies if I throw out something already mentioned, but what I got out of this is: if you are going for budget or small form factor, a Core i3 Haswell + a 7750 is going to perform better than Kaveri in almost every situation for essentially the same money. I have a Microcenter nearby, and the Core i3-4130 is $99.99 and a 7750 is $99.99 as well.

    I’m not saying that AMD dropped the ball here. The graphics capabilities are pretty amazing for onboard graphics, but it’s priced a bit silly. You can get the 7750 in low profile, so it’s not as if size is a consideration unless you’re going with a super-thin client that won’t allow for a PCIe riser.

    I’m most curious what Mantle will do for Kaveri. That could be the difference I guess.

      • nanoflower
      • 6 years ago

      Wouldn’t Mantle be better for the I3+7750 platform since you are starting off from a better position with both CPU and GPU?

      • mikato
      • 6 years ago

      What about HTPC? Does the Core i3 Haswell + 7750 consume more power if the system is left on all the time? Does the Kaveri have enough performance CPU and GPU for an HTPC workload?

      You said “in almost every situation”, which I imagine you mean to be gaming and whatever productivity and CPU dependent work. Maybe everything except HTPC and light gaming, right?

        • swaaye
        • 6 years ago

        Kaveri would make a fabulous HTPC as long as demanding gaming isn’t a major priority. But then so would three-year-old hardware. I’m still using a Core 2 + Radeon 4670 setup.

        • Reuel
        • 6 years ago

        That does consume more power, but it has more performance for the same price.

        • Bomber
        • 6 years ago

        Most situations = anything where a small form factor prohibits the use of a secondary graphics card. In all reality, for productivity and CPU-dependent work the graphics prowess isn’t necessary, so the onboard HD 4x00 in the i3 will be adequate.

          • JustAnEngineer
          • 6 years ago

          A 21.6 liter micro-ATX [url=http://www.silverstonetek.com/product.php?pid=241&area=en<]Grandia GD05[/url<] accepts full-size gaming graphics cards up to 11" long. This is the case that I suggest for a living room PC.

          A 23 liter micro-ATX [url=http://www.silverstonetek.com/product.php?pid=392&area=en<]Sugo SG10[/url<] accepts full-size gaming graphics cards up to 13.3" long. A 14.8 liter mini-ITX [url=http://www.silverstonetek.com/product.php?pid=317&area=en<]Sugo SG08[/url<] accepts a full-size gaming graphics card up to 12.2" long.

          If you limit yourself to slim-line cases, you're probably looking at a Radeon HD 7750 as the best graphics card that will fit. If you eliminate all of the expansion slots, you can get down to a very small form factor like the 0.8 liter [url=http://www.gigabyte.us/products/list.aspx?s=47&ck=104<]Gigabyte Brix[/url<], using Haswell's or Kaveri's integrated graphics and hanging any expansion components off of USB ports like an octopus.

      • Reuel
      • 6 years ago

      Mantle will help Kaveri, but it will also help the 7750, so that still comes out ahead.

        • Klimax
        • 6 years ago

        I’d say “might”, as there is still no evidence Mantle is the solution. Note that most of the gains will come from changing the way the engine uses the GPU and its memory when in an APU/IGP configuration. (Alongside prepared drivers.)

        • maxxcool
        • 6 years ago

        Mantle will help the 7750 more than the embedded GPU.

        • LostCat
        • 6 years ago

        It will also use both GPUs at the same time without needing them to be the same.

      • Zizy
      • 6 years ago

      Yeah, the K processors are priced too high. They might have a small (tiny) niche: a gaming HTPC in a small box where you don’t have room for a GPU but can cool 100W.

      The tested A8 looks very nice. Good all-round performance at a low TDP and a low price tag. I could recommend it to a friend looking for a cheap HTPC that can play games, or a cheap laptop with the same/similar chip.

    • Meadows
    • 6 years ago

    I hadn’t heard of Musemage prior to this review. Thanks!

      • Airmantharp
      • 6 years ago

      Me either, though I would have preferred it to be more of a ‘Lightroom’ style program rather than a ‘Photoshop’ style.

    • anotherengineer
    • 6 years ago

    Geoff, 2 questions.

    What bios version was the mobo using?
    and
    What was the default cpu voltage supplied by the mobo?

    The reason I ask is that I have a Gigabyte mobo, and my AMD 955BE chip says 1.35V but the mobo defaults to 1.45V!!

    Just wondering if Gigabyte has fixed the auto overvoltage, and whether that would affect the power consumption in the review if so?

    Thanks

      • chuckula
      • 6 years ago

      [quote<]What bios version was the mobo using?
      and
      What was the default cpu voltage supplied by the mobo?[/quote<]

      One would hope that AMD... who built the review unit, including every single component, before TR ever saw it... would have thought of that. Then again, it never hurts to check.

        • anotherengineer
        • 6 years ago

        You are not Geoff and you did not answer either of my questions, thanks for the spam post 😛

          • derFunkenstein
          • 6 years ago

          Such is the cost of fame. It bumped you up to the top.

    • mkk
    • 6 years ago

    It’s a given for (another) ITX build. I’ll retire my original Llano APU ITX system to someone else, then put together a multipurpose / LAN gaming ITX rig in a chassis that doesn’t even have room for a separate GPU. It’ll be sweet to get some gaming performance in that package. With Llano I really had to stick to 720p in order to get nice framerates, but this will be much better. The hunt for a relatively affordable ITX motherboard begins…

    Edit: I wonder if the rumored A10-7800 was a fairy tale or if it might be worth waiting for. Have no real need for a K part.

    • kilkennycat
    • 6 years ago

    From page 3:-

    [quote<]The VCE 2 encoder block adds support for the YUV444 color format, specifically in order to provide better text quality when using 60GHz wireless displays.[/quote<]

    Where can I get one of those displays? Can I afford it? ......

    • talos_2002
    • 6 years ago

    I would find useful a test showing how well or poorly it performs compared to Intel’s CPUs on daily non-gaming graphics tasks like Flash video playback, heavy Flash websites, x264 playback, Skype video calls, etc.

      • ronch
      • 6 years ago

      It’ll probably be just fine for those things but don’t expect it to kick Intel’s butt. But don’t worry!!! Developers just have to recode their apps to use AMD’s HSA and hUMA and all that to make these chips REDEFINE the entire industry and let AMD set new levels of performance Intel can only dream of!!!

      /sarcasm

    • Geonerd
    • 6 years ago

    I didn’t realize that the HSA developer’s kit isn’t even complete. The API/SDK won’t be finished for another six months or so. Only then can third-party developers begin to write all the Wonderful Flying Pony software that will actually make use of HSA. In other words, we are at least a year away from seeing anything useful…

    That makes the de-emphasis of x86 all the more galling. Looking at the die photo, you could easily chop ~50% of the orange GPU area (wasted space!), and add another 4 ‘Roller modules. That would give traditional multi-threaded x86 a shot in the arm, and maintain AMD’s semi-relevance until HSA gets going.

    In the meantime, as an owner of an ancient Thuban + 7xxx GPU, what AMD chip am I expected to buy as a potential upgrade?

      • maxxcool
      • 6 years ago

      Heh, I have 3 Thuban rigs; next buy is Intel all the way for “real” work…

    • WaltC
    • 6 years ago

    Good review for the most part. Generally even-handed, I thought.

    [quote<]To fill out the lineup, we added a Core i7-4770K. This is Intel's fastest Haswell chip, so it's not a direct competitor for the A8-7600 or any of the AMD APUs we've tested. The i7-4770K is meant to provide a familiar frame of reference for the rest of the results.[/quote<]

    Mentioning the incongruity of the price disparity would also have been nice. I fail to understand, however, since the product reviewed here was an A8, how including the benchmark results of a much more expensive Intel CPU provides a "familiar frame of reference" pertinent to this review. Seems to me that would simply skew the frame of reference instead. Why not stick with *competing* CPUs retailing within ~$50 of each other, if possible? I think that actually would provide a much more informative "frame of reference"...;)

      • chuckula
      • 6 years ago

      Uh.. Walt… if TR were really against AMD, they would have put an FX-8350 paired with an HD 7750 into the reviews… that’s barely more expensive than Kaveri, given that you don’t need uber-high-end RAM on the FX platform… and AMD would have been competing with itself.

        • WaltC
        • 6 years ago

        I don’t recall saying that TR was against AMD. I guess you missed the first sentence. Man, you guys have to learn how to read on a higher level…;)

        I simply made what I think is a perfectly logical statement.

      • maxxcool
      • 6 years ago

      I disagree only in that Kaveri is a quad-core, so comparing it to an Intel quad seems legit. But I would have gone with a non-HT i5 to make it more level.

        • WaltC
        • 6 years ago

        Uh, it’s known as “economics”, not “core counting”…;) Just like we generally don’t directly compare GPUs costing $300 with GPUs costing $200, etc. Besides, read the text I quoted: “…so it’s not a direct competitor for the A8-7600 or any of the AMD APUs we’ve tested.” The author said that himself… I was merely elaborating a bit on what he already said…;)

          • maxxcool
          • 6 years ago

          AMD calls it a quad-core, so comparing it to a quad-core is reasonable even if it is not in the price range.

      • Airmantharp
      • 6 years ago

      Does an actual gaming and/or consumer workstation CPU seem out of place somehow?

      The ‘frame of reference’ is just that; a comparison to put the performance of these models into a common perspective.

        • WaltC
        • 6 years ago

          Again, the author says himself that the i7-4770 is not directly competitive with any APU they’ve tested thus far. Seems rational to me.

          • nanoflower
          • 6 years ago

          I think it’s just a matter of testing with what is on hand. Given the short time schedule they couldn’t do a more complete test which would have included at least one I5 model. Presumably Scott will be posting a more complete review when he has had a chance to complete his benchmarks with the 7850K in a week or so.

    • DPete27
    • 6 years ago

    So the 85W-TDP i7 falls smack in the middle of the 45W and 65W Kaveri in [url=https://techreport.com/review/25908/amd-a8-7600-kaveri-processor-reviewed/12<]peak power consumption?[/url<] What's going on with TDPs here? Surely the LGA 1150 platform doesn't draw 30W less power than FM2+.

    [Add] Is it because Kaveri is 38% larger than Haswell? 85W × 0.722 ≈ 61W?
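    A quick sketch of the die-area scaling in that last question (the 85W figure and the 0.722 Haswell-to-Kaveri area ratio are both taken from the comment above, not from spec sheets, so treat them as the commenter's assumptions):

```python
# Hypothetical scaling of the i7's rated TDP by the quoted die-area ratio.
i7_tdp_w = 85        # rated TDP used in the comment
area_ratio = 0.722   # Haswell die area / Kaveri die area, per the comment

# If power scaled linearly with die area (a rough assumption at best),
# an i7-sized slice of Kaveri's power budget would be:
scaled_tdp_w = i7_tdp_w * area_ratio
print(round(scaled_tdp_w, 1))  # 61.4
```

    So the arithmetic lands at roughly 61W rather than 62W; either way, die area alone is a crude proxy, since TDP also depends on clocks, voltage, and how each vendor defines the rating.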

      • Farting Bob
      • 6 years ago

      Intel still uses old-school TDP, which essentially means “the broad range of processors will use at most X watts”; that’s useful for OEMs and when buying heatsinks. Most of the processors labelled 65W or 85W etc. will never come close to that, but they could potentially use more than the next step down. It’s an absolute worst-case-scenario power draw (at stock speeds).

      AMD uses a different method of working it out, one closer in theory to the actual power draw of each chip, which tends to give a lower number; but the CPUs can often reach, or slightly exceed, their TDP in short bursts before they are throttled.

      Intel CPUs almost never get near their stated TDP, because to do so you would need to fully load the CPU and IGP with synthetic benchmarks.

      Both have their logical reasons and neither is wrong, which is why comparing rated wattages is a rather useless exercise.

      • sschaem
      • 6 years ago

      84W i7-4770K: idle 32.6W – 87.9W x264 (CPU only), delta 55.3W
      45W A8-7600: idle 34.5W – 78.9W x264 (CPU only), delta 44.4W

      The i7 has headroom to use the GPU in parallel without slowing down the CPU.
      The A8 will be thermally limited; other reviews show Prime+GPU still keeps the APU at 45W.

      Interestingly enough, though, in Luxmark the CPU+GPU score shows a bigger drop in performance for the i7 than for the 45W A8…

      TR should also show power consumption for the entire APU, not just the CPU side.
      Most people don’t buy an APU and turn off the GPU, so TR’s power consumption analysis only shows half the story.
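      The load-minus-idle deltas in that comment can be reproduced directly from the quoted wall readings (a trivial sketch; all of the numbers come from the comment itself):

```python
# System power readings quoted above (watts at the wall).
readings = {
    "i7-4770K (84W)": {"idle": 32.6, "x264": 87.9},
    "A8-7600 (45W)":  {"idle": 34.5, "x264": 78.9},
}

# Delta = load draw minus idle draw, a rough proxy for the CPU's contribution
# (it ignores PSU-efficiency differences between the two load points).
deltas = {chip: round(r["x264"] - r["idle"], 1) for chip, r in readings.items()}
print(deltas)  # {'i7-4770K (84W)': 55.3, 'A8-7600 (45W)': 44.4}
```

      The delta subtraction is only an approximation, since idle draw includes the rest of the platform and PSU losses vary with load, but it does show the A8 running much closer to its 45W rating than the i7 runs to its 84W.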

    • puppetworx
    • 6 years ago

    Can anyone explain to me why AMD hasn’t made any headway on single threaded performance in years? I know Intel hasn’t made much either but they’re already far ahead of AMD in single threaded performance, why can’t AMD catch up even a little?

    This chip looks pretty nice, if I was building a general purpose machine for anyone I’d probably go with this over any Intel offering simply because of the gaming performance. I can’t wait to see how TrueAudio performs from a personal standpoint.

    Shame on AMD for not supplying TR. [url=https://twitter.com/AMD<]Let's let AMD know how we feel[/url<] so this hopefully doesn't happen again.

      • chuckula
      • 6 years ago

      [quote<]Can anyone explain to me why AMD hasn't made any headway on single threaded performance in years?[/quote<]

      Don't let the marketing fool you: AMD is stretched *incredibly* thin in the R&D department, and it lacks the design resources and (thanks to GloFo) the manufacturing capability to really design and -- more importantly -- build high-end x86 cores these days. The limited R&D has mostly gone into AMD's strong area: graphics, and more recently compute operations that use the same resources found in GPUs. That's why you see AMD able to come out with high-end GPUs and have decent IGPs, but fall behind when it comes to the CPU.

        • srg86
        • 6 years ago

        The problem CPU-wise is that AMD made the same mistake Intel made with Netburst. Like Netburst, the Bulldozer family is a “speed demon” type architecture. These specifically sacrifice single-threaded performance for higher clock speeds.

        Now, it is true that AMD didn’t go as ridiculously far as Intel did, but just like Prescott, AMD has had trouble clocking the chips as high as it wanted, and Kaveri is another case of this. Even worse here, they seem to have a speed-demon CPU architecture on a process that’s not optimized for high clocks.

        They are doing things to widen the machine and improve IPC, but they may need to do what Intel did and what Anandtech is suggesting: junk this arch for a more “brainiac” type one. This is precisely what Intel did when they came out with the Core 2.

          • Airmantharp
          • 6 years ago

          I still think the ‘Bulldozer’ architecture holds a lot of promise, but without smaller processes, AMD can’t hope to compete with Intel anywhere near the high-end, or in any application/benchmark that relies on anything other than the AMD APU’s GPU grunt.

          Given a modern process tuned to their architecture, an FX-8350 successor with six or eight modules (12 or 16 ‘cores’) might be quite competitive, and I wouldn’t rule out such a beast in the future. Today, however, the return on such a SKU would not be very favorable given the overall lack of optimization for dynamic parallelization supporting >4 real cores. AMD has to make what people will buy, after all.

            • srg86
            • 6 years ago

            I think it has no future at all. AMD may well run into the problem Netburst ran into: physics. You simply can’t run the CPU fast enough before heat becomes too much of a problem; that’s why I think “speed demon” is the wrong path. Only time will tell…

            Personally, I wish AMD had taken K10.5 and widened it to 4-issue, equivalent to what Intel did to make the Core 2.

            • Airmantharp
            • 6 years ago

            It seems that they still have architecture adjustments to make, but essentially, Bulldozer is hardware Hyper-Threading. We’ve seen instances where the four-module/eight-thread Bulldozer CPUs nearly embarrass the four-core/eight-thread Intel Core i7 CPUs, but those instances require the full use of all eight available threads for real work.

            And that would be an advantage for AMD if they’d kept up with Intel on the process-technology front too. A lack of appropriate software (or need for it) and of modern CPU manufacturing seems to have become AMD’s crutch.

            • srg86
            • 6 years ago

            Integer only, not in floating point.

            • Prion
            • 6 years ago

            If HSA works as intended, ultimately floating point ops won’t need to be scheduled on the CPU cores at all.

            • Airmantharp
            • 6 years ago

            Which is the point, yes!

            But we’re pretty far from there.

            • srg86
            • 6 years ago

            Even for floating point, not all applications are highly parallelizable; for those you need a strong FPU. The GPU section is not the magic bullet for all FP uses, as you think. I’ve seen plenty of FP situations, in branchy code, where a GPU would be no help at all.

            Also, for HSA in the x86 space, these days AMD is just a bit player (certainly compared to its Athlon 64 heyday); most x86 CPUs shipped won’t have it unless Intel takes it up or something similar. Same thing for Mantle. They are kind of like 3DNow! in the K6-2 days.

            • Klimax
            • 6 years ago

            There are many, many FP operations among integer code, which are part of regular code. You’d be surprised where it can be found. (Often even GUI code itself has it like positioning and such)

            You can’t get rid of it from CPU core…

            • maxxcool
            • 6 years ago

            None, no promise at all. In the next refresh AMD may move the FPUs back to a normal dedicated structure, and we may see the end of all core sharing. But it will be the last BD change, which is sad, because essentially they made a Phenom II, took two years to do it, and it’s still only slightly faster.

            The shared resources were a terrible idea, and execution proved that true.

          • UnfriendlyFire
          • 6 years ago

          I think what AMD was trying to do with the “two integer cores, two decoders, one floating point unit” concept rested on the assumption that software developers would use the GPU for floating-point calculations.

          Which does make sense, especially given that AMD is good at GPU designs.

          The only issue was that Bulldozer was far ahead of its time, and its architecture was half-baked. And there’s going to be hardly any software using HSA for at least months after Kaveri’s launch.

          It’s like introducing a 500MHz dual-core processor in 2001. Those 1GHz+ single-core processors would’ve run circles around it, given that very few consumer applications in 2001 supported dual cores.

      • WaltC
      • 6 years ago

      First, your allegation is false, but I’ll get to that in the third paragraph.

      Uh, well, perhaps it is also because of the advent of the phenomenon known as the “multicore” cpu? You might want to think about that for a moment. Where would Intel benchmarks be without multiple cores and hyperthreading, for instance? You could ask the same exact question of Intel, too, for that matter.

      As to the matter of single-core performance, AMD says that Steamroller delivers on the order of ~25% more IPC performance than Piledriver–or did you get that far into this review? (That information is also in every other AMD Kaveri APU review today.) So you see, your assertion that AMD hasn’t made any progress on that front is wrong from the start. (Also, Piledriver was 10-15% stronger per core than Bulldozer.)

      I often wish that people not using AMD cpus could improve their reading skills…;)

        • Airmantharp
        • 6 years ago

        We see AMD’s claims, and we see the benchmarks. 125% of not a whole lot is still not a whole lot…

        • srg86
        • 6 years ago

        Yet they are still playing catch-up with Intel on IPC; yes, it’s better, but not better enough.

        • Modivated1
        • 6 years ago

        What you say is true, but I think they are talking in reference to the natural progression of the industry. Even though AMD has made improvements, they are not substantial compared to the advancements of the industry, and therefore generation after generation AMD is falling behind in IPC.

        I think they were concentrating on combined compute performance the whole time; they are attempting to transfer all the major work over to the GPU compute cores. Does anyone remember the original roadmaps of Bulldozer and beyond, where they talked about combining the CPU and GPU? Step 1 was to put the CPU and GPU on the same die, step 2 was to make them able to communicate directly with each other (Kaveri), and step 3 is to produce a chip that is neither CPU nor GPU but a truly new creation that does the job better than either of them ever could.

        Look up the old roadmaps. Part of what happened, and why this took so long to play out, is the Phenom bug; then Phenom II only had the performance that Phenom was supposed to have, which caused AMD’s sales to free-fall, which led to insufficient funds for R&D, which led to Bulldozer being delayed by a year or more and the final draft of Bulldozer being cut down from its original intended architecture. In the end, Bulldozer’s high-end release did not perform any better than the last-gen 6-core, a serious disappointment after a 4-year wait. That led to a massive loss of confidence in AMD.

        The reality is that Kaveri is the most significant gain in performance AMD has produced in a long time. It’s not massive, but for the first time it has a decent margin of improvement over the previous generation of products (keep in mind that I speak of the whole, not of IPC or any other individual characteristic). AMD has actually beaten Intel in some benchmarks, come reasonably close in some others, and closed the gap somewhat even where it still doesn’t compare. It’s been a long time since we have seen AMD contend at all; for years we have watched them get swept in every benchmark.

        Rory Read and Lisa Su are doing a good job bringing AMD back from the catastrophe that was Hector Ruiz. The one problem I see with AMD is that they are failing to consistently hold the advantage in anything; generation after generation Intel has been closing the gap on graphics. AMD once dominated the APU space, as Intel’s on-die graphics were crap stuffed into silicon. Now Intel is a fair comparison in the APU space and is advancing in integrated graphics, and will soon be releasing a discrete graphics card. AMD has to stay the best at something, or at least be in the running for first against CPU and GPU competitors. There’s still a long journey ahead, and we will see if AMD can do more than survive; they have to thrive.

        • Ringofett
        • 6 years ago

        [quote<]I often wish that people not using AMD cpus could improve their reading skills...;)[/quote<]

        I can read and comprehend benchmarks very well, which is why I don't have AMD CPUs! Funny how that works. 😉

      • OneArmedScissor
      • 6 years ago

      See my post on AMD’s memory latency problem:

      [url<]https://techreport.com/discussion/25908/amd-a8-7600-kaveri-processor-reviewed?post=793088[/url<]

      It's not as simple as laying the blame on the single-threaded performance, IPC, inefficiency, etc. of the Bulldozer concept. AMD has a history here, which spans the Athlon 64, Phenom, and Bulldozer. They regularly admit these issues, but don't have the time/resources to address them.

      • Tristan
      • 6 years ago

      Because everything AMD can do is brainless copying and whining. So they copied the Netburst philosophy and created Bulldozer.

        • Modivated1
        • 6 years ago

        Intel copied AMD and went 64 bit, then they copied AMD and went Dual core, then they copied AMD and went Quad Core, then they copied AMD and made an APU.

        Someone does something that is great then the industry will follow that’s the nature of the beast.

        Microsoft/Sony copied Nintendo and went 64 bit, 3D, and analog controller. It’s called Adapt or DIE!

          • Airmantharp
          • 6 years ago

          I don’t think either copied the other- each implemented the obvious technological ‘next step’ as it became feasible and practical to manufacture and sell.

            • Modivated1
            • 6 years ago

            For the most part I guess I can agree with you on that; however, the move from 32-bit to 64-bit definitely was Intel following suit. Originally they announced that they were going to make the transition further down the road. Then the Athlon 64 came and trounced everything they had out on the market. They scrambled to fuse two 32-bit dies together to make the Pentium D (which SUCKED!) trying to respond to AMD’s release. From there AMD had 4 years to produce new and advancing tech; instead they sat on their hands (Hector Ruiz!) and let Intel recoup and surpass them.

            • Airmantharp
            • 6 years ago

            Remember that that was for x86; Intel could have done x86-64 at any time, but didn’t because they were pushing IA64 instead.

            Probably a bad gamble on their part, but real 64-bit usage was still years behind the retail availability of 64-bit x86 CPUs. In the end, we can thank AMD for getting the part out there regardless; that otherwise useless step resolved the ‘chicken-and-egg’ position that the industry was in and made Microsoft’s eventual release of a consumer 64-bit OS actually possible.

            • Modivated1
            • 6 years ago

            Yeah, I have to agree with you here; I do remember that pre-64-bit-era talk. I am still glad AMD was willing to step out in front and take the risk, though.

      • pervert
      • 6 years ago

      Sadly I think that isn’t the largest problem. I think it’s a problem with the renta-fabs. They all suck in comparison to Intel. Intel is a generation ahead, and their “3D” or whatever they call their gates on the 22nm node is awesome. There needs to be better fabbing. Intel dumps money into their fabs and gets results. Most everything else is made at a few fabs that just can’t compete. They compete with each other but not at all with Intel. Intel is now selling use of their fabs to select customers. It’d be hilarious to see AMD try and start using them 🙂 It wouldn’t fix all their problems, but it would be a start.

        • Maff
        • 6 years ago

        Regarding your comment about renta-fabs: apparently Intels fabs aren’t that competitive at all.

        Sure, they’re ahead in transistor dimensions, but apart from that their tech apparently isn’t that great, and very expensive as well.

        See here: [url<]http://www.semiwiki.com/forum/content/3032-exposed-intel-wafer-pricing.html[/url<]

          • Klimax
          • 6 years ago

          Incorrect. I think it was under discussion on RealWorldTech, but they currently lack forum search, so I’ll link to two candidate threads.
          [url<]http://www.realworldtech.com/forum/?threadid=137739&curpostid=137739[/url<] [url<]http://www.realworldtech.com/forum/?threadid=137844&curpostid=137844[/url<] Also, the linked "article" doesn't source much, so I would consider it a baseless shout. I don't think there is currently any evidence for those assertions.

            • Maff
            • 6 years ago

            Thanks! I probably stand corrected. I only discovered that semiwiki site today, so I’m not so sure about its reputation.

    • swaaye
    • 6 years ago

    Anandtech has the old Radeon HD 5750-aka-6750 blowing all of the iGPUs away by a large margin (for the most part). It puts things into perspective. I’m not sold on the value of all of those transistors being dedicated to such effectively minimal 3D power.

      • maxxcool
      • 6 years ago

      It is true, but this is about survival in a thin-margin, mobile-centric market. I think HSA is a garbage trick (as is Mantle), but these ought to do well in super-light laptops and thin HTPCs.

    • UnfriendlyFire
    • 6 years ago

    Will TR also do a RAM scaling test sometime in the future? 1600 vs 1886 vs 2133 mhz, with differing latencies?

      • maxxcool
      • 6 years ago

      I would like to see this as well, but I suspect that until the iGPU has its own private bus to the RAM, anything faster than 1866 is wasted, except for underclocking 2133 and pushing much tighter timings.

        • maxxcool
        • 6 years ago

        Hrmm, I stand corrected... check the link. Obviously a TR review would be preferred.

        [url<]http://us.hardware.info/reviews/5156/42/amd-a10-7850k-kaveri-review-amds-new-apu-overclocking-ram[/url<]

    • Cyril
    • 6 years ago

    Based on their contributions to this thread as well as their posting histories, blightymate and ximage21 have been banned for violation of rule 13.

    • the Lionheart
    • 6 years ago

    So when are we gonna see a decent physics engine that puts Kaveri to good use?

      • maxxcool
      • 6 years ago

      Um, AMD already has one... it’s called Havok... and like PhysX, it is mostly “meh”.

        • the Lionheart
        • 6 years ago

        Havok is horrible, and it’s not AMD’s engine, it’s owned by Intel as a matter of fact.

          • maxxcool
          • 6 years ago

          hrmm .. time to hit wiki.. thought they were still using it ..

            • sweatshopking
            • 6 years ago

            They do use it. Intel licenses it out.

          • maxxcool
          • 6 years ago

          Oh snap! You’re right on the money 😛

    • ronch
    • 6 years ago

    Can’t we ever get decent die shots these days anymore???

      • Deanjo
      • 6 years ago

      Oh you and your need for silicon pornography…..

        • ronch
        • 6 years ago

        Yeah. Speaking of silicon pron, AMD pioneered the idea. Remember the SemPRON?

          • Deanjo
          • 6 years ago

          [quote<]Remember the SemPRON?[/quote<] Nope, Google's safe search must of filtered it out. ;D

            • ermo
            • 6 years ago

            [quote<]"Nope, Google's safe search [i<]must of[/i<] filtered it out. ;D[/quote<] Et tu, Brute? (I'm sort of hoping it was on purpose...)

      • spuppy
      • 6 years ago

      [url<]http://i.imgur.com/JOFzlcy.jpg[/url<] Enjoy!

        • ronch
        • 6 years ago

        No good.

    • Modivated1
    • 6 years ago

    I would like to see the gaming performance in Mantle applicable games after the patch is released added to this review to get an idea of the difference in performance it would offer low end chips.

    EDIT: The true mark of a hater *rolls on the floor laughing* is when you have a problem with a comment like this. It’s not proclaiming that AMD is the savior of the computing world, not in any way cheerleading; it’s just requesting an honest analysis of how a valuable API release might affect the performance of this newly released hardware.

    Some people can hate a product no matter how good it is because of where it comes from. I try not to let this be me. I have my preferences but I will acknowledge when the other guy has produced a better product.

    CAN YOU GIVE CREDIT WHERE CREDIT IS DUE?!!!

      • nanoflower
      • 6 years ago

      That won’t happen, since it looks like it will be a month before the Mantle BF4 patch comes out, and this article will be considered old news by then. I’m sure that the TR guys will be doing new BF4 tests (and other games) once there are public releases of Mantle support, which will hopefully include not only the AMD lineup (up to the 7850K) but also the Intel i3, i5, i7 lineup.

        • Reuel
        • 6 years ago

        They aren’t going to update this article. They will make a new article with the Mantle info.

      • LostCat
      • 6 years ago

      Honestly, whining about downvoting is about as pointless as it gets.

    • lilbuddhaman
    • 6 years ago

    According to my calculations, once mantle comes out these APU’s will run BF4 at [i<]up to[/i<] 126fps@1080p! And with that highly efficient transistor usage, AMD will certainly have a profitable year!

      • mikato
      • 6 years ago

      In fact, they will make up to 100,099,399,bazillion,230,992 dollars!

      I keeed. I hope AMD and HSA and HSAIL and hUMA start kicking some butt in new software usage ASAP so we all benefit.

      • maxxcool
      • 6 years ago

      1.21 Jigawatts!

    • anotherengineer
    • 6 years ago

    Hey Geoff,

    Where are the LiteCoin benchmarks 😀

    Edit – Chuckula 22 posts/105 posts (21%) lolz seems a bit fanatical to me

      • Dissonance
      • 6 years ago

      We’re holding back on cryptocurrency testing until an HSA-accelerated Coinye generator is available.

        • chuckula
        • 6 years ago

        I KNEW HSA WAS A MIRACLE!! COINYE FTW!

        • anotherengineer
        • 6 years ago

        😀

        roger that!

      • spuppy
      • 6 years ago

      There is a problem with their driver, where the cgminer window remains blank until the process is shut down. I did run a quick bench using thread-concurrency 8192 and got about 60 kh/s.

      As you know though, it takes a lot of tweaking to get full performance out of a GPU. I didn’t have time to test it over and over again with a blank screen to get there. Unless 60 kh/s is it! Richland gets 50 for me.

      • chuckula
      • 6 years ago

      [quote<] Edit - Chuckula 22 posts/105 posts (21%) lolz seems a bit fanatical to me [/quote<] Two reasons: 1. Aside from Spiggy's alter-egos, many people seem to be a little quieter than normal. This ain't the next Athlon 64 no matter what they tell you. 2. [b<]The HSA excites me! I NEED HSA!![/b<]

        • NeelyCam
        • 6 years ago

        [quote<]1. Aside from Spiggy's alter-egos, many people seem to be a little quieter than normal. [/quote<] You're not talking about me, right...?

        • LostCat
        • 6 years ago

        I’m only mostly quiet cause I’m still waiting on Mantle benches.

      • raddude9
      • 6 years ago

      Yea, it’s a shame when article comments get hijacked by people who think everything they say is important.

      I propose TR set a limit of 2 or 3 posts an hour on a single article. That way people might engage their brains a bit more before spewing out pointless comments.

      • jdaven
      • 6 years ago

      It’s getting harder to read the comments when every other one is chuckula. Maybe a filter on usernames? I’m sure there are many who would block jdaven?

        • chuckula
        • 6 years ago

        Sorry Jdavs… I’ve been stuck traveling for the last couple of days.
        You can register your complaints with me when you buy me wings & beer later.

      • ptsant
      • 6 years ago

      A Richland gets easily 50kh/s with little tuning. I suspect that with GCN cores and overclocking, 100kh/s could be attained. With the current network hash speeds this amounts to … not much.
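      For scale: your expected daily take is just your fraction of the network hashrate times the coins minted per day. A rough sketch of that arithmetic (the network hashrate, block reward, and block time below are illustrative placeholders, not current figures):

```python
def expected_coins_per_day(my_khs, network_mhs, block_reward, block_time_s):
    """Expected coins/day = (your hashrate / network hashrate) * blocks/day * reward."""
    my_mhs = my_khs / 1000.0               # kh/s -> Mh/s
    blocks_per_day = 86400 / block_time_s  # seconds in a day / seconds per block
    return (my_mhs / network_mhs) * blocks_per_day * block_reward

# Illustrative: 100 kh/s against a hypothetical 200 Gh/s scrypt network,
# 50-coin block reward, 150 s block time.
print(f"{expected_coins_per_day(100, 200_000, 50, 150):.4f} coins/day")
```

      Plug in 50-100 kh/s against any plausible network hashrate and the answer is a small fraction of a coin per day, i.e. "not much."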

    • JohnC
    • 6 years ago

    I’ve read this whole review; the only thing somewhat useful is the power consumption/efficiency. For everything related to actual performance I’d rather do my own tests instead of relying on a “review sample” system and a specific selection of unrealistic games (the only type of games I occasionally play on integrated GPUs are things like Hearthstone, Defence Grid, Torchlight 2, and using WoW/EVE Online as a 3D chat client for some solo PvE stuff). No offense to TR, of course – I understand you were forced to test this APU in a specific way; you have to review this stuff as soon as possible in order to stay relevant, and you also have very limited manpower.

    P.S: Always amusing how such articles attract multiple shill agents… I already counted at least 3, same goes for Anand’s article.

    • dpaus
    • 6 years ago

    Now that an APU can deliver a solid gaming experience without requiring a dedicated graphics card, it’ll be interesting to see the ‘Budget’ build in the next System Guide.

      • chuckula
      • 6 years ago

      Considering the last “Econobox” build included a much more powerful GPU than even the HD-7750, I strongly doubt you’ll see any movement with the exception that the A8-7600 will replace whatever Trinity-equivalent is present in the alternatives section.

      Edit: TR recommended the HD-7850, which is much, much stronger than the HD-7750, and the HD-7750 is clearly stronger than Kaveri.

        • dpaus
        • 6 years ago

        Yes, but.. That’s kinda my point. The last Econobox had a $120 CPU and a $150 graphics card. This article shows us that a $120 APU can deliver an adequate gaming experience at a $150 savings – almost 24% of the entire cost of the Econobox config. And since no GPU card is required, it can be put in a much smaller case if desired.

        I guess the question is priorities, as always, but is this an ‘Econo’ box first or a gaming rig first? Because, just like the closing section of the review points out, it’s not really hitting either target anymore.

          • OneArmedScissor
          • 6 years ago

          Or if it runs games smoother than i3s, that’s a few dollars to put towards a faster graphics card. AMD already blew a lot of that potential with the A10 price hike, though.

            • RDFSteve
            • 6 years ago

            “that’s a few dollars to put towards a faster graphics card”

            No, the point is, there’s now a $475 AMD-based Econobox config that can do ‘good enough’ (for some) gaming. What can you put together with an Intel CPU/APU for $475 and how does it compare at gaming?

            • Klimax
            • 6 years ago

            Depends on the price point of cheaper Iris (Pro) chips with their respective NUCs (or ITX boards), and furthermore it would depend on when Broadwell ships and what configurations look like.

          • NovusBogus
          • 6 years ago

          Based on some arguments I’ve had with oldtimers it seems TR views all of the builds, including Econobox, as gamer/enthusiast systems at varying price points. Once Kaveri pricing stabilizes I’m going to see about putting together a Dormbox build that’s good enough for general use as a service to broke college students everywhere.

            • superjawes
            • 6 years ago

            Well yes, they are typically considered gaming builds, and on top of that:

            [quote<]Decent graphics performance is a must here, [b<]as is a strong upgrade path.[/b<][/quote<] As long as the Econobox is viewed as an entry level machine that can be upgraded later to keep up, we'll still see dedicated graphics cards. At least you can always head over to SBA and get a solid build recommendation based around an APU.

      • raddude9
      • 6 years ago

      This is something I’ve been saying for a while. Now that we have an APU that can play most games at 1080p and medium quality settings, that’s plenty good enough for most people.

        • Deanjo
        • 6 years ago

        When gamers are bitching about their displays only being able to handle 60 fps on a regular basis, I really don’t see how sub 30 fps performance on today’s titles (not to mention tomorrow’s) can be considered “good enough”. I would expect that because of the mediocre performance, most people would still turn down their resolution to 720 for game play on Kaveri to get a better game play experience.

          • dodozoid
          • 6 years ago

          I guess the fraction of gamers who complain about 60Hz screens is very low.

    • blightymate
    • 6 years ago

    The true performance potential of Kaveri lies in its HSA capabilities. Mantle games leveraging PS4 HSA optimizations will be at the forefront of highlighting this potential.

    PvZ:GW benchmarks will be notable.

      • the Lionheart
      • 6 years ago

      Valid point! It would be great to get decent APU-accelerated physics in those new games.

        • chuckula
        • 6 years ago

        Uh… Spigzone, logging in using two different sock-puppet accounts to compliment your own trolls doesn’t make you cool, no matter what they tell you at school.

          • the Lionheart
          • 6 years ago

          You’re flamboyantly rude mate. The point he makes is valid… Ever heard of GPU-accelerated physics?

      • chuckula
      • 6 years ago

      Oh… so blightymate is saying that the launch of Kaveri has been delayed because the chips that AMD deigned to provide to TR aren’t the “real” Kaveri chips.

        • Modivated1
        • 6 years ago

        NO… THAT’S not what he is saying at all..READ IT AGAIN!

        STOP BEING A HATER!

          • chuckula
          • 6 years ago

          I’ll stop being a “hater” when you stop insinuating anyone who buys an FX-6300 and an HD-7750 for less money than Kaveri is an Intel fanboy.

            • Concupiscence
            • 6 years ago

            Truth. The FX-6300’s going for $90 at Micro Center, and that comes with a $40 discount on an AM3+ motherboard. Grab a GDDR5 Radeon 7750 online and you’d be hard-pressed not to come out ahead.

            • Modivated1
            • 6 years ago

            Ok, if I implied or said that in past posts, my bad, but rather than make your point, you rant all over the thread.

            The number one thing rants do is make sure people know you’re angry. That brings so much focus to itself that it’s hard to decipher any point you are trying to make in the process.

            • freebird
            • 6 years ago

            But that setup would use more power and have no future... and no hUMA/HSA support. AMD has to start putting chips out with HSA support sometime, don’t they??? If they want people to write software that uses both the CPU and GPU in everyday software, there have to be systems owned by ORDINARY PC CUSTOMERS who will BUY the software, right??? So quit blaming AMD, which is ADVANCING the industry on a market cap of 3 billion while Intel sits on over 130 billion in market cap and doesn’t innovate as much as AMD seems to...

            Intel has relied on monopoly market prowess to keep AMD from competing directly with them over the last 15 years and milked the PC/server/laptop market for all it is worth, but that is soon coming to an end: low-power servers are invading the cloud, which made AMD’s move to buy SeaMicro a very smart idea. Tablets, dual-use devices, and smartphones are taking a larger and larger portion of consumer spending each year, which is why AMD has gone through a lot of change lately. They can’t drop 10 billion $$$ to develop a CPU to match what Intel has, and what then??? Intel still has its monopoly power and would do the same thing it did 10 years ago: block and lock out AMD with major system makers. AMD’s best bet is NOT to try and compete with Intel now, but to develop their own markets with better products, which they are hoping hUMA and HSA will become.

            If it wasn’t for AMD we’d still be in the 32-bit hell of Windows XP & the Intel Pentium XII

            • Deanjo
            • 6 years ago

            [quote<]If it wasn't for AMD we'd still be in the 32-bit hell of Windows XP & the Intel Pentium XII[/quote<] And the reason we are not in that hell is because AMD did go after and competed directly with intel in the heavyweight division. Back then they were pushing intel to move forward. Now days they are not and we see it with very minor improvements to performance with each cpu family release. It is a far cry from the days where processor releases were exciting because AMD and intel were trading blows in the Athlon to pre Core 2 years.

            • nanoflower
            • 6 years ago

            I’m not against HSA, but how is software written for HSA advancing the industry when it isn’t usable by about 90% of the current PC market (Intel and non-HSA-capable AMD CPUs)? The idea is fine, but there needs to be a solution that both Intel and AMD support for developers to really start supporting it.

            • Reuel
            • 6 years ago

            Devs will have to write code for both. People will see how much faster HSA is and buy those CPUs. Pretty soon your 90% figure will be out of date and it snowballs from there.

            AMD is betting the company on that.

            • Reuel
            • 6 years ago

            The 7750 uses more power but it gives you more performance for the same money. That’s a “win” in my book.

      • maxxcool
      • 6 years ago

      Too bad the PS4 doesn’t use Mantle. Or ever will.

        • Airmantharp
        • 6 years ago

        Or ever could… but that’s not what he’s saying.

        • freebird
        • 6 years ago

        The PS4 doesn’t have to use Mantle, because most of the game and system design (how you code, etc.) is very similar between the PS4 API and Mantle.

          • maxxcool
          • 6 years ago

          No, and no. Two hypervisors sit between the OS and the hardware; think VMware inside VMware. That’s been Sony’s OS M.O. for a while now, to keep people from installing other OSes.

    • willmore
    • 6 years ago

    Wow, there’s a lot going on here. It’s sad that AMD didn’t get the A10 chips out to more reviewers in time for them to have good data by the expiration of the NDA. I don’t buy the conspiracy theory about AMD punishing certain reviewers; it’s way more likely that AMD is just trying to prevent leaks–which they’ve had a lot of trouble with in the past.

    I found this review informative, and it really looks like Kaveri is doing a lot better than Richland. Comparing it to Intel is a lot harder. The i3 part is a 2-core/4-thread device and the i7 part is a 4/8 part. It would have been helpful to see an i5 part–which are 4/4. That would have helped show how well the Steamroller modules are performing vs. similar Haswell cores. Also helpful would have been IVB-era i3 and i5 parts, to show how the L1 cache improvements that Haswell brought affect things. That would also be useful because the IVB i3 parts didn’t have AES-NI, and that could be illuminating.

    This review also shows how many apps are making use of GPU computing. That’s good to see, as it finally lets us users benefit from the direction Intel and AMD have gone in past years.

    • willg
    • 6 years ago

    I think everyone expected Kaveri to at least launch at Richland clocks, but it looks like AMD is stuck with a ‘speed demon’ CPU design on a ‘density not speed’ optimised 28nm bulk process. Ah, the joys of not owning your own fab.

    To be fair, though, 100W Trinity/Richland made little sense, as Scott stated in the reviews of those CPUs, so not being able to go after that market aggressively probably isn’t a huge loss. I’m not sure how you get a design win with one of those CPUs when your competitor’s product sits in an 84W or lower envelope, trounces yours in CPU performance, and probably costs the same or less with Intel’s OEM pricing.

    I would guess lower TDP parts must make up something like 80%-90% of the market now, when you consider all-in-ones, small form factor PCs, laptops etc. So optimising for lower TDPs makes sense, even if it annoys us enthusiasts who want to see some competition return to the midrange to high-end CPU space.

    • UnfriendlyFire
    • 6 years ago

    Looks like an ideal chip for a budget Steam machine.

    I’m more interested to see how their Kaveri mobile chips do, because I plan on getting a new laptop during the summer to replace my nearly 4 years old laptop.

      • Airmantharp
      • 6 years ago

      BF4 at 1080p on medium pre-Mantle? Count me in!

      I’m standing in line for G-Sync on a larger IPS panel for my desktop, but under the 55″ I see no reason not to bolt a top-end Kaveri to a diminutive mITX platform. About the only thing that SteamOS needs to perfect is streaming playback- get Netflix/Hulu/Vudu/Amazon/YouTube streaming and we’re good. One box to rule them all!

    • Unknown-Error
    • 6 years ago

    Huh?! WTF is with those power numbers – [url=https://techreport.com/review/25908/amd-a8-7600-kaveri-processor-reviewed/12<]link[/url<]??? 84W 4-core/8-threaded top of the line 4770K is more energy efficient (and actually consumes less power overall) than the ultra-mediocre 65W A8-7600? Even Richland A8/A10 does better. Ok.......seriously....... What the [b<]*CENSORED*[/b<] is this [b<]*CENSORED*[/b<]?!

      • chuckula
      • 6 years ago

      You didn’t get the memo? Power consumption and efficiency were only relevant when AMD was winning in 2005. Ever since the P4 it’s been unimportant.

      Oh and Intel lies with its TDP numbers, didn’t any AMD fanboy ever tell you that?

    • tipoo
    • 6 years ago

    What’s weird with the Iris Pro 5200: it does well in average FPS in Anandtech’s BioShock Infinite tests (at least on low settings; turn up the whizbang effects and it suffers more than others), but it does very poorly in the *minimum* FPS. I wonder if it has weird FPS drops?

    [url<]http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/12[/url<]

      • chuckula
      • 6 years ago

      In one game (Bioshock). Even in Tomb Raider, which AMD specially tuned, the Iris Pro has better minimum frame rates and takes the lead from Kaveri in minimum framerates at the highest resolutions.

    • derFunkenstein
    • 6 years ago

    Disappointed about the new socket, if only because I have a super-budget A6 Richland system that my wife uses, and she would like a bit better graphics performance. Now I’m stuck either buying a new motherboard along with a Kaveri CPU or just buying a Richland A10.

      • willmore
      • 6 years ago

      Buck up little camper, maybe those A10 Richlands are going to drop in price, now!

      • Deanjo
      • 6 years ago

      It wouldn’t make much sense bumping up the APU just for better graphics. Just buy a video card and get even better video performance for the same money.

        • flip-mode
        • 6 years ago

        Fo sho.

        • the Lionheart
        • 6 years ago

          What do you mean by better video performance? Both the UVD and VCE are updated in Kaveri.

          • derFunkenstein
          • 6 years ago

          I assume since Deanjo was replying to me, he means 3D graphics since that’s what I meant. :p

        • derFunkenstein
        • 6 years ago

        Well it’s an A6, which means only one CPU module, too. If I’m going to spend money, I’m going to bump the CPU as well.

    • Hattig
    • 6 years ago

    It’s better than Richland. That’s good. The gaming performance is good (for the total price, integrated, etc). OpenCL is sometimes effectively accelerated. The memory controller is improved, if not up to the standard of Intel’s. Finally GlobalFoundries is shipping volume 28nm product.

    The 45W version isn’t that much slower than the 65W option. And it’s $119, which is cheaper than the Core i3 as reviewed. I doubt the 95W options are actually that compelling assuming you will be using a discrete GPU in such a desktop.

    I noticed there were no performance-per-dollar graphs to go along with the performance-per-watt charts.

    So … the 45W versions would make very good gaming small form factor chips, and AMD has moved forward, but not as much in the CPU area as people would have liked, or even expected.

    • Jon1984
    • 6 years ago

    This, put in a box similar to the Gigabyte Brix, with 8GB of RAM, a 120GB SSD, and Steam OS for $450-500, would be a must-buy for me. Powerful enough to handle any game with decent settings at 720/1080p.

    • ximage21
    • 6 years ago

    Looks like the low power AMD Kaveri 7600 has video acceleration and is even faster than the $1000+ Intel i7-4960x and all other Intel CPUs at video transcoding!

    [url<]http://www.guru3d.com/articles_pages/amd_a8_7600_apu_review,13.html[/url<]

      • chuckula
      • 6 years ago

      OK: Official request for the ban-hammer. I’m getting sick of this crud when people who legitimately like AMD products like Ronch are being subjected to personal attacks while copy & pasted spam gets blasted all over the comments.

        • ronch
        • 6 years ago

        Even an AMD fan such as I can’t understand these people.

        • the Lionheart
        • 6 years ago

        You do it all the time, in a more restrained way tho.

          • chuckula
          • 6 years ago

          Uh… name one post where I use idiotic language like that… seriously.. Spigzone.

            • ronch
            • 6 years ago

            I kinda like the name ‘Spigzone’ better than ‘the Lionheart’, which kinda seems awkward. Reminds me of The Lion King, or perhaps Spiggy loves watching Care Bears. LOL.

            Just kidding, [s<]Spiggy[/s<] the Lionheart! 😉

        • Modivated1
        • 6 years ago

        Ok, Chuck. You argue that AMD fans don’t present facts, and then when someone does show references, you call it spam? This article is totally relevant to the subject and illustrates the truth behind his claim.

        I just chimed in here; let me know how many times he has pasted in this thread. If it’s an outrageous amount of times I will edit my post.

      • Deanjo
      • 6 years ago

      Until you enable QuickSync.

        • Unknown-Error
        • 6 years ago

        sssshhhhhhh……

        don’t ruin things 😉

      • Hattig
      • 6 years ago

      I don’t know why you are modded down for this apart from linking to another review, because it’s quite clear that when AMD’s hardware transcode is used, it’s really really fast. The Tech Report review didn’t test this (maybe because it’s not a common use case for enthusiasts who will use HQ software encode?). Alternatively, Intel’s hardware acceleration wasn’t enabled for those tests? Is that it?

      For people who are happy to use software that leverages the hardware transcode (TwitchTV, etc.), or users who need to use hardware encode, this is good.

      Obviously there are probably quality issues compared with the H.264 software encoder option, which is why you would want to use the latter for stuff you are encoding to keep.

        • chuckula
        • 6 years ago

        [quote<]I don't know why you are modded down[/quote<] I do. No real person talks like that and this "ximage21" account seems to post out-of-context spam in the same manner that I've seen from some bots. Look down the exact same page that ximage21 linked to and you see real benchmarks in a real transcoding program (Handbrake) where Kaveri is embarrassed by 3 year old chips (and I ain't talking about the $1000 or even $300 chips either).

      • maxxcool
      • 6 years ago

      paid-post..

    • alientorni
    • 6 years ago

    In other news, I think Battlefield 4’s Mantle delay has worsened this APU launch, and given the fact that there’s no top-of-the-line APU review on TR, the final verdict is also delayed.

    • anotherengineer
    • 6 years ago

    “To be honest, I didn’t expect something that plays Battlefield 4 as well as the A8-7600. Despite sporting a cut-down version of Kaveri’s integrated GPU, the A8-7600 still pumps out playable frame rates at 1080p resolution with medium details.”

    So a 25W or 35W notebook with the classic 1366×768 resolution should be able to run most games no problem at all then?

    Now I wonder how this 45W version compares to my dad’s S939 single-core Venice at 2.4 GHz with 1GB of DDR-400 and an X800 Pro Radeon. Would it be worth upgrading? We need the Master CPU list again 🙂

    I’m also curious what other things can be done to improve the CPU’s IPC. L3 cache? Improved cache speed? Increased memory controller efficiency?

    An 8 module Kaveri CPU could be a decent upgrade to Vishera?

      • chuckula
      • 6 years ago

      [quote<]An 8 module Kaveri CPU could be a decent upgrade to Vishera?[/quote<] An 8 module "Kaveri" (more accurately Steamroller) chip does not exist anywhere on AMD's roadmaps and likely will never be built. Might as well wonder what would happen if Intel brought Iris Pro GPUs to the desktop... oh wait. [Edit: Looks like the AMD mod-squad is out in full force. So uh... go ahead and show me that powerpoint slide where AMD is promising you a 16-core (or even 8-core) Steamroller... go ahead... I'm waiting...]

      • maxxcool
      • 6 years ago

      Unfortunately an 8-core SR will never exist.. the GPU takes up too much room even with an amazing die shrink. AMD’s next move will be to fully separate the cores and shuffle the FPUs back to “dedicated per core” in the next iteration; that’s about as far as they will go on the ladder ..

      • Concupiscence
      • 6 years ago

      Even if Kaveri’s performance is disappointing by the high bar Intel has set, a 45W dual module Kaveri would slamdance all OVER your dad’s old Athlon64.

      edit: Really? I’m being downvoted for asserting that a four CPU/dual module Kaveri with a GCN Radeon would frog stomp a single core K8 with a graphics card that doesn’t even support Shader Model 3.0?

      • OneArmedScissor
      • 6 years ago

      Steamroller still has a cache latency bug. AMD said it will be fixed in Excavator.

      It’s possible the “module” concept is still slowing it down. 2MB of shared L2 is going to be higher latency than a smaller amount for each core.

      High memory latency has been AMD’s disadvantage for years now. It became a low priority as far back as the 65nm Athlon 64, which was slower than the 90nm version.

      The original Phenom’s L3 was slower than Core 2’s L2. The earlier i7s ran their L3 at up to 2.7 GHz, while AMD stayed at 2 GHz.

      The FX CPUs still only run the L3 at 2.2 GHz, the same speed as Phenom IIs, which is likely why they’re dropping it.

      Since Sandy Bridge, Intel’s CPUs all can run the L3 cache, memory controller, and ring bus at the same clock speed as the CPU. That dramatically cut memory latency, but it hasn’t budged since.

      That lull may have given AMD a false sense of security. Intel’s eDRAM is really going to rub salt in the wound if it becomes a common feature.

        • anotherengineer
        • 6 years ago

        Ahhh Thanks

        So is that due to AMD’s design, the fab process, or both?

        • freebird
        • 6 years ago

        I regularly run my 1090T at 2800MHz on the NB, and I hope to run my FX-8320 at 3000+ on the NB with better cooling. AMD is very conservative about the speed they run their NB/L3 cache at.

      • NovusBogus
      • 6 years ago

      A 15.4″ 1366×768 biz-class tank that can run games decently well is exactly what I want for my next laptop. ‘Gamer’ notebooks piss me off for a multitude of reasons.

        • Chrispy_
        • 6 years ago

        But… but… but!

        [list<][*<]Won't you miss a laptop that looks like a robot? [/*<][*<]How will you cope without garish multi-coloured lights everywhere? [/*<][*<]What if your WASD cluster isn't outlined in red? [/*<][*<]The lack of GO FASTER decals all over the chassis will make you visibly inferior to your peers.[/*<] [/list<] Most importantly though, it'll be thinner and lighter to carry - You will grow weedy and frail without the helpful burden of a hefty gaming brick to keep you fit, leading to a traumatic early death.

        • MrJP
        • 6 years ago

        I bought a cheap HP laptop for the family earlier this year to replace the old Dell that finally succumbed to Nvidia’s dodgy solder bumps. The HP has a Trinity A8-4500 with a 1366*768 panel and I’ve been pleasantly surprised at just how capable this is with games. It’s also perfectly snappy in normal desktop tasks despite the handicap of a mechanical HDD. OK the screen and keyboard are not fantastic quality and it’s certainly not ultrabook-thin, but I can see this being more than enough computer for most people and much better than anything Intel-powered at anything like the same price. It’s such a shame that AMD hasn’t been able to do the right promotion or the right OEM deals to get these things the market success they deserve.

        • FuturePastNow
        • 6 years ago

        I’m not sure we’ll see any business-class Kaveri notebooks. I don’t recall AMD’s previous APUs making it into business lines.

        Which is silly, when AMD’s marketing department has the names Opteron and FirePro at its disposal and could easily rebrand some APUs to sell for cheap workstations.

        Yes, yes, we all hate rebranding, I know.

      • Chrispy_
      • 6 years ago

      I know from experience that an X1800XT isn’t enough to play Borderlands [i<]1[/i<] smoothly at low details, and an X1800XT is a much better graphics card than an X800 Pro. Any A6 or better should be fine for gaming at 1366x768 as long as you're okay with around 30fps in some games.

    • ronch
    • 6 years ago

    Back when Vishera came out I was seriously contemplating whether I should get Vishera already or wait for Steamroller. I chose to make the jump to Vishera in December of 2012 and I’m glad I did, for the following reasons:

    1. Turns out there wouldn’t be any FX 8-core chips based on Steamroller so the wait would’ve been for nothing. Sorry, not really interested in these APUs. I read APU articles but I probably won’t buy one of these for my own use unless it’s for a secondary machine such as a laptop.

    2. I got my MSI 990FXA-GD65 for just half the regular price. Good luck finding another fire sale like that on a decent 990FX motherboard. I got lucky.

    3. DRAM prices today are twice what they were when I got my FX-8350.

    4. Even if AMD decided to put out 8-core FX chips the performance improvements look quite minimal right now.

    Overall, I’m really glad I jumped the gun last year and I’ve been a happy FX owner for a year now.

    • ximage21
    • 6 years ago

    The gaming frame times of the low-power Kaveri against Intel’s $300 i7 are pretty eye-opening as to how far ahead of Intel AMD is in terms of graphics.

    Other sites are also showing that LibreOffice’s OpenCL acceleration on AMD’s Kaveri can be nearly 7-10 times faster than Intel’s i5-4670K, which shows the power of Kaveri’s GPU compute.

      • chuckula
      • 6 years ago

      Oh look, a little zoner came over to visit. Tell ya what, if you are capable of intellectual processes that extend past copying and pasting information from AMD’s powerpoints that aren’t part of any credible review of these chips, why don’t you explain why AMD failed to even give TR a 7850K to test? How about it?

        • the Lionheart
        • 6 years ago

        Why does it matter to you whether AMD gave a 7850K to techreport or not? Perhaps they forgot? From the difference in clock speeds, you should be able to tell how the 7850K fares against the chips tested today…

        With that said, you think AMD didn’t provide TR with a 7850K for some “bad reason”? If so, care to explain what that would be?

        • Modivated1
        • 6 years ago

        Probably because they feel they won’t get fair scrutiny. Remember the Nvidia G-Sync thread? In it I questioned why TechReport hadn’t published a news report about the black-box nature of Nvidia’s profiles, which denies developers and AMD the ability to see the code and modify or alter their programs or drivers to better run it.

        Damage said that indeed they hadn’t reported it because they were still investigating it. That information was originally released in late 2013/early 2014, almost three weeks ago. Things like that probably make AMD feel that a fair perspective on its products is no longer available here.

        Now CES and a whole flurry of information has come to the table this New Year (as it always does), and there are only 4 knights at TechReport’s round table, so I can understand that the overload of information takes time to process, ingest, and distribute. While the first time I was quick to make an accusation, I am trying not to be quick in making one here. Nevertheless, I do hope to see the results of TechReport’s investigation into Nvidia and the black-box issue.

      • ronch
      • 6 years ago

      Are you an AMD shill?

      • maxxcool
      • 6 years ago

      … but who uses OpenOffice? Nobody but me, really, and I work in a tech office with 1800 people and have yet to see anyone pull up OO or LibreOffice.

        • Concupiscence
        • 6 years ago

        I run OpenOffice Calc, but almost exclusively for dealing with CSV files. That’s because my install of Excel 2007 has insane default behavior and misinterprets data in cells for CSVs in spite of my efforts to correct it. That’s a great big no-no for dealing with well logs and oil field data.
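        For what it’s worth, the usual workaround in code is to force every column to be read as text so nothing gets reinterpreted; a minimal Python sketch (assuming pandas is available — the column names here are made up for illustration):

        ```python
        import pandas as pd
        from io import StringIO

        # Sample well-log-style CSV; spreadsheet apps would silently mangle
        # values like "1-10" (read as a date) or "007" (leading zeros stripped).
        csv_data = StringIO("API_number,interval,depth_ft\n007,1-10,2500\n")

        # dtype=str forces every cell to stay exactly as written in the file.
        df = pd.read_csv(csv_data, dtype=str)

        print(df["API_number"][0])  # "007" -- leading zero preserved
        print(df["interval"][0])    # "1-10" -- not coerced to a date
        ```

        The same idea applies in any tool: import as text first, then convert only the columns you know are numeric.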

          • maxxcool
          • 6 years ago

          Indeed, as do I.. it is also AWESOME for importing weirdly formatted log files into Calc for better row sorting 🙂

            • Concupiscence
            • 6 years ago

            I have to echo your mild disapproval of the rest of the suite, though… Bugs I ran into in 2006 with image-embedding behavior in an OpenOffice doc that varied depending on the image format were still in both OpenOffice *and* LibreOffice in 2012. Yuck.

          • Deanjo
          • 6 years ago

          That still hasn’t improved much in the newer versions of Excel.

        • indeego
        • 6 years ago

        We use it as a backup to Microsoft Office. Sometimes Office can’t open its own documents, or we need the file support.

    • chuckula
    • 6 years ago

    Hey Guys! Remember that complete disaster last year called Haswell?

    Well ya know what? No matter how bad Haswell was, Intel still sent TR review samples no strings attached and didn’t try to punish TR for failing to toe some sort of party line in its review.

    To paraphrase a little Song of Ice and Fire: Powerpoints are Wind.

      • derFunkenstein
      • 6 years ago

      True fact. And I don’t even get it – Kaveri isn’t a bad chip at full speed. Eats a bit more power, but generally as good or better than Richland in CPU tasks despite a clock speed deficit and across the board better graphics. If I were one of those iGPU gamer weirdos I might consider upgrading a Richland box to Kaveri. If I were a normal person I wouldn’t.

        • chuckula
        • 6 years ago

        I agree. Kaveri has a pretty strong IGP that (in its full-power incarnation) is competitive (sometimes a little behind, sometimes a little ahead) with the Iris Pro in the i7-4770R that Anand had on hand for testing, if you read his review.

        Considering that Kaveri is cheaper than the i7-4770R, that’s decent for situations where CPU performance isn’t a limiting factor and where you can fit a big enough cooler for a full-bore Kaveri.

        Kaveri’s not a disaster, but it’s not a miracle either, and I’m Krogothed with AMD’s behavior in “punishing” certain review sites especially when TR said some awfully nice things about Kaveri in spite of AMD’s behavior.

          • derFunkenstein
          • 6 years ago

          For sure TR has to be way up toward the top of AMD’s shit list. 😆

          • NeelyCam
          • 6 years ago

          “Punishing?” What are you talking about?

            • chuckula
            • 6 years ago

            You did see that TR wasn’t given the A10-7850K for review? Apparently AMD was… shall we say… selective about the recipients of the high-end chips.

            The A8 in this review is downclocked and only has 384 GCN shader units active. It also isn’t on sale yet.

            • Damage
            • 6 years ago

            Look, we eventually got an A10-7850K from AMD last week at CES, like I said earlier–just not in time (or in the right place) for us to conduct testing prior to today.

            I suspect those publications with 7850K numbers had people who weren’t at CES to work on the review ahead of time or something like that. Geoff was doing that work for us, but he’s far, far from my lab and my stock of 65W and 95W Richlands, so I chose to carry the 95W part back from CES myself. And I didn’t get both a 95W chip and FM2+ mobo together here until today.

            So we just couldn’t do 95W testing in time for today’s review.

            I don’t entirely understand the push-pull that happened inside of AMD that created a plan to supply reviewers with only 45W parts and then later to offer some of us 95W parts. Big companies are weird. But I don’t think we were being punished or any nonsense like that.

            The nonsense instead was firmly centered around 1) attempting to protect the 95W Kaveri’s CPU performance from scrutiny, 2) soon realizing the initial plan to only supply 45W review units wasn’t good PR, and 3) trying to ship a product for revenue as quickly as possible, even though CES fell right in the launch window. That’s plenty enough nonsense for me.

            • chuckula
            • 6 years ago

            Thanks for the clarification!
            We are looking forward to the full A10 review. Do you still have access to that 4950HQ rig that you tested last year (or to a 4770R?)

            I’d be curious to see TR’s take on how a full-bore Kaveri IGP does against Iris Pro.

            • ronch
            • 6 years ago

            [quote<]1) attempting to protect the 95W Kaveri's CPU performance from scrutiny[/quote<] Well, AMD can't really do anything to protect it from scrutiny. Sooner or later the benches are gonna come out. [quote<]2) soon realizing the initial plan to only supply 45W review units wasn't good PR, [/quote<] Ah..... AMD's awesome PR department.... /facepalm

        • Modivated1
        • 6 years ago

        Very true. From early previews I expected a Bulldozer, dude, something where I could not even see the reason why they released it. Instead, while not the divine chip that turns the tide against its rivals, it has some fair improvements and even wins in certain areas against Intel; something that hasn’t happened for a long time.

        It has some real advantages over their previous releases. I am not a part of the crowd they are marketing to, as I am trying to build a powerhouse PC rather than a chip whose market looks like it belongs more to the mobile space. However, it does also show promise for future development down this avenue of combined CPU + GPU computing. I wonder how Mantle will affect gaming on this same setup.

        If we are lucky, when Mantle is released TechReport will give us an update.

        • tcubed
        • 6 years ago

        I still don’t understand why an iGPU gamer is a weirdo… if I can get 30+ frames in ANY AAA title at 720p with detail level medium (to high in some), it really is enough for me.

        Dear AMD,

        please be a champ and offer a FX APU line with:
        – 8 x86 cores
        – 20 GCN CUs – with full DP @ 1GHz
        – 4.5-5 GHz
        – 4 DDR3 memory channels with native 2.6 GHz capability

        From my calculations this would mean ~450 mm²… for a high-end CPU that should be OK, don’t you think?

        Yours truly, the WHOLE DAMN WORLD! Except the weirdos that actually do have APUs and think they’re OK, myself included 😀 and happy they only paid $140 for a quad core and a GPU that can handle anything… I mean anything, and much more than the similarly priced i3-4130…

        Let’s be reasonable: what normal work or home scenario (meaning what 90+% of the population actually does) is there that the 6800K or the latest Kaveri can’t handle anywhere from excellently to decently? Even media content creation and encoding, even 3D work to some extent. Normal office work basically means email, browsers, and Office, sometimes Excel or PowerPoint; home use means HD video playback, DVD/Blu-ray movie playback, YouTube, email, browsing, flash games, and why not some 3D games? These scenarios cover 99% of the activity done on a computer by common folk most of the time.

        Sometimes you encode or transcode a movie, which happens what, once a year if you’re not a YouTube freak? And even if you are, you don’t transcode gigabytes of movies but a few MBs. So tell me why I would pay more than $140 for ANY type of chip when you can get a quad core, a capable GPU, and a sound chip with it? For the rare occasion I need to transcode a 40GB Blu-ray movie? I can run that in the background and watch a movie, browse, or answer email while the transcoding/decryption or whatever is done. Many say any saved time is saved time, and I do agree, but since we’re not in the 90’s anymore and you can actually use the computer for other tasks while it does a really big one in the background, I don’t see the point… Really… either for more FPS in games or for ridiculous multi-screen setups (I used multi-screen setups myself and came to the conclusion that they’re stupid)… Likewise, spending a fortune on a UHD monitor and then wanting to play at native resolution?
        Well, I’m quite happy with HD or FullHD, which is playable in most any AAA game even if you need to drop the eye candy a bit… I play for the PLAY; if I want nice scenery I can just fire up a movie. If I’m a headshot-precision maniac… well, that excuse expired with FullHD, guys… what distance are you really sitting from your monitor to see the 4K pixels on a 32″ screen? Or anything beyond FullHD? Are you playing with your eyeballs glued to the screen?? That’s not the idea, now is it? I bet you can’t see pixels on a FullHD 24″ from a decent 50-70cm away (eyesight to screen, not belly to screen, I mean! I know, I use one…), and certainly not in a fast-moving game… If you have multiple screens, you need to sit further back to keep all the screens in your sight, so pixel density is even more ridiculous there. And now to the FPS part: I guarantee that if minimum FPS doesn’t go below 30, you won’t EVER notice lag unless you’re one of the very few humans who can, which means that if you’re like me, in the 99% of the population, anything beyond 30FPS is just wasted computational power and energy…

        So yeah… weirdos that are normal guys not wanting tech just for bragging rights or for specialized purposes (which usually are professional users not home or office users) do use and can be very happy with… an APU 🙂

        So… AMD is, for 90% of Earth’s home and office population, the best bang for the buck… Yes, Intel is better suited for professional users who have specific needs, but for most of the population an AMD chip will more than fit the bill in both the CPU and GPU areas, whereas the same cannot be said of Intel: no Intel chip in AMD’s price range (<$140) is even remotely potent enough on the GPU side, even if the CPUs are ok-ish, because you will struggle to find a quad-core Intel under $140. So yeah, AMD does sell powerful enough chips for 90% of PC demand… yet somehow 80% are duped into buying overpowered, or simply put overpriced, parts from Intel… nice world we live in; the free market has certainly shown its best face here…

        sorry for the rant… couldn’t help it 😀
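        Incidentally, the ~450 mm² ballpark above roughly checks out under simple linear scaling; a back-of-the-envelope sketch (the per-block areas are rough published estimates, not AMD figures):

        ```python
        # Rough area scaling for the hypothetical 8-core / 20-CU FX APU above.
        # All inputs are approximations (assumptions, not official AMD figures).
        kaveri_die_mm2 = 245   # A10-7850K die: 2 modules (4 cores), 8 GCN CUs
        module_mm2 = 30        # approx. area of one Steamroller module at 28nm
        cu_mm2 = 12            # approx. area of one GCN compute unit

        extra_modules = 2      # 2 -> 4 modules (8 cores)
        extra_cus = 12         # 8 -> 20 CUs

        estimate = kaveri_die_mm2 + extra_modules * module_mm2 + extra_cus * cu_mm2
        print(estimate)  # 449 -- in line with the ~450 mm^2 guess
        ```

        This ignores uncore scaling (wider memory interface, bigger crossbar), so if anything the real die would be larger still.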

          • derFunkenstein
          • 6 years ago

          If a quad CPU with 512 GCN cores is 95W, how much will AMD have to clock down a theoretical octo just to hot-rod the iGPU to 1GHz? You’re asking for something nobody but Intel could manufacture (at 22nm tri-gate) and it’s not likely.

          There’s a reason that the new consoles use Jaguar cores instead of Steamroller.
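            To get a rough feel for why, dynamic power scales roughly with core count times frequency times voltage squared; a toy estimate (all numbers are illustrative assumptions, not measurements):

            ```python
            # Toy dynamic-power model: P ~ cores * f * V^2 (illustrative only).
            def rel_power(cores, freq_ghz, volts):
                return cores * freq_ghz * volts ** 2

            base = rel_power(4, 3.7, 1.2)      # quad Kaveri-ish baseline (~95W class)
            doubled = rel_power(8, 3.7, 1.2)   # naive octo at the same clocks
            scaled = rel_power(8, 2.8, 1.05)   # clocks/voltage cut to claw power back

            print(doubled / base)  # 2.0 -- double the cores, double the power
            print(scaled / base)   # ~1.16 -- still above the baseline budget
            ```

            Even with a hefty clock and voltage cut, the hypothetical octo lands above the 95W budget, before counting a hot-rodded 1GHz iGPU.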

            • chuckula
            • 6 years ago

            [quote<]You're asking for something nobody but Intel could manufacture (at 22nm tri-gate) and it's not likely. [/quote<] I disagree: Intel's not that good. They couldn't pull it off. [I fully agree with your point though]

            • derFunkenstein
            • 6 years ago

            Well, yeah, that’s probably a good point. If anyone had a chance, it’d be them, though.

            • swaaye
            • 6 years ago

            Intel is almost to 14nm now so who knows.

            • Spunjji
            • 6 years ago

            Confusing downvotes for a very valid point.

            • chuckula
            • 6 years ago

            I’m an Intel fanboy so I get downvoted even when I point out that Intel isn’t perfect and isn’t capable of performing miracles.

            • UnfriendlyFire
            • 6 years ago

            Or you can do the FX-9590 stunt again. 220W TDP.

            What could possibly go wrong?

            • derFunkenstein
            • 6 years ago

            I see no possible downside! (Seriously I lold at that)

            • Airmantharp
            • 6 years ago

            I don’t either. AMD could choose to be competitive in the high-end CPU sector by throwing more modules at the problem, and it’s not like they’re very far behind Intel for many uses- and quite ahead with some.

            It is obvious though that AMD is avoiding said high-end market because there’s not much profit, or even volume, for them there. PC gaming is alive and well, but AMD’s manufacturing resources are limited to GloFo and TSMC, neither of which is competitive with Intel’s fabs. Making big, low volume SKUs that lack integrated video just isn’t going to really help them, and neither is the negative PR that would likely be generated by AMD’s top-end parts once again failing to provide a truly competitive option to Intel’s smaller, lower-power but still higher-performance parts.

            • tcubed
            • 6 years ago

            Well… my point is that that’s what the world is crying out for, and it is useless and makes no sense. You could pull it off with an SOI process, sacrificing density and all, and still stay within the 140W range considered OK for desktop. It would be huge, it would be expensive to produce and to sell, and mainstream users won’t ever buy it.

            Probably when GF gets to 14nm later this year we might see something like this, maybe Carrizo, maybe the next thing in ’15/’16. But at the moment it makes zero sense. It would be an expensive marketing stunt that would work only if it took the performance crown hands down. Otherwise it’s just a much more expensive FX Centurion stunt, with comparable results.

            AMD could even beat Intel IPC-wise if it wanted to; it’s just a matter of how wide you make the machine and how many instructions you can feed the cores simultaneously… there is no secret sauce to it, just economics and silicon economy. Pure performance is given by how many instructions you can cram down the pipe simultaneously and how far apart the cycles are… the rocket science isn’t in that, but in all the rest!

            Look, they just added an extra decoder and bang, 20% improvement in IPC… this however is a small component of a core, ~8% of one… making the machine wider would mean much bigger cores; a 50% wider machine would mean a 40% bigger core, and that would be enough to trump Intel… will they do it? No! Would it make sense? No! Does the world want it? Yes!

            • mikato
            • 6 years ago

            Mmmmmmm chorizo

      • fhohj
      • 6 years ago

      Nah. On the plus side, we now have a great list of sites that can be trusted to put out well-tested, accurate content and reliable reviews: just list all the ones that got 7600s.

      • sschaem
      • 6 years ago

      Are you saying TR must accept all free HW and must write reviews as instructed?

      Absolutely not. TR is welcome to refuse all HW, and do its own fully independent reviews.

      Clearly TR could have refused if they found the product unacceptable as provided.
      TR did say the chips are on sale now… $170 is not a crazy amount to spend if TR was morally against doing a PR-supplied review but wanted to do a chip review using its own motherboard.

      Note: ask yourself, what motherboard do you think a review site uses when it does a CPU review?
      A board from a manufacturer that never pays to place ads on the site, or one from a regular ad contributor that sent the board for free?

    • alientorni
    • 6 years ago

    Nice review; strange though, I don’t really know how to compare those CPUs. I think they are from different segments, like the “T” suffix and so on? It would also be good to see one of your price-performance charts!

    • Deanjo
    • 6 years ago

    [quote<] APUs occupy this awkward middle ground for so-called casual gamers who want something better than an Intel IGP but not as good as a halfway-decent graphics card. As Jerry Seinfeld would say, "who are these people?" Seriously, I've never met one.[/quote<] My sentiments exactly. I'd be more interested in a cheaper IGP-less Kaveri (which I would assume would bring down the TDP considerably).

      • drfish
      • 6 years ago

      Kids! Seriously. A $120 chip that can game respectably at 1080p can anchor a solid SFF system in the $300-400 range. Also, HTPCs/Steam Machines should eat this chip up.

        • Deanjo
        • 6 years ago

        Friends don’t let friends buy AMD graphics for linux gaming. Seriously.

          • Veerappan
          • 6 years ago

          Blegh… like I’d buy an Nvidia card to put in a Linux box… Catalyst may be less than optimal, but at least r600g is mostly up to speed, and radeonsi is getting closer. It’s not like there are many OpenGL 4.1+ games out there right now, anyway.

            • Concupiscence
            • 6 years ago

            I don’t get the raw hatred of Nvidia cards in Linux at all. Yes, the source is closed, but the drivers are golden, and have been for years and years.

        • Deanjo
        • 6 years ago

        Also, I wouldn’t say sub-30fps gaming @ 1080p is “respectable” unless you are using a double standard. With a dedicated GPU, sub-30fps is downright “unacceptable”.

    • ronch
    • 6 years ago

    Hey guys, I see Anandtech also has a review up and they have a 7850K. Haven’t read it completely yet so I dunno if they got the chip from AMD or they got it somewhere else. I made a beeline straight to the CPU tests and I’m kinda shocked to see those Steamroller cores being no better than the Piledriver cores inside Trinity/Richland in most of the tests they did. So what the heck was AMD thinking? Is this also why they didn’t bother spinning an 8-core FX lineup?

      • chuckula
      • 6 years ago

      Note Anand’s overclocking test: they got the 7850K up to 4.4GHz max (with some pretty insane power consumption numbers). The 6800K has… a default 4.4GHz boost frequency without overclocking.

      So AMD did improve the IPC in steamroller, but GloFo took away any real potential CPU gains.

        • derFunkenstein
        • 6 years ago

        Hardware Heaven claims to have gotten 5.0GHz, but there are absolutely no benchmarks or anything like that in the review. Kind of dubious as to the stability.

          • chuckula
          • 6 years ago

          Booting to 5GHz and then crashing doesn’t count. Anand tried 4.5GHz but had to back off and even at 4.4 the power usage is frightening.

          There’s a REASON that AMD didn’t clock these chips at 4.5GHz by default, and believe me, it’s not because AMD is holding something back.

            • derFunkenstein
            • 6 years ago

            Oh I agree. Dropping SOI was probably a mistake.

            • OneArmedScissor
            • 6 years ago

            It’s a win if they can sell laptops, but since they launch later, that’s up in the air. If they’re available soon, they’d be established before the back to school season, which Broadwell may miss.

            • derFunkenstein
            • 6 years ago

            Sell laptops, yes, but hopefully they’ll perform better than what they replace. The 45W setting for the 7600 was a mixed bag against 45W Richland parts in non-GPU scenarios. There’s plenty of Richland stuff on Newegg, so hopefully they’ll do the same with this chip.

            • tcubed
            • 6 years ago

            They needed the density…. Frankly… I would consider buying a 4-core Puma chip with 20 GCN CUs strapped to it at 3GHz/1GHz… even if it would effectively be a GPU with some x86 capacity. Mullins seems like quite a capable chip.. it would also make a decent Iris Pro replacement for Apple products… and also be much cheaper to produce and sell. The x86 performance would be lacking, but the GPU would blow your socks off; it would ridicule Iris Pro in every way possible… and with OpenCL and HSA capabilities Apple could build a very nice product around it…

        • spuppy
        • 6 years ago

        Here is the 7850K clocked at 4.7GHz at 1.45V, with benchmarks:

        [url<]http://www.hardcoreware.net/kaveri-review-a10-7850k/[/url<] IGP clocked to 1020MHz... when was the last time you overclocked something by 40%? 😉

          • derFunkenstein
          • 6 years ago

          My Duron 600 did 1GHz. That’s more like 67%. :p

          My only other great-OC chip was a Pentium E2160 (1.8GHz) that did 3GHz, for a similar 67% OC.
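          For the record, the percentages in this subthread work out (the Kaveri IGP figure assumes the A10-7850K’s stock 720MHz IGP clock):

          ```python
          # Overclock headroom as a percentage: (new - stock) / stock * 100
          def oc_percent(stock_mhz, oc_mhz):
              return (oc_mhz - stock_mhz) / stock_mhz * 100

          print(round(oc_percent(720, 1020)))   # 42 -- A10-7850K IGP, 720 -> 1020MHz
          print(round(oc_percent(600, 1000)))   # 67 -- Duron 600 -> 1GHz
          print(round(oc_percent(1800, 3000)))  # 67 -- Pentium E2160, 1.8 -> 3GHz
          ```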

            • spuppy
            • 6 years ago

            So 2006 was the last time you had a monster OC 🙂

            • derFunkenstein
            • 6 years ago

            Well, I currently enjoy a 1.1GHz OC on my i5-3570K. I might be able to get more out of it, because even under heavy Prime95 load it doesn’t get too hot (65-70°C and never throttles), but I’m happy with ~30%.

            • JustAnEngineer
            • 6 years ago

            I pencil-modded a bunch of Durons from 600, 650, 700 MHz up to 0.95 or 1 GHz in that era.

            • derFunkenstein
            • 6 years ago

            Yeah, pencil modding and a little scotch tape to protect it. That’s how I did it too.

            • swaaye
            • 6 years ago

            I discovered it was incredibly easy to just solder the bridges on the ceramic chips. Permanent and clean.

            Unfortunately the later “organic” packaging (that plastic/fiber glass stuff) of Palomino and newer made life much harder.

        • Blink
        • 6 years ago

        Anand overclocked it but didn’t test it that way. What’s the point?

      • flip-mode
      • 6 years ago

      Me: Not shocked.

      • YukaKun
      • 6 years ago

      No L3 and being able to stand toe to toe with Piledriver doesn’t give you a good impression? Really? I’d like to see Intel remove the L3 from any of its Haswell CPUs and match them against Sandy or Ivy.

      Plus, the switch to bulk silicon didn’t penalize it so badly. Reaching 4.4GHz with 2 billion transistors is quite the feat in my book.

      You’re a very bad raging anti-AMD fanboi.

        • flip-mode
        • 6 years ago

        Hold your tongue. Ronch has been an AMD-er for a decade. Live your life without name-calling.

          • YukaKun
          • 6 years ago

          Because being a pure priest at heart gives you the right to violate kids from time to time, right?

          Yeah… Didn’t think so.

          Bipolar maybe then?

            • Concupiscence
            • 6 years ago

            -1 for conflating brand loyalty with kiddy diddling. Go take a bath, because that’s disgusting.

            • ronch
            • 6 years ago

            abw, is that you?

            • Reuel
            • 6 years ago

            Spigzone, that you?

            • Pwnstar
            • 6 years ago

            Seriously…

            • ronch
            • 6 years ago

            Spiggy is now… the LIONHEART!!!

          • ronch
          • 6 years ago

          Make that TWO decades, man. Still, I shoot AMD down when I don’t like what they’re doing.

        • ronch
        • 6 years ago

        Dude, they were comparing Richland/Trinity to Kaveri. None of those have any L3, in case you don’t know that.

        • maxxcool
        • 6 years ago

Since none of the APUs have L3… your post is silly.

      • chuckula
      • 6 years ago

      It’s not all bad Ronch! AMD’s GPUs are still first-rate. It’s when they try to water them down with CPUs that they run into issues.

        • ronch
        • 6 years ago

        I love AMD graphics, especially GCN. I expect no bad showing in terms of graphics with Kaveri. It’s the CPU cores I’m primarily interested in and how much AMD has moved forward with Steamroller.

          • sschaem
          • 6 years ago

I don’t see any progress… The only big news is that GF’s process isn’t a complete failure at low frequencies, so the <50W parts deliver a decent power efficiency boost.

I don’t think we’ve seen Kaveri at its best yet; that will come when we have BF4, Thief, etc. running on a 35W APU laptop using Mantle.

          • snook
          • 6 years ago

Same, ronch. I have similar specs to your PC now. I want to build another system, mITX or mATX, and I’m AMD only. Hope they improve the CPU some more soon. I want to move to SFF asap.

      • mikato
      • 6 years ago

I read the AnandTech review first, and after that I’m surprised to see that TechReport’s review shows Kaveri’s Steamroller cores doing better than Trinity/Richland pretty much every time. Quite a different feeling I get from the two reviews.

        • chuckula
        • 6 years ago

        There’s a GINORMOUS caveat to those CPU benchmarks: Note that TR was only given some very-low end Trinity/Richland parts to use in this review. Anand was testing using the top-of-the-line A10-6800K from the Richland series.

        Also note that the load power consumption on the new Kaveri parts was higher than their Richland counterparts.

        Once/if TR can run a review where the full Richland series (including the 6800K that most TR readers bought or would buy) is compared to the A10-7850K we’ll have a much more complete picture of the real CPU situation.

          • mikato
          • 6 years ago

          Yeah you’re right, it was A10-5800K vs A10-6800K vs A10-7850K and A8-7600. I guess I was mostly looking at the results of the A10-7850K and A8-7600. They did make a huge improvement in performance of the lower wattage parts and Anandtech article shows that too. I might go with the A8-7600 for an HTPC.

    • ronch
    • 6 years ago

    I kinda feel like the way I felt after reading the Bulldozer launch article back in 2011.

      • flip-mode
      • 6 years ago

      Wow. You should not. This wasn’t hyped nearly as much, nor were as many promises made. Enthusiasts haven’t doted on it or expected much from it. This is exactly what I expected: essentially no CPU progress but a better GPU.

      AMD is a GPU company that is leveraging a legacy CPU design. That is all.

      • Deanjo
      • 6 years ago

You should know better by now. AMD hasn’t lived up to their own hype for a CPU release going back to the original Phenom release.

        • flip-mode
        • 6 years ago

        Except they didn’t even hype this one.

          • Deanjo
          • 6 years ago

          Oh they have been hyping HSA for quite a while now so I can’t agree with you there.

            • flip-mode
            • 6 years ago

You’re conflating things. Hyping HSA isn’t the same as hyping the Kaveri product in particular. Beyond that, it will take some time for HSA-optimized software to show up. Is the OpenCL stuff automatically “HSA-optimized”?

            • chuckula
            • 6 years ago

            HSA != OpenCL. The two can be related but are not necessarily dependent on each other.

            Deanjo is right that the HSA hype is directly about Kaveri because there is exactly one chip in the entire world that actually supports HSA: It’s Kaveri. That’s it.

            Running OpenCL on your Radeon card? That’s great, but it’s not HSA.
            Want to do stuff with your FX-8350? Awesome, but it’s not HSA.

            • flip-mode
            • 6 years ago

It seems worth pointing out that when I asked if “the OpenCL stuff is even HSA-optimized,” the question pretty well implies that they are not the same thing, Chuckster.

Anyway, I’ve heard barely any “hype” over Kaveri. AMD hasn’t said much in regards to Kaveri; they gave it a sideways mention when speaking about HSA, they gave no silly “performance previews” or those utterly nonsensical “virtual performance” numbers that they gave in the runup to Bulldozer, and other than occasionally hearing an enthusiast wonder about it, all has been quiet.

            Honestly, I’m a little surprised it’s as “improved” as it is.

I think AnandTech nailed it in its conclusion when it said, to paraphrase, that AMD is essentially just riding out the Bulldozer architecture until their next CPU architecture launches. The Bulldozer architecture was interesting on paper but a true loser in real life in the context of its peers. So, settle back in your seat and wait till 2016, and then wait till 2017 because this is AMD and there will be a delay (and remember we’re usually talking 4th quarter here so that’s practically 2018 with availability in Q2, LOL).

            • Deanjo
            • 6 years ago

AMD hyped it pretty hard when they were introducing HSA and made a point of singling out Kaveri as the first of these “revolutionary” APUs.

No OpenCL app is automatically “optimized” for any device. OpenCL still requires a fair amount of hand-tuned, device-specific code to take full advantage of an OpenCL device.

      • Peldor
      • 6 years ago

      Nah, from the baseline of what AMD has been turning out the last few years, Kaveri is right on target.

      This is actually a bit better looking IMO than the pre-NDA stuff that got leaked.

    • tay
    • 6 years ago

Thanks for the review, TR. Made for good reading on the commute. I wonder if AMD will release Opteron chips without the GPU, which would run on non-ECC memory. Could get a four-module part, which would also make a good gaming CPU.

    • ronch
    • 6 years ago

OK… two questions:

    1. Why wasn’t the top model Kaveri reviewed? Instead, a ‘halfway’ model such as the 7600 was used as the topic for a launch article, a model that TR says isn’t even available yet. Are we more interested in seeing what the configurable TDP can do?

    2. Why wasn’t the top model Richland/Trinity (5800K or 6800K) included? Doing so would’ve made it easy to see how much AMD has moved forward with Kaveri. As it is, it’s almost as though AMD explicitly told TR not to do direct comparisons between a 6800K and the 7850K. Are we afraid to admit that those Steamroller cores took two steps forward and one step back? Increase IPC, decrease clock. AMD might as well stick with Piledriver at 32nm.

    This is the strangest launch article I’ve read in recent memory.

    • n00by
    • 6 years ago

The A8-7600 in particular looks pretty interesting; it seems to be quite a bit more efficient than the 7800. Looks like an excellent choice for a super-small-form-factor gaming machine. Wondering how the A10 or whatever unlocked part will do.

    • Sadheal
    • 6 years ago

    There is a Tomb Raider graph on the Battlefield 4 page.
    Thanks for this review anyway !

      • Damage
      • 6 years ago

      Fixed.

        • chuckula
        • 6 years ago

        Is it OK to ask if AMD sent you an A10-7850K for review?

        I noticed you got a full-built system for the 7600, did the 7850K also come in a full-built system or did you just get a chip + motherboard (if you received a sample)?

          • Damage
          • 6 years ago

          Yeah, AMD’s plan to protect Kaveri involved only sending out the A8-7600 and competitive parts. They obviously wanted to protect their new chip from the scrutiny it would get for being no faster in CPU benchmarks at 95W. Even the A10-6700T wasn’t part of the original plan, just the slow-poke 6500T.

          This is all in a very tight time window. I did the Kaveri press day the Sunday before CES, attended CES, had Sat-Mon to write. Geoff was testing during that span, having gotten updated Kaveri drivers on Friday.

          So we were stuck with AMD’s chip selection for this review.

          I was able to shake loose a sample of the A10-7850K while at CES, but I still don’t have an FM2+ motherboard and wouldn’t have had time to test it for this review if I did.

          We’ll get it all sorted eventually, but AMD did succeed in thwarting our ambition to give you guys the data that you want today.

            • ronch
            • 6 years ago

            Thanks for clearing that up, Scott. I knew something fishy was going on when you got a 7600 instead of the top Kaveri part.

            • chuckula
            • 6 years ago

            Thank you Scott!!

            Please don’t take any of my comments on this thread to be any form of a complaint about you or TR’s review! I noticed that the review was a little short (likely because it’s hard to put a discrete GPU into that SFF box) but the quality is outstanding as usual and I’m impressed you did all that in such a short timeframe.

            Frankly, AMD’s intentional decision *not* to send out an A10-7850K to you guys speaks volumes about what AMD really thinks about their own chip in real life — not in some powerpoint show.

            Take your time with the benchmarks on that chip you scrounged from CES and we’ll be happy to read your full review when it’s ready.

            • Cataclysm_ZA
            • 6 years ago

PC Perspective got the A8-7600, and so did you guys, Bit-Tech, and most other places as well. AnandTech and Hardware Canucks, as well as a handful of others, however, did get an A10-7850K. This isn’t the launch I was looking for, and it seems that AMD is playing hardball with reviewers who aren’t afraid of telling it how it is.

            • spuppy
            • 6 years ago

            btw just because a site has a review up of the 7850K doesn’t mean they don’t tell it like it is 😉

            Everyone got theirs late, and the drivers too. Maybe AMD did it this way on purpose, who knows, but all sites got the same hardware and drivers at roughly the same time.

            • Cataclysm_ZA
            • 6 years ago

            I don’t think that is the case from what we’ve seen from reviews so far. Sites that have called out AMD in the past for poor performance, especially where frame rating is concerned, seem to have been sidelined with the A8-7600, which forces them to look at the power consumption aspect of Kaveri in relation to Trinity/Richland. Anandtech’s review specifically highlights this because the gains in the 45/65W range are much bigger than with the 95/100W APUs that are the top of the range.

            AMD’s playing it safe, possibly, but this just makes it look really, really dodgy when high-profile sites like TR and PCPer don’t receive the flagship models, while Anandtech actually gets the A10-7850K handed to them. Ryan Shrout’s review mentions that it was AMD’s decision to not give them a A10-7850K. So I don’t believe that all sites received the same hardware, otherwise we’d be seeing a A10 in TR’s review.

            • Deanjo
            • 6 years ago

Must have pissed someone off at AMD; even Phoronix got an A10-7850K to bench and review.

            • maxxcool
            • 6 years ago

Thanks again for the review, and we’ll see the future review breakdowns.

    • chuckula
    • 6 years ago

Before anyone starts hurling accusations at TR: Go read the review and note that AMD pre-assembled this system with its own motherboard, cooling, PSU, chip, and even AMD-branded DDR3-2133 RAM… 16GB of it… which ain’t exactly cheap and needs to be considered when making the usual “but AMD is CHEAPER” argument, since that i3-4330 certainly doesn’t require you to use DDR3-2133.

    See Page 4: [url<]https://techreport.com/review/25908/amd-a8-7600-kaveri-processor-reviewed/4[/url<]

      • drfish
      • 6 years ago

      The price difference between 1866 and 2133 is pretty small these days. For 8GB it’s about $5.

        • chuckula
        • 6 years ago

[url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16820403002[/url<]

Closest I could find. It’s $185. How does $185 of memory + $130 for the chip == a $300 gaming rig again?

          • derFunkenstein
          • 6 years ago

In drfish’s defense, he said 8GB. [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16820301218[/url<]

$75 for 8GB + $130 chip = $205. Still going to be tough to get it to $300. $400 is pretty doable, though: a Gigabyte F2A75M for $60, a 500GB HDD for another $60, and some Cooler Master case + PSU combo for $60, and you have a couple bucks to play with yet.

If you need a system at $400 and you want to play a few games, I guess that’s the best you’ll do. Still, that’s cheaper than an Xbone, and for that $100 you can get way more GPU. Then dump Kaveri and get an i3 instead. I’m perplexed.

edit: wait a minute. A10-7850K is $190 on Newegg. A10-7700K is $170. Screw that noise.

            • drfish
            • 6 years ago

I’m surprised the 512 vs. 384 core count doesn’t have more of an impact on GPU performance. I’m also surprised to see that 45W vs. 65W has nearly zero impact on GPU performance. Based on what I’ve seen today, I don’t see any reason not to go 7600/45W over 7850/95W, so the $130 is still valid (once it actually comes out *grumble*). I still think a $399 Kaveri Steam Machine is the sweet spot.

            • derFunkenstein
            • 6 years ago

            That must mean something else is holding it back. My guess is the 8 ROPs vs. 16 in even a 7750 makes a big difference as well as the bandwidth constraints.

            • Concupiscence
            • 6 years ago

            Nailed it. Those two constraints keep the GPU from stretching its legs and running at full potential. It’s still a zippy little performer for an IGP, but the GDDR5 Radeon 7750 + i5 3570k in my media PC would unambiguously thump the A10-7850k. That said, those two parts together also cost at least $80 more than the A10 by itself, though that savings could be impacted by seeking out DDR3-2133 to maximize the APU’s potential.

          • ptsant
          • 6 years ago

Nobody is going to buy a 16GB kit for this CPU. With $80 of memory you’re good to go, and at 2133.

    • chuckula
    • 6 years ago

    I think Kaveri presents the same set of questions for existing AMD owners that we saw with Haswell: If you have an A10-6800K already, is it worth the purchase of a new motherboard + Kaveri chip? Or are you better off getting an HD-7750 (or higher) GPU to get a bump in graphics performance?

      • bittermann
      • 6 years ago

      If you already have a FM2+ motherboard, maybe. Mantle may sway that to a yes? If not no, I’d get a 7750 for the 6800K. Just my 2 cents.

        • chuckula
        • 6 years ago

        Mantle should also work with the 7750 in the older motherboard.

          • bittermann
          • 6 years ago

          Yeah, forgot, good point.

    • chuckula
    • 6 years ago

    OK… still freaked that there’s no A10-7850K review up here but…

From these results there isn’t much of a surprise. Kaveri has a nice IGP (big shocker, since it’s mostly an IGP). Kaveri has a relatively weak CPU that is usually behind the (lower-priced) i3-4330.

Note that even in OpenCL… where AMD’s massive IGP is supposed to be a huge advantage… AMD doesn’t run the table. It’s a mixed bag: software that was clearly written to target AMD’s GCN does well, while more generic software that is smart enough to use multiple architectures efficiently leaves AMD without a strong lead.

Kaveri’s power efficiency really isn’t any better than the previous generation’s, which is a bit of a disappointment and is probably mostly due to the fact that GloFo is *not* AMD’s friend and isn’t giving AMD a particularly strong manufacturing foundation for Kaveri.

    • chuckula
    • 6 years ago

    WTF??? WHERE IS THE A10-7850K???!?!?!?

    [Edit: See Damage’s comment that AMD elected not to give TR a review sample. Dear TR editors: No offense against you intended. Y’all rock. Thanks for making lemonade out of the lemons you had available.]

      • Modivated1
      • 6 years ago

Indeed TR, I did like your write-up of the review. Much thanks for all the time you put in to be descriptive and explain so much detail in layman’s terms. Looking forward to more of your work.

    • ronch
    • 6 years ago

    Holy crap. I thought we’d be reading about the top of the line Kaveri part. Oh well. Time to read.
