AMD’s A10-7800 processor reviewed

We first reviewed an AMD Kaveri processor back at the start of the year, but since then, AMD’s new APU has been in kind of a weird place. The A8-7600 chip we reviewed has been scarce in retail channels, evidently because AMD succeeded in selling them elsewhere—likely to big PC manufacturers. Some of the chips were surely set aside for use in laptops, too. As a result, PC hobbyists just haven’t had very good access to AMD’s latest APU.

Happily, that situation is finally changing, and Kaveri-based chips are starting to make their way into the market. AMD is putting an exclamation point on that fact today by filling out its APU lineup and making some tweaks to its pricing. The headliner of the bunch is a brand-new model, the A10-7800, that may just be the most desirable Kaveri-based desktop processor yet.

Kaveri proliferates

Here’s a look at AMD’s updated lineup:

Model | Base clock (GHz) | Max Turbo clock (GHz) | Modules/threads | Total L2 cache (MB) | Graphics CUs | Max graphics clock (MHz) | TDP (W) | Alt. TDP (W) | Price
A10-7850K | 3.7 | 4.0 | 2/4 | 4 | 8 | 720 | 95 | - | $173
A10-7800 | 3.5 | 3.9 | 2/4 | 4 | 8 | 720 | 65 | 45 | $155
A10-7700K | 3.4 | 3.8 | 2/4 | 4 | 6 | 720 | 95 | - | $155
A8-7600 | 3.3 | 3.8 | 2/4 | 4 | 6 | 720 | 65 | 45 | $105
A6-7400K | 3.5 | 3.9 | 1/2 | 1 | 4 | 756 | 65 | 45 | $77

The brand-spanking-new A10-7800 nearly matches the top-of-the-line A10-7850K in terms of clock speeds and unit counts, but it does so in a much smaller 65W power envelope. And it costs less than the 7850K. Given everything, I’d say the A10-7800 is the Kaveri chip to get, as long as you don’t plan on overclocking your processor. (Only the K-series parts have unlocked multipliers.)

Also new today are official retail editions of the A8-7600 and A6-7400K. The A8-7600 is the same basic product we reviewed in January, while the A6-7400K is an unlocked K-series part. At $77, the 7400K matches up against the unlocked Pentium G3258, but going directly against an overclocking titan like that one would probably be suicidal. The 7400K is better suited to providing an attractive CPU-IGP combo for truly low-end systems.

At $155, the A10-7800 is priced in a gap between Intel’s Core i3 and i5 desktop parts. That’s a clever tactical move by AMD. The company’s marketing materials clearly position the A10 against the Core i5, so the firm is looking to be the lower-cost alternative. As we’ll see, though, the A10-7800 will have to deal with some of the top Core i3 offerings in order to stake that claim.

Meanwhile, the new Kaveri-based APUs face some unusually formidable competition from a familiar source: past generations of AMD APUs, specifically those based on 32-nm Richland chips.

As we noted in our initial review, AMD had some tough choices to make with Kaveri. The 28-nm process provided by its chipmaking partner, GlobalFoundries, offers some potential advantages, including increased power efficiency and the ability to pack more logic into a given area. AMD has used the extra gates to cram in lots of graphics horsepower—specifically in the form of the GCN graphics architecture used in the latest Radeons. GlobalFoundries’ 28-nm process is not, however, tailor-made for CPUs like its older 32-nm SOI process was. The transistors in Kaveri chips can’t switch quite as quickly as those in AMD’s 32-nm chips, and as a result, CPU clock speeds are somewhat lower.

For instance, the A10-7800 ostensibly replaces the Richland-based A10-6700. Both are 65W quad-core processors. The A10-6700 has a base clock of 3.7GHz and a Turbo peak of 4.3GHz. By contrast, the A10-7800 runs at 3.5/3.9GHz.

AMD has attempted to make up this deficit by tweaking the “Steamroller” CPU cores in Kaveri to increase per-clock instruction throughput. As we’ll soon see, those improvements have paid off to some degree. Still, this isn’t the best time for AMD to be treading water when it comes to CPU performance, given how big a lead Intel holds in this department.

AMD hopes to paper over its relative weakness in CPU performance by pushing for more desktop applications to use the GPU side of the chip to help with computing tasks. The concept makes sense—and heck, tablets and phones are using GPU acceleration pretty consistently these days—but unfortunately, Windows applications that take advantage of GPU computing have been slow in coming. Speaking of which…

What about that 12-core APU?

No, Virginia, AMD is not releasing a 12-core processor any time soon.

The Internets have been afire with a rumor about a 12-core APU lately, prompted by some AMD marketing materials that focus on the number 12.

I suppose the enthusiasm is natural; 12 cores is a lot of cores. I’m not sure what folks expect to do with them all, but whatever. Here’s the thing, though: in its push for “converged” computing, AMD has taken to calling its graphics compute units “cores.” By this reckoning, the A10-7800, with four CPU cores and eight graphics “cores,” would be a 12-core processor.

So I guess AMD really is introducing a 12-core processor. They’ve also had another one on the market for a while now.

Mind blown. Poof.

Meanwhile, Intel sells a Core i5 chip with four CPU cores and 20 graphics execution units, so it has 24 cores, right?

Hrm. I’m not so sure about this new math.

Haswell refreshed

The competition has gotten more potent since Kaveri’s initial release, too. Intel has refreshed its Haswell lineup from top to bottom, raising clock speeds while generally keeping prices the same. Here’s a relevant sampling of current Haswell models.

Model | Base clock (GHz) | Max Turbo clock (GHz) | Cores/threads | L3 cache (MB) | Intel HD Graphics | Max graphics clock (MHz) | TDP (W) | Price
Core i7-4790K | 4.0 | 4.4 | 4/8 | 8 | 4600 | 1250 | 88 | $339
Core i5-4690K | 3.5 | 3.9 | 4/4 | 6 | 4600 | 1200 | 88 | $242
Core i5-4590 | 3.3 | 3.7 | 4/4 | 6 | 4600 | 1150 | 84 | $192
Core i3-4360 | 3.7 | - | 2/4 | 4 | 4600 | 1150 | 54 | $138
Pentium G3258 | 3.2 | - | 2/2 | 3 | - | 1100 | 53 | $72

As I’ve mentioned, the A10-7800 is priced in between the Core i3 and i5, so we don’t have an exact, direct competitor to test against. Instead, we’ll bracket the 7800 with higher- and lower-priced Intel CPUs.

The very cheapest Core i5 on Intel’s price list is the i5-4460 for $182. There are some slower models, but Intel has priced them at $182, too—a sign that they’re on the way out. Thing is, you can also grab the Core i5-4590 for only $10 more than the i5-4460. The 4590 has a 100MHz faster base clock and a 300MHz higher Turbo peak. I figure I’d take that deal if I were buying a CPU in this range, so I chose the 4590 as our representative from the Core i5 lineup.

The Core i3-4360 stands in for the Core i3 camp. The $138 price you see for it in the table above comes from Intel’s price list. Intel CPUs typically stay close to list, but we paid $153 for our i3-4360 when we ordered it from Amazon recently. AMD probably wouldn’t want to admit this, but the Core i3-4360 may be the A10-7800’s closest competition, truth be told.

We also have the A6-7400K lined up against a stock-clocked Pentium G3258 in a pitched battle among budget chips. We fully intended to overclock both chips as part of a larger battle, but… well, many things didn’t go as planned in the making of this review.

About that…

I started out with big plans. It’s been a while since we’ve had one of our epic, full-scale CPU reviews, and I figured it was time to produce another one with updated tests, games, and CPUs of every stripe. I collected a ton of chips and threw myself into the task. Nearly everything would be new, and we’d try all manner of intriguing tests, including our famous inside-the-second analysis of frame-by-frame gaming performance. I tested the A10-7800 versus the Core i5-4590 in some particularly difficult gaming scenarios: the “Welcome to the Jungle” level of Crysis 3, wandering the city in Watch_Dogs, Battlefield 4 with and without AMD’s Mantle API. And much, much more.

Then I looked up. Four days had passed. My weekend was gone. So was Monday. And I’d only managed to test two of the planned eight or nine CPUs on my list. Whoops.

The review I’d envisioned was going to be glorious. But it was also going to kill me—and severely delay our coverage of the new Kaveris—in the process.

Ultimately, I had to pull the ripcord and slim down the selection of CPUs and tests. What you see on the following pages is just a down payment on a larger CPU roundup that’s still in the works, but it should be sufficient to tell the story of AMD’s new APUs. We’ll get into more detailed gaming coverage and the like, with a broader selection of CPUs, once I’ve had more time to prepare.

You’ll see results for the AMD FX-8350 and the Core i5-2500K in the following pages. Consider them a bonus. I had the sockets open, so I was able to test them alongside the other processors. The FX-8350 is AMD’s fastest CPU—save for the crazy 220W parts that require a water cooler. With eight integer cores and four FPUs, the FX-8350 is an interesting reference, at least. The Core i5-2500K, meanwhile, was introduced at the start of 2011 and was an enthusiast favorite from the start. When the 2500K went end-of-life in early 2013, it cost $224, a little more than the Core i5-4590 costs now. I’m intrigued to see how today’s chips fare against it.

Our testing methods

The test systems were configured like so:

Processor | AMD FX-8350 | Athlon X4 750K, AMD A6-7400K, AMD A10-6700, AMD A10-7800, AMD A10-7850K
Motherboard | Asus Crosshair V Formula | Asus A88X-PRO
North bridge | 990FX | A88X FCH
South bridge | SB950 | -
Memory size | 16 GB (2 DIMMs) | 16 GB (4 DIMMs)
Memory type | AMD Performance Series DDR3 SDRAM | AMD Radeon Memory Gamer Series DDR3 SDRAM
Memory speed | 1600 MT/s | 1866 MT/s, 2133 MT/s
Memory timings | 9-9-9-24 1T | 10-11-11-30 1T, 10-11-11-30 1T
Chipset drivers | AMD chipset 13.12 | AMD chipset 13.12
Audio | Integrated SB950/ALC889 with Realtek 6.0.1.7233 drivers | Integrated A85/ALC892 with Realtek 6.0.1.7233 drivers
OpenCL ICD | AMD APP 1526.3 | AMD APP 1526.3
IGP drivers | - | Catalyst 14.6 beta

Processor | Core i5-2500K, Core i7-3770K | Pentium G3258, Core i3-4360, Core i5-4590, Core i7-4770K, Core i7-4790K
Motherboard | Asus P8Z77-V Pro | Asus Z97-A
North bridge | Z77 Express | Z97 Express
South bridge | - | -
Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs)
Memory type | Corsair Vengeance Pro DDR3 SDRAM | Corsair Vengeance Pro DDR3 SDRAM
Memory speed | 1333 MT/s, 1600 MT/s | 1333 MT/s, 1600 MT/s
Memory timings | 8-8-8-20 1T, 9-9-9-24 1T | 8-8-8-20 1T, 9-9-9-24 1T
Chipset drivers | INF update 10.0.14, iRST 13.0.3.1001 | INF update 10.0.14, iRST 13.0.3.1001
Audio | Integrated Z77/ALC892 with Realtek 6.0.1.7233 drivers | Integrated Z97/ALC892 with Realtek 6.0.1.7233 drivers
OpenCL ICD | AMD APP 1526.3 | AMD APP 1526.3
IGP drivers | - | 10.18.10.3652

They all shared the following common elements:

Hard drive | Kingston HyperX SH103S3 240GB SSD
Discrete graphics | XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 14.6 beta drivers
OS | Windows 8.1 Pro
Power supply | Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Power consumption and efficiency

We tested total system power consumption at the wall socket by plugging our test rigs into a power meter. What you see above is the power consumed during a set period of time. During that span, we asked each CPU to encode a video with the x264 encoder. You can see that some processors took longer than others to finish—and the various systems drew different amounts of power as they worked.

Both Intel and AMD have made some nice strides in recent years at reducing power consumption at idle by integrating more components onto the CPU, where they’re under the control of a power management policy. All of the CPUs we tested participate in those advances except for AMD’s FX-8350, which still plugs into a socket with discrete, chipset-based PCIe connectivity.

We removed the discrete Radeon HD 7950 for some of our tests. Those results are labeled “IGP” at the end. As you can see, the configs with integrated graphics alone tend to be the most efficient overall.

Although the A10-7800 has a lower peak power rating than the Core i5-4590—65W versus 84W—the 7800-based system draws more power at peak in this workload. Manufacturers’ TDP ratings are just specified ceilings, and they aren’t terribly useful for comparisons across brands.

Adding up the total power used for the duration of the test period is one means of measuring efficiency. We can also look at the power used only as the encoding work was being done, which involves shorter spans of time for the better performers.
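
One way to make that concrete: energy is just average power multiplied by elapsed time, so a chip that draws less at the wall can still consume more energy overall if it takes long enough to finish the job. Here’s a tiny sketch of that arithmetic in Python, using made-up numbers purely for illustration rather than our measured results:

    # Hypothetical figures for illustration only -- not measured results.
    # Task energy = average system power (W) x time to finish the job (s).
    def task_energy_kj(avg_power_watts, seconds):
        return avg_power_watts * seconds / 1000.0  # kilojoules

    # A lower-power system that takes longer can still use more total energy:
    slow_low_power = task_energy_kj(avg_power_watts=90, seconds=400)    # 36.0 kJ
    fast_high_power = task_energy_kj(avg_power_watts=120, seconds=250)  # 30.0 kJ
    print(slow_low_power, fast_high_power)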

Intel rules the efficiency rankings in this workload, thanks to a combination of lower peak power draw and shorter encoding times. The good news for AMD: Kaveri is clear progress over Richland. The A10-7800 requires less power than the A10-6700 to encode the same video. The generational improvement isn’t huge, but it’s definite progress.

Discrete GPU gaming

For now, we have only a single game test, Thief’s built-in benchmark, to show us discrete gaming performance. No matter. The results tell a familiar story.
AMD’s latest CPUs just aren’t terribly good at cranking out frames quickly in PC games. Even the stock-clocked Pentium G3258 outperforms the A10-7800 in this game’s default Direct3D mode. With its higher clock frequencies, the A10-6700 is slightly faster than its Kaveri-based counterpart, too. Since it has only two relatively pokey integer cores (and a single, shared FPU), the A6-7400K suffers even more.

Switching over to AMD’s Mantle graphics API, with lower CPU overhead and apparently better threading, seems to help somewhat. All of the CPUs get faster, and the A10 APUs are at least able to maintain a 60 FPS average (although FPS averages alone won’t tell you much about true smoothness.) Oddly, the two dual-core CPUs wouldn’t start Thief properly in Mantle mode, for whatever reason, which is why those results are missing above.

Gaming with integrated graphics

In the test above, we used a discrete GPU to remove any graphics bottleneck from the picture. Now we’ll consider what happens when we switch to the graphics processors integrated into each of these chips. In this case, the IGP is much more likely to be the primary performance constraint—which puts us on Kaveri’s home turf.

Yeah, that changes things. Not only does Kaveri have more graphics grunt than Richland and Haswell, but AMD has also officially blessed the use of DDR3-2133 memory with the A10-7800. As a result, the 7800 clearly outperforms both its predecessor and the competition from Intel in these tests.

This outcome is a big part of AMD’s pitch: if you care about both graphics and CPU performance, Kaveri can offer the best mix of the two. I can see the logic, but there’s still not enough bandwidth going into the CPU socket to allow graphically intensive 3D games to run terribly smoothly on this IGP. Hmm.

Productivity

Let’s run through a quick sampling of some desktop-style applications that rely on the CPU cores to do their work.

Remember what I said about AMD’s CPU performance treading water from one generation to the next? It’s on display above. The A10-6700 and 7800 trade spots from test to test without any clear overall victor. The A10-7800 also exchanges blows with the Core i3-4360, and to my eye, it looks like the i3-4360 comes out a bit ahead in the overall mix. The more expensive Core i5-4590 is clearly much faster than either of them.

The A6-7400K, meanwhile, suffers through a series of embarrassments against the Pentium G3258. Putting a single Steamroller module up against a dual-core Haswell is not a good proposition. Then again, the integrated graphics results on the last page were almost as lopsided in the 7400K’s favor.

LuxMark OpenCL rendering

LuxMark is a nice example of GPU-accelerated computing. Because it uses the OpenCL interface to access computing power, it can take advantage of graphics processors, CPU cores, and the latest instruction set extensions for both. Let’s see how quickly this application can render a scene using a host of different computing resources.
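
To make the “CPU, GPU, or both” idea concrete, here’s a minimal sketch of how an OpenCL program discovers the compute devices that the installed drivers (ICDs) expose and picks which ones to use. It assumes the pyopencl Python package purely for illustration; LuxMark itself is a C++ application, and none of these names come from its source:

    # Minimal device-enumeration sketch (assumes the pyopencl package is installed).
    import pyopencl as cl

    for platform in cl.get_platforms():        # e.g. AMD APP or Intel OpenCL ICDs
        for dev in platform.get_devices():     # CPUs, IGPs, and discrete GPUs all appear here
            kind = cl.device_type.to_string(dev.type)
            print(platform.name, kind, dev.name, dev.max_compute_units)

    # A "CPU + GPU" run builds a context that spans both device types from one
    # platform and splits the workload between them; CPU-only or GPU-only runs
    # just narrow the device list.
    platform = cl.get_platforms()[0]
    ctx = cl.Context(devices=platform.get_devices())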

I found that AMD’s APP software driver was faster on both Intel and AMD processors than the latest Intel ICD. Even so, Intel’s FPUs outperform AMD’s here. The dual FPUs (each shared between two integer cores) on the A10-7800 can’t keep pace with the dual cores on the Core i3-4360.

The A10-7800’s potent IGP comes out on top here, well ahead of the pre-GCN graphics units in the A10-6700. Intel has made some nice progress on the GPU computing front, too, though. The Core i5-4590’s IGP isn’t that far behind the A10-7800’s.

Perhaps due to some changes in the plumbing in Windows 8.1, combining the CPU cores and integrated graphics to work on a shared task is much more effective now than it used to be. There’s a clear speedup in each case. Unfortunately for AMD, the Haswell chips appear to have the better combination of CPU and GPU computing power for this application. The A10-7800 trails the Core i3-4360 slightly.

Heh, well. The numbers get much larger when you farm out the work to a proper discrete GPU like the Tahiti-based Radeon HD 7950 (same thing as a Radeon R9 280, if you want to feel minty fresh.) The faster CPUs wring a little more performance out of the discrete GPU, but the differences aren’t that large.

Ask the CPU and discrete GPU to team up, and the sample rates go even higher. The A10-7800 just barely outperforms the $72 Pentium G3258 in this case, though.

Cinebench rendering

For our final benchmark, we have an expanded series of results sourced from my recent overclocking experiments. Cinebench also renders a scene, but it uses only CPU power to do so.

The single-threaded results above emphasize the issue with AMD’s CPU performance right now. Nothing AMD has to offer, even a chip overclocked to 4.8GHz, can match the per-thread performance of the 3.2GHz Haswell Pentium. At least in this workload, AMD has taken a step backward with Kaveri versus Richland.

Conclusions

Today’s changes, including the addition of the A10-7800, bring AMD’s desktop CPU offerings up to date with the latest Kaveri silicon. Our direct comparison of the A10-7800 versus the A10-6700 illustrates what that shift means. Kaveri is somewhat more power-efficient than Richland, and its integrated graphics are substantially improved. Unfortunately, the greatest single weakness of AMD’s current APUs, their per-thread performance in CPU-intensive tasks, hasn’t gotten better. In fact, in some cases, the 7800 is a bit slower than the chip it supplants. The ground lost isn’t worth getting worked up over, but who could blame us for wishing for something more?

I’d like to think AMD’s push for GPU-accelerated computing could help make up the difference, but there are two problems with that prospect. One, the applications that use the GPU to accelerate common tasks have been terribly slow in coming. AMD has been talking about these things for years, but everyday Windows programs just haven’t made the transition to GPU acceleration in any great numbers. Two, the results we saw in our LuxMark OpenCL rendering test suggest the Core i3-4360 offers a more potent combination of CPU and GPU computing power than the A10-7800. Whoops.

One area where AMD’s APUs clearly do have a lead is integrated graphics performance in intensive 3D games. The A10-7800 has the world’s best integrated graphics in a socketed desktop processor. The question is: does that matter? There’s not enough bandwidth going into a CPU socket—or, more importantly, enough IGP performance coming out of it—to make hard-core PC gaming on an APU an attractive proposition. If you plan to play games, our recommendation remains the same as ever: get a decent CPU and plug in an inexpensive discrete graphics card like the GeForce GTX 750. You’ll have a much better experience.

In light of everything we’ve just said, we’re led back to a familiar conclusion: that AMD APUs make a lot more sense for environments constrained by size, power, and thermals than they do in big desktop systems. That shouldn’t be much of a revelation. Kaveri is clearly a mobile chip first and foremost. Many of AMD’s engineering choices were driven by that fact.

There are desktop systems where Kaveri could play well: all-in-one PCs, small-form-factor systems, Steam machines, and so on. The A10-7800 and the A6-7400K both have “configurable TDP” modes; they’ll sacrifice clock speeds to fit into a 45W power envelope. That option could be interesting for a quiet home theater PC or the like. Still, finding the right niche where an AMD APU clearly makes more sense than a competing Core i3 or i5 processor will require some creativity.

Fortunately, we now know AMD is committed to improving this situation. Kaveri is the first CPU that’s fully compliant with AMD’s HSA accelerated computing architecture, and the APUs based on it can serve as the development platform for HSA-enabled applications. That is just a start, really, but it’s an important piece that was missing. Second, the firm has an all-new, x86-compatible CPU core in development. If all goes as planned, we could see products based on this new core in 2016.

Comments closed
    • Gadgety
    • 5 years ago

    “AMD APUs make a lot more sense for environments constrained by size, power, and thermals… Kaveri is clearly a mobile chip first and foremost. ..here are desktop systems where Kaveri could play well: all-in-one PCs, small-form-factor systems, Steam machines, and so on.”

    Very well put. And if I may add, in systems constrained by money. To me the A8-7600 is the sweet spot, rather than any of the A10s. It’ll perform very, very close to the A10s, in a tighter power and heat envelope, in an SFF system. I compared other tests, and the A8-7600 is within 6-7% of the 7850k, and within 3% of the A10-7800 performance for 2/3s of the cost of the A10-7800. In comparison with a Pentium+GTX750 or similar, the A8-7600 will present playability in games for the cost of the GTX750 alone, albeit it’ll need higher speed memory modules. Even so, in a home cinema setup, passively cooled even, it’ll be able to do occasional gaming. It’ll serve as an excellent platform for a family system.

    • deruberhanyok
    • 5 years ago

    Has anyone seen these available for purchase yet? I’d still like to buy one, maybe, but I’m wondering if, two weeks on, the lack of them anywhere is intentional or if they’re going to have another A8-7600 incident.

    • strangerguy
    • 5 years ago

    Remember the old days when AMD used to stand for better value for the dollar, even if they weren’t beating Intel in performance in the days of AXP/A64? Then they fell off somewhat in the Phenom era, and today we have the complete plunge, where there is absolutely NOTHING redeeming in the entire desktop lineup.

      • raddude9
      • 5 years ago

      [quote<]NOTHING redeeming[/quote<] ??. Granted, the A10 chips are not great value, but, as with Intel's chip range, you have to look harder for the good value chips. In particular, the A8-7600 is only a tiny bit slower than the A10-7800, but with the recent price cut it's down to $105, a full $50 less than the chip reviewed here. That also makes it cheaper than the cheapest Haswell Core i3; sounds like good value to me.

        • strangerguy
        • 5 years ago

        Even the A8-7600 makes no sense whatsoever. The G3528 CPU wise is still faster and cheaper by $30…And if anyone so tight on budget for a gaming rig they should be looking at used ~$50 HD7770s etc and not saving $20 for AMD APUs.

          • raddude9
          • 5 years ago

          G3528? I assume you mean the G3258 yea? Anyway, it makes no sense to risk buying a suspiciously cheap HD7770 to add to your new computer and the cost of a new HD770 would put the cost of a G3258 miles past that of an A8-7600.

          [quote<]The G3528 CPU wise is still faster[/quote<] Sure, it'll beat the AMD by 25% or so in single threaded tasks (assuming you're not overclocking it to the max), but multi-threaded tasks will generally go the way of the AMD chip. Who are you to say which might be more important for somebody else? And maybe I'm reading Techreports charts wrong: [url<]https://techreport.com/review/26735/overclocking-intel-pentium-g3258-anniversary-edition-processor/3[/url<] but a gamer on a tight budget would be better off with the A-7600, it'll give better frame rates using an external GPU without resorting to overclocking. The A8-7600 has an additionaly trick for the tightwad gamer: To start with they can get playable frame rates in pretty much every game out there without a graphics card, then, at a later date you can add in a dedicated GPU. With the G3258 on the other hand, virtually no games will be playable without the external GPU: [url<]http://www.anandtech.com/show/8232/overclockable-pentium-anniversary-edition-review-the-intel-pentium-g3258-ae/3[/url<]

            • derFunkenstein
            • 5 years ago

            Doesn’t really matter until you can actually buy one. I’m still not finding A10-7800 or A8-7600 or any of these non-100W chips for sale. Everything sub-$150 is Trinity and Richland, and even then the really desirably 65W chips are only 10-15 bucks less.

            • raddude9
            • 5 years ago

            Then it does really matter because TigerDirect.com say they have them in stock:

            [url<]http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=9134200&SRCCODE=WEBGOOPA&utm_source=google&utm_medium=paid_search&utm_campaign=paid_search_google_pla&scid=scplp1516098&gclid=CPfgk8jBi8ACFUSy2wod-YUAiQ&gclsrc=aw.ds[/url<]

      • rrr
      • 5 years ago

      Actually, AMD WAS beating Intel performance wise in the days of Athlons XP and 64.

    • uni-mitation
    • 5 years ago

    Loved reading the review. I am waiting for that glorious upcoming review with all the goodies.

    • Kretschmer
    • 5 years ago

    These APUs wouldn’t work well as my “daily driver” PC, but I could see them working for a secondary system that requires neither CPU nor GPU heft in favor of a bit of both. My 2011-era netbook held a (pitiful) E-350, but came in handy as a spare WoW-capable machine to quest with my ex.

    So a niche exists, but it will take serious x86 improvements to expand it to general applicability.

      • sschaem
      • 5 years ago

      What general application does the A10-7800 got problems with ?

      To me the A10-7800 seem like absolutely overkill for 99.9% of PC users.

    • UnfriendlyFire
    • 5 years ago

    Hm… The A10-7800 is 45W and cost less than $200.

    There are mobile Intel CPUs that are rated at 37W, 47W and 55W, and they cost $300 to $1000. Not including a dedicated GPU.

    What if the A10-7800 was put in a laptop?

      • Chrispy_
      • 5 years ago

      It would do pretty well as a budget gaming laptop, I think.

      Sadly, the list of available AMD laptops with anything other than budget processors (Kabini, or low-end, richland A4 and A6) is awful.

    • fhohj
    • 5 years ago

    interesting to see the g3258 surge ahead in Thief using direct 3d. I was under the impression that it and the 750K performed about the same at stock clocks. so it was interesting to see a newer multithreaded title like Thief favor the pentium. wasn’t expecting that.

      • sschaem
      • 5 years ago

      Thief doesn’t seem very multithreaded friendly under d3d.

      FX-8350 at 4ghz – 47FPS
      FX-4320 at 4ghz – 45FPS

      So twice the core give you less then 5% FPS gain.
      For sure this game plateau at 4 thread, but I’m wondering if it might also plateau at 2 core….?

      [url<]http://www.techspot.com/review/787-thief-benchmarks/page4.html[/url<]

        • Jason181
        • 5 years ago

        Or threads that share an fpu are confusing it. Windows’ scheduler has been hyperthreading aware for some time, but how do you deal with a cpu that has two integer cores and only one fpu; you can’t just default to parking the one of the threads because the integer cores might be needed.

        The idea of “lopsided” execution resources makes perfect sense for an office setting, where there’s just not a lot of highly threaded applications, and even less that depend so heavily on the fpu. Gaming is a big problem though.

        Look at the pentium versus the i3-4360. The major difference between hyperthreading and the two threads assigned to a single fpu in amd’s architecture is that intel’s fpu design has a lot of headroom left for that second thread to take advantage of.

          • fhohj
          • 5 years ago

          good point. the consoles have twice as much the fpu resources. the clocks are higher on the pc, but it could very well be that it isn’t making up for threading designed for heavy hardware parallelism that just isn’t up to the same level.

          • Meadows
          • 5 years ago

          There is, in fact, a patch available for Windows 7 that makes the kernel aware of AMD’s lopsided design, and Windows 8 (and later) is purportedly aware by default, without any patching.

            • Jason181
            • 5 years ago

            Yeah, I know about the patch, but I’m pretty sure it still assigns two threads to a module if it runs out of modules, and AMD’s fpu is barely up to the task with a single thread, it seems.

    • Mat3
    • 5 years ago

    Why is FSAA being applied on the integrated tests given the limited bandwidth? I’ve played games on these APUs, the performance is fine if you run without AA but with most other settings on high.

      • Damage
      • 5 years ago

      FXAA has pretty small overhead, especially at lower resolutions–potentially as little as a millisecond or so per frame. The effect can be hard to measure, it’s so small. I tend to leave it on in all cases, when the alternative is no AA whatsoever.

      MSAA has substantial overhead, and I didn’t use it for that reason.

        • Chrispy_
        • 5 years ago

        I always felt that FXAA added at least a frame of lag. I don’t have high-speed cameras to measure it but it’s noticeable, whatever it is.

        I mostly run without AA or vsync so perhaps I’m hypersensitive, but at the same time, an average of at least 8.3ms of input lag is inevitable at 60Hz because that’s half of the 16.7ms frame time. If I can feel the lag from FXAA and other temporal/post-process AA surely it has to be significantly more than a millisecond or two.

        It possibly varies a lot in its implementation from one game to another though. Borderlands2 for example seems to be lag-free with FXAA.

          • Waco
          • 5 years ago

          I just force FXAA through my drivers for everything…never really noticed a difference but I always run with adaptive vsync enabled as well.

          • sschaem
          • 5 years ago

          On high end video card FXAA is very minimal.
          But here we are looking at an APU with a 128bit of ddr3 memory bus and only 512KB of GPU L2 cache.

          Total guesstimate, but I wouldn’t be surprised if FXAA on the 7800 takes ~4ms at 1920×1080 and also affect CPU performance.

    • drfish
    • 5 years ago

    …[i<]and[/i<] you still can't actually buy them... :-/

      • HisDivineOrder
      • 5 years ago

      …and you’re somehow still surprised…

      • UnfriendlyFire
      • 5 years ago

      Well, at least you can pre-order them…

    • flip-mode
    • 5 years ago

    [quote<]If all goes as planned, we could see products based on this new core in 2016.[/quote<] Ha ha. That's extremely optimistic. A fairly consistent AMD pattern has emerged over the years: the stated year means the product will "launch" toward the end of Q4, with availability "beginning" in Q1 and "ramping" through H1. If an end-user can lay hands on this future chip in the actual stated launch year it will be a shocking break from that pattern. It's too bad because AMD really needs a new CPU design. Bulldozer has been disastrous. I bet the first thing Dirk Meyer did after he was booted from AMD was to say, "at least now I don't have to use that crappy processor I conceived of and foisted upon the world". Thanks, Dirk.

    • DPete27
    • 5 years ago

    [quote<]Manufacturers' TDP ratings are just peak numbers, and they aren't terribly useful for comparisons with other brands.[/quote<] TDP seems to correlate to [url=https://techreport.com/review/25908/amd-a8-7600-kaveri-processor-reviewed<]die area[/url<]. If you factor in die size between Intel and AMD, their TDPs match up more closely. Intel's TDP generally corresponds rather well with power draw at peak. AMD...we have to do some math: Haswell i5 die area = 177 mm2 Kaveri die area = 245 mm2 65W Kaveri TDP * 245mm2 / 177mm2 = 90W which matches up pretty closely to peak CPU power draw (omitting platform power contributions).

    • tbone8ty
    • 5 years ago

    same old AMD story…*sobs quietly in corner*

    • Shouefref
    • 5 years ago

    I think AMD focusses on mobile. That’s anything up to notebooks, and nowadays that’s an important market.
    But it looks as if they’re losing desktops and high end performance pc’s, excluding servers – that’s still another market.
    So, financialy it might have been the best strategy of AMD, but in the long run…
    … 2016 is a long way off, and maybe they’ll postpone again.
    Maybe they think ‘People don’t buy new pc’s every four years anymore as they did in the past’, but 2016 is too late for Windows 9.
    They might miss the real big replacement of W XP machines (which indeed is not this year, contrary to expectations)

      • peaceandflowers
      • 5 years ago

      Don’t forget that they still have the FX processors though. Okay, they’re not great, and a bit neglected even by AMD, but at least AMD won’t pretend to serve the high-end with these Kaveri chips. Probably that was another reason to focus on mobile with these chips.

        • DPete27
        • 5 years ago

        The only thing FX is doing better than Kaveri is having more than 4 cores. That benefit in itself is relegated to a niche set of programs that are able to take advantage of that many cores efficiently.

    • Chrispy_
    • 5 years ago

    Hmm. Less than 30fps in big titles at only 720p, even with outrageously expensive DDR3-2133.

    I [b<]*want*[/b<] to see APUs remain relevant, but when you can buy a competent dGPU for so little these days (sometimes faster cards are down to $50 with MIR) it seems dumb to buy one just for the IGP. There are even a few recent threads on the forum about people trying to find bus-powered single-slot half-height cards, and they're all both cheap and double the speed of the best APUs.

    Once you ignore the APU, the x86 cores AMD's churning out still leave a lot to be desired, enough that a much cheaper Pentium is probably the superior option, whilst saving you up to $50 which - hey - you could spend on a dGPU.

    C'mon AMD. Using system RAM is hampering your IGPs whilst Intel catches up on that front. Meanwhile your x86 cores are a dead-end, with the only public announcement being that the successor (K12) is an ARM-based product.

      • srg86
      • 5 years ago

      Agreed, except for the K12 bit, that’s supposed to come out in both ARM and x86 versions. This is the new arch that Scott alluded to.

      • UnfriendlyFire
      • 5 years ago

      AMD did mention about the stacked DRAM and other RAM research stuff.

      No guarantees if they’re going to be in Carrizo APU. Definitely in a K12 APU though.

      • fhohj
      • 5 years ago

      hopefully the on-package memory and subsequent bandwidth bump give carrizo the boost it will need to make these things really start to shine. otherwise, imagine what Intel’s IGP will be like then?

      • ronch
      • 5 years ago

      Chrispy, haven’t you been reading TR for the last few months? If you are, then you surely know that AMD is developing a new x86 core to replace the fundamental Bulldozer architecture, apart from the ARM-based K12.

        • Chrispy_
        • 5 years ago

        Must have missed that – been travelling a lot lately so may have missed a few days of coverage here and there.

    • TA152H
    • 5 years ago

    AMD isn’t learning their lessons.

    They’ve decided that since they know how to make GPUs, and Intel is horrendous at it, they’ll simply destroy Intel with their GPUs, and sacrifice the CPU to advance that.

    Yes, Intel is not smart enough to make a good integrated GPU, but they are smart to know the market isn’t asking for it. At least not in numbers.

    AMD wins the battle and loses the war. They even ask for a process that degrades the CPU clock speed, so the GPU can be better and completely dominate Intel. Except the market just doesn’t care, and they can’t make it.

    They need to realize just because they are way ahead of the backward Intel in GPU technology, it doesn’t mean they should sell out the processor to maximize the GPU. The missing part is, it’s not adding much value to the product, since the market isn’t asking for it.

    Also, without a better memory controller, it’s become pointless adding so much GPU. Kaveri is a perfect example of a bad design – a massive GPU that can’t perform up to potential because of a bottleneck elsewhere (memory controller), and a weak CPU because it was diminished so the GPU could be bottle-necked by lousy memory controller.

    Compare that to a balanced design like Jaguar/Puma, and you wonder how the company could have gotten one right, and the other so wrong.

    But, then, Intel has the magnificent Haswell, and the dreadful Bay Trail. The latter is 22nm, but the same size as the 28nm Puma, yet does 20% less per clock cycle, and has a much weaker GPU taking up die space. Weird how each company has a crippled dog to go along with their successful design.

      • UnfriendlyFire
      • 5 years ago

      “But, then, Intel has the magnificent Haswell, and the dreadful Bay Trail. The latter is 22nm, but the same size as the 28nm Puma, yet does 20% less per clock cycle, and has a much weaker GPU taking up die space. Weird how each company has a crippled dog to go along with their successful design.”

      Contra-revenue to the rescue!

      • chuckula
      • 5 years ago

      [quote<]But, then, Intel has the magnificent Haswell, and the dreadful Bay Trail. The latter is 22nm, but the same size as the 28nm Puma, yet does 20% less per clock cycle, and has a much weaker GPU taking up die space. Weird how each company has a crippled dog to go along with their successful design.[/quote<] Yeah but how many 7" tablets (or any tablets of any size for that matter) are running Puma? Or even Mullins? BayTrail when hopped up onto a low-end desktop has issues competing against 25 watt Kabini parts because Baytrail is way out of its league in those power envelopes. But put it into a $150 7" Android tablet and you have very strong performance and very competitive battery life (and that's comparing it to ARM not to higher-power x86 designs).

        • Waco
        • 5 years ago

        Bay Trail murders EVERYTHING in the AMD lineup in performance/watt.

        If you stop looking at benchmarks you’d be pleasantly surprised with the performance you can get out of one. Light gaming isn’t even a challenge – you can damn near run Skyrim on a Bay Trail (granted it’s not veryy pretty)…

          • raddude9
          • 5 years ago

          [quote<]Bay Trail murders EVERYTHING in the AMD lineup in performance/watt. [/quote<] Sounds like you missed the Beema/Mullins preview article, check it out here: [url<]https://techreport.com/review/26377/a-first-look-at-amd-mullins-mobile-apu[/url<] The AMD A10 Micro-6700T chips (assuming AMD can actually ship a few of them) with their 4.5W TDPs should give most Bay Trail chips a run for their money in the CPU dept. So, it looks less like a case of "murder" and much more like a case of a light spanking. And then, if you then want to run a bit of Skyrim, the spanking would probably be going heavily in the opposite direction.

            • swaaye
            • 5 years ago

            That preview article is from April so it seems like AMD can’t manufacture them for some reason. Or they just can’t get design wins, which is another traditional AMD problem.

            Baytrail’s 14nm replacement is not that far off anymore either.

            • raddude9
            • 5 years ago

            The Beema/mullins chips seem to be slowly making an appearance, e.g.:

            [url<]https://techreport.com/news/26820/amd-mullins-apu-appears-in-250-hp-netbook[/url<] Hopefully it won't be too long before we see a few more

      • Klimax
      • 5 years ago

      Reminder: BayTrail is Low TDP CPU and it is good for its target use. (Beating ARM in TDP, performance and consumption)

      It is terrible only when compared to larger, higher TDP CPUs, but those are targeting different market.

      • flip-mode
      • 5 years ago

      [quote<]AMD isn't learning their lessons.[/quote<] It's too soon to say that. Take a minute to remember how long it takes to design a new CPU. Last I heard it takes around FIVE years. It's lots of custom logic, with the traces laid out manually. CPU design is not at all on the same time table as GPU design, which can be done much quicker depending on the design approach. That's why AMD is able to keep updating the IGP and making appreciable gains with it while only marginal improvements are possible with each new spin of the CPU. You're not going to see if AMD has learned any lessons until the next MAJOR processor release, and according to reports that will be in 2016 which means late 2016 which really means a paper launch in late 2016 with limited availability in early 2017 and hopefully wide availability in late 2017. This would fit the pattern we've seen from AMD. If there are any major issues with the next big CPU release, it'll be another five years before you will know if AMD has learned anything from that point, which puts you to 2022/2023 and suddenly you're feeling very old. Screw me I'll be fifty in 2026. Ugh, god. AMD may not release another worthy CPU in my lifetime at the rate things are going.

        • Sam125
        • 5 years ago

        [quote<]You're not going to see if AMD has learned any lessons until the next MAJOR processor release, and according to reports that will be in 2016 which means late 2016 which really means a paper launch in late 2016 with limited availability in early 2017 and hopefully wide availability in late 2017. [/quote<] A minor nitpick, but AMD's new CPU architecture is being unveiled next year (2015) so it should replace the bulldozer line of CPUs starting in 2016. So you're only off by one year, barring any delays which AMD has been surprisingly good about executing the past few years.

          • HisDivineOrder
          • 5 years ago

          The lack of any FX updates and the sudden introduction of Richlands in the place of something new both beg to differ with your assessment of their “surprisingly good” execution from “the past few years.”

          And that’s just CPU’s in the last year…

            • Sam125
            • 5 years ago

            Not running into delays due to architectural changes or process shrinks is a definite improvement from the Athlon64 days of AMD where it was pretty much expected that any announcement would be delayed by at least six months.

      • heinsj24
      • 5 years ago

      Intel’s Iris Pro 5200 – GT3e is arguably the best performing integrated GPU available. Luckily for AMD, it’s only available on high-end machines where buyers expect a discrete GPU.

      • Freon
      • 5 years ago

      I suspect they couldn’t touch Intel toe-to-toe if they tried with a more pure CPU, so they’re not bothering. At least they have SOME way to spin it this way.

    • ronch
    • 5 years ago

    Look, I always wanna root for AMD, but they really need to finish off that new x86 core quick. I know there are some people for whom these APUs are just perfect for but really, unless you belong to that minority I (just like Scott) just can’t see why I would really buy one for desktop use. It just seems like it needs an excuse for its existence. Office use or grandma’s PC? Just get a Pentium. Light gamer? A Core i3 with a $70 card could also fill the bill without costing a lot more and would have the added bonus of future upgradability to much faster parts. Avid gamer? Just save up and get a proper Core i5 or i7. OK, some of you may argue that AMD is selling these APUs by the boatload, but their quarterly earnings say otherwise.

    By the way, those AM1 chips seem to have quietly slipped from the limelight. What gives?

    • Cataclysm_ZA
    • 5 years ago

    Thief IGP – 1280×720
    Tomb Raider IGP – 1600×900
    Batman: AO IGP – 1920×1080

    Why the jumping around in resolutions? That doesn’t give me a baseline for performance at 720p because there’s only a single result there. I’m confused further by this line:

    “I can see the logic, but there’s still not enough bandwidth going into the CPU socket to allow graphically intensive 3D games to run terribly smoothly on this IGP. Hmm.”

    Perhaps this wouldn’t be a problem if the games were benchmarked at 720p with medium settings? That’s the settings that I would expect to use with this kind of hardware. Where’s the frame-time graphs that show how frame latency is affected when you use the 65W and 45W TDP modes on the A10-7800?

      • MadManOriginal
      • 5 years ago

      I think that’s a fair criticism and I agree.

      I also think that playing fairly to highly demanding games like this misses the point of using IGP graphics – we know a $100 discrete card will do much better. I would be more interested in knowing how it runs older, less demanding games that people still play,

        • Bensam123
        • 5 years ago

        Or newer console titles that are built around their new hardware.

      • Bensam123
      • 5 years ago

      Yup… among other things in this review… What does the alt TDP even affect?

      • Meadows
      • 5 years ago

      I hadn’t noticed that. Weird.

      • esterhasz
      • 5 years ago

      I do not agree.

      This is obviously a smaller review on a particular version of a known processor. So choices had to be made. The resolutions seem to have been chosen with “playability” as a target. Yes, the baseline is lost, but – personally – I’m not interested in knowing that this or that game runs < 10 or > 50 fps in this or that resolution. I want to know what I can expect to play on this and at what resolution. Hell, I’d be more interested in holding fps constant (e.g. ~30fps) and simply learn what resolution/quality settings I’d get per game than a bunch of bar graphs.

      The performance reviews I nowadays find most pleasing are things like Geoff’s “subjective look” (https://techreport.com/blog/25930/a-subjective-look-at-the-a8-7600-gaming-performance).

        • Meadows
        • 5 years ago

        This was not intended to be a “smaller review”.

      • Damage
      • 5 years ago

      Hey Wesley.

      [quote<]Why the jumping around in resolutions? That doesn't give me a baseline for performance at 720p because there's only a single result there.[/quote<] I wasn't really worried about trying to keep a consistent resolution, since most PCs have at least 1920x1080-capable displays. I was worried more about establishing something close to playability and then looking at relative performance. I suppose I could consider "standardizing" on 720p in the future, but I don't see consistency in resolution tested as an important value. Obviously that's something you had in mind in this case. [quote<]Where's the frame-time graphs that show how frame latency is affected when you use the 65W and 45W TDP modes on the A10-7800?[/quote<] As I wrote in the article, I ran out of time to do everything I'd like to do. We did test frame times and dual TDP modes with the A8-7600 based on the same Kaveri architecture here: [url<]https://techreport.com/review/25908/amd-a8-7600-kaveri-processor-reviewed/6[/url<]

        • Cataclysm_ZA
        • 5 years ago

        0_0 Damage knows my name!

        /bows

        I get your reasons for why and they make sense to me. I was mostly asking because so many other sites don’t stick to a particular res for APU reviews and it annoys the hell out of me because people ask me questions about what they can expect. I remember Geoff’s tests of the A8-7600 and with all the tests at 1080p at least I had a solid idea of what to expect if I had to recommend it to anyone (I usually don’t, because it’s ludicrously priced in South Africa). I guess I just mostly want to see how the A6-7400K does at 720p.

        I completely forgot about the TDP tests in the A8-7600 review! My bad.

        I look forward to the head-on between the A6-7400K and the G3258.

          • ssidbroadcast
          • 5 years ago

          Yeah… [i<][b<]Wesley![/i<][/b<] You better watch yourself!!

            • Cataclysm_ZA
            • 5 years ago

            I’m on the internet, it’s dangerous enough as it is.

            I’m more honored that Scott remembers my name. I write for a tech/gaming website, but it’ll be years before I get to Scott’s level.

            • NeelyCam
            • 5 years ago

            Hey – could you mention the name of your website? I’m curious.

            • Cataclysm_ZA
            • 5 years ago

            [url<]http://www.nag.co.za/category/technology/[/url<] I sometimes do posts in the news category as well, so you won't be able to see everything I've written for NAG in all the time I've been there. I was also previously writing for MyGaming: [url<]http://mygaming.co.za/[/url<] Whomever replaced me doesn't really seem to care about hardware at all.

        • sschaem
        • 5 years ago

        Then why write a conclusion about frame rate/playability if your goal was only relative performance between platforms?

        1920×1080 requires 2.5x the GPU power, and it will impact your ‘smoothness’

        So sure you got a low FPS on an APU with a modern game (an nvidia TWIMTBP title at that)
        But you might have seen at 1280×720, with the same settings, that the 7800 reached 50fps while the i5 couldn’t break 20fps. 50FPS is enjoyable, 20 FPS is not.

        And this:

        “Switching over to AMD’s Mantle graphics API, with lower CPU overhead and apparently better threading, seems to help somewhat.”

        So going from 43 to 59fps just help somewhat ? thats over 35% in performance gain….

        Guys… I give up. My suggestion.. take a step back, and try to be more neutral.

        Also any reason why you didn’t test Thief using the IGP/ Mantle ?
        Seem like it wouldn’t have been the most revealing test to include.

        Edit: Did I spaced out and missed the fact that this review does indeed include a Thief IGP test including mantle?

      • Deanjo
      • 5 years ago

      [quote<]Perhaps this wouldn't be a problem if the games were benchmarked at 720p with medium settings?[/quote<] Actually, IMHO, they should be benched at the most common native resolution of monitors (and now tv's), 1080P. They should also have it with eye candy cranked to the max as that is how the developers really want you to see the game. You wouldn't watch a sci-fi flick if they all of a sudden removed or reduced the special effects so why would a person want to see a game that is a barren version of it?

        • sschaem
        • 5 years ago

        Many, MANY channel broadcast at 720p. Even the Xbox one games run at sub 1080p
        And I would be OK with less lens flare in the new Star Trek…

        When I played crysis, I set my res to 720p, even so I had a discreet GPU.
        a) it looked better (was able to turn on more eye candy)
        b) it was also smoother

        For a RTS game, I would go with native res. For games like batman, 720p at 50fps is preferable then 1080p at 25fps.

        BTW. check a copy of avatar on DVD… it crush the visual experience of the batman game at 1080p.

          • Deanjo
          • 5 years ago

          [quote<]Many, MANY channel broadcast at 720p. Even the Xbox one games run at sub 1080p And I would be OK with less lens flare in the new Star Trek...[/quote<] Yes many do, but even the layman can spot the difference between 720p broadcast vs a 1080 Bluray. Given the choice, they would pick the bluray every time. [quote<]When I played crysis, I set my res to 720p, even so I had a discreet GPU. a) it looked better (was able to turn on more eye candy) b) it was also smoother For a RTS game, I would go with native res. For games like batman, 720p at 50fps is preferable then 1080p at 25fps. [/quote<] What would be preferred and look better however? 1080p running at full res with eyecandy at good framerates or 720p with full eyecandy at good framerates. [quote<]BTW. check a copy of avatar on DVD... it crush the visual experience of the batman game at 1080p.[/quote<] Again, which would you prefer to see on your TV? The DVD or the Bluray and which do you think is more accurate to how Cameron wants you to see it? I'm pretty sure everyone if they had the choice would want to see their display running at it's fullest capability. You don't see many TV reviews praising a TV for it's 720p capability if the display is 1080p and offers crap display at that resolution. Now days you wouldn't find anyone recommending a 720 display for TV despite a large number of broadcasts being transmitted as such. Same standard should be held to graphics solutions.

            • sschaem
            • 5 years ago

            You missed the point. Resources are limited, and you configure the signal for the best experience.

            This is what drive some broadcaster to pick 720p over 1080i. Higher quality experience, even on 1080p display.

            So the case here is that with limited resource, running a game at 1080p might require lowering visual quality… when instead you can go with 720p and keep the visual quality up.

            Again, Avatar at 480p look massively more impressive then Crysis3 at 3840×2600. even on a 4K TV.

            Resolution is only one part of the experience.

            • Deanjo
            • 5 years ago

            [quote<]This is what drive some broadcaster to pick 720p over 1080i. Higher quality experience, even on 1080p display.[/quote<] It has nothing to do with visual quality but all to do with bandwidth limitations of content delivery. The one exception to that is live broadcasting of sports events where the framerate can be kept up at 59.94 fps. If they could effectively deliver that frame rate over cable systems @ 1080p they would in a heartbeat. There is a reason why baselines are set. Moving goalposts by using different resolutions and settings every comparison doesn't help people with their purchasing decisions and seeing if a APU is a viable alternative to a dedicated card and a reasonable baseline resolution is what a vast majority of people have on their systems. If they want to do a scaling of resolutions and settings as well fine they can present that but there should always be baseline resolutions where all solutions are compared on equal settings.

        • Cataclysm_ZA
        • 5 years ago

        I can tell you first-hand from selling APUs to customers in years past that many of them don’t crank up the eye candy or play at native res. They understand that performance is limited, so they opt for settings that provide good balance between playability and visual fidelity.

        Developers want you to play the game the way they intended which is why they spend so much time making it pretty for enthusiasts and their core market; on that I can agree. But they have Low settings available for a reason – for people with weaker CPUs and/or GPUs.

        The lowest common denominator, even with current-gen consoles, is 720p. Benching for that resolution makes sense for reviews of weaker hardware.

        • BobbinThreadbare
        • 5 years ago

        Well I’m pretty sure the most common resolution is 1360×768 since screen makers hate us.

          • Deanjo
          • 5 years ago

          Not according to Steam’s hardware survey: 1920×1080 accounts for a third of all systems.

            • Meadows
            • 5 years ago

            “1366×768 or less” accounts for another third and likely comprises the people who might actually want to own this processor.

            • Cataclysm_ZA
            • 5 years ago

            And 1366×768 alone is just over 25%. It’s still an important resolution to benchmark. Steam accounts for 75 million gamers (and not all of those accounts are active), but that’s only about half of the North American PC gaming market in 2013. There are tons of computers out there that don’t have Steam installed and won’t count toward Steam’s survey, yet they’ll be used for low-end gaming, possibly at 720p as well.

    • Bensam123
    • 5 years ago

    I don’t think most people would opt for one of these over an i5 or an 8350 when they’re putting together a low-end system; those chips are just much more powerful. The CPU/GPU combobreakers only make sense when they’re breaking $100 or lower.

    • ptsant
    • 5 years ago

    Do you get any tangible benefit from pairing it with a cheap AMD dGPU like the R7 250 (Dual Graphics)? That would be an interesting upgrade proposition for people with tight budgets who could later add a cheap dGPU. Although, obviously, an R9 280/7950 is at another level completely.

    Also, are you aware of any applications that use some of the newer features/gimmicks, like TrueAudio and VCE 2.0? I’ve heard that game capture is quite good with VCE on AMD cards but haven’t heard of any software using TrueAudio. It’s a pity all these transistors don’t get utilized.

      • Alexko
      • 5 years ago

      Dual graphics is somewhat erratic: sometimes it brings substantial benefits, sometimes not really, and I’ve also seen it reduce performance. Note that I’m talking about FPS here, don’t know about frame times.

      TrueAudio is used in a small number of games, most of which are still in development, I think. VCE is used by some encoding apps, but from what I’ve seen, hardware acceleration usually yields somewhat lower quality, so few people use it (same for Intel’s and NVIDIA’s solutions). However, it is used by AMD’s Raptr thingy that lets you record your gaming sessions.

    • Meadows
    • 5 years ago

    How soon can we expect to see the remainder of the tests and results added?

      • Bensam123
      • 5 years ago

      XDMA.

        • Meadows
        • 5 years ago

        What?

    • USAFTW
    • 5 years ago

    People who invested in 2500Ks and the Sandy Bridge platform got one heck of a lifetime out of it. As for the Kaveri A10-7800, I was never into an IGP instead of meaty caches, so it’s not something on my radar.
    And unfortunately for AMD, a good review (in terms of review quality, not bias) doesn’t make up for a lack of performance.

      • uni-mitation
      • 5 years ago

      The 2500K will go down as a legend. The fact that any proper gamer has no reason to swap their awesome 2500K for a Haswell tells us two things. One, a chip that overclocks well can extend its useful lifetime considerably. And two, the stagnation of great generational advances in desktop CPUs is here to stay.

      • Farting Bob
      • 5 years ago

      Still rocking a 2500K here. I feel no need to upgrade, and will probably only do so when DDR4 is plentiful and cheap and Intel is another 2-3 generations down the line with its CPUs.

      • Freon
      • 5 years ago

      Hell, still rocking my Bloomfield.

    • UnfriendlyFire
    • 5 years ago

    Hey guys, the A10-7800 ( [url<]http://www.shopblt.com/item/amd-ad7800ybi44ja-radeon-a10-7800-fm2/amd_ad7800ybi44jah.html[/url<] ) and A8-7600 ( [url<]http://www.shopblt.com/item/amd-radeon-a8-7600-fm2-4mb/amd_ad7600ybjabox.html[/url<] ) are finally here.
    EDIT: Turned off NoScript and found this: A10: "This item is not in stock. It is ordered from the manufacturer as needed..." A8: "Total Back Ordered: 150. Total Incoming: 1800"
    EDIT2: Does AMD really expect the average Joe to go into the BIOS to select the 65W/45W TDP setting? The option is nice, but a lot of people are going to stay on the default TDP because they either don’t know about it or aren’t interested in exploring the BIOS options.

      • Jason181
      • 5 years ago

      I really don’t see the average Joe even caring whether the CPU uses 45 watts or 65, even if it were a clearly labeled toggle switch on the front of the case.

        • MadManOriginal
        • 5 years ago

        The average Joe doesn’t assemble a PC from parts either.

    • albundy
    • 5 years ago

    I guess it all depends on your current needs. Can you drop in a discrete graphics card later on? I mean, the i5 is near the A10’s price point, so I would likely go with the horsepower of the i5. But if this were being used for an HTPC, the A10 might make for a better choice.

    • yokem55
    • 5 years ago

    I’m amazed that the 2500K is still not that far behind the 4590, even at stock clocks. I imagine at the typical 4+ GHz most people get with a CM Evo or better cooling, it would be right up there with the newest top-end i5… nearly four years later.

      • Damage
      • 5 years ago

      The 4590 isn’t the top-end Core i5. ;) Still, the 2500K holds up quite nicely.

        • yokem55
        • 5 years ago

        Ahh, looks like that would be the 4690K at 3.5/3.9GHz. But the 4590 makes a good comparison to the 2500K because they both have the same stock frequencies (3.3/3.7GHz).

    • MadManOriginal
    • 5 years ago

    So uuuuh… that marketing video? It makes me think AMD’s CPU is actually fighting the programs and literally killing them (although in some benchmarks you’d think it’s the other way around, heh). I mean, I know it’s called ‘executing instructions,’ but that’s a bit too literal.

      • the
      • 5 years ago

      And when it is done executing instructions for a process, it is [url=http://en.wikipedia.org/wiki/Terminator_2<]terminated.[/url<]

        • UnfriendlyFire
        • 5 years ago

        Or gets sent back to the source.

        *Obviously not a reference to the Matrix

    • fredsnotdead
    • 5 years ago

    “The A10-7800 and the A6-7400K both have “configurable TDP” modes; they’ll sacrifice clock speeds to fit into a 45W power envelope. That option could be interesting for a quiet home theater PC or the like.”

    It would be interesting to know what the actual power consumption and performance are in that 45W configuration. As “65W” processors, you’ve shown they have considerably higher peak draw than the comparable Intel processors. I guess if you want low idle power and a beefy IGP, they win, but do you really need a (relatively) powerful IGP in an HTPC? Can’t all IGPs do the necessary video decoding and encoding work these days?

      • mikato
      • 5 years ago

      We see in this article that it trades blows with the i3 in regular CPU performance. With the beefy IGP and Steamroller’s multithreaded performance, I would still go with the AMD IGP over the Intel IGP at this price for an HTPC.

    • mczak
    • 5 years ago

    The comment about the Intel OpenCL ICD not supporting AVX2 is quite bogus. AVX2 vectors are still (at most) 256 bits wide, just like their AVX siblings (you’d need AVX-512 for 512-bit vectors – in other words, Knights Landing or Skylake hardware).
    I don’t know if the test would benefit from AVX2 at all. If the workload is all float vectors, AVX2 can still make a difference (FMA support would be a big reason, though that isn’t tied to AVX2 and could be supported separately; AVX2 gather support would be another), but it’s difficult to tell how much. Of course, if the test can make use of int vectors, AVX2 is indeed going to make a difference.
    Though I don’t believe the OpenCL ICD would skip FMA or AVX2 in the first place. According to the docs, the SDK has an AVX2 target.
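    For what it’s worth, here’s a minimal C sketch of my own (hypothetical, not from the review or any SDK) of why FMA and 256-bit AVX2 are separate questions: both routines below work on 256-bit float vectors, but only the second needs FMA hardware (Haswell on the Intel side, Piledriver and later on the AMD side).

    // Minimal sketch; build with, e.g., gcc -mavx -mfma -c fma_sketch.c
    #include <immintrin.h>

    // Plain AVX: multiply, round, then add -- two instructions, two roundings.
    __m256 mul_then_add(__m256 a, __m256 b, __m256 c) {
        return _mm256_add_ps(_mm256_mul_ps(a, b), c);
    }

    // FMA3: one fused instruction with a single rounding at the end.
    __m256 fused(__m256 a, __m256 b, __m256 c) {
        return _mm256_fmadd_ps(a, b, c);
    }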

      • Damage
      • 5 years ago

      Yeah, I got the vector width wrong; it’s just 256 bits. Corrected now. Still, compare the 4590 to the 2500K. We are not getting the benefit of FMA with Haswell yet. At least, I sure don’t think so! And Intel’s ICD is slower than the result I reported from AMD APP.

        • moozoo
        • 5 years ago

        >We are not getting the benefit of FMA with Haswell yet. At least, I sure don’t think so!

        The latest Intel CPU OpenCL runtime does generate AVX2 FMA instructions.
        See my last post here: [url<]https://software.intel.com/en-us/forums/topic/401161[/url<] Yes, it gives a boost to FlopsCL.

          • UnfriendlyFire
          • 5 years ago

          In 2008, Autodesk required CPUs to support SSE2.

          The first CPUs to support SSE2 were the pre-Prescott Pentium 4s, such as Willamette (launched in 2000) and Northwood (launched in 2002).

          I pity anyone who tried to use Autodesk Inventor on an early P4, and such a computer probably had an equally old GPU.

          So… expect it to take a few years for new extensions such as AVX2 and FMA, or the upcoming AVX-512, to be widely used. HSA will also take a while to gain traction, assuming it doesn’t end up like 3DNow!
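          As a rough illustration (my own sketch, GCC/Clang-specific, not anything Autodesk or AMD ships): software usually probes the CPU at runtime and keeps an older fallback path, which is exactly why a new extension can take years to become a hard requirement.

          #include <stdio.h>

          int main(void) {
              // __builtin_cpu_supports() is a GCC/Clang builtin for runtime feature checks.
              printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
              printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
              printf("FMA:  %s\n", __builtin_cpu_supports("fma") ? "yes" : "no");
              return 0;
          }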

        • mczak
        • 5 years ago

        I’m not sure what the LuxMark kernel looks like, but note that if it’s just using muls and adds, it’s possible they cannot be automatically fused into fma(), since per floating-point rules that would give a different result (and the FMA opcodes on Intel CPUs always skip the intermediate rounding; there aren’t versions available that really do mul/round/add). Though OpenCL actually has the FP_CONTRACT pragma on by default, which should allow this behavior if I read that right (otherwise, some unsafe-math optimization options should also do it). But yeah, maybe you’re right and it doesn’t use FMA yet in any case…
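        Roughly, in standard C terms (my own sketch; OpenCL C exposes the same switch via #pragma OPENCL FP_CONTRACT, and compiler support for the C pragma varies):

        #include <math.h>
        #pragma STDC FP_CONTRACT OFF   /* forbid fusing a*b + c automatically */

        double mul_add_strict(double a, double b, double c) {
            return a * b + c;    /* the product must be rounded before the add */
        }

        double fused_explicit(double a, double b, double c) {
            return fma(a, b, c); /* single rounding, requested explicitly */
        }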

        • the
        • 5 years ago

        FMA is separate from AVX, and there are two versions: FMA3, which Intel (Haswell and later) and AMD (Piledriver and later) support, and FMA4, which only AMD supports (Bulldozer and later).

    • chuckula
    • 5 years ago

    Thanks for the review Scott. It looks like the A10-7800 is the chip to have if you want Kaveri.

    One quick question: While I’ve seen your review and Anand’s floating around, I still don’t see either the A10-7800 or A8-7600 listed on Newegg or Amazon. NCIX did have the A8-7600 listed as “in stock” but not the A10-7800. Did AMD give you a date when these parts would be widely available?

      • Voldenuit
      • 5 years ago

      [quote<]Did AMD give you a date when these parts would be widely available?[/quote<] Remember the last 6 months? Same as the next 6 months.

      • Damage
      • 5 years ago

      AMD says 7/31!

        • ibnarabi
        • 5 years ago

        Why would anyone buy one of these when Microcenter has the faster A10-7850K selling for $129.99??? AMD is a very confusing company.
        [url<]http://www.microcenter.com/product/427565/A10_7850K_40_GHz_Black_Edition_Boxed_processor[/url<]

          • MadManOriginal
          • 5 years ago

          Same reason as any other Microcenter in-store-only deal.

            • HisDivineOrder
            • 5 years ago

            To drive the rest of the world without Microcenters crazy with envy and scorn and hatred and eventually unsated resentment that grows and grows with each and every thread where someone mentions their sales until finally one cold Tuesday that nice little old couple that used to give kids sugar free gum on Halloween and bake cookies for the orphanage down the road sees a stray mention of a Microcenter Black Friday deal that’s coming up that will sell a Haswell-E CPU for $2 along with a motherboard for $100. They snap, drive down to their local gunshop, and are last seen on a rampage down the interstate with a pickup truck bed full of automatic assault rifles driving the 200 miles in the direction of the nearest Microcenter while leaving a path of destruction at every Best Buy, Fry’s, or Tiger Direct they encounter along the way.

            When you see the story on the news, don’t be surprised. Just know…

            …you did your part to contribute to that awful outcome.

        • chuckula
        • 5 years ago

        Ah snap.. they didn’t bother to put the year field into that date…

      • raddude9
      • 5 years ago

      [quote<]It looks like the A10-7800 is the chip to have if you want Kaveri.[/quote<]
      I disagree. With the price drop to $105, the A8-7600 costs $50 less than the A10-7800 and runs only 100-200MHz slower. Sure, it has two fewer GPU compute units, but the extra two on the 7800 are mostly hampered by a lack of memory bandwidth anyway. The lower price tag also brings Kaveri into the budget CPU market, where it stands a better chance of competing.
