Process tweaks and power management define AMD’s Richland APU

Early this year, AMD will begin replacing its Trinity APUs with new models based on Richland. This new APU uses the same basic silicon as Trinity, but AMD has tuned the firmware and process technology to enable higher clock speeds and lower power consumption. Richland features new power management tech that's based in firmware and governs how power is shared between the processor's CPU and GPU elements. Although application profiles are part of the picture, the management scheme relies primarily on algorithms that control the chip's general behavior. AMD doesn't want to go too far down the road of optimizing for specific workloads.
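To illustrate the general idea, here's a hypothetical sketch of a firmware loop that splits a fixed package budget between the CPU and GPU in proportion to demand. AMD hasn't published its algorithm, so the budget, share floors, and sample loads below are invented purely for illustration.

    /* Hypothetical power-sharing sketch. Not AMD's algorithm; the 19W
     * budget, share floors, and sample loads are invented for illustration. */
    #include <stdio.h>

    #define PACKAGE_TDP_W 19.0  /* total thermal budget shared by CPU and GPU */

    /* Split the budget in proportion to recent demand, with floors so
     * neither block is starved outright. */
    static void rebalance(double cpu_load, double gpu_load,
                          double *cpu_w, double *gpu_w)
    {
        double total = cpu_load + gpu_load;
        double cpu_share = (total > 0.0) ? cpu_load / total : 0.5;

        if (cpu_share < 0.2) cpu_share = 0.2;
        if (cpu_share > 0.8) cpu_share = 0.8;

        *cpu_w = PACKAGE_TDP_W * cpu_share;
        *gpu_w = PACKAGE_TDP_W - *cpu_w;
    }

    int main(void)
    {
        /* Simulated utilization samples: CPU-heavy, balanced, GPU-heavy. */
        double samples[][2] = { { 0.9, 0.1 }, { 0.5, 0.5 }, { 0.1, 0.9 } };

        for (int i = 0; i < 3; i++) {
            double cpu_w, gpu_w;
            rebalance(samples[i][0], samples[i][1], &cpu_w, &gpu_w);
            printf("cpu %.0f%% / gpu %.0f%% -> %4.1f W cpu, %4.1f W gpu\n",
                   samples[i][0] * 100.0, samples[i][1] * 100.0, cpu_w, gpu_w);
        }
        return 0;
    }

In a scheme like this, an application profile would simply bias the shares for a known workload rather than replace the general heuristic, which fits AMD's stated reluctance to optimize for specific workloads.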

At the Consumer Electronics Show in Las Vegas, AMD discussed Richland with our Editor in Chief. He was told Trinity has lots of architectural headroom that hasn't been exploited by the products launched thus far. AMD has been working on process tweaks to increase clock speeds, although its focus seems more on improving the chip's suitability for ultra-low-voltage applications like thinner notebooks.

19W versions of Richland are expected to deliver a 40% boost in graphics performance and a 10-20% jump in CPU performance over their Trinity-based counterparts. Right now, the only 19W Trinity APU listed on AMD's website is the A8-4555M, which has four cores clocked at 1.6/2.4GHz and an integrated Radeon running at 320/424MHz. It will be interesting to see how much process tweaks can increase clock speeds within the same thermal envelope—and what that tuning can do for AMD's 17W duallies.

Richland is also headed to the desktop, and it will be compatible with the same FM2 motherboards as current Trinity chips. We don’t have numbers for expected performance gains on the desktop front, but I wouldn’t be surprised to see slightly higher clock speeds. The performance gains seem unlikely to exceed those quoted for the 19W mobile parts. AMD says Richland APUs are shipping for revenue already, so it won’t be long before we see them in the flesh.

Comments closed
    • rwburnham
    • 7 years ago

    I would not use these in a primary gaming machine, but for a second machine that would be used as a workstation or casual gaming machine, they seem perfect.

    • tfp
    • 7 years ago

    Apu?

    [url<]http://simpsons.wikia.com/wiki/Apu_Nahasapeemapetilon[/url<]

    • anotherengineer
    • 7 years ago

    Hmmm, I was thinking of getting a little Trinity desktop for the garage. I guess I'll wait until summer to see if Richland is out and what it offers.

    • ronch
    • 7 years ago

    I have a friend who recently got himself an A10-5800K. He told me he's been having heat issues with the stock fan, hitting 90C at idle. He confirmed that the heatsink is very hot to the touch, so I suppose the heatsink was seated properly. This got me thinking whether Trinity is really as low-power at idle as review sites say. For the record, I also saw a Core i3-2120 hit 60C in the BIOS alone. I don't know if these temps are the new norm, but my 'power hungry' Phenom II seems to run much cooler, using the stock fan. I don't know whether the boards have anything to do with the readings either. Still, hitting 90C at idle is a bit worrisome for a chip that supposedly sips power at idle.

    Perhaps AMD knows this and is addressing it with Richland.

      • sschaem
      • 7 years ago

      The A10-5800K has one of the lowest idle power dissipations of any APU, rivaling and often beating Intel's best 22nm chips.
      Example:
      [url<]http://www.anandtech.com/show/6347/amd-a10-5800k-a8-5600k-review-trinity-on-the-desktop-part-2/8[/url<]

      Corroborated by ALL reviews, including TR's. There is nothing to fix but your 'friend's' BIOS/configuration.

        • MadManOriginal
        • 7 years ago

        It could be a non-user error too, though. I've seen some FM2 motherboard reviews where the CPU would get very hot in the BIOS, which obviously it shouldn't.

        • ronch
        • 7 years ago

        [quote<]There is nothing to fix but your 'friend's' BIOS/configuration.[/quote<]

        He's actually a computer engineer who's quite adept with technical stuff. The BIOS should, by default, run the APU with parameters that avoid reaching 90C at idle, but yes, he did dig into it and didn't find a fix there. In the end, he had to resort to buying an aftermarket cooler, which brought temps down to 70C. Still not great. Not sure if his board is the culprit.

        My FX-8350 is also running pretty darn hot, reaching 71C in the BIOS (and that's with the stock HSF screaming at >5,000rpm!... yes, it was mounted properly with stock TIM). Why it reaches that temp in the BIOS menus is beyond me too. I'm still using the stock HSF, though. In the end, I had to undervolt the CPU, apply better TIM, and raise the fan speed target to 60C so the CPU fan wouldn't sound like a Boeing taking off.

        To this day I can't get the temps under Windows 7 because every darn temp-reading app I've tried (Coretemp, AMD OD, Speccy) can't seem to read the temp sensors properly (temps [b<]wildly[/b<] jumping all over the place). My friend is also having this 'wild temp' issue with his A10. Very strange.

          • ermo
          • 7 years ago

          In my experience, this is very common. P4s suffer from the same issue, actually. I got my hands on an old board with a 65nm Pentium D a couple of months ago and was amazed at how hot it runs in the BIOS: it approached 65C with full fan speed on a fairly beefy stock aluminium Intel cooler. And this was after the BIOS had been upgraded to the latest version for that board, which supported the power saving feature on that particular CPU.

          I suspect the issue is that both the CPU and the GPU run at full speed on all cores in the BIOS/EFI without issuing any CPU/GPU HLT commands. So it's full voltage, full frequency, no HLT 'idle'; essentially the chip isn't told to save power at all. That's bound to run hot?
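
          To make that concrete, here's a tiny user-space sketch of the two behaviours (illustrative only; the HLT instruction itself is privileged, so the OS issues it on a program's behalf whenever the program blocks):

          /* Contrast of "BIOS-style" spinning with OS-style idling. */
          #include <stdio.h>
          #include <time.h>

          /* What BIOS setup code effectively does: spin flat out. The core
           * never leaves C0, so it stays at full voltage/frequency and runs hot. */
          static void busy_wait(int seconds)
          {
              time_t end = time(NULL) + seconds;
              while (time(NULL) < end)
                  ;  /* burn cycles */
          }

          /* What an idle OS does: block, so the kernel can issue the privileged
           * HLT/MWAIT and park the core in a low-power C-state between wakeups. */
          static void halted_idle(int seconds)
          {
              struct timespec nap = { .tv_sec = 0, .tv_nsec = 50000000 }; /* 50 ms */
              time_t end = time(NULL) + seconds;
              while (time(NULL) < end)
                  nanosleep(&nap, NULL);
          }

          int main(void)
          {
              puts("Spinning for 10 s: watch core temperature and fan speed climb.");
              busy_wait(10);
              puts("Sleeping for 10 s: both should fall back down.");
              halted_idle(10);
              return 0;
          }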

            • ronch
            • 7 years ago

            Yeah, I agree with your explanation. What I don’t understand is why mobo makers do that. They could enable Speedstep/CnQ in the BIOS, couldn’t they?

            • A_Pickle
            • 7 years ago

            I had a similar issue with an Athlon 64 X2 4200+ that was in a friend’s computer. I’d boot it, and it would just get hotter, and hotter, and hotter, and hotter… and it had a Xigmatek HDT-S1284 heatsink on it — that is, a heatpipe-direct-touch heatsink with a 120mm case fan and four heatpipes. The very same heatsink keeps my Phenom II X6 1055T at a cool 36° C at idle, and seldom above 45° C at load. But his 4200+ would just get hotter, and hotter, and hotter until the computer would just shut down. It’d get to over 100° C.

            I finally fixed the problem by swapping out his 4200+ with another one that I had laying around, and it’s been working great ever since. First time I’ve ever had to replace a CPU. :/

            • ronch
            • 7 years ago

            I actually had two Athlon 64 X2's that had that exact same problem. Cooling was fine for both. Had the chips RMA'd directly to AMD since the dealer's 1-year warranty was over. It seems to be a problem with some Athlon X2 chips. I have a hunch the TIM between the silicon and the heat spreader inside the package degrades over time, or the silicon itself has gone bad. Perhaps its electrical resistance increased?

            • A_Pickle
            • 7 years ago

            That’s what I suspected, that the TIM must’ve gone bad. I wish I knew how to replace that, because I know plenty of people that could (and would) put a dual-core, 64-bit processor to good use.

          • sschaem
          • 7 years ago

          Yes, in the BIOS itself the CPU often doesn't engage many (if any) power-saving modes.

          So if you guys only see this in the BIOS, it's really not a big deal.

          You can use a tool like CPU-Z to check what's reported in terms of multiplier and voltage at idle under Windows.

          And for the temps? Try this: stay in the BIOS for 5 minutes, then put your finger on the side of the heatsink. Do the same after booting Windows and letting the machine idle for 5 minutes.
          My guess is that it will feel warm/hot under the BIOS and cool under Windows.

          The fan speed should also be dramatically lower, so you might hear the difference, proving that the BIOS is not running the CPU's power-management modes.

          If you guys are techies, check some of the undervolting guides. The FX you have and the 5800K are unlocked and come from the factory with a "one size fits all" voltage setting.
          You can drop your temps at load by 20 to 30% by tuning the voltage to your particular chip.
          This should make no difference at idle, but a big difference at load.
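
          The quick math behind that, assuming the usual dynamic-power relation for CMOS:

          \[
          P_{\text{dyn}} \approx C V^{2} f
          \qquad
          \frac{P_{\text{new}}}{P_{\text{old}}} = \left(\frac{V_{\text{new}}}{V_{\text{old}}}\right)^{2} = (0.90)^{2} \approx 0.81
          \]

          So a 10% undervolt at the same clock trims roughly 19% off dynamic power at load, and leakage falls with voltage too, which is how a 20-30% drop is plausible. At idle the voltage is already near its floor, hence no difference there.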

          I've done this on a Q6600, X6 1100T, C-60, etc… with amazing success.

            • ronch
            • 7 years ago

            Yup, I already undervolted, set fan speed targets lower, and applied better TIM. It's much, much quieter now but still a bit noisy when the air conditioning is off. However, I'd love to know the temps under Windows, but the apps I've tried show the temps jumping all over the place. This happens with Coretemp readings on our Core i3-2120 + Intel DH67BL PC too, btw, albeit the jumps are not as far from each other. These new-fangled processors seem to drive temp monitoring apps crazy. If you have any recommendations, please let me know.

          • anotherengineer
          • 7 years ago

          What motherboard does your friend have?

          What motherboard do you have?

          Maybe I will avoid them.

            • ronch
            • 7 years ago

            Mine is an MSI 990FXA-GD65. His is a Gigabyte. I asked him about the model but didn’t get it. I’d assume it’s one based on the A85 chipset.

      • Anonymous Coward
      • 7 years ago

      Is the fan spinning? Is it actually idle? AMD has had idle power nailed for a long time.

      • swiffer
      • 7 years ago

      If your friend notices that their CPU is actually hot, persuade them to invest in a $30 aftermarket CPU cooler. The stock HSF is for those that don’t care about the inside of a computer.

        • clone
        • 7 years ago

        1st, stock heatsinks & fans are fine for running CPUs at stock or mildly overclocked speeds. If a CPU is unstable and can't live without an aftermarket heatsink, return it, because it's a failed CPU.

        p.s. Decent overclocking heatsinks can be found for $20, and as low as $15 if you look hard enough.

        [url<]http://ncix.com/products/?sku=47671&vpn=RR-H101-22FK-RI&manufacture=COOLERMASTER[/url<]

    • DarkMikaru
    • 7 years ago

    Agreed!! I've been a long-time AMD fan; I only owned a Celeron-based HP laptop several years ago because my girlfriend at the time bought it for me. lol Go AMD go!! And yes, I do give Intel credit where it's due and recommend i5/i7 CPUs to my clients who want / can afford more horsepower.
    But again, for HTPC & standard desktop builds, I find my clients to be just as happy & surprised by an AMD-based machine.

    I really wish they'd release some of the lower-wattage APUs that are being used in laptops as boxed APUs I can get from the Egg, as I'd love to upgrade my server / HTPC with the following board while maintaining silence & energy efficiency.

    [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16813138366[/url<]

    With 8 SATA ports for under 100 bones... I'm sold!

      • tbone8ty
      • 7 years ago

      Use Cool'n'Quiet, or undervolt it since it's unlocked.

    • Bensam123
    • 7 years ago

    Perhaps this is why AMD hasn’t filed for bankruptcy like some people think they should?

    • spigzone
    • 7 years ago

    A cheap and easy refresh at the 32nm node while the 28nm Kaveri and its variants ramp up to fulfill AMD's top priority: next-gen consoles and the Steam Box, followed by other OEMs and finally retail availability.

    • chuckula
    • 7 years ago

    Ah.. we see some context for those 40% improvement numbers. It’s not 40%+ from the A10-5800K that’s sucking down 100 watts, it’s 40% from the low-end Trinities that aren’t really much better (if at all) than Ivy Bridges in the same low-power envelopes.

    Not that this is a bad thing, I think that Richland will finally make “Ultrathins” a pretty nice option. But it won’t be the killer desktop upgrade that the earlier rumors hyped it as being.

      • sschaem
      • 7 years ago

      It's 40% TDP-for-TDP (in 3DMark).
      Richland is said to include the new HD 8000 Radeon graphics core (according to Tom's HW).

      So if you have a 19W Richland, it will deliver a 40% higher 3DMark score than a 19W Trinity.
      The same seems to be true comparing a 100W A10-5800K to an A10-6800K.

      Who posted the A10-6800K 3DMark scores, and where did they come from?

      What AMD is telling the world is that for the same thermal envelope, Richland will boost your 3DMark score by 40% and your CPU score by ~15%.

      That's actually pretty decent considering this is the same 32nm process Llano is built on.

        • willmore
        • 7 years ago

        It may be HD 8000, but it’s not GCN, yet, right? That’s Kabini.

          • faramir
          • 7 years ago

          That’s Kaveri.

          Kabini is Brazos’ successor (not exactly in the same league with Trinity).

        • chuckula
        • 7 years ago

        Did you get that information from this article or from another article? I’m not seeing support for it in TR’s interview.

          • sschaem
          • 7 years ago

          The only thing they released (direct TDP-for-TDP numbers):

          3DMark 11
          A8-4555M -> 780
          A8-5545M -> 1100

          [url<]http://www.amd.com/us/press-releases/Pages/amd_unveils_new_apus.aspx[/url<]

          That's a 40% gain in 3DMark score within the same 19W TDP. But the other numbers show a drop-off in gains with higher-TDP parts, down to 20%. So yeah, 40% on the 100W part seems unlikely.

          But taking a step back, the 19W A8-5545M is neck and neck with the 35W A10-4600M (only a 10% slower CPU). The A8-5545M might make for a decent gaming laptop (in a slim format), considering the "steam box" is based on an A10-4600M ...
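
          Checking the arithmetic on those published scores:

          \[
          \frac{1100}{780} \approx 1.41
          \]

          so the quoted 40% holds up (and is even slightly conservative) for the 19W parts.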

      • flip-mode
      • 7 years ago

      If AMD could get 40% more than the 5800K, that'd be great. I just checked my X4 955 against the 5800K over at AnandTech Bench and they are almost identical twins in terms of CPU performance.

        • A_Pickle
        • 7 years ago

        …that’s actually mildly impressive, to see an APU delivering X4 955 performance…

    • tbone8ty
    • 7 years ago

    I wonder if these firmware and microcode updates will somehow make it to Trinity!

      • dpaus
      • 7 years ago

      Yes, they are being applied to Trinity, and the improvement is so dramatic, they’re giving Trinity a new name: Richland.

        • tbone8ty
        • 7 years ago

          I meant the current Trinity lineup, dummy

          • ermo
          • 7 years ago

          Perhaps you should have endeavoured to make that clearer in your original post, then?

          Example:

          [quote<] I wonder if these firmware and microcode updates can/will be backported to the current trinity-based systems? [/quote<]

    • sschaem
    • 7 years ago

    Did I just enter the Twilight Zone? Positive news about AMD… I'm freaking out. What is going on!

      • Prestige Worldwide
      • 7 years ago

      It's nice to hear something good about AMD for a change. I'm rooting for them, even if I haven't bought a GPU from them since the HD 4870 and a CPU from them since a 1.333GHz Athlon in 2001. Here's hoping they bounce back!

        • yogibbear
        • 7 years ago

        Yeah, it's really weird… I want them to sell good products, and other people to buy them, so my Intel ones don't stagnate and don't cost too much… but I still can't drag myself over to buy one myself. 🙁

          • NeelyCam
          • 7 years ago

          It’s hard to make sacrifices for the common good.. that’s why I appreciate all the AMD fanbois (even though I make fun of them any chance I get)

        • The Dark One
        • 7 years ago

        Same here, except the products were a 1400 Thunderbird and a 3870.

        Oh, AMD, please make competitive products. I [i<]want[/i<] to reward your work with my moneys.

        • Veerappan
        • 7 years ago

        Agreed… I’m really hoping for AMD to drag itself back out of its current slump.

        I’ve got plenty of AMD hardware at home, not because it’s better than Intel/Nvidia, but because I don’t want to see them disappear (and I’m a fan of their OSS GPU driver in Linux).

        Desktop: Phenom II 1055T, Radeon HD 6850 1GB
        NAS: Phenom II 720, HD3200 IGP
        HTPC: Llano A5? in mini-itx.
        Laptop: Core 2 Duo, GF9400M

        • A_Pickle
        • 7 years ago

        [quote<]1.333GHz Athlon in 2001[/quote<]

        ...I... I think I had one of those... it was my first CPU to clock over 1 GHz...

      • DPete27
      • 7 years ago

      I’ll believe it when I see it.

      • Bensam123
      • 7 years ago

      I don’t know… what made you think it always had to be negative?

      • ronch
      • 7 years ago

      Perhaps if we all held our hands together and thought that AMD will start pouring out lots of good news, it will happen.

      The Secret, anyone? LOL

    • dpaus
    • 7 years ago

    It’s becoming clearer and clearer that Bulldozer was rushed out the door far too early.

      • flip-mode
      • 7 years ago

      While still being a year late!

        • dpaus
        • 7 years ago

        Exactly. Engineering management clearly went seriously off the rails somewhere.

          • NeelyCam
          • 7 years ago

          Don’t forget that GloFo 32nm also went off the rails at the same time. This is not entirely AMD’s fault

          • Anonymous Coward
          • 7 years ago

          Everyone is a processor designer these days.

        • willg
        • 7 years ago

        And the rest… The Bulldozer concept originated in the 1990s, and AMD patents showed they were thinking this way in 2000-2001:

        [url<]http://www.chip-architect.com/news/2000_09_27_double_pumped_core.html[/url<]
        [url<]http://www.chip-architect.com/news/2001_10_02_Hammer_microarchitecture.html[/url<]

        The integer clustering concept is even older than that:

        [url<]http://en.wikipedia.org/wiki/Alpha_21264[/url<]

        One day maybe somebody in the know at AMD will shed light on exactly what happened between 2003's K8 and 2011's Bulldozer.

          • willmore
          • 7 years ago

          Good catch. AMD did get a chunk of the Alpha technology when DEC died; the K7's FSB was one piece. IIRC, they shared a chipset with the then-current Alpha. Irongate, was it?

            • willg
            • 7 years ago

            Yes, I think it was. AMD 751 or something?


            [quote<]Recently issued AMD Patent (6,275,905) on the name of Dirk Meyer and Jim Keller gives a possible system solution for an 8 way Sledge Hammer multiprocessing system. Such a system would contain four SledgeHammers each with two cores. Each SledgeHammer has its own memory controller on chip and three Hyper Transport busses. Two of these link to other SledgeHammers forming a rectangle with a SledgeHammer on each corner. The third bus interfaces each processor to its local I/O slots.[/quote<]

            • mesyn191
            • 7 years ago

            Irongate was a good chipset. Had a FIC slot A mobo that used it. Problem was that it also used a VIA southbridge IIRC which caused some stability problems. The board itself though lasted for many years.

      • Alexko
      • 7 years ago

      Too early is a matter of perspective.

      Bulldozer was so late, and AMD was so far behind, they had to put something out. Granted, it didn’t help that much, but it was better than nothing.

        • NeelyCam
        • 7 years ago

        Was it really? Phenoms were doing OK, and they were smaller, cheaper-to-make chips. If anything, AMD could’ve shrunk Phenom IIs to 32nm while getting BD architecture (and GloFo 32nm) figured out.

        New process node and new architecture at the same time… a risky bet that failed

          • willg
          • 7 years ago

          I think by the time they realised Bulldozer wasn’t going to meet their performance or power consumption projections, it was too late to do anything.

            • NeelyCam
            • 7 years ago

            Probably very true

            • spigzone
            • 7 years ago

            Doubtful, considering how it dragged on and on and on. They had plenty of time to do a 32nm Phenom II, which would have been far more timely and competitive with Intel. Dirk, for whatever reason (probably ego-related), decided not to, and then tried to force Bulldozer on the market with a very rapid and complete Phenom II EOL. Which pissed me off. I LIKED Phenom II. A LOT of people liked Phenom II.

            I don’t get the whole Bulldozer > Excavator plus lots of tweaking thing. It’s all on 32nm so why not start with Excavator and tweak that over time? Are the designers that incompetent?

            • willg
            • 7 years ago

            In software development there is a group of methodologies called “agile” where you put together a feature list (backlog) and then execute “sprints” where for each sprint you pick the most important/sensible things from your backlog at that time and build them into the product in that sprint. After that you quickly begin on the next sprint, repeating the process.

            Throughout, the backlog can change to reflect new requirements, and because you don't pick what you're going to do in each sprint until you're about to start it, you remain able to respond to changing demands more rapidly than if you'd planned everything at the start and built the entire thing in one go.

            Maybe AMD thought this could be applied to processors too.

            • Anonymous Coward
            • 7 years ago

            [quote<]Are the designers that incompetent?[/quote<] No. Some advice on that subject. If you find people who appear to be doing something totally stupid, there is a very good chance that you simply do not understand the situation.

            • Bensam123
            • 7 years ago

            Sounds about right… and at that point it's better to profit from it than to shut things down and sever contracts. They still ended up breaking chip contracts, which cost them a hefty amount.

          • OneArmedScissor
          • 7 years ago

          They did shrink K10 first. Remember Llano? No? Because it was a disaster.

          A hypothetical Phenom III was not to be, for whatever reason. Bulldozer and Trinity were rushed because there was no choice.

            • NeelyCam
            • 7 years ago

            Llano wasn't a shrink. Llano was a crippled CPU with a bolted-on GPU. No, I'm talking about a true shrink, with improved clocks and lower power consumption and area.

          • Anonymous Coward
          • 7 years ago

          They did make a 32nm K10, you may recall. It wasn’t exactly more awesome than the 45nm versions. Maybe what they should have done is make a 45nm BD!

            • Mat3
            • 7 years ago

            They already shrank the core (Llano); they could have easily done a Phenom II X8. The Llano APUs, judging by the top desktop parts, didn't look great for clock speed, but how much of that problem was because of the GPU on the same die? We'll never know, but Charlie said they had a lot of problems just from combining the CPU and GPU. An X8, all-CPU part would easily have performed a lot better than BD: the 6-core 1100T often matched or even exceeded the 8-core FX-8150 in many multi-threaded loads and almost all light and single-threaded ones.

            • Anonymous Coward
            • 7 years ago

            An 8-core, low-clocked K10? That would never fly. Bulldozer was at its strongest with 8 active threads, and the 6-core K10s that could challenge it were clocked a lot higher than any 32nm K10 AMD released. Was it 600MHz higher, 45nm vs. 32nm? They might have been able to get some 8-core 32nm K10s to beat the best 6-core 45nm K10s in total throughput, but at the cost of single-threaded performance, so that doesn't really address the weakness of Bulldozer.

            Additionally I am not impressed by blaming the on-die GPU for the poor CPU performance. They sold versions with the GPU disabled, and they were still a pretty poor product. Further, I do not see any connection between CPU and GPU performance except for thermal concerns, which clearly adding another 4 cores could also cause.

      • Bensam123
      • 7 years ago

      Was it early, or was this an executive decision? If we had waited for all these variants of BD to come out, we'd still be working on Phenoms, people would still be giving AMD crap for slow, dated chips, and AMD would have no revenue from chips past the $100 price point.

      I think it was a gamble, and whether or not it worked in their favor, I'm not entirely sure. Their brand name has definitely taken a hit, but they continue forward… If their chips improve, so will their adoption and favor among hardware builders. How much fallout this caused internally at AMD is also up in the air. If all these hardware developers and executives left because of BD, what does that say? If they knew full well that better variants were in the pipe that would improve performance noticeably… I don't think holding off on release would've fixed that either.

        • Sam125
        • 7 years ago

        BD was most definitely late, which is why AMD's CEO stated a few months back that they were going to "streamline" (meaning layoffs) their management structure, as having too many layers of management was causing all kinds of delays and problems.

      • ronch
      • 7 years ago

      It was the Ruby FX comic that held the project back. They couldn’t finalize the story and it had to come out before the actual CPU did.
