Desktop Trinity APU models revealed

With little fanfare, AMD has released some details on its first desktop-bound Trinity APUs. These chips will be based on the same quad-core design as AMD’s mobile Trinity APUs, complete with Piledriver CPU cores and updated Radeon graphics. The desktop parts will run at higher speeds, though. Here are the specifics on the incoming models:

Model Base clock Boost clock IGP ALUs GPU clock TDP
A10-5800K 3.8GHz 4.2GHz Radeon HD 7660D 384 800MHz 100W
A10-5700 3.4GHz 4.0GHz Radeon HD 7660D 384 760MHz 65W
A8-5600K 3.6GHz 3.9GHz Radeon HD 7560D 256 760MHz 100W
A8-5500 3.2GHz 3.7GHz Radeon HD 7560D 256 760MHz 65W

All the CPUs feature quad cores (two modules), 4MB of L2 cache, and support for DDR3 memory speeds up to 1866MHz. The chips slip into AMD’s FM2 socket, which means new motherboards will be required. Overclockers will want to pay attention to the K-series models, which feature unlocked multipliers.

While the A10-5800K’s 4.2GHz Boost clock is impressive, that probably won’t be enough to match the CPU performance of Intel’s dual-core Ivy Bridge processors. We’ve already seen the Core i7-3770K dominate the FX-8150, an eight-core Bulldozer part that also peaks at 4.2GHz. It seems unlikely Trinity’s per-clock performance improvements will be substantial enough to close the gap.

These first desktop Trinity models will appear in all-in-one PCs starting this month, according to AMD.

That’s interesting, because at Computex last week, numerous motherboard makers told us the desktop Trinity parts were delayed. Looks like big-name PC makers have first dibs on Trinity chips, although this silicon might not be the same as the chips destined for retail-boxed APUs, which are purportedly due this fall. We’re told by a trusted source that AMD is respinning the silicon for a collection of desktop Trinity parts to be released in October.

Comments closed
    • maroon1
    • 7 years ago

    I wonder why many people still believe AMD marketing slides to this day? Do you know that real performance is almost always less than what AMD claims?

    Even though the A10-5800K has an extremely high stock clock speed, it is barely faster than the old Llano A8-3850.

    Actually, the A8-3850 beats it in some cases. Here, check the benchmarks from Tom’s Hardware:
    [url<]http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224.html[/url<]

      • NeelyCam
      • 7 years ago

      Wow, you’re right! Bulldozer strikes again with its “IPC goes up – not” architecture..

      At least the power consumption didn’t get worse

    • Silus
    • 7 years ago

    Guess no one complains about the fact that the IGP is named HD 7xxxD, yet is still based on VLIW4 (HD 69xx)…

    • WaltC
    • 7 years ago

    I’ve got this AM3+ motherboard at home that is currently hosting an Athlon II X2 245, just marking time until I throw a stand-alone Piledriver into it…;) I have no interest in an IGP, even if AMD’s far outclass anything else available in the IGP category (from nVidia/Intel.) Even so, I’d imagine there are plenty of people who’d gladly go with one of these AMD IGPs. I mean, look at the people who think the iPad has great graphics…;) Any one of these would blow an iPad out of the water.

    It’s so odd to hear negative commentary about Bulldozer, usually made by benchmark fanatics or other people who aren’t running AMD in the first place, or so it seems. There are hundreds of supposedly verified-owner testimonials on Newegg’s [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007671%2050001028%20600213781&IsNodeId=1&name=Socket%20AM3%2b<]bulldozer product pages[/url<] where enthusiastically positive commentary on these processors outweighs negative commentary by at least 10-1.

    Here's something I'm not clear on, though... Won't Piledriver be eschewing an L3 cache just like Zambezi? (Awkward way of saying there's no L3.) So why is Newegg advertising an 8MB L3 for these processors?

      • StuG
      • 7 years ago

      The average person will be OK with a Bulldozer CPU if they don’t know or don’t care any better. However, this is a site of enthusiasts who very much do care. I had a Phenom II X4 at 3.8GHz, and seeing as Bulldozer could only tie it or barely outperform it, it was a failure. I had programs/games that required more horsepower than that, and that need was instantly satisfied when I upgraded to the 2600K.

        • A_Pickle
        • 7 years ago

        Honestly, that’s where I was. The Phenom IIs weren’t the [i<]best[/i<] performers, but for the price, they were performance-competitive and didn't have unreasonable power consumption. As a result, I was happy to throw down for an X6 1055T and an 890GX+SB850 mobo. I'm happy to help out the little guy; competition is good.

        Bulldozer, on the other hand, performed worse in so many crucial applications (games and content creation) for me while [i<]increasing[/i<] power consumption that it just isn't worth it. For anyone.

        I have a few friends who are less computer savvy and just wanted a sweet rig on which to play games; they went to iBuyPower and got themselves some really pricey FX-8150 systems. With DDR3-1600 RAM to boot. It kills me to see that (considering they didn't even get the higher-speed memory that really helps Bulldozer out), at the end of the day, they could've gotten an Intel-based system that wouldn't have cost one red cent more AND would've run circles around Bulldozer (even with the faster memory). And it would've done it with lower power consumption, to boot.

        I love AMD, I want to see them succeed, and they've done well in some areas (APUs and GPUs)... but even on their shoestring of a budget, I think they could've made some wiser choices to improve x86 performance. They really had no business developing some crazy new architecture unless they were pretty certain it would deliver... and it looks like they weren't.

      • FuturePastNow
      • 7 years ago

      Zambezi has L3 cache. So will Vishera, the four-module, no-IGP desktop Piledriver processor.

      Trinity doesn’t have L3, and neither did Llano.

    • loophole
    • 7 years ago

    I thought desktop Trinity was supposed to get official DDR3-2133 support?
    Towards the end of 2011 we got this:
    [url<]http://i1-news.softpedia-static.com/images/news2/AMD-Trinity-APUs-Get-Detailed-Include-DDR3-2133-Support-3.jpg[/url<]

    Seems that APUs need all the bandwidth they can get. Wonder what happened to this?

      • Rand
      • 7 years ago

      THG says it does support DDR3 2133 with one module per channel. Two per channel with DDR3 1866 and below.
      The benchmarks definitely show how bandwidth constrained it is.

        • loophole
        • 7 years ago

        Ah cool.

        I wonder if this means we’ll see official DDR3-2133 support on Piledriver based AM3+ CPUs (Vishera).

    • HisDivineOrder
    • 7 years ago

    [quote<]While the A10-5800K's 4.2GHz Boost clock is impressive, that probably won't be enough to match the CPU performance of Intel's dual-core Ivy Bridge processors.[/quote<]

    Yeah. Like the Pentium 4 was impressive, right? Like Willamette was impressive? I mean, clockspeeds are just so impressive as numbers that don’t really equate to actual performance improvements compared to slower-clocked, better CPUs.

    I still remember when AMD was having to create farce numbers to give consumers a way to compare the performance of their slower chips to the faster-clocked, lower-performing Intel equivalents. Back then, impressive meant more performance, not just higher numbers displayed for your clockspeed. Back then, AMD gave a crap.

    Alas, those days have passed. RIP, AMD. You were fun while you lasted, but the inmates are running the asylum and it’s probably time to just hang it up and go home. And game on that Intel CPU you made us all buy to have great gaming performance without the extra heater you include for free, built in with every new Bulldozer CPU.

    • ronch
    • 7 years ago

    I hope sites will compare the desktop Trinity to Bulldozer (FX-4xxx) when it comes out, even if it wouldn’t be an apples-to-apples comparison. Clock the FX-4xxx at the same speed as the desktop Trinity in the test and see if IPC has really gone up.

      • faramir
      • 7 years ago

      Tom’s Hardware disabled half of an FX-8150, clocked it the same as their A10-5800K Trinity sample, and made a direct comparison. Trinity was significantly faster at the same clock and with the same number of cores, so yeah, IPC really has gone up.

        • NeelyCam
        • 7 years ago

        Link…? I only see the article where they compare Trinity to Llano..

          • Rand
          • 7 years ago

          [url=http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224-2.htm<]Linky[/url<]

          Only a handful of benchmarks, but they do show some pretty impressive IPC gains over Bulldozer. The starting point was sufficiently low that even significant gains still leave it well behind the older Llano/Athlon II/Phenom II in terms of IPC, but at least it's getting to the point where it's not completely laughable, and the sheer clockspeed allows a small performance bump compared to Llano.

          A similarly large step forward with Steamroller and it might come within range of merely looking bad compared to the Core i3, at least as long as Intel doesn't make any significant gains of their own... Yeah, standards have really dropped when one has to hope Intel stands (mostly) still and AMD has another significant IPC bump just to get to the point where performance is merely bad. At least they can hang their hat on the GPU, though Haswell looks like it will be getting dramatic improvements there.

            • NeelyCam
            • 7 years ago

            Thanks! (the linky was a tad broken, though – htm instead of html)

    • Arclight
    • 7 years ago

    More crippled quads from AMD….yay!

      • NeelyCam
      • 7 years ago

      Don’t call it a quad – it cheapens the meaning of the word

    • Deanjo
    • 7 years ago

    Benches @ Tom’s

    [url<]http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224.html#xtor=RSS-182[/url<]

      • ET3D
      • 7 years ago

      Thanks. I used to not like Tom’s that much, and for some years didn’t visit it, but I regularly see good articles there these days.

        • Chrispy_
        • 7 years ago

        Tom’s fell apart when they sold out, but it really hit rock-bottom when Bestofmedia took over.

        Perhaps site hits plummeted and they hired someone who actually cared.

      • vargis14
      • 7 years ago

      Very, very disappointing 🙁 No x86 power… just plain pitiful.

      I expected a bit more improvement over Llano, but Trinity’s borked x86 cores are worse than the Stars cores in Llano… and the IGP is only a very minor improvement over Llano.
      It can’t compare with a Sandy Bridge-based $40-$60 Celeron or Pentium mated with a $50-$60 6570. Even then, combined you’re at around 100 watts of TDP for CPU and discrete GPU.

      It hurts to say, but I will repeat it: I will not recommend an AMD build for any desktop anymore. It’s a shame too :(

        • Jigar
        • 7 years ago

        It’s too early to say that, wait, you just can’t trust Tom’s blah blah…

          • vargis14
          • 7 years ago

          I might be jumping the gun, but the x86 cores are just bullcrap: slightly improved half-Bulldozer cores running at a much higher speed than the Llano cores. I want to see an overclocked Llano at the same speed as the Trinitys; then you will see, I promise. :(

          Even so, once you pay for a K-model Trinity APU, pay for a decent motherboard to overclock, and buy more expensive DDR3-1866 memory, you still won’t get the memory bandwidth of a Celeron with DDR3-1066. It’s just not worth it on the desktop platform. Intel CPUs are inexpensive enough for anyone, to be honest, with plenty of upgrade options for later if you can only afford a $40 Celeron.

          I hope TR’s review will include a similarly priced Sandy Bridge dual-core platform with a cheap discrete 6570 or something… I am sure they will be more thorough and prove it. They always are.

            • Arag0n
            • 7 years ago

            I don’t get your complaints. Llano, as far as I remember, uses the same 32nm process, and both chips have the same TDP. Still, you get a 30% GPU improvement and, depending on the area, a 10% to 100% CPU improvement. So for me that’s a huge advance just from an architectural point of view. Just think that Trinity will use LESS energy than the FX-4170 (125W), be manufactured on the same 32nm, and also provide a good-enough GPU for most three-year-old games and fair performance for current ones. Really, I find it hard to complain about the new Trinity chips.

            Bulldozer was a disaster, but Trinity seems to be a pretty awesome chip. Most people talk about the A10-5800K, but I have my eye on the A10-5700: almost as good, and only 65W. It will be much faster than the current FX-4100 and consume 30W less, with the GPU included. Same 32nm again. I think you don’t really realize how much AMD improved over Bulldozer in a year with Trinity.

            I just hope they improve as much with the next version so Intel can have some decent competition in the near future.

            • vargis14
            • 7 years ago

            I guess we will have to wait for TR’s review.

            BTW, most people who want to build a gaming machine now are not doing it for three-year-old games. At least the majority are not.

            • clone
            • 7 years ago

            People building a gaming machine don’t use integrated graphics.

            Trinity is a decent upgrade overall… actually a really nice one, despite still being weak compared to Intel on the CPU side.

            As a do-it-all, master-of-none part, Trinity is the compelling option, which isn’t a bad thing.

            I know that when visiting grandma and passing time on her bare-bones web-surfing system, I’d prefer a Trinity setup over anything Intel just for the gaming capability, because I’d happily load it up with older games and even some newer ones.

    • ish718
    • 7 years ago

    [quote<]While the A10-5800K's 4.2GHz Boost clock is impressive, that probably won't be enough to match the CPU performance of Intel's dual-core Ivy Bridge processors.[/quote<]

    Unless the application is micro-optimized for Bulldozer's architecture, which is too much to ask for.

      • Deanjo
      • 7 years ago

      Even then there is too large of a performance gap.

    • Anarchist
    • 7 years ago

    It will be nice to finally see some mini-ITX mobos for AMD CPUs…

      • barleyguy
      • 7 years ago

      MSI demoed an FM2 socket Mini-ITX motherboard recently. These chips should be really nice for Mini-ITX cases that don’t have room for a discrete video card.

      I’m wondering if the Commodore 64 barebones will handle a 65-watt APU. Could be a fun retro looking system, with a built in keyboard.

    • yokem55
    • 7 years ago

    With regard to the article photo, given that character’s…erm…fate, I don’t know if AMD will really appreciate the association…

      • Prion
      • 7 years ago

      Was the photo removed or am I blind?

      Edit – Okay it was neither, I just wasn’t able to see it since I came here from the news page.

        • FuturePastNow
        • 7 years ago

        The frontpage images rotate.

      • entropy13
      • 7 years ago

      [url<]http://xkcd.com/566/[/url<]

      • CuttinHobo
      • 7 years ago

      You mean getting shagged by Keanu Reeves?

      • NeelyCam
      • 7 years ago

      The first Matrix still holds a place in my heart as one of the biggest jaw->floor movies of all time

      • pikaporeon
      • 7 years ago

      Yeah, it’s a shame they never made any sequels to the Matrix

        • willmore
        • 7 years ago

        I +1’ed you just for the Sgt. Frog reference.

    • brucethemoose
    • 7 years ago

    I hope these chips have the wonderful OCing headroom Llano had… but that’s not likely. Considering what BD overclocked to, 4.2GHz is already pretty high, and it doesn’t sound like 800MHz will leave much headroom on the GPU either.

    • Arag0n
    • 7 years ago

    The A10-5700 looks like a great deal to me: 5% less GPU clock and 10% less CPU clock (5% in boost mode), all in all, with 65% of the TDP. You won’t notice the 5-10% speed difference, but you will definitely notice the 35% lower power consumption, and if you ever miss the speed, I’m sure 10% can easily be gained by overclocking, even with a non-unlocked multiplier.
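
    Back-of-the-envelope from the spec table above, in case anyone wants to check the percentages (this compares rated clocks and TDP only, not measured performance):

    # Rough comparison of A10-5700 vs. A10-5800K using the spec table's numbers.
    a5800k = {"cpu_base": 3.8, "cpu_boost": 4.2, "gpu_mhz": 800, "tdp_w": 100}
    a5700  = {"cpu_base": 3.4, "cpu_boost": 4.0, "gpu_mhz": 760, "tdp_w": 65}

    for key in a5800k:
        drop = (1 - a5700[key] / a5800k[key]) * 100
        print(f"{key}: {drop:.0f}% lower on the A10-5700")
    # cpu_base ~11%, cpu_boost ~5%, gpu_mhz 5%, tdp_w 35% lower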

    • chuckula
    • 7 years ago

    This is sad… they’ll only be able to put her photo up one more time once the desktop chips are finally reviewed 🙁

      • OneArmedScissor
      • 7 years ago

      Well, we [i<]still[/i<] haven't seen any laptops...

    • crabjokeman
    • 7 years ago

    Wait, so AMD and Intel marketers agreed to use the ‘K’ suffix to denote an unlocked multiplier? Where’s the consumer confusion in that? Some heads should roll.

    At least Nvidia is moving in the right direction…

      • Duck
      • 7 years ago

        It’s to mark which one is for the kool kids to get.

        • forumics
        • 7 years ago

        holy cows! i’ve got a 3770k, i’m going to school tomorrow to tell everybody about it!

          • NeelyCam
          • 7 years ago

          You’ll be the most popular guy/girl in school – you get to date cheerleaders and football players

            • BobbinThreadbare
            • 7 years ago

            *kheerleaders and *lakross players

    • Myrmecophagavir
    • 7 years ago

    Is it definitely not a typo that the 5700 runs its GPU at 760 MHz? Blurs the line between the A8 and A10 models.

      • Goty
      • 7 years ago

      Well, you’ve got the shader count to differentiate them, then.

    • dpaus
    • 7 years ago

    <sigh> I would’ve liked to see what they could do with a 125W TDP, but, OK…

    [quote<]probably won't be enough to match the CPU performance of Intel's dual-core Ivy Bridge processors[/quote<]

    Probably not; although some process improvements were incorporated into 'Piledriver' cores, I heard somewhere the other day that AMD is targeting the 'Steamroller' iteration (due early 2013, I think) for more significant performance enhancements.

      • grantmeaname
      • 7 years ago

      Power increases with the cube of the frequency. That means a 25% increase in the power budget only buys a 7.7% frequency increase, which takes the 4.2GHz boost clock to about 4.5GHz. 300MHz isn’t really going to do a ton of good.
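
      Quick sanity check on that arithmetic, assuming the cube relationship holds:

      # If P ~ f^3, a 100W -> 125W TDP bump only buys the cube root of 1.25 in clock.
      tdp_ratio = 125 / 100
      freq_gain = tdp_ratio ** (1 / 3)                  # ~1.077, i.e. about 7.7%
      print(f"frequency gain: {(freq_gain - 1) * 100:.1f}%")
      print(f"4.2 GHz boost becomes roughly {4.2 * freq_gain:.2f} GHz")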

        • codedivine
        • 7 years ago

        Power increases as the cube of frequency? Not really. Power increases linearly with frequency and with the square of voltage. So if you can clock higher without increasing the voltage (or with very little increase), power is only going to increase mostly linearly.

          • OneArmedScissor
          • 7 years ago

          That may be true in a perfect world of a lone CPU core, but in the present reality of multiple core combinations, boost modes, cache, memory controllers, and system busses all being linked, anything at or above 4 GHz as the base clock will send power use skyward.

          Boosting is allowed to temporarily exceed TDP. It’s an illusion that higher clock speeds are now possible with tamer TDPs than the ceiling the Pentium 4 hit its head on.

          This remains true of Bulldozer/Piledriver:

          Bulldozer FX-4120 quad-core – 95w TDP – 3.9 GHz base, 4.1 GHz turbo

          Trinity A10-5800K quad-core – 100w TDP – 3.8 GHz base, 4.2 GHz turbo

          Bulldozer FX-4170 quad-core – 125w TDP – 4.2 GHz base, 4.3 GHz turbo

            • NeelyCam
            • 7 years ago

            [quote<]anything at or above 4 GHz as the base clock will send power use skyward.[/quote<]

            I can't believe it... you [i<]still[/i<] think this is true?!?!? Look at Ivy Bridge with reasonable overclocking to 4.2-4.3GHz (without a voltage bump).

          • grantmeaname
          • 7 years ago

          That would only be true if you made your binning less conservative, because the chips currently have the lowest voltage allowed by the binning ratios and speed. You could do that without changing the frequency, so it’s not really germane to the discussion.

          • jensend
          • 7 years ago

          That only makes sense if frequency and voltage are independent variables. In this context they aren’t- the voltage is always chosen to be the lowest that will allow the processor to be stable at a given frequency.

          Saying the required voltage increases linearly with frequency isn’t exactly correct, but it’s a decent approximation across a fairly wide range of clock speeds. (The first thing I was able to come up with is page 5 of [url=http://empathicsystems.org/Papers/ispass09.pdf<]this paper[/url<]; a linear fit is quite good here, with rather small residuals.) So saying power ~ frequency^3 is actually rather accurate.
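
          To make the shape concrete, here's a toy calculation with an invented linear V(f) fit (the coefficients below are made up for illustration, not taken from the paper):

          # Toy numbers: dynamic power ~ f * V^2, with a hypothetical linear V(f).
          def vmin(f_ghz):
              return 0.4 + 0.2 * f_ghz                  # volts; invented fit

          for f_ghz in (2.0, 3.0, 4.0):
              p_rel = f_ghz * vmin(f_ghz) ** 2          # relative power, arbitrary units
              print(f"{f_ghz:.1f} GHz -> {vmin(f_ghz):.2f} V, relative power {p_rel:.2f}")
          # Doubling 2 GHz -> 4 GHz here gives ~4.5x the power, i.e. much closer to
          # cubic than to linear scaling; the exact exponent depends on the fit's intercept.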

            • BobbinThreadbare
            • 7 years ago

            I expected voltage to increase on an exponential scale as frequency increased.

            • jensend
            • 7 years ago

            I don’t have an EE background, but on another forum I read one very rough explanation of the reason why voltage has to be increased with frequency. It sounded somewhat plausible given what physics I know:

            One reason supplied voltage has to be increased is because you can’t instantaneously increase/decrease voltage of a particular part of the chip (e.g. on one side of a junction); instead, the difference between the present voltage and the higher/lower level you’re switching to decays exponentially. For the behavior of the junction to switch, the difference from the previous voltage has to exceed some level. So you have to drive it harder if you want the switch to happen faster. At extreme frequencies, other factors start to come into play more.

            That’s all they said, but a little math shows that if they’re right and if we’re in a region on the frequency-voltage curve where that’s the dominant link between the two, then we expect the relationship to look linear:

            Vdd = voltage supplied to the chip
            V(t) = Vdd*exp(-kt), voltage as a function of time; we’re switching from high to low at t = 0
            Vs = required voltage difference for the behavior of the junction to switch
            c/f, where f is frequency, is the time we need the switch to happen in; i.e. V(0)-V(c/f) >= Vs.
            So Vdd >= Vs/(1-exp(-kc/f)).
            1/(1-exp(-1/x)) is basically x+1/2 for reasonably large x.
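
            Numerically, for anyone who wants to check that last approximation (quick Python scratchpad, nothing rigorous):

            import math

            # 1/(1 - exp(-1/x)) vs. the x + 1/2 approximation
            for x in (2, 5, 10, 50):
                exact = 1 / (1 - math.exp(-1 / x))
                print(f"x={x}: exact={exact:.4f}, approx={x + 0.5}")
            # The error shrinks roughly like 1/(12x), so Vdd >= Vs/(1-exp(-kc/f))
            # behaves like Vs*(f/(kc) + 1/2): roughly linear in f in this regime.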

            • bwcbiz
            • 7 years ago

            I DO have an EE background (although not specialized in Chip design) and the basic power formula is P=IV^2. This is where the power proportional to the square of the voltage comes in. For CMOS IC technology current (I) only flows when a transistor is switching states, so the circuit uses zero power when voltages are in steady state (although with shrinking circuit features at smaller process nodes, leakage current when the circuit is “stable” becomes more of an issue). This means that current only flows on the clock edges and for any state switches triggered on that clock pulse. This is where the Power proportional to frequency comes in. The higher the frequency the more clock pulses and state changes there are in a given amount of time, hence more current.

            The relationship between frequency and voltage occurs because the clock pulses and state changes in the CPU are not performed by perfect square waves. They have noise and delay at the pulse edges. Increasing voltage basically allows circuitry to reach the threshold voltage for the digital 1 or 0 more quickly by overdriving the voltage past the required value to set the circuit’s digital state. So the increase of voltage with frequency is mainly relevant to overclocking existing circuits and less relevant to chip design. When overclocking, the clock waveform is optimized to achieve the manufacturer-specified clock frequencies, so in order to achieve higher frequencies, you have to get lucky with a chip where the wave forms are nearly perfect, or you have to overdrive the chip by increasing the voltage.

            For a manufacturer starting from scratch, chips can be designed (up to a point) to optimize the squareness of the waveform at a given frequency so that voltage doesn’t necessarily have to increase linearly with frequency. Much of the increase in voltage to achieve higher frequencies is also mitigated by the fact that voltages are typically lowered as process nodes and circuit features decrease in size.

            • NeelyCam
            • 7 years ago

            You’re absolutely correct. One thing, though: a circuit could be designed to have very square-like waveforms by reducing the ratio of load capacitance to drive strength through driver size increases (chip designers call that ‘fanout’), but this essentially translates into a need for more buffers everywhere, increasing power consumption significantly.

            The goal of optimization is to use just the ‘right’ amount of buffering everywhere to remove weak links, allowing all of the chip to run at the target frequency without having unnecessary power-hogging buffers here and there.

            • jensend
            • 7 years ago

            [quote<]the basic power formula is P=IV^2.[/quote<]

            I may not have majored in EE but I did take a [i<]lot[/i<] more physics than one needs to know that much 🙂

            [quote<]The relationship between frequency and voltage occurs because the clock pulses and state changes in the CPU are not performed by perfect square waves. They have noise and delay at the pulse edges. Increasing voltage basically allows circuitry to reach the threshold voltage for the digital 1 or 0 more quickly by overdriving the voltage past the required value to set the circuit's digital state.[/quote<]

            Clock signal rise and fall times should fit the basic idea I just described, no? And even with a very simple circuit and perfect square wave input, a quick look at the differential equations will show exponential decay-type responses.

            • seeker010
            • 7 years ago

            [quote<]I DO have an EE background (although not specialized in Chip design) and the basic power formula is P=IV^2[/quote<] wait, seriously? I must have really fallen asleep during physics class.

            • jensend
            • 7 years ago

            You didn’t fall asleep, though many of the rest of us may have been a little asleep at the wheel when we read bwcbiz’s post. (Not paying attention -> reading what we thought he meant rather than reading what he actually said.)

            Of course it’s P=IV. As he says, if you neglect leakage, in CMOS charge only flows during a switch, so the current, i.e. charge transfer per time, is frequency*charge transfer per switch. The other factor of V comes in because the amount of charge that flows is C*V. Not every gate switches every cycle, so people throw in a factor alpha for the proportion that are switching. So the usual CMOS formula is P=alpha*f*C*V^2.
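
            Plugging in some purely illustrative numbers (placeholders, not any particular chip's values):

            # Dynamic CMOS power: P = alpha * f * C * V^2
            alpha = 0.1      # fraction of gates switching per cycle (made up)
            f = 3.8e9        # clock frequency in Hz
            C = 100e-9       # total switched capacitance in farads (made up)
            V = 1.2          # supply voltage in volts

            P = alpha * f * C * V ** 2
            print(f"dynamic power ~ {P:.0f} W")   # ~55 W with these placeholder numbers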

            • jensend
            • 7 years ago

            Why the downvote, and why the upvotes for codedivine’s ridiculous claim? He says

            [quote<]So if you can clock higher without increasing voltage (or with very little increase), power is only going to increase mostly linearly.[/quote<]

            but anybody with half a brain knows that if the manufacturer could increase clocks without increasing voltage [b<]they're doing it wrong[/b<]. If they could increase clocks without increasing voltage and have things stay stable, that means they had been leaving a bunch of performance or power savings on the table. They would never do that. They're not stupid. Clocks/vdd will always be at a pareto optimal point on the power/performance curve for that particular chip.

            • Duck
            • 7 years ago

            Because you are wrong and codedivine is right.

            Go look at the datasheet for an Intel CPU. There’s a whole bunch of specifications in there. It’s not just a matter of increasing the MHz and checking if it’s stable. You have to revalidate a whole bunch of stuff, and they would never risk shipping a CPU that doesn’t meet ALL the specifications.

            • jensend
            • 7 years ago

            Bunk. Of course they have to validate things and they’ll be careful about it rather than pushing the very edge of stability, but that doesn’t change the fact that there’s a basic tradeoff here. Higher frequencies require higher voltages. If you reduce the frequency you can generally reduce voltages and save power.

            Beyond the fact that every single chip’s product line validates that claim (go look at the voltages of the differently-clocked models), that’s the whole point of why anybody bothered putting dynamic voltage scaling into CPUs in the first place.

            codedivine is ignoring that tradeoff, acting as though they can vastly increase frequencies without touching voltages.

            • Duck
            • 7 years ago

            Nope.

            [quote<]So if you can clock higher without increasing voltage[/quote<] The key word there is "if".

            • jensend
            • 7 years ago

            That “if” is basically never fulfilled to any significant extent.

            codedivine’s post was criticizing grantmeaname’s [i<]fundamentally accurate assessment of the normal situation[/i<] by discussing [i<]an empty hypothetical which never applies in real life[/i<]. It's not much better than saying "You're wrong, because if we all had our own pet unicorns and ate frozen rainbows for breakfast, then processor power would scale linearly with frequency."

            • NeelyCam
            • 7 years ago

            If you don’t have an EE background, it would serve you well not to be so arrogant about this.

            Codedivine is absolutely right, but the ‘if’ is a key element there. It also works in reverse: if you can’t scale down the voltage when you lower the frequency, you get linear power scaling.

            You and grantmeaname are partially right that in voltage-minimized conditions the effective frequency-power relationship is not linear, as you need more voltage to get more frequency. The paper you linked clearly shows that, but the max frequency isn’t really a linear function of voltage: in Figure 3 it looks closer to a quadratic function (or, voltage needs to increase with the square root of the frequency). That makes the power-frequency relationship quadratic instead of cubic, and I wouldn’t call grantmeaname’s assessment [i<]"fundamentally accurate"[/i<].

            There is a flip side to all this; if you look at Ivy Bridge overclocking results, the chip hits a 'wall' at around 4.5GHz: the power consumption goes through the roof because the voltage needs to be hiked significantly for a small increase in frequency (it goes beyond even a linear voltage-frequency relationship). It's more complicated than just saying "it's a quadratic/cubic" relationship.

            • jensend
            • 7 years ago

            I already mentioned in my let’s-look-at-the-math post that the linear relationship would only hold for a portion of the frequency-voltage curve and would of course break down at the extreme high and low ends, where other physical effects dominate. And of course the vast complexities involved in microprocessors, the noisy thermal environments they create, etc mean no simple formula will be an exact fit.

            But that paper shows a linear fit holding very well across the entire range of frequencies Intel ever used for the Pentium M, from the highest clocks to (that model’s) minimum SpeedStep. I don’t have the data handy for other processors, but the other info I’ve seen, whether from manufacturers’ own voltage settings or overclockers’ tests, gives the impression that the points at which this relationship breaks down are roughly the points beyond which you would never go. The entire useful range of clocks basically seems to fit this pattern.

            [quote<]in Figure 3 it looks closer to a quadratic function (or, voltage needs to increase with the square root of the frequency)[/quote<]

            Bah. Have you actually looked at best-fit polynomials and residuals, or did you just eyeball the figures? The linear residuals are minuscule, and if you fit frequency as a function of voltage and allow a quadratic term, not only does the fit not improve much, the leading coefficient of the best-fit quadratic is *negative*, which doesn't mesh with your claim at all.

            • NeelyCam
            • 7 years ago

            [quote<]I already mentioned in my let's-look-at-the-math post that the linear relationship would only hold for a portion of the frequency-voltage curve and would of course break down at the extreme high and low ends[/quote<]

            The linear relationship breaks at the extremes, but the quadratic relationship holds in the middle portion of the range.

            [quote<]But that paper shows a linear fit holding very well across the entire range of frequencies[/quote<]

            No it doesn't. It breaks at the extremes because of those "extreme" effects at the low/high end.

            [quote<]I don't have the data handy for other processors, but the other info I've seen, whether from manufacturers' own voltage settings or overclockers' tests, gives the impression that the points at which this relationship breaks down are roughly the points beyond which you would never go.[/quote<]

            Show me the data or it's BS.

            [quote<][quote<]in Figure 3 it looks closer to a quadratic function (or, voltage needs to increase with the square root of the frequency)[/quote<] Bah. Have you actually looked at best-fit polynomials and residuals, or did you just eyeball the figures?[/quote<]

            "Bah"? You're right, I just eyeballed the results, but your a-hole comment made me look at the fit more closely, and apparently my eyeball was better than yours. Why don't [i<]you[/i<] look at the "fit" a bit more carefully before posting like an arrogant ass, and try to exclude the extremes this time.

            • jensend
            • 7 years ago

            By excluding the “extremes” you’re saying “hey, if I throw out all the data except for three points I like (those at 1330, 1460, and 1600, I’d guess) a quadratic polynomial gives me a great fit!”

            Well, for any arbitrary three points there’s some quadratic polynomial that gives an exact fit. That tells us nothing about the data. By postulating a complicated relationship based on very little data, you’re overfitting, and throwing out more than half the data just because it doesn’t fit your model is pretty biased.

            [quote<]and apparently my eyeball was better than yours.[/quote<]

            I wasn't relying on just my eyeball, I looked at the least squares best fit polynomials and the residuals. The linear fit has a [url=http://en.wikipedia.org/wiki/Coefficient_of_determination<]R^2[/url<] of more than .991. Another way of saying this is that over 99.1% of the variance in the voltages is explained by a linear model.

            [quote<]BTW, if you have a link to some additional data that disputes the result of that paper you linked earlier, please let me know. I'd be interested to see that, and if it shows something different, I'd happily say I was wrong and you were right.[/quote<]

            Well, since that paper was about the Pentium M, [url=http://www.thinkwiki.org/wiki/Pentium_M_undervolting_and_underclocking#Tested_frequencies.2Fvoltages<]here's[/url<] a list of best stable frequency-voltage combinations achieved by people undervolting their Pentium Ms. For each of those five laptops, the R^2 of a linear fit is over 96.5%. None of those laptops exhibit the little bump (noise, not some square-root relationship) seen in the middle range in the paper.
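
            If anyone wants to run that kind of fit on their own numbers, it's just a least-squares polyfit plus R^2. Quick sketch; the (frequency, voltage) pairs below are placeholders, not the paper's or the thinkwiki measurements:

            import numpy as np

            # Placeholder (GHz, V) pairs; substitute real undervolting measurements.
            f = np.array([0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
            v = np.array([0.96, 1.02, 1.09, 1.14, 1.21, 1.28])

            slope, intercept = np.polyfit(f, v, 1)      # linear fit: v ~ slope*f + intercept
            resid = v - (slope * f + intercept)
            r2 = 1 - resid.var() / v.var()              # coefficient of determination
            print(f"V(f) ~ {slope:.3f}*f + {intercept:.3f}, R^2 = {r2:.4f}")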

            • jensend
            • 7 years ago

            BTW, even if the data were quite different from what they really are, if instead of backing grantmeaname up they did show voltage increasing as the square root of the frequency or whatever, his claim of a linear relationship would have been a better first approximation than codedivine’s assumption that they’re independent variables. I can’t fathom why you are arguing for that kind of silliness, unless it’s just more arguing for argument’s sake.

            • NeelyCam
            • 7 years ago

            Ah – I see you [i<]did[/i<] take another look at the data [b<]you[/b<] eyeballed, and realized you were wrong. grantmeaname's 'approximation' was cubic, but the data [i<]you[/i<] linked to support his/her (and your) view showed a quadratic relation. Codedivine limited his/her comment to a specific case (for which it's [b<]very[/b<] accurate) and argued that cubic wasn't really that accurate. I think you're the one being silly here continuing your argument. I think you should just admit that you were wrong and let it be. The fun part is that if you hadn't been such an arrogant ass about it, I wouldn't have bothered to say anything, but you just had to be such a jerk so I had to say something.

            • jensend
            • 7 years ago

            As far as the “arrogance” and “a-hole” and “arrogant ass” and “jerk” personal attacks go, here’s the pattern:

            1. Somebody makes a thoughtful post.
            2. Somebody else attacks their post, tossing in a few common biases, a half-truth, or a wrong idea that has some “truthiness” appeal, and the groupthink rewards them and punishes the thoughtful post.
            3. I defend the first poster, showing how the facts back him up and the attacks weren’t justified. I state strongly that the second poster was wrong but unlike some people I do not resort to personal attacks.
            4. People become very angry with me for upsetting their biases. Somehow my trying to bring facts and logic into the discussion means I’m an “arrogant ass” – it’s just too darn uppity. If I could only be satisfied to make crap up and shout opinions rather than try to give reasons, I’d be a paragon saint of humility like you.

            Good day.

            • NeelyCam
            • 7 years ago

            [quote<]BTW, even if the data were quite different from what they really are[/quote<] BTW, if you have a link to some additional data that disputes the result of that paper you linked earlier, please let me know. I'd be interested to see that, and if it shows something different, I'd happily say I was wrong and you were right

            • Duck
            • 7 years ago

            [quote<]That "if" is basically never fulfilled to any significant extent.[/quote<] Probably virtually everyone that buys a K series CPU will try it to some extent. Which is a significant amount of people. To say it never happens is absurd.

            • jensend
            • 7 years ago

            If you have any success overclocking your chip without increasing voltages, either you’re eating into your safety margins or you have a chip which did slightly better than the company managed to predict with their binning process.

            Neither of those circumstances is something that is relevant to the company’s frequency-voltage tradeoff- they can’t afford to eat into their safety margin, and they can’t test every chip.

            Besides, the gains you get from overclocking without increasing voltage are almost always pretty minimal, especially on chips from the last decade.

      • MadManOriginal
      • 7 years ago

      AMD is great at competing in the future.

        • BobbinThreadbare
        • 7 years ago

        They have some cool ideas, but don’t have the resources to execute them.

        It’s a shame really.

      • aestil
      • 7 years ago

      There are some use cases where I’d rather have the A10-5800 than a similarly priced Intel chip. My work computer is an Athlon II dual-core @ 3.0GHz with HD 4200 graphics, and it runs my 1920×1080 desktop for all normal business tasks without any issues.

      I’m becoming somewhat disillusioned with the constantly better chips that don’t seem to change anything.

      My home computer is an i5-3570K @ 4.5GHz, and to be honest there isn’t a noticeable difference in day-to-day use from the i5-750 @ 3.6GHz I was upgrading from.

        • NeelyCam
        • 7 years ago

        [quote<]My home computer is an i5-3570K @ 4.5GHz, and to be honest there isn't a noticeable difference in day-to-day use from the i5-750 @ 3.6GHz I was upgrading from.[/quote<]

        True statement. I think the biggest gains are had in power efficiency and, consequently, improved battery life in mobile systems. In mobile systems performance improvements can also be significant. My CULV Core 2 Duo is showing its 'age': it struggles when scanning for viruses on a regular basis.

      • dpaus
      • 7 years ago

      Interesting, every one of you assumed I wanted more CPU power with the extra 25W; nobody even considered using it for a beefier GPU. AMD has shown that they have the tech needed to make a powerful GPU that can scale back its power use based on the graphics load, and I’d like to see what they could put into an ‘A-something’ package using it.

      Put another way, as many here have commented, the CPU power is ‘good enough’ for almost all purposes, so I’d like to see them capitalize on their strengths and put the extra effort into a more-powerful, dynamically scalable GPU that would be ‘good enough’ to run all games on a 1080p screen (with the really high-end games set to lower detail levels as required, but still looking good and giving an enjoyable gaming experience).

      Even at 125W TDP, [i<]that[/i<] would be a chip they could compete against Ivy Bridge with.

        • FuturePastNow
        • 7 years ago

        The “K” APUs are unlocked, so you can overclock the GPU if you would like. Some people got the A8-3870K’s GPU up to 900+MHz.

        But such an increase is worth only a few FPS; memory bandwidth is the real limiting factor on these graphics cores. Haswell is going to have an on-chip L4 cache for its graphics, and AMD is going to have to do the same thing to stay competitive.

        [url<]http://www.techpowerup.com/162605/Haswell-to-Use-4th-Level-On-Package-Cache-to-Boost-Graphics-Performance.html[/url<]

        That'll help the GPU a lot more than burning an extra 25W.

          • Chrispy_
          • 7 years ago

          This.

          Despite their shoddy drivers and historically lame performance, Intel IGP’s are progressing far, far faster than anything AMD or Nvidia can do.

          That 800lb gorilla in the corner is making AMD work for their IGP crown at the moment.

          • dpaus
          • 7 years ago

          Wow, again the obsession with overclocking what’s there. What I meant was that I wished AMD had designed the A10 chips with some Southern Islands GPU goodness in them. I mean, consider the A10-5700: it has a 65W TDP, and with 4 cores at 3.4/4.0GHz, I suspect its CPU performance is just fine for most tasks. OK, so if I’m allowed a 125W TDP, how much Southern Islands GPU can I squeeze in there? Hell, CPU coolers are pretty good, so let’s say a 150W TDP when running full-out; you could put some serious graphics horsepower in there with 85W of TDP to play with. Of course, it doesn’t have to consume 85W all the time; their throttling technology could let it run at 5-10W most of the time, and only ‘fire up’ when the task needed it.

          The 7750 draws 135W under load, but that’s counting a lot of stuff on the PCB that wouldn’t be required when integrated (or, more correctly, is somewhere on the SB chipset). How close could AMD get to 7750 levels of performance? Imagine an ‘A12’ or whatever chip that combined 4 Piledriver cores and 7750-level graphics in a 150W TDP. I think such a chip would buy them considerable marketing cachet, and give them the time they need to incorporate further process improvements into the Steamroller CPUs to better compete with Intel.

          Maybe they can include that in their October ‘respinning’ ?? I know, I know; ‘fat chance’. And yes, I know the Steamroller versions of the APUs are intended to include Southern Islands GPUs. I just can’t help but wonder ‘what if….’

            • Chrispy_
            • 7 years ago

            I know what you’re saying – give us more shaders, like 512 instead of 384.

            I’m just not sure an IGP has access to enough memory bandwidth to make extra GPU logic worth putting in. Llano underperformed compared to the same GPU core in discrete varieties with GDDR5.

            If you extrapolate far enough, you’ll have a 7970 IGP with low-clocked DDR3, and that’s going to be incredibly wasteful.

        • grantmeaname
        • 7 years ago

        That’s pretty funny, actually. We started a gigantic discussion about something you weren’t intending to talk about.

        And I’m with you. I would LOVE that chip.

    • burntham77
    • 7 years ago

    A friend of mine wants to build a budget gaming machine and I told him to keep an eye on AMDs APUs, because they should get him in the door and at least get him gaming with reasonable performance. These chips right here might just be exactly what he needs.

      • Damage
      • 7 years ago

      Do him a favor and recommend a discrete graphics card. Even on a budget, avoiding an IGP is wise. The constraints on power, transistors, and memory bandwidth in a CPU socket are just too tight. I like AMD and all, but there’s no reason to pretend a discrete GPU won’t do much, much better.

        • khands
        • 7 years ago

        Yeah, unless he likes playing at 720p he should aim a little higher, they’re not quite there yet.

          • MadManOriginal
          • 7 years ago

          I’d generally agree but there are a few things to consider, such as types of games played – older titles would perform better – and total price depending upon how ‘budget’ the build needs to be.

            • khands
            • 7 years ago

            He would probably do better buying an $80 CPU and a $100 GPU than buying a $180 APU, just saying.

        • alwayssts
        • 7 years ago

        While I can’t argue the merits of Trinity versus a cheap CPU and the many cheap cards out there, even when combined with the possibility of adding a 6670 for around $50, I wonder if we’ll be saying the same thing about Kaveri + 7750 in a year.

        Think about that for a sec. Both CPU and GPU will be on 28nm, and 20nm will still be a ways off. The 7750 should be cheap as dirt. Kaveri should be a good-enough CPU, and priced to deal with probably lower-end Haswell. A discrete graphics experience to trump it will probably be a 7850, which may hold its value once it falls below around $200 and becomes the new Juniper.

        I’m spit-balling, but when you start figuring that Kaveri and the 7750 could be overclocked at least a reasonable amount, probably to around 1GHz on the GPU plus something on the CPU, that at least makes you wonder. I think the value proposition of the package versus other value CPU + GPU alternatives becomes very much a possibility. This is not to mention whether their HSA somehow magically allows a higher-end card to tangibly use the IGP in a CrossFire configuration… which certainly seems to be happening given the all-but-confirmed rumors of the PS4.

        Maybe it would not be great for 99th-percentile frame times, but these people would probably give less of a crap. The possibility of an APU being the foot in the door is there, especially with game requirements reaching stagnation until the new consoles hit. I know that’s a lot of ifs, and requires a perfect storm, but these are the areas that AMD not only can and has to capitalize on, but can very much DECIDE to create.

          • Damage
          • 7 years ago

          Time will tell, but we’ve been on the cusp of IGPs becoming “good enough” for ages. Funny thing: hasn’t happened. Instead, the usual constraints have pretty much kept IGPs at about the same place as ever, relatively speaking.

          I’ve not seen plans for 3-4 memory channels connected to an APU socket, so I remain skeptical.

            • Geistbar
            • 7 years ago

            [quote<]Time will tell, but we've been on the cusp of IGPs becoming "good enough" for ages. Funny thing: hasn't happened.[/quote<] While I agree that it hasn't happened, I do think the extent to which IGPs aren't good enough has been very steadily decreasing. The newest games you can feasibly play now are much more contemporary than they were with IGPs just five years ago. Definitely still not there, but it's also definitely getting closer.

            • swaaye
            • 7 years ago

            You can also game on a 8800GT pretty well today. It’s the PS360 phenomenon, which is keeping game graphics complexity at a near stand still. But if new consoles hit the reset button on that, current low end and old GPUs are likely to become very uninteresting.

            • Duck
            • 7 years ago

            Everyone repeats this crap over and over. New games launch all the time that push hardware. You think Modern Warfare has the same system requirements as Modern Warfare 2? Or there’s lazy console ports with high system requirements like Max Payne 3.

            The PS360 phenomenon as you put it, is a myth.

            • swaaye
            • 7 years ago

            What you see in improvements is mostly about development tools, development methods and visual techniques evolving continuously so they get more from the same level of hardware than before. Of course this could be an argument for hardware not needing to progress because there is room for even D3D9-spec stuff to improve for years on end.

            Yes I am aware that there are exceptions to the usual developed-primarily-for-PS360 multiplatform game. But I also have actually tried quite a few recent games on an 8800GT out of curiosity.

            Anyway, this year’s E3 showed some interesting stuff that might be an example of what game companies are planning when their console hardware spec jumps up a bunch in the next year or two.

            • brucethemoose
            • 7 years ago

            [quote<]You think Modern Warfare has the same system requirements as Modern Warfare 2? Or there's lazy console ports with high system requirements like Max Payne 3.[/quote<]

            Uhhh... Ya. As far as I know, a PS3 or 360 are the system requirements, and that hasn't changed. The efficiency and complexity of game engines has evolved, and there are more fancy graphics settings than ever to tax a PC. But the minimum amount of CPU/GPU power required by most games has changed about as much as the 360/PS3 have.

            • Duck
            • 7 years ago

            No, you’re wrong. The requirements have changed. MW1 calls for a pathetic 2.4GHz P4 and 512MB RAM. That’s barely good enough to run Windows XP and Firefox. MW2 ups that by some 50% and 100% for CPU and RAM respectively. While Max Payne 3 is a much more modern 2.4GHz Core 2 Duo, 2GB RAM and a 8600GT.

            The minimum requirements have been increasing exponentially. The 2.4GHz C2D machine is some ~400% the performance of the 2.4GHz P4 machine.

            • swaaye
            • 7 years ago

            Minimum requirements are also more realistic today than they used to be. They used to mean “it runs and looks like shit, but it runs”. A Core 2 Duo with a 8600 GT will run current games a lot better than that P4 with 512MB and a GeForce 6600 ran anything in 2007. Lots of people still have Core 2 as their main setup.

            Hell, Raven Software required a P4 and 512 MB with a GeForce 3 for Quake 4. That’s ridiculous, but I imagine it would run.

            • travbrad
            • 7 years ago

            Yep I remember in the late 90s even some games where my PC exceeded the “recommended” requirements still didn’t really run very well, and the minimum requirements were a joke. Technically you could “run” games with the minimum requirements but it was basically unplayable.

        • rechicero
        • 7 years ago

        IMHO, that would depend on the budget, the monitor, and the games they are going to play (if they’re on a tight budget for the rig, they probably won’t buy the newest titles).

        For some needs the IGP can be just the best choice, because it’s “free”.

        • pogsnet
        • 7 years ago

        The GPU of trinity APU is at the levels of HD 5700 performance

          • Duck
          • 7 years ago

          [quote<]The GPU of trinity APU is at the levels of HD 5700 performance[/quote<]

          lol no. The 5750 has 720 shaders and a GDDR5 memory interface. The Trinity GPU's performance is probably going to be similar to a 4650, but a bit more crippled because it uses system RAM for memory, and that's for the top-spec A10 Trinity. The more regular A8 Trinity should be noticeably slower than this.

            • mczak
            • 7 years ago

            The HD 4650 typically used DDR2 and hence was quite slow. The Llano A8 was already quite close to an HD 5550 with DDR3 memory (provided you gave it dual-channel DDR3-1600; system memory or not, that’s nearly twice the bandwidth the HD 4650 had). Desktop Trinity is definitely going to be faster; it should be right around the level of an HD 5570 (or an HD 4670 if you want, which has similar performance overall). If you want to compare with something newer, an HD 6570 (not the GDDR5 version, a.k.a. the review edition, which is the only one that has reviews, but the DDR3 one, which is the only one you can actually buy) is also going to be very close. You’re certainly right, though, that desktop Trinity won’t compete with the 57xx series.

            • Chrispy_
            • 7 years ago

            Actually, no – he’s right. The benchmarks put the 5750 at only 30% faster than an A10, and whilst it’s hard to find apples-to-apples tests for the 4650, it’s going to be much slower.

            Trinity’s GPU uses VLIW4 shaders, which makes them roughly 25% more potent than the ones in the 4650. The A10 is running 384 new shaders at 800MHz, and the 4650 is running 320 old shaders at 550MHz. Even if you ignore the difference between the VLIW4 and VLIW5 architectures, the A10 would be 75% faster.

            As it turns out, Diablo III (Bastion’s Keep, 720p) runs at around 25fps on my DDR3 4650 laptop (which isn’t CPU-limited, with an i7 in it), and the [url=http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224-18.html<]A10 is running almost 3x faster.[/url<]
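
            The raw throughput arithmetic, ignoring the VLIW4/VLIW5 difference and memory bandwidth (shader counts and clocks as quoted above):

            # Raw shader throughput: ALU count x clock, nothing else.
            a10_alus, a10_mhz = 384, 800          # A10-5800K IGP
            hd4650_alus, hd4650_mhz = 320, 550    # HD 4650

            ratio = (a10_alus * a10_mhz) / (hd4650_alus * hd4650_mhz)
            print(f"A10 raw shader throughput: {ratio:.2f}x the HD 4650")   # ~1.75x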

        • vargis14
        • 7 years ago

        Totally agree, Damage. I was an AMD fan for many years, with my last gaming build being a 4800X2 and CrossFire X1800 XTs… But now, with the power you get from an inexpensive Intel i3 1155 setup (dual core with HT) for no more than $120, sometimes less, I just can’t see AMD competing even with a similarly clocked quad core. Sandy’s and Ivy’s designs are just so much better per core. Benchmarks prove it with discrete cards; to be blunt, AMD CPUs bottleneck fast GPUs, capping their average and max FPS in almost every review that includes CPU comparisons… let alone a CrossFire or SLI setup, where it’s worse.

        Llano/Trinity may be OK for a cheap laptop or possibly a tablet, but I just do not see a place for it on the desktop with Intel’s domination.

        I am very disappointed in AMD for the last few years… but I am still rooting for them.

        Intel’s 1155 CPUs give awesome performance AMD just cannot touch, for a hell of a lot cheaper than they were a few years ago. I would not recommend anyone build an AMD desktop anymore :(

        • ET3D
        • 7 years ago

        And Sandy Bridge Intel Pentium processors, which start at just over $60, are pretty decent for gaming. Couple one of them with a discrete GPU and you got a decent experience.

        • Chrispy_
        • 7 years ago

        If you are going for discrete graphics, why even pay attention to an A10, which is part GPU he won’t need and part Bulldozer/Piledriver with low IPC?

        Surely a budget Sandy Bridge Pentium or a much cheaper Phenom II is the way forward…?

      • FuturePastNow
      • 7 years ago

      A benchmark leaked months ago that showed performance basically identical to a FX-4100 with a Radeon HD 5670. Looking at these specs, I’m going to call that an accurate assessment of how the 5800K will perform.

      You can figure out if that would be enough for your friend.

        • Arag0n
        • 7 years ago

        It sounds better when you say it should be like a DX11-enabled 8800 GT plus roughly a Core i5-750… but still, it will all depend on pricing. A $130 CPU with a $100-or-less mobo that hits that performance point might be a great seller.
