45W Richland APUs leaked via CPU support list

AMD’s desktop-bound Richland APUs debuted last month with TDPs in the 65-100W range. Now, it looks like AMD is prepping a couple of 45W members of the family.

TweakPC spotted the chips in a CPU support list on MSI’s website. The A10-6700T and A8-6500T are listed on the support pages for several of MSI’s Socket FM2 motherboards, including the FM2-A85XA-G43.

Although the list doesn’t provide a full accounting of specifications, it does contain a few details. Here’s how the new APUs stack up against the existing members of the Richland family.

Model      Modules/Integer cores  Base clock  Max Turbo  Total L2 cache  IGP ALUs  IGP clock  TDP   Price
A10-6800K  2/4                    4.1GHz      4.4GHz     4MB             384       844MHz     100W  $142
A10-6700   2/4                    3.7GHz      4.3GHz     4MB             384       844MHz     65W   $142
A10-6700T  NA                     2.5GHz      NA         4MB             NA        720MHz     45W   NA
A8-6600K   2/4                    3.9GHz      4.2GHz     4MB             256       844MHz     100W  $112
A8-6500    2/4                    3.5GHz      4.1GHz     4MB             256       800MHz     65W   $112
A8-6500T   NA                     2.1GHz      NA         4MB             NA        720MHz     45W   NA
A6-6400K   1/2                    3.9GHz      4.1GHz     1MB             192       800MHz     65W   $69

Only the base clock speeds are present on the list, and they’re quite a bit lower than those of the 65W parts. The A10-6700T’s base frequency is down 1.2GHz from the standard A10-6700’s 3.7GHz. The A8-6500T has been trimmed even further; its base clock drops from 3.5GHz in the standard A8-6500 to 2.1GHz. Unfortunately, there’s no word on the peak Turbo frequencies of either chip.

Core counts aren’t listed on the CPU support page, but I suspect the new A8 and A10 models have dual modules with four integer cores. The IGP ALU counts probably haven’t changed from those of their non-T counterparts, either, though the graphics clocks have fallen to 720MHz.

The list also mentions a few other Richland chips, including B-series variants that appear to have identical specifications to current models. Low-end A4-6300 and A4-4000 APUs are listed, as well. Those slot in below the A6-6400K and are tagged with 65W TDPs.

Among the apparent new models, the T-series entries are definitely the most intriguing. I’m curious to see how their Turbo clocks and prices stack up. At the very least, their lower TDPs should be appealing to folks building home-theater PCs or small-form-factor systems.

Comments closed
    • swaaye
    • 6 years ago

    Anyone own an APU notebook and see thermal throttling in games? I’ve been reading about Llano and Trinity notebooks that do this and the performance of course just implodes.

    • ronch
    • 6 years ago

    The thing I don’t like most about the fundamental Bulldozer architecture is its inferior performance/watt. Now, it’s not necessarily bad compared to, say, a Pentium 4 or Pentium D, and hey, even if my FX-8350 consumes more power than my old Phenom II, it’s still a big step forward in terms of aggregate performance/watt, but it really hurts its chances against Intel’s comparable models. To deliver similar performance, it has to use more power. Case in point: the much-ridiculed FX-9590, which wouldn’t be a bad CPU at all if it weren’t for the horrible TDP. Conversely, to hit low TDP levels, AMD has to clock these chips very low, which is what these 45W models are doing. Given all that, one has to wonder whether AMD really met its goal of designing Bulldozer to ‘naturally’ run at high clock speeds.

      • FuturePastNow
      • 6 years ago

      I think AMD has mostly given up on efficiency now that they have no hope of competing on the nanometer front. Short of having Intel fabricate their chips, they just can’t come close on performance-per-Watt, so the best they can hope for is to be competitive on performance-per-dollar… which they still are, barely.

        • Anonymous Coward
        • 6 years ago

        The good news is that Intel is making it easier for AMD to compete on performance per dollar by changing neither the performance nor the dollars of their products.

    • sschaem
    • 6 years ago

    No wonder AMD was able to shave 20 watts… the 6700T is *50%* slower than the 6700.

    I have a feeling it’s cake to take an A10-6800K, undervolt and underclock it, and reach a 45W TDP.
    The 6700T is not a cream-of-the-crop harvested part; those go to mobile.

    So nothing even remotely interesting going on here.

    • drfish
    • 6 years ago

    Awesome! As long as the ALU count is the same of course.

    • Chrispy_
    • 6 years ago

    That’s just “mobile silicon” for laptops with the letter T on the end.

    I read a review of an MSI laptop with the 2.5GHz A10 in it a few days ago. As expected, single-threaded performance is dire at those speeds :\

    Edit:
    [url=http://www.anandtech.com/show/7111/amds-a105750m-review-part-2-the-msi-gx60-gaming-notebook]Here we go[/url]. The mobile equivalent appears to be the A10-5750M…

      • slowriot
      • 6 years ago

      WHOA. That performance difference is far bigger than I expected. I was expecting somewhere around 15%-20% slower with Richland + HD7970M versus Ivy Bridge + HD7970M. I wonder just how much worse this situation is made by MSI choosing to only provide a single-channel memory configuration (WHY IN THE WORLD? IN A $1200 LAPTOP!).

      I nearly bought the previous revision GX60. I ultimately went with MSI’s GT70 series though, mine is a Core i7 3620QM + GTX660M, because despite being 17″ it was still significantly lighter.

      Can I rant about gaming laptops for a moment? MSI has awful build quality; my GT70 feels like it might snap if I hold it wrong. Clevo units have higher build quality, but still not good enough for the prices, and you’re also limited to buying from businesses that make you question trusting them with your money. Asus doesn’t offer a laptop with anything more powerful than a GTX765M, and then puts it in an oversized chassis. Alienware still puts an alien head on your laptop; worse, though, is the amount they charge for upgrading a component, such as moving up from a GTX765M to a GTX770M. Toshiba and Samsung have a few units that fall into the “gaming laptop” realm, but the options are slim and the prices not so hot.

        • Mr. Eco
        • 6 years ago

        Gaming laptops do not make sense.

          • Chrispy_
          • 6 years ago

          Totally untrue; I have a laptop for gaming (and working), and there’s no way in hell I could make do with even a super-compact mITX desktop. I live in a city where the only way to get around is by bike or public transport. There’s essentially a £10 ($15) fee to drive a car each day (enforced by CCTV, with a £100 ($150) fine if you fail to pay in advance). That’s in addition to the problem that it’s near-impossible to find a parking space, and street parking typically costs £30 ($45) a day.

          Gaming laptops may be niche products, but they [b]absolutely[/b] make sense. Making them as ugly, cheap, and ill-configured as MSI does, on the other hand, [b]THAT[/b] makes no sense.

    • OneArmedScissor
    • 6 years ago

    Athlon II X4 620e – 2.6 GHz at 45W

    A10-6700T – 2.5 GHz at 45W… with “modules” and 32nm HKMG

    Yet again, Bulldozer / Piledriver fails to make the case that there is something about the architecture which allows higher clock speeds.

    It is sad to think that all these years later, AMD could have just kept tweaking their way up to a Phenom IV or even Phenom V by now, and it would have kept up better.

      • sircharles32
      • 6 years ago

      Sorry, but you forgot that there is a video card sandwiched in there, as well.
      With that said, I wouldn’t mind utilizing one of those 45W Richlands for a home server. It would remove the excess power draw of a north bridge or a dedicated video card.

        • dragosmp
        • 6 years ago

        Yes, you do have an extra GPU on the die, but the performance in games is the same as or lower than a discrete GPU on a more advanced fab process. So what they gained in TDP with the reduction in size, they lost by adding the GPU, and in the process the single-threaded IPC tanked. All I’m saying is that a 28nm Phenom II would probably consume some 80W and offer the same performance as the 100W APUs without any architectural improvements. Add power gating and smart Turbo, and a Phenom III could consume 65W and perform as well as a 100W APU. Yes, no GPU, but since HSA is still in the future, we don’t need that GPU anyway; it could be optional. Since a 400-core discrete Radeon consumes some 20-30W including RAM, power ICs, and the rest, it seems unlikely the iGPU takes more than 15-20W.

        • maxxcool
        • 6 years ago

        As one-arm mentions, Intel did this with Banias, and now we have the ‘Core’ family… then Intel got lazy. AMD should have done the same damn thing, because it clearly works.

        I’d love to see an alternate universe where the Israeli team that built Banias worked on the Phenom II core as well… I’d wager they’d be more than competitive.

        • OneArmedScissor
        • 6 years ago

        I specifically mentioned that they cut out other parts, shrank it, and dramatically reduced the leakage, which more than leaves enough room for a super low clocked, low leakage GPU.

        The only thing I forgot just makes it even worse:

        Trinity / Richland also have resonant clock mesh, on top of all those other advantages! Hey, wasn’t that supposed to account for another 10% or so “free” clock speed?

        The point is not that this is some isolated incident, but a [b]consistently recurring trend[/b] throughout the entire line of Bulldozer and Piledriver variations. This has been going on for, what, 2 years now?

        If you want apples to apples, it just gets worse. Compare AMD and Intel at 32nm, 95W TDP, 8 threads, and 8MB L3:

        FX-8300 - 3.3 GHz base, 3.9 GHz turbo, 2.2 GHz L3
        i7-2700K - 3.5 GHz base, 3.9 GHz turbo, 3.5-3.9 GHz L3 …plus a GPU and ring bus

        Conclusion: Bulldozer / Piledriver has lower clock speeds than darn near everything.

          • Rza79
          • 6 years ago

          Don’t think that GF’s 32nm node is as good as Intel’s. The members of the Common Platform committed to gate-first manufacturing for their 32nm and 28nm parts; they’re switching to gate-last at 20nm. Intel made the right choice in going gate-last for its 32nm process. That’s not Intel’s only advantage at 32nm, but it’s a major contributor to why AMD’s chips have higher leakage.
          A process node is not just a number; it’s much more than that. But people seem to be very confused by this.
          AMD also does much less manual tuning of the silicon than Intel because of a lack of funding. Even compared to AMD’s own previous offerings, AMD used a lot less manual tuning. This is one of the reasons the Bulldozer generation of products looks worse next to the Phenom generation.
          About the L3 cache: Intel is just uber pro at designing fast SRAM and the interfaces to it. Throughout history, Intel CPUs have always had faster caches than their AMD equivalents. Sadly, AMD will never catch up in this department.

          • NeelyCam
          • 6 years ago

          [quote]Trinity / Richland also have resonant clock mesh, on top of all those other advantages! Hey, wasn’t that supposed to account for another 10% or so “free” clock speed?[/quote] The main benefit of the resonant clock mesh is reduced power consumption in clock distribution. It might reduce the overall power consumption by some 10% or so [/estimate]

      • tfp
      • 6 years ago

      Unfortunately, Anand doesn’t have many A10 or A8 chips, but these comparisons are interesting. The A10 (5X00) has similar performance to a similarly clocked Phenom II X4, and the A8 (5X00) has similar performance to a similarly clocked Athlon II X4.

      AMD Athlon II X4 635 – 2.9GHz – 2MB L2 vs.
      AMD A8-3850 – 2.9GHz – 4MB L2
      [url]http://www.anandtech.com/bench/Product/122?vs=399[/url]

      AMD Athlon II X4 645 – 3.1GHz – 2MB L2 – 0MB L3 vs.
      AMD A8-5600K – 3.6GHz – 4MB L2
      [url]http://www.anandtech.com/bench/Product/188?vs=676[/url]

      AMD Phenom II X4 980 BE – 3.7GHz – 2MB L2 – 6MB L3 vs.
      AMD A10-5800K – 3.8GHz – 4MB L2
      [url]http://www.anandtech.com/bench/Product/362?vs=675[/url]

      I think some of the upcoming changes (improved dispatch, improved cache, etc.) will finally get them where they need to be. The great AMD CPU is only a release away (like normal).

        • cosminmcm
        • 6 years ago

        The newer parts have turbo, and I see Phenom II winning almost across the board.

          • tfp
          • 6 years ago

          Close enough.

          • OneArmedScissor
          • 6 years ago

          They also have twice the L2 and an integrated PCIe controller. Wtfbbq, AMD.

      • Alexko
      • 6 years ago

      As was mentioned, Richland includes a pretty beefy GPU, plus I don’t think this specific chip is binned all that strictly, since AMD also makes the A10-5750M, which will do 2.5/3.5 GHz Base/Turbo within 35W.

      That said, I don’t think anyone is claiming that Bulldozer/Piledriver doesn’t have issues.

    • maxxcool
    • 6 years ago

    45-Watt throwdown time …

    • MadManOriginal
    • 6 years ago

    This just in! Power draw (or TDP) can be greatly decreased by dropping frequency and voltage. More at 11!
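
    For the curious, the math behind the joke is the classic dynamic-power approximation P ≈ C·V²·f. Here’s a minimal sketch of it in Python, with a purely illustrative voltage figure rather than anything AMD has published:

    ```python
    # Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
    # The 10% undervolt below is an illustrative guess, not an AMD spec.

    def scaled_power(base_watts, v_ratio, f_ratio):
        """Estimate dynamic power after scaling supply voltage and clock."""
        return base_watts * (v_ratio ** 2) * f_ratio

    # A10-6700 (65W, 3.7GHz base) taken down to A10-6700T clocks (2.5GHz),
    # assuming a hypothetical ~10% undervolt alongside the frequency drop:
    print(f"~{scaled_power(65, v_ratio=0.90, f_ratio=2.5 / 3.7):.0f}W")
    # -> ~36W of dynamic power, comfortably inside a 45W TDP
    # (static leakage and the IGP's budget come on top of this)
    ```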

      • bittermann
      • 6 years ago

      My A10-6800K draws about 97 watts at full load, slightly under its rated 100 watts. The problem I have is not the power but all the heat it throws off! I have a TT 120mm closed-loop water cooler, and it just can’t keep up under heavy load. I can only imagine how the stock heatsink/fan copes. Keep in mind that it does run at 4.1GHz, up to 4.4GHz Turbo, though! Most of the time under load, mine runs around 4.2-4.3GHz…

        • Airmantharp
        • 6 years ago

        Just remember, if you couldn’t get that heat out, it’d run even slower!

          • tfp
          • 6 years ago

          And if you can’t get the heat out it will run even hotter!

            • ronch
            • 6 years ago

            Good observation! 😉

            /sarcasm

        • BaronMatrix
        • 6 years ago

        Seems like you should trade names with someone here… They all sound bitter…

        • clone
        • 6 years ago

        Almost any water-cooling system shouldn’t have a problem with anything under 150 watts.

        Something’s up; I had a system turn to crap just because one tube was a smidge shorter than the other.

      • jihadjoe
      • 6 years ago

      It can also be greatly increased by upping frequency and voltage! (See FX-9590)

        • MadManOriginal
        • 6 years ago

        True. AMD is giving customers a real choice when it comes to power draw. Thanks, AMD!

      • Airmantharp
      • 6 years ago

      To be fair, at least AMD is validating and selling the SKU. It’s nice to have a part ready to go for a particular job rather than having to shoe-horn one in by adjusting variables in the UEFI and hoping that stability isn’t somehow compromised.

        • Spunjji
        • 6 years ago

        Pretty much how I feel about this. All joking aside, they’re addressing a market segment that they weren’t before, no matter how silly the tricks to get them there may be…

    • Lans
    • 6 years ago

    I would love to see A10-6700 and A8-6500 undervolted.

      • faramir
      • 6 years ago

      Would an undervolted A10-6800K do?

      [url]http://www.brightsideofnews.com/news/2013/6/5/amd-one-ups-their-apus-with-richland---a10-6800k-review.aspx[/url]

    • Star Brood
    • 6 years ago

    With AMD moving quad-core options to compelling TDPs, I hope Intel realizes what kind of market share it could gain were it to release quad-core models cheaper than the i5.

      • tfp
      • 6 years ago

      Does Intel need more market share at the cost of lower margins per CPU? I don’t think so.

      • MadManOriginal
      • 6 years ago

      Why would or should they when a 2c/4t i3 is generally the same speed as all of these on the CPU side (special instruction sets aside) and draws a lot less power under load?

        • shank15217
        • 6 years ago

        Even the 45W one?

          • chuckula
          • 6 years ago

          The higher-wattage parts can have competitive performance and even get wins in some workloads thanks to having massively higher clocks while drawing a lot more power.

          That 45 watt part at 2.5GHz with no turbo ain’t gonna be winning any performance benchmarks compared to i3s that actually have higher clocks though. The tradeoff is that its power consumption is hopefully lower (see TR’s review where the power consumption of the 6700 looks awful high for a part that is purportedly only running at a 65 watt TDP…)

            • derFunkenstein
            • 6 years ago

            We don’t know that it doesn’t have turbo; we only know that MSI’s CPU compatibility list only has base clocks.

            • dpaus
            • 6 years ago

            <snicker> You said ‘purportedly’ <snicker>

            • LukeCWM
            • 6 years ago

            Geoff is rubbing off on us! Gah!

            • dragontamer5788
            • 6 years ago

            We don’t know if they disabled turbo yet though. Low TDP parts are where turbo does best… and its competing against the i3-3250T, which is 3GHz no-turbo. If the A10 can turbo up to 3.5GHz when it needs to (like the A10-5750M), it might actually compete favorably against the i3-3250T.

            At 2.5GHz base clock though, I do see your concerns.

      • maroon1
      • 6 years ago

      Believe it or not, clock for clock, a third-gen i3 is faster than Trinity/Richland even in heavy multi-threaded benchmarks.

      The only reason the A10-5800K is slightly faster than the i3-3220 is its clock speed advantage. Intel would only need to boost its desktop i3 clock speeds to match or surpass the A10 APUs in multi-threaded apps.

      Quad-core Intel chips, on the other hand, are on a whole different level compared to AMD’s quad-core chips.

        • dragontamer5788
        • 6 years ago

        Indeed. However, the A-series of chips aren’t competing on CPU power… but instead on GPU power. HTPC users will enjoy the use of madVR (or other GPU-accelerated decoders).

        The HTPC market is currently ruled by low-power Celerons and Pentiums. Some people have liked the 65W TDP versions of AMD’s chips, but low-power Celerons and Pentiums drop the TDP even lower… closer to 35W.

        So really, this chip here doesn’t need to beat the i3 in performance… it just needs to be better than the even slower chips, and offer compelling enough GPU performance for madVR. HTPCs don’t need that much CPU power.

        Apparently, the A10-6700 can already handle 4K video decoding. If the 45W versions can also handle 4K video, that covers almost all the bases for a good HTPC build.

          • MadManOriginal
          • 6 years ago

          You’ve got to be careful with using TDP and actual power draw interchangeably. Intel CPUs pretty much always draw well under their TDP in actual use, AMD’s usually draw as much or greater than their TDP.

            • dragontamer5788
            • 6 years ago

            Fair enough, but an HTPC’s concern isn’t really the energy used so much as the thermals. Better thermals mean slower fans, which means less noise. Thermals are of course tied to power consumption, so maybe your point is still valid anyway.

            • MadManOriginal
            • 6 years ago

            It is true that for DIYers, HTPCs are the ideal use for these CPUs. It’s just not that hard to get great acoustics, even with a chip that’s middle-of-the-road power-wise, if you select a good cooler.

        • Airmantharp
        • 6 years ago

        If you’re going to compare slower parts, you should really ground that comparison to price/performance for implementations that are not power/noise limited, and ground it to watts/unit of work for power limited implementations.

        Clock for clock is relatively meaningless except when comparing architectures; it’s cool to look at and a useful and needed reference point, but it’s not a primary point of comparison for shipping SKUs.

      • maxxcool
      • 6 years ago

      No, they don’t, since AMD doesn’t have a true quad-core part either.

        • dragontamer5788
        • 6 years ago

        No True Scotsman fallacy: [url]http://en.wikipedia.org/wiki/No_true_Scotsman[/url]

        Kabini / Temash are “true quad core” parts. And for that matter… a number of ARM chips are also true quad cores. That’s just how silly the “true quad core” marketing scheme is, frankly. “True Quad Core” is totally a marketing term.

        Richland offers higher performance than Kabini, despite being this weirdly complicated 4-integer-core / 2-FP-core / 2-decoder thing-a-ma-bob. And at the end of the day… that is all that matters. Unless you’re writing low-level code optimizations, you don’t really care if it’s quad-core, big-little, double-module, hyperthreaded superscalar, or any of that stuff. The consumer only cares about performance, power usage, and other metrics like that.

          • NeelyCam
          • 6 years ago

          [quote]And at the end of the day… that is all that matters.[/quote] That, and power consumption

            • maxxcool
            • 6 years ago

            65 watts vs. 45… meh. It’s going to get a third-party cooler, and it will sound no louder, run cooler, and cost me pennies more a month.

            • NeelyCam
            • 6 years ago

            I wasn’t talking about a particular comparison – I was talking in general. You know, like how power consumption – not just performance – matters a lot for laptops.

            • Airmantharp
            • 6 years ago

            You’re right, but so is Neely. We also shouldn’t compare the performance of particular SKUs using rated TDP alone; we know that number is less a depiction of power draw and thermal output/cooling requirements than a general heads-up to the consumer about the ‘class’ of the CPU.

            If you’re going to talk about power, you should really be talking about what a particular set of CPUs has been measured to use under the same set of circumstances applicable to the comparison.

            Quick example: a 45W CPU might actually draw 45W regularly and sit at the high end of its rated thermal ‘envelope’, while a 65W CPU might never draw more than 50W but be rated that way because it’s essentially a lower-binned SKU sitting at the bottom of a range of CPUs that could use up to 65W.

            Point being, we can’t take the numbers manufacturers put out for granted, and they’re not intended to be taken that way.

          • maxxcool
          • 6 years ago

            All I need to know: [url]http://www.anandtech.com/bench/Product/363?vs=675[/url]

            AMD ‘semi-quad-core’ vs. Intel ‘quad core’ at a lower frequency.

            And the last nail: open up Resource Monitor during a Cinebench run… watch those last two ‘fake’ cores go from 0% usage to 80 or so and drop back to 0… ‘real’ quad cores stay pegged on all cores at all times.

            You lose.
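
            For anyone who’d rather log the numbers than eyeball Resource Monitor, here’s a minimal sketch of the same experiment in Python; it assumes the third-party psutil package is installed (pip install psutil):

            ```python
            # Sample per-core CPU utilization once a second while a
            # benchmark such as Cinebench runs in another window.
            import psutil

            for _ in range(30):  # roughly 30 seconds of samples
                per_core = psutil.cpu_percent(interval=1.0, percpu=True)
                print(" ".join(f"{load:5.1f}" for load in per_core))
            ```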

            • dragontamer5788
            • 6 years ago

            What what what? What is this thread’s obsession with “quad cores”? My point is that it doesn’t mean anything.

            Intel has an IPC (instructions per clock) advantage over AMD. Period. Everyone knows this. But that has nothing to do with “quad-core” vs. “modules” or whatever. (Ironic, because AMD had the IPC advantage over Intel back in the Athlon days.)

            Anyway, if you want a “true quad core”, go get yourself an AMD Kabini. It is honestly a “true” quad-core design, in every sense of the word. Its IPC sucks and it is clocked absurdly low… but yes, it is indeed a “real” quad-core design. But as you’ve mentioned, what you _really_ care about is overall performance. No one cares whether or not they get a quad-core; they care whether their x264 renders or compile times go faster.

            But sure, if you want to compare $130 processors against $220 processors while arguing about the true meaning of the word “quad-core”, be my guest. I’m simply trying to push the conversation in a better direction.

            • MadManOriginal
            • 6 years ago

            Remember when AMD fanboys (and maybe even AMD themselves?) used to make fun of Intel’s dual-die, single-package quad cores by saying they weren’t ‘real quad cores’? Now AMD has what’s more akin to a 2C/4T architecture (no, not across the board, but in the context of this article), and yet the same people now say, ‘Ohohoh, noooo, now it’s xyz that actually matters; it doesn’t matter if it’s not *really* a quad core!’

            chuckula and NeelyCam are right: AMD apologists are really inconsistent in their logic.

            • dragontamer5788
            • 6 years ago

            [quote]Remember when AMD fanboys (and maybe even AMD themselves?) used to make fun of Intel’s dual-die-single-package quad cores by saying they weren’t ‘real quad cores’?[/quote]

            Sure, but they were wrong then as well. And if they were consistent in their criticism, they should be criticizing 16-core Opterons for implementing the same strategy (16-core Opterons, at the very least, show up as two NUMA nodes IIRC). But I think it is unfair to pin me with the views of other people. I’m just trying to keep this conversation somewhat sane. It may matter in a server environment (NUMA nodes talk to each other less efficiently), but your average consumer won’t be able to tell the difference.

            [quote]chuckula and NeelyCam are right, AMD apologists are really inconsistent in their logic.[/quote]

            I tend to agree with NeelyCam more often than not, actually…

            • chuckula
            • 6 years ago

            Part of the issue is in the nomenclature that is used for these parts. If Intel were dumb enough to run around calling all of its hyperthreaded parts “8 thread” or even worse “8 core” parts, then there would be a huge uproar over it. As it stands, many people get upset that “hyperthreading” is even used at all in Intel’s chips since they see it as being a marketing feature instead of a real feature. When I run cat /proc/cpuinfo on my Haswell box, I see 8 logical CPU cores due to hyperthreading, but you’ll always hear me refer to the chip as quad-core.

            If AMD had used module numbering in its new chips instead of insisting on calling everything a core, then I think they would have ended up with an improved perception amongst many people. I know the actual benchmark results wouldn’t change a bit, but AMD would gain a lot of psychological credibility by saying that a 4-module Piledriver can compete well with most of Intel’s 4-core parts in thread-heavy applications.

            Instead, especially when it came to the original Bulldozer launch, we were assaulted with the term “8 cores” over & over again by AMD’s (now unemployed) marketing department. It got to the point where there were people seriously expecting Bulldozer to wipe the floor with the 6-core Sandy Bridge-E parts… after all, they only have 6 cores at lower clock speeds right?? Part of a let down is in the irrational buildup of expectations.

            Anyway, I hold little hope for AMD’s pure-CPU parts. Fortunately they still have an IGP lead and can make money selling parts with IGPs. Kabini seems like a very interesting platform too. Under Rory, AMD at least appears to be interested in selling products that make AMD money instead of foolishly engaging in kamikaze runs at Intel’s strongest market segments. It might even be that a few people at AMD have wised up to the fact that there are plenty of competitors out there who will be a whole lot nastier than Intel since AMD can’t sue them for anti-trust violations every time it needs a cash injection.

            • dragontamer5788
            • 6 years ago

            To be fair, Piledriver scales like an 8-core when it comes to integer-based singlethreaded vs multithreaded benchmarks… while Intel’s 4c/8t processors scale more like a quad core. Hyperthreading seems to be worth ~1.2 “cores per thread”, from a performance point of view. At least when you compare the i5’s scalability vs the i7’s scalability. Or to put it another way… due to efficiencies in Hyperthreading, the i7 seems to perform like a “5-core device”.

            EDIT: And by my made-up napkin math, the Piledriver Fx-8350 scales like a “7-core” device. (perhaps due to slight inefficiencies in the shared module design)

            [url]http://media.bestofmicro.com/X/Y/357622/original/cinebench.png[/url]

            As I’ve stated before, AMD has a major IPC problem. It’s not so much that AMD isn’t “really” offering 8 cores or 4 cores or whatever you want to call it… it’s that their cores are roughly half the speed of Intel’s, even with a clock advantage measured in GHz.

            I keep bringing up Kabini as an example. Kabini is a true quad-core chip, but its IPC is terrible and it can’t be clocked much higher than 2GHz. It’s a very, very slow quad-core chip… but it is “true quad core”. (It’s also cheap as all hell and is designed to compete against Intel’s Atom… but it’s currently being marketed as a quad-core device.)

            For those who are more experienced with these marketing wars, we’ve all seen this before. As we learned many years ago, GHz alone does not determine the performance of a computer. And it has happened again, this time with core counts as the primary marketing term. Core count does not determine the final performance of a computer. It’s useful to compare core counts and GHz within a family of chips, but the numbers do not “translate well” between architectures.

            Never buy into marketing, and always look at benchmarks *after* a chip is released. That mysterious IPC (instructions per clock) statistic changes so, so much between architectures and accounts for a significant amount of a processor’s speed. Worse yet, the number is almost never released by marketing.
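
            The napkin math above is just a multi-threaded score divided by a single-threaded score. A tiny sketch of it, using placeholder numbers in line with the claims in this thread rather than measured Cinebench results:

            ```python
            # "Effective cores" = multi-threaded score / single-threaded
            # score. The inputs below are placeholders, not real benchmarks.

            def effective_cores(mt_score, st_score):
                return mt_score / st_score

            print(effective_cores(mt_score=7.0, st_score=1.0))  # FX-8350-ish: ~7
            print(effective_cores(mt_score=5.0, st_score=1.0))  # 4C/8T i7-ish: ~5
            ```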

            • Airmantharp
            • 6 years ago

            I like your perspective of how CPU’s scale; I’m not sure I’d put it that way myself, but it does broaden one’s understanding, and I do agree with your analysis of the CPUs/cores discussed.

            And for the record, I want to see more from AMD. I’ve liked the ‘Bulldozer’ concept since it was first unveiled, and I think that the architecture shows tremendous promise as an alternative to Intel’s two-threads-per-core strategy in many situations, including gaming and other high-end desktop computing applications.

            • Anonymous Coward
            • 6 years ago

            [quote]And if they were consistent in their criticism, then they should be criticizing 16-core Opterons for implementing the same strategy (16-core Opterons at very least are seen as two NUMA nodes IIRC).[/quote]

            AMD’s evil knows no bounds. Not only are these so-called 16-core processors based on two chunks of silicon, but the so-called cores aren’t really cores! It’s not even a true 8-core processor! SHOCK HORROR

            • Airmantharp
            • 6 years ago

            In addition to dragontamer’s* response, I’ll point out that poking fun at Intel’s dual-package quad-core designs as an ‘AMD’ person is kind of funny. The first of these were Core 2s, I believe, on 65nm, and the Core 2 made AMD’s architecture largely irrelevant. It did everything better, and it got Intel back into the game, even if it was a departure from their ‘Netburst’ initiative (which I wish they’d kept developing on new process nodes; it was actually good for certain things). Going back to the P6 architecture, introduced with the Pentium Pro and then developed into the Pentium II, III, and M (and something else that went into Socket 370), really saved Intel in the desktop space. It was hilarious to see an Intel CPU outrunning a higher-clocked AMD CPU while having two separate dies, separated from each other and the RAM by the FSB and the northbridge, while AMD had a single-die solution and an integrated memory controller. Once Intel moved to the same setup with the i7-860 generation (forgot the name), AMD was permanently fixed in Intel’s rear-view mirror when it came to outright performance with full-fat desktop parts.

            Obviously, the server space with Xeons and Opterons is different; AMD still has significant advantages there, and of course AMD has a new architecture coming that promises to address the shortcomings of the Bulldozer/Piledriver generation, if not bring AMD back into competitive territory with Intel across the board, or hopefully exceed them and make them sweat a little!

            *I like this guy. I hope he sticks around and keeps up the good work.

            • Spunjji
            • 6 years ago

            Slight correction: the first Intel dual-die designs were actually the Netburst-based Pentium D series. They were hilarious because someone at Intel apparently decided that the only thing better than one hot, slow, clock-hungry chip was two of them choking on the same TDP…

            By the same logic, AMD’s current 8-module server chips are nearly as hilarious, just with an added tinge of tragedy.

            MadManOriginal: for the record, not all AMD fans are inconsistent in their application of scorn. 😉

            • maxxcool
            • 6 years ago

            Because it is a damned lie. False. Misleading. Malicious in its intent. Insidious. That is why.

            If it had 4 true cores, it would be faster. AMD is foisting a dirty lie onto you, and you’re buying it.

            [url]http://www.anandtech.com/bench/Product/675?vs=362[/url]

            And to your point: if you’re going to call it a ‘quad core’, then compare it to a real quad core. Then you get the results already known: a real quad core will take Kabini to the woodshed every time.

            • dragontamer5788
            • 6 years ago

            We seem to be in agreement, and yet your posts are so hostile to me. I’m worried that you aren’t reading anything I’m saying.

            • Airmantharp
            • 6 years ago

            I think his account has been hijacked. He’s usually on the level, from what I’ve seen.

          • maxxcool
          • 6 years ago

          “Unless you’re writing low-level code optimizations, you don’t really care if its quad-core, big-little, double-module, hyperthreaded superscalar or any of that stuff.”

          Except that is ALL that matters… and is EVERY problem with the “shared front end” architecture. If it had 4 full integer cores, instead of pairs of “1 full core + 80% of another” + 2 schedulers, it *could actually* execute more parallel code.

          As it stands, it can’t, even after the magic patches, core-parking tricks, and power-profile tricks.

          The sad part is AMD HAD a good quad-core part. They just let it die…

            • Airmantharp
            • 6 years ago

            If you’re going to go on about this, try putting your points in perspective for the rest of us. I mean, I agree with you, but I don’t know exactly how you’re trying to relate your points to publicly available knowledge on these processors.

      • ronch
      • 6 years ago

      Honestly, I don’t think that would be good for consumers. You know what’s gonna happen: they’d sell their i5 and i7 models for less than $150, wait for AMD to die (which probably won’t take long), then jack up prices to, like, $330 for a Pentium Dual Core, $550 for a 2-core i3, $850 for a 4-core i5, $1,200 for a 4-core i7, and $1,800 for an LGA2011 i7. The heck with those cheap, piss-slow ARM chips; they’re not made for real computers.
