Leaked slides spill more Trinity details

AMD’s next-generation Trinity APU is coming soon. If you’d rather not wait for the official unveiling, Chinese site EXPreview has posted a collection of presentation slides that purportedly detail the chip’s particulars. The slides look pretty official to me, and the details within match the few bits of information AMD has released to the public already.

We know Trinity will be based on an updated Piledriver version of the Bulldozer CPU cores that anchor AMD’s FX desktop processors. According to the slides, these cores will crunch more instructions per clock while leaking less power. Dual- and quad-core configurations will supposedly be available clocked at 2.0-3.8GHz. AMD already has a quad-core, Bulldozer-based FX-4170 clocked at 4.2GHz, but that chip has a 125W thermal envelope. It looks like Trinity’s TDP for desktop parts will top out at 100W.

Lower CPU clock speeds aren’t entirely unexpected given that Piledriver shares the processor die with a beefy Radeon GPU based on AMD’s 6000-series architecture. Bulldozer has no graphics sidekick, let alone one with DirectX 11 credentials.

Trinity’s integrated Radeon purportedly features up to 384 ALUs and can run at speeds as high as 800MHz. The graphics component looks like it takes up close to half of the die. If the slides are legit, that die measures 246 mm², making it a little larger than Llano and quite a bit bigger than Ivy Bridge. Ivy has more transistors, but it’s fabbed on a smaller 22-nm process, while Trinity will be built using 32-nm technology.

There are other interesting snippets in the slides, like mention of an AMD Accelerated Video Converter and a new version of Turbo Core. Looks like the processor’s dynamic clock scaling mechanism is capable of juggling CPU and GPU clocks based on the nature of the workload. Of course, we’ll have to wait until the official launch date for the full story.

Comments closed
    • ronch
    • 7 years ago

    In all honesty though, I have to say Trinity looks pretty good. If you look at slide #3, AMD’s changes to Trinity are extensive. From the L2 to the L1 caches, branch prediction, the schedulers, to the FPU… everything seems to have received changes.

    But what about performance? I’m not sure what AMD means by ‘26% better for desktop’, but if they’re comparing it to the top-of-the-line Llano (A8-3870K), which runs at 3.0GHz, and the ‘normal’ Trinity runs at around 3.5GHz, then yes, IPC seems to have improved. How much?

    3.0GHz x 1.26 = 3.78 // The performance delivered by Trinity is equivalent to a 3.78GHz Llano.

    3.78GHz x 1.25 = 4.725 // 4.725GHz is how fast you have to clock Bulldozer cores to match a 3.78GHz Llano, assuming Bulldozer delivers 80% of Llano/Husky’s IPC (1/0.80 = 1.25).

    4.725 / 3.5 = 1.35 // Trinity delivers the performance of a 4.725GHz Bulldozer while running at just 3.5GHz. That extra 35% must come from IPC improvements.
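
    The same arithmetic as a quick Python sketch (the clocks, the 26% uplift, and the 80%-of-Husky IPC figure are all assumptions taken from above):

        # Back-of-the-envelope Piledriver IPC estimate (all inputs assumed, per above)
        llano_clock = 3.0        # A8-3870K, GHz
        trinity_clock = 3.5      # rumored 'normal' desktop Trinity, GHz
        uplift = 1.26            # AMD's claimed '26% better for desktop'
        bd_ipc_vs_husky = 0.80   # assumed Bulldozer IPC relative to Llano/Husky

        equiv_llano = llano_clock * uplift               # 3.78GHz Llano-equivalent
        equiv_bulldozer = equiv_llano / bd_ipc_vs_husky  # 4.725GHz Bulldozer-equivalent
        print(equiv_bulldozer / trinity_clock)           # ~1.35x IPC vs Bulldozer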

    AMD’s performance estimates are usually a bit on the optimistic side, but if that 26% performance uplift (AMD talk!) holds across the majority of desktop applications, that 35% extra IPC, coupled with improved clocks and (in Vishera) a memory controller dedicated to the CPU cores (unlike Trinity’s IMC, which is shared between CPU and GPU), should let AMD inch a lot closer to Sandy Bridge in terms of overall core performance. Let’s hope AMD pulls a rabbit out of its hat this time.

      • ronch
      • 7 years ago

      What I really want to see is an 8-core FX with Piledriver cores running at ~4.5GHz, with each PD core delivering 35% better IPC than Bulldozer. That would make it around 68% faster than an FX-8150. I think per-core performance should be up against a 3.3GHz SB core, but since there are twice as many cores, it’d be about twice as powerful as a 2500K! Give this to us this year, AMD! And oh yeah, give it a 95W TDP!
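
      (Sanity check on that figure, assuming the FX-8150’s 3.6GHz base clock: 4.5 / 3.6 × 1.35 ≈ 1.69, i.e. roughly the 68% quoted above.)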

        • khands
        • 7 years ago

        I think they’d have to capture and hold an Intel 22nm fab to do that :/

          • NeelyCam
          • 7 years ago

          Maybe they could just raid an Intel warehouse, relabel the chips and sell them as their own..?

    • Sam125
    • 7 years ago

    As a happy Brazos user for almost a year now, I can honestly say Trinity is great news for me. Yeah, I probably sound like a fanboy or whatever, but my E-350 can play L4D2, DOTA2, and every Total War except for Shogun 2. Needless to say, I’m pretty impressed with the gaming performance of a laptop that can otherwise get anywhere from 5-7 hours of battery life. So yeah, a Trinity laptop of the 17 or 25W variety sounds simply awesome. : )

      • srg86
      • 7 years ago

      I tried running my main Windows PC on Brazos, replacing a dual-core Atom. Definitely better, but still too slow at code compiling, so I eventually replaced it with a second-hand Core 2 Duo E8400 and G41 integrated graphics, much better for my needs. I still have the E-350 for other uses, though; it’s still a nice little machine.

    • Chrispy_
    • 7 years ago

    384 ALUs @ 800MHz with DDR3 – about the same as Turks Pro (480 ALUs @ 659MHz with DDR3)

    That sounds respectable for a 100W TDP, but how gimpy is the GPU in the 17W part going to be?

    With Ivy’s HD4000 being half-decent, Intel have an attractive CPU+IGP package for the first time. Unless Piledriver pulls a miracle out of the bag, AMD are working with a hot, slow, inefficient CPU core, and that doesn’t leave much room in a 17W thermal envelope for an HD4000-beating GPU.

    I really want a Trinity laptop, but only if it’s better than IVB.

      • OneArmedScissor
      • 7 years ago

      [quote<]Unless Piledriver pulls a miracle out of the bag...[/quote<]

      Do you consider basic physics a miracle? 4+ GHz at 1.4V ≠ 2 GHz at 0.9V.

      The GPU is not what drives the TDP up. GPUs run at very low clock speeds and voltages; even extremely high-end GPUs are often in the 0.9V range.

      You compare to Ivy Bridge's GPU, and yet you ignore what it is - a considerably more powerful GPU that came with a [i<]lower[/i<] thermal envelope. And it even runs a completely ridiculous clock speed for a GPU, almost double Trinity's.

      So long as there's a relatively functional dynamic clock speed shift between the CPU and GPU, the 17W version of Trinity shouldn't take much of a hit.
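
      To put rough numbers on the voltage/frequency point, here’s a sketch in Python (dynamic CMOS power scales roughly with frequency times voltage squared; the 4.2GHz baseline is an assumption standing in for “4+ GHz”):

          # Dynamic power scales ~ f * V^2 (switched capacitance held constant)
          def relative_power(freq_ghz, volts, base_freq=4.2, base_volts=1.4):
              """Power relative to an assumed 4.2GHz, 1.4V desktop operating point."""
              return (freq_ghz / base_freq) * (volts / base_volts) ** 2

          print(relative_power(2.0, 0.9))  # ~0.20: a fifth of the power at 2GHz/0.9V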

      • chuckula
      • 7 years ago

      Good news: A Trinity notebook with a 35 watt part will have a better GPU than the HD 4000.
      Bad news: The same notebook at 35 watts will definitely have a weaker CPU than an Intel notebook.

      Ultrabook news: I’ve predicted this several times, and AMD’s own numbers seem to be hinting that I’m right. The 17 watt Trinity will likely have a GPU that’s on par with the HD 4000, but don’t go looking for miracles on the CPU side of things. There will likely be situations where the GPU on Trinity beats the HD 4000, but also situations in games where the decent HD 4000 + better Ivy Bridge CPU will give you better results.

        • BobbinThreadbare
        • 7 years ago

        “Bad news: The same notebook at 35 watts will definitely have a weaker CPU than an Intel notebook.”

        How bad is this news? What do people do with Ultrabooks that they need CPU power for?

          • travbrad
          • 7 years ago

          [quote<]What do people do with Ultrabooks that they need CPU power for?[/quote<]

          What do people do with Ultrabooks that they need more GPU power for? The current integrated GPUs can accelerate Flash perfectly fine (or Intel Quick Sync for other video decoding). Are people really buying Ultrabooks (with no discrete GPU) just for gaming? If so, it seems an awfully high price to pay for such poor performance.

            • Chrispy_
            • 7 years ago

            I want something that I can slap into a messenger bag, cycle over to a mate’s house, and play Starcraft II, Diablo III, or some of the less hardcore co-op FPS games (Sanctum, L4D2, etc.).

            Ideally, I can do this for a few hours on batteries and not need to bring a power brick too.

            • khands
            • 7 years ago

            Yeah, this is what a lot of PC gamers have wanted for a long time.

        • NeelyCam
        • 7 years ago

        [quote<]but also situations in games where the decent HD 4000 + better Ivy Bridge CPU will give you better results.[/quote<]

        No. Intel graphics drivers are destined to suck until the sun goes supernova.

      • jensend
      • 7 years ago

      You’re forgetting the difference between VLIW4 and VLIW5; in terms of shader capacity, Trinity should be better than Turks Pro. It has the same number of SPUs (96 = 480/5 = 384/4), and a VLIW4 SPU @ 800MHz should handily outperform a VLIW5 SPU @ 660MHz.

      Of course, in a lot of situations bandwidth rather than shader power will be the limiting factor for Trinity so which is faster will likely depend on the game, the resolution, etc.
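
      The raw numbers behind that comparison, as a quick Python sketch (ALU counts and clocks as quoted in this thread):

          # Peak shader throughput from the figures quoted above
          trinity = 384 * 800    # VLIW4: 384 ALUs (96 SPUs) @ 800MHz -> 307,200
          turks_pro = 480 * 660  # VLIW5: 480 ALUs (96 SPUs) @ 660MHz -> 316,800
          print(trinity / turks_pro)  # ~0.97: near parity on paper, but VLIW4
          # sustains higher per-SPU utilization, so Trinity should come out ahead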

    • maroon1
    • 7 years ago

    AMD marketing slides are very misleading. Don’t believe anything AMD says; they will always attempt to make their CPUs look way better than they are by cherry-picking benchmarks that heavily favor their products. Wait for more reliable independent reviews.

      • BobbinThreadbare
      • 7 years ago

      Next you’re going to tell me water is wet and the sky is blue, right?

        • Chrispy_
        • 7 years ago

        In the UK, the sky is always grey. Even when it’s blue, it’s only pretending to be blue so that it can surprise you with grey when you put on shorts.

      • ronch
      • 7 years ago

      I think we all need to give AMD a break. We need AMD. Don’t kick the poor guy when he’s already down and out for the count.

      • rrr
      • 7 years ago

      You mean other companies’ marketing slides are any better?

      • BaronMatrix
      • 7 years ago

      Wow, you guys are such suckers. Good luck with your anti-trust company….

    • tbone8ty
    • 7 years ago

    looks like they tweaked almost every part of the cpu and gpu

    fma3 instead of fma4

    1W idle is nice!

    can’t wait for ultrathins/ultrabooks

    • tootercomputer
    • 7 years ago

    I just want AMD to build a kick-butt fast CPU that will challenge Intel and give me a reason to build an AMD system again. I started building systems in 2002 with an AMD Athlon XP Palomino chip and continued to build AMD systems through their first few years of dual-core chips. I stopped building AMD systems with the advent of the C2D and have been Intel ever since. But we need AMD, because without it, I believe we would not have the great Intel chips that we have seen now for, what, 5 years. So I would love to see AMD really jump back in and build genuine competitors to chips like the SB i5-2500K. I’m always for the underdog, especially when they keep the big guys’ feet to the fire.

      • Tristan
      • 7 years ago

      Forget it. AMD is rolling downhill.

      • BobbinThreadbare
      • 7 years ago

      I want a 6-core AMD chip with 50% higher single-thread performance than my X4 at 3.2GHz that I can drop in my AM3+ board. I want it to cost $200 or less.

        • jensend
        • 7 years ago

        I want a pony.

          • BobbinThreadbare
          • 7 years ago

          It sounds like a lot, but it’s what Intel has accomplished with a Core i5 Sandy Bridge.

            • rrr
            • 7 years ago

            Did they improve their single-threaded performance in SB by 50% vs Nehalem? I really don’t think so.

            • FuturePastNow
            • 7 years ago

            Not really.

            [url<]http://www.anandtech.com/bench/Product/287?vs=99[/url<]

            • BobbinThreadbare
            • 7 years ago

            Single-thread performance is about 50% higher than a Phenom II.

            And I’m willing to wait for Piledriver as long as it’s AM3+ compatible.

            Edit: Also, I specified a specific speed of Phenom, which is far from the fastest one they made. You can buy an X4 at 3.7GHz; I’m not asking for a 50% increase from that, I’m asking for a 50% increase from 3.2GHz.

            • rrr
            • 7 years ago

            But you expect AMD to pull that off in ONE generation? Not gonna happen. As FuturePastNow’s link shows, even Intel can’t do that. You could say they only did it once, with NetBurst > Core, and even then there were serious caveats:

            1) NetBurst sucked clock-for-clock, as we all know,
            2) Core 2 was actually evolving all along at a slower pace – from mobile chips like Banias and Dothan, which in turn evolved from the P6.

            • BobbinThreadbare
            • 7 years ago

            I didn’t expect it in one generation. Piledriver will be 2 generations.

            Also, they had 3.7GHz Phenom II X4s. That’s already 15% faster than my 3.2GHz. So they only had to increase performance another 35%.
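
            (Multiplicatively, 1.50 / (3.7/3.2) ≈ 1.30, so strictly it’s another ~30%; the 35% figure treats the two gains as additive.)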

        • rrr
        • 7 years ago

        And I want a Maybach 57 for $100 or less. Companies are not tooth fairies, y’know.

      • ronch
      • 7 years ago

      [quote<]So I would love to see AMD really jump back in...[/quote<]

      We all do, don't we? Who wants to live in a world where only Intel supplies PC processors?

      • TheBulletMagnet
      • 7 years ago

      How is AMD going to do that if you and everyone else keep buying Intel? AMD doesn’t have the massive money pile that Intel has to pump into R&D, so it’s not going to happen.

        • tootercomputer
        • 7 years ago

        But it did happen, back in the day, with their Athlon 64 chips – the Venice chips, their early dual-cores, and others. AMD made the hot chips. They just could not sustain it.

      • Unknown-Error
      • 7 years ago

      AMD will soon quit the tech business and start a much more profitable lingerie business.

      ** Let the ‘thumbs down’ reign in :p

        • rrr
        • 7 years ago

        Funny how some people post an inflammatory/retarded message and think that appending “waiting for thumbs down” suddenly makes it meritable.

      • ImSpartacus
      • 7 years ago

      Ivy Bridge is absurdly tiny. If Intel felt threatened, they could throw A LOT of GPU on the die before it became unprofitably large.

      Don’t get me wrong, I want to see competition just as much as you (and everyone else). I’m just not convinced AMD can get the job done. I honestly think Intel feels more threatened by its (future) ARM-based competitors.

    • Unknown-Error
    • 7 years ago

    Everything looks nice on paper, but…
    We’ll have to wait until May 15th (I believe) to get the real picture.

    ** Biting Finger Nails **

    • xeridea
    • 7 years ago

    Too bad it won’t be out May 12th, in time to order and play Diablo III, for those looking to upgrade.

    • Bensam123
    • 7 years ago

    “It’s coming…”

    • Tristan
    • 7 years ago

    The next ‘successful’ product from the AMD workshop. It has 26% more ability? Yeah, only when you measure it in perf/W at idle…

    • phez
    • 7 years ago

    So when can we expect a gpu-less piledriver part?

      • xeridea
      • 7 years ago

      Q3 2012 from what I have read.

      • bcronce
      • 7 years ago

      In time for a 2013 Hyper-V home Win8 file server, plus CentOS for CS/TF2/Minecraft/Murmur/etc.

      I hope AMD makes it competitive because I’m willing to support them if they can offer nearly the same price:performance.

      I just want something with decent power-draw and full virtualization hardware support and lots and lots of ECC memory. AMD has always done well in this department.

        • ermo
        • 7 years ago

        Agreed 100%

        [quote<]I just want something with decent power-draw and full virtualization hardware support and lots and lots of ECC memory. AMD has always done well in this department.[/quote<] Intel has nothing with which to counter this particular niche application. And in addition, AMD seems to be committed to offer good support on the linux gfx side. EDIT: However, there's a distinct possibility that Trinity -- like Llano -- won't support ECC memory, since it is clearly not aimed at the server market. I was of the impression that a relatively powerful GPU like the one rumoured to be in Trinity will require a relatively large amount of bandwidth as well, which is not typically a quality associated with ECC RAM.

          • bcronce
          • 7 years ago

          I was responding more to the “gpu-less piledriver”, but I agree with you too 🙂

    • ronch
    • 7 years ago

    In the 6th slide…

    [quote<]1.303B transistors[/quote<]

    Better double-check that, AMD.

    • jensend
    • 7 years ago

    Anybody heard anything about Kaveri desktop socket compatibility? I’ve read that it’ll require new infrastructure on the mobile side, but if buying a Trinity (VLIW4) APU+mobo now leaves an option open for upgrading to a GCN-based APU in the future that would be neato.

    • ronch
    • 7 years ago

    [quote<]based on AMD's 6000-series architecture[/quote<]

    Wasn't Trinity supposed to use GCN cores?

      • BobbinThreadbare
      • 7 years ago

      There were rumors that it might, but that was all.

        • jensend
        • 7 years ago

        The vast majority of sources have always said it’d be VLIW4. (Note that “6000-series architecture” is too vague – the VLIW4 6900 was quite different from the rest of AMD’s VLIW5 6000-series lineup.)

        Those who have speculated that it’d be GCN either aren’t aware of the many sources which say it’ll be VLIW4 or think those sources are outdated.

        I think GCN has always been pretty much out of the question. GCN wasn’t finished until Trinity was pretty far along. All the existing GCN chips have been 28nm, while Trinity is 32nm, so it would take a lot of redesign work. Finally, 384 GCN cores would take a *lot* of die area at 32nm; a GCN core’s much higher performance comes with a rather larger transistor count than a VLIW4 core’s.

        The next-generation APU, Kaveri, should basically catch up to the discrete cards, since it’ll be 28nm and based on GCN+”HSA enhancements” just like Sea Islands.

      • Unknown-Error
      • 7 years ago

      Nope. The Steamroller/Jaguar-based APUs will have GCN. APUs won’t have the latest GPU tech. Last year was VLIW4 but Llano had VLIW5; this year is GCN but Trinity will have VLIW4, and so on. Trinity will have a few additional features, though (similar to Quick Sync).

        • shank15217
        • 7 years ago

        How do you know that? You don’t have much history to back that up. It’s very likely AMD will move toward a unified APU and GPU architecture.

    • Duck
    • 7 years ago

    I don’t like how CPUs are devoting so much die space to the GPU. I really want to see integrated memory controllers and x16 of PCIe 3.0, but why not have the GPU portion on the package but not on the die? That seems to me to be very cost-effective (in terms of yields) and it would be very easy to make SKUs without the GPU at all. They could even pair the CPU with a 7770 die. Put at least one chip of GDDR5 on the package too while they’re at it, to use as a very large L3 cache.

    There still needs to be that killer app/game that renders on a discrete graphics card but can leverage the iGPU for all physics calculations, for example. But until then, I really don’t like how CPUs are devoting so much die space to the GPU…

    edit: [url=http://techgage.com/reviews/intel/westmere_launch/intel_westmere_exploded_view_thumb.jpg<]On package but not on die example[/url<]

      • Alexko
      • 7 years ago

      There are plenty of reasons, but one of them is that the sort of dynamic power management done by Trinity between CPU and GPU cores would be very difficult—if not impossible—without on-die integration.

      It has other substantial benefits too, such as sharing the same memory hierarchy (memory controller, sometimes an LLC), simpler and cheaper packaging, etc.

      • OneArmedScissor
      • 7 years ago

      [quote<]why not have the GPU portion on the package but not on the die?[/quote<]

      Then it would have a heck of a trip to the memory controller.

      [quote<]That seems to me to be very cost-effective (in terms of yields)[/quote<]

      I doubt they're all that concerned with GPU yields. It's a bunch of little copied-and-pasted parts they can easily disable.

      [quote<]and it would be very easy to make SKUs without the GPU at all.[/quote<]

      Totally pointless. These are mostly for laptops. They'll have a very small number with bad GPUs that they can sell to the one or two dingdongs who care to spend money on that for a desktop. It's designed to conserve power, not go as fast as possible.

      [quote<]Put at least one chip of GDDR5 on the package too while they're at it, to use as a very large L3 cache.[/quote<]

      Then it wouldn't be a cache at all, but exactly like the memory on a graphics card. To use that huge amount of bandwidth, you need a dedicated, high-speed memory controller with multiple channels, just like on a graphics card. And then you'd just have a graphics card soldered to the motherboard that's stuck switched on at all times. That doesn't get anyone anywhere.

        • Duck
        • 7 years ago

        [quote<]I doubt they're all that concerned with GPU yields. It's a bunch of little copied-and-pasted parts they can easily disable.[/quote<]

        Good point.

        Performance already seems to be heavily bandwidth-limited. The iGPU could do with its own memory controller, so it hardly seems like sharing the CPU's memory hierarchy is a benefit.

        As for better turbo boost, they could just let the CPU portion turbo as normal except when the GPU is idle (which shouldn't be too difficult to detect). Then the CPU could turn up the clocks even further. This would work pretty well IMO.

        Oh, and don't forget the i7-3770K has a significant amount of die space devoted to the GPU, upping the cost if nothing else. It's not just Trinity I'm talking about here.

          • OneArmedScissor
          • 7 years ago

          [quote<]So it hardly seems like sharing the CPU's memory hierarchy is a benefit.[/quote<]

          Again, that's not purely for a performance benefit, but a performance compromise. It otherwise would not be possible to come close to lower-end discrete card performance and maintain the status quo of IGP power use. Discrete cards turn laptop batteries into the equivalent of a portable UPS.

          Of course it would be ideal to give the IGP both its own controller and memory, but that would put the power use through the roof. Look at how graphics cards with performance beyond the range of IGPs are 150-300W TDP. You literally can't even put a typical CPU heatsink on that, so having it on package is impossible.

          It will take much more developed chip designs and manufacturing processes to reach that point. They're working on it, which you can see with Haswell's "L4" cache that likely just acts as a frame buffer, as is done with console GPUs. It has to be done in baby steps. Dedicated GDDR5 shouldn't be necessary, as DDR4 will be here by that point.

          [quote<]Oh, and don't forget the i7-3770K has a significant amount of die space devoted to the GPU[/quote<]

          Ivy Bridge is also mostly for laptops. What you are asking for is SB-E or Bulldozer, but those aren't actually smaller chips. I'm not sure what you really expect out of that very small GPU die space. More cache would just increase latency, and more cores tend to go to waste.
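
          For a rough sense of the bandwidth gap in this sub-thread, here’s a quick Python sketch (typical 2012-era figures, assumed rather than taken from the comments):

              # Peak memory bandwidth = bus width (bytes) * effective transfer rate
              def peak_bw_gbs(bus_bits, mt_per_s):
                  return bus_bits / 8 * mt_per_s * 1e6 / 1e9  # GB/s

              # Dual-channel DDR3-1866 (128-bit), shared by CPU and IGP: ~29.9 GB/s
              print(peak_bw_gbs(128, 1866))
              # A Radeon HD 7770's dedicated 128-bit GDDR5 at 4500 MT/s: ~72 GB/s
              print(peak_bw_gbs(128, 4500))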

      • DPete27
      • 7 years ago

      If you don’t want the iGPU, then you’ll be buying AMD FX processors or something like an Intel i5-2550K / i5-2450P / i5-2380P. To each their own.

      Also, what do you mean by “on package, but not on die”? That sounds like going back to the days of graphics being on the chipset, which was less cost-effective and performed even worse. With more and more GPU computing being adopted, having a GPU on the CPU die can have tangible benefits over a discrete GPU if utilized properly. Not to mention wonderful things like Lucid’s Virtu that came about as a result of iGPUs.

      • ronch
      • 7 years ago

      That GPU portion needs to be big. It’s Trinity’s main selling point.

      That GPU portion has to be on the same die as the PD cores. It’s what an APU is all about, isn’t it?

        • Tristan
        • 7 years ago

        Yes, Trinity is a GPU with some CPU – designed to be cheap for casual games.

      • Sahrin
      • 7 years ago

      >I don’t like how CPUs are devoting so much die space to the GPU. I really want to see integrated memory controllers and x16 of PCIe 3.0, but why not have the GPU portion on the package but not on the die?

      Cost savings come from efficiency. In no one’s book are two dies, two interconnect regions, two caches, etc., more efficient than one of each of those things.

      Yes, the die is bigger, but that’s why we spend so much money being able to fab chips well.

      • heinsj24
      • 7 years ago

      I read some speculation that the CPU of an APU could eventually offload floating-point calculations to the GPU. I agree; it would be nice to take advantage of the integrated GPU for something now, when a discrete graphics card is installed.

      • bcronce
      • 7 years ago

      Die space is cheap, but die space is expensive to power. Adding in specialized transistors like a low-power GPU is easy and opens the door for new lower-latency, medium-throughput number-crunching like your “physics”.

      Anyway, there is little demand for more cores, so adding cores is a waste: not much software can/will make use of them right now, and most software that could make use of more cores would be better off with an integrated GPU instead.

    • gamoniac
    • 7 years ago

    The referenced Chinese site says that in general, the desktop version is 26% higher in ‘productivity’, while the mobile version has a 29% improvement over their Llano counterparts.

      • chuckula
      • 7 years ago

      “productivity” == an AVX-enabled benchmark that takes advantage of features in Piledriver not present in Llano.

        • gamoniac
        • 7 years ago

        Right… AVX1.1 and AES support added. It will be interesting to see on 5/15 how it all pans out.

    • chuckula
    • 7 years ago

    NordicHardware had a more complete set of the same slides with an English-language writeup.

    Edit: [url<]http://www.nordichardware.com/news/69-cpu-chipset/45809-amd-trinity-and-piledriver-detailed.html[/url<]

    • odizzido
    • 7 years ago

    I never expected to think highly of this sort of thing, but considering how happy I am with how my C-50 laptop runs stuff, I would absolutely consider an AMD chip like this for a casual gamer rig.

      • sweatshopking
      • 7 years ago

      great! cause trinity should be slightly faster than your c-50!

        • ronch
        • 7 years ago

        Not as quick as you are to put Trinity down, though.

          • sweatshopking
          • 7 years ago

          are you saying it’s slower than my hour and a half later response!? cause it likely will be!

          seriously now, amd should get out of making processors, because they’re simply too far behind intel. not only are they 18 months behind in manufacturing process, their new cpus, bulldozer based, are slower than the thuban x6’s launched in april 2010!!!! that’s a total of like 4 years (if you disregard real science).

          I like amd, i do. and i like being able to recommend their stuff to people. but if I, me, myself, was buying a cpu, brazos aside, there isn’t an amd chip i’d recommend. and yea, you can say “look, you even recommend one! how can you say they should get out!?!?”, and that’s fair, but really, it’s just a matter of months until medfield, and then i don’t know why you’d buy brazos. the power difference is going to be too massive.

            • ronch
            • 7 years ago

            [quote<]are you saying it's slower than my hour and a half later response[/quote<]

            We'll have to wait for official benchmarks of Trinity. In the meantime, benchmark yourself.

            • NeelyCam
            • 7 years ago

            18 months? More like 30 months…

            • sweatshopking
            • 7 years ago

            lol, yeah. idk what they think they can do. at this point, they’re too far behind. it sucks, but it’s the truth.

        • DancinJack
        • 7 years ago

        I thought this was hilarious. Nice work!

    • dpaus
    • 7 years ago

    Looking interesting… I wonder if they’ve done anything to address the cache latency issues.

    When is the official release date?

    EDIT: one of the slides touts a ‘26% performance increase over Husky’. Aside from the sled-motivation organism, what’s a Husky?

      • Hattig
      • 7 years ago

      Husky is the core used in Llano.

      • codedivine
      • 7 years ago

      Husky = The CPU core in Llano.

      • gbcrush
      • 7 years ago

      +1 for sled-motivation organism.

      • faramir
      • 7 years ago

      Well, they got rid of the L3, which was said to have been rather poorly implemented, especially when it came to running multiple tasks (and it obviously wasn’t the scheduler’s fault, because the gains with the “fixed” scheduler were minuscule to nonexistent).

      • ptsant
      • 7 years ago

      Apparently they improved the areas that sucked, namely branch prediction, TLB hits, and the L2 cache. Together with a more reasonable power envelope, this could give Bulldozer the lift it needs to at least become competitive with older AM3 chips (speaking as someone who has a 965 BE and sees no reason to upgrade to an 8150…).

      BTW, forgot to add that lucky AMD users have had a superb upgrade path thanks to sockets AM2-AM3. I think the socket AM3 user base might be able to use Piledriver. We’ll see…

        • chuckula
        • 7 years ago

        1. The socket AM3 base cannot use Bulldozer now. This isn’t speculation, but a known fact.
        2. What makes you think that Piledriver will magically bring AM3 back to life when Bulldozer didn’t work with it?

      • jensend
      • 7 years ago

      Not much gets said “officially” but the rumors say May 15th.
