Leaked slide spills more Trinity details

The Turks over at Donanim Haber have come upon yet another official-looking AMD slide detailing one of its future products. This time, that product is Trinity, the next-generation Fusion APU that will meld Piledriver CPU cores with an integrated GPU code-named Devastator.

Piledriver is an update to AMD’s current Bulldozer cores, and the slide says performance is 20% better than Llano’s in a "digital media workload." The IGP is claimed to be faster than its Llano predecessor by an even greater margin: 30%, according to the slide. Interestingly, the slide mentions a video-compression engine associated with the UVD video decoding block. Intel’s QuickSync transcoding tech may soon get some direct competition.

When we saw Trinity demoed at IDF in September, AMD wouldn’t confirm whether Trinity would slip into Llano’s Socket FM1. The slide mentions Socket FM2, suggesting backward compatibility may be out. Support for DDR3 memory speeds up to 2133MHz is apparently in, however.

At IDF, AMD told us Trinity should be out before Intel’s next-gen Ivy Bridge CPUs hit next spring. A separate set of slides published by TechConnect pins the mass production of mobile Trinity APUs to January of next year, so that certainly sounds plausible. The first wave of mobile chips will be 35W models, but the production schedule teases a 17W version that’s supposed to sample late this year and be ready for production in February. Ultrabooks with decent graphics, anyone?

Comments closed
    • shaq_mobile
    • 8 years ago

    You gotta hand it to AMD. They have some pretty sweet names for their products. Devastator? 🙂

      • Yeats
      • 8 years ago

      Too bad AMD’s most recent CPU wasn’t named “Screwdriver”. I’m sure you can figure out the rest… 😉

    • Abdulahad
    • 8 years ago

    ADVANCED MASS DISASTERS…. in the making.

    My lap is getting fried every day with a mobile Turion II; I waited for a Llano upgrade, but there’s still no trace of it in South Africa.
    I tried to import one, but with shipping costs it’s what someone would pay in the near future for a 5th-gen Core i7.
    So I guess I’ll wait for an eternity for Trinity.

      • khands
      • 8 years ago

      Well, unfortunately with supply as tight as it is right now I don’t see this changing any time soon 🙁

      • Yeats
      • 8 years ago

      [quote<]My lap is getting fried every day with a mobile Turion II[/quote<] AMD, slowing the world's population growth one cooked testicle at a time?

    • FuturePastNow
    • 8 years ago

    Although Llano isn’t possessed of an overabundance of processor power, I think a 30% improvement to the graphics component (a claim which I do find believable) is worth a small sacrifice to the processor part of the APU. This isn’t going to be selling to the HPC market, anyway. It’s going to go in low- to mid-range laptops and cheap desktops.

    What I’d really like to see is a small bit of GDDR5, 64 or 128MB, on the same package as the APU. That would make up for a lot of the memory-bandwidth problems a faster GPU will encounter.

      • khands
      • 8 years ago

      I don’t think they have space to do that, or it would be quite the massive die.

    • forumics
    • 8 years ago

    will they be dropping the FPUs since there is a GPU on board?

      • [+Duracell-]
      • 8 years ago

      No, not just yet. Right now, I imagine they’re still working out the kinks of having the GPU on board.

      • mesyn191
      • 8 years ago

      No. You’d need a software rewrite to use the GPU in that manner. No hardware solution is going to do that for you automagically.

      That is why the whole APU thing AMD is pushing is mostly a joke right now and for the foreseeable future. They should’ve been pushing for more GPGPU software and compilers years ago if they wanted it to take off any time soon. Instead, they twiddled their thumbs and half-assed their support while nV developed and pushed CUDA, which is why you see more GPGPU apps for nV than for AMD’s stuff.

      Now when they really could use some sort of an edge for general purpose performance they’ve got next to nothing.

      • khands
      • 8 years ago

      That little transition is going to take a while, hopefully we’ll be there by the time AMD hits 22nm.

    • ronch
    • 8 years ago

    Uh oh. Donanim Haber rumor mill spinning FUD again.

    The fun never stops, I guess.

    Edit – I can see there’s simply no room for error allowed for one’s posts here. Sorry folks, I’m not a marketing guy nor do I manage to always post perfectly accurate info or perfectly agreeable comments here, like Vulk does. TR is an excellent review site read by many arrogant know-it-alls, apparently, and they’re too blind to acknowledge it.

      • Yeats
      • 8 years ago

      Leaked slides, FUD not found.

      • Vulk
      • 8 years ago

      FUD stands for Fear, Uncertainty, and Doubt. How is this any of that?

        • khands
        • 8 years ago

        I thought it was “F’d Up Drivel”

      • theonespork
      • 8 years ago

      The arrogance can just as easily be found in the assumption that because YOU make a statement, that statement should not be questioned. What is wrong with the slide? What is wrong with the concept of a more successful chip from AMD? What is wrong with healthy competition? Whether you are or are not an AMD fan, AMD is good for Intel. Not from Intel’s perspective, mind you, but from EVERY consumer’s perspective it is so.

      This silliness of AMD sux and cannot execute, Intel sux because they do not love me as a consumer, or whatever else sux just because I feel a need to espouse an opinion is beyond boring. It is getting as bad in the chip world as it is in the US political world. I am becoming increasingly convinced that any form of “competition” with fewer than 3 parties is doomed to devolve into an ugly, senseless, unintelligent, ceaselessly mindless exercise in D***swinging.

      • FuturePastNow
      • 8 years ago

      What incorrect information have they posted in the past?

        • NeelyCam
        • 8 years ago

        This. DH was the first to call Bulldozer’s promised performance BullSh*t, and was right.

      • NeelyCam
      • 8 years ago

      I don’t get your edit.. What is your point? Are you upset about thumbdowns..?

    • supercomp
    • 8 years ago

    How can they make 35W/17W Trinity chips based on the same technology as Bulldozer? Sure, it’s been updated, but Bulldozer has a crazy inefficient power/processing ratio, and this chip has a GPU on it as well. I don’t get it.

      • mesyn191
      • 8 years ago

      Clock speed will most likely be very low, much like how current Stars cores have to clock to stay within their TDPs.

        • chuckula
        • 8 years ago

        Totally agree about the low clock speed. Trinity takes the Llano concept to an extreme: It’s a GPU with enough CPU bolted on so that you’ll hopefully not be CPU bound when running a game, which is Trinity’s target market.
        The “up to 20%” faster claim is going to be subject to pretty intense scrutiny since Trinity is running with 4 quasi-cores at about the same clock that quad-core Llanos run at now. All the advantages that Bulldozer had on the desktop (more cores + faster clocks + bigger caches) are gone. Basically, Trinity has AVX and the newer instruction sets that can give the “up to 20%*” boost in targeted applications, but outside of that your CPU performance is going to go sideways and likely regress for some applications. But, going back to the first point, Trinity is basically a GPU, and hopefully the CPU part will be good enough to let the GPU do its job.

          • khands
          • 8 years ago

          Pretty much this, which is why it’s great for an HTPC and most laptop users that would like to do a little gaming on the side.

          What’s interesting to me is how these chips keep looking more and more like console SoCs.

          • drfish
          • 8 years ago

          [quote<]Trinity is basically a GPU, and hopefully the CPU part will be good enough to let the GPU do its job.[/quote<] This. And that’s all I want in a laptop.

            • Chrispy_
            • 8 years ago

            This. If you need a powerful laptop for non-gaming, you’re sitting in a very tight niche. Enough CPU to let the GPU do its job is exactly why AMD was smart to buy ATI.

          • OneArmedScissor
          • 8 years ago

          The “advantages” you note for the desktop Bulldozer are actually disadvantages in reality, even in desktop use, where more power to burn is available. It doesn’t have bigger caches (plural), but one extra cache level, which is extremely slow. Its power use went through the roof because they effectively pre-overclocked the core clock but ignored the pitiful L3. And I’d even venture to say that having 8 cores was just wasteful, as its “speed” comes from gating them and running 4 much faster.

          Trinity vs. Llano is pretty much the inverse of Bulldozer vs. Phenom II. It’s not going to beat the pants off of Llano clock to clock, but Llano’s core clocks are shockingly low, as is the bar it set.

          • NeelyCam
          • 8 years ago

          [quote<]The "up to 20%" faster claim is going to be subject to pretty intense scrutiny since Trinity is running with 4 quasi-cores at about the same clock that quad-core Llanos run at now. [/quote<] Maybe they are comparing a two-module Trinity to a dual-core Llano...?

          • Anonymous Coward
          • 8 years ago

          Those are words of wisdom, I think. Sounds like a fine product for my purposes.

        • link626
        • 8 years ago

        Yep, this will def have low clocks like my A6m, and clock-for-clock performance will be lower, I bet.

        A 20% increase on a selective CPU task… boy, that sounds depressing.

      • BaronMatrix
      • 8 years ago

      They got 8 cores in 125W with 8 MB of L3, 16 MB of cache total. There’s no L3, less L2, and the clock only needs to be as high as a current mobile/desktop Llano’s. Especially if it’s 20% faster.

      The GPU is a 28nm design made at 32nm. Bulk – SOI is good for up to 40% power savings. Not to mention HKMG and the clock gating.

        • NeelyCam
        • 8 years ago

        [quote<]Bulk - SOI is good for up to 40% power savings. Not to mention HKMG and the clock gating.[/quote<] Any data to support the 40% power savings claim? That sure isn't obvious with Phenom->BD transition...

          • willmore
          • 8 years ago

          ’cause they’re both SOI?

            • NeelyCam
            • 8 years ago

            My god, I messed up. Should I go and clean up the traces…?

            ..no, that would be pathetic.

        • chuckula
        • 8 years ago

        BaronMatrix!! You mean you didn’t commit suicide after Bulldozer was released? Anyone at AMDZone predicting the Rapture or anything?

        You do realize that Trinity is intended mostly for *mobile* applications, right? That means a 35-watt TDP for the *big* chips. The GPU is not 28 nm at all; the entire Trinity chip is a single 32 nm design on the exact same process as Llano & Bulldozer. The fact that AMD is outsourcing 28 nm GPU designs to companies that aren’t Global Foundries has zero to do with Trinity.

        Trinity’s CPU has to have a *vastly* lower TDP than Bulldozer, since it has to fit within a 35-watt TDP at most and because AMD probably wants most of that TDP going to graphics, which is its one and only strength, and one it bought its way into. Trinity *will* lose badly to Ivy Bridge (dual-core chips included) at any CPU-bound task. AMD is banking on graphics and hoped-for power savings vs. Llano to sell Trinity, and no amount of AMDZone propaganda will change that.

      • Vasilyfav
      • 8 years ago

        Expect CPU performance to be awful. Unlike with Intel’s mobile quads, you probably won’t be able to stream HD from your laptop with AMD.

        • NeelyCam
        • 8 years ago

        Hardware engines can easily do that at low power levels. Zacate can already do that (except for Netflix).

          • Vasilyfav
          • 8 years ago

            I meant encoding and streaming from my laptop, not just streaming video, which is obviously already possible with sub-10W chips.

            • khands
            • 8 years ago

            [quote<]Interestingly, the slide mentions a video-compression engine associated with the UVD video decoding block. Intel's QuickSync transcoding tech may soon get some direct competition.[/quote<] Hopefully this helps in that area.

    • DPete27
    • 8 years ago

    Yes, Trinity, you are what my HTPC longs for. Can’t wait!

      • sweatshopking
      • 8 years ago

      your computer longs for a cpu that’s hot and slow?

        • supercomp
        • 8 years ago

        It may be slow, but it’s not hot at 17/35W.

        I’ll upgrade my AMD Athlon 2850e / 785GM-based HTPC if this chip’s CPU part is faster.

        • DPete27
        • 8 years ago

        HTPCs don’t need a lot of horsepower (Atom and Zacate CPUs are the norm in pre-built small-form-factor HTPCs), and my current rig is running on AMD’s Socket 754 platform with AGP graphics, a single-core processor, DDR1 RAM, and a motherboard with a couple of burst capacitors… aka it’s in desperate need of an upgrade.

        Not to mention that AMD’s integrated graphics are currently much better suited for HTPC use than Intel’s offerings. Depending on the time between the Trinity and Ivy Bridge releases, I might wait to see what Ivy Bridge brings as far as improvements on the integrated graphics front. (I’m an Intel boy at heart.)

        • theonespork
        • 8 years ago

        mmmmm, hot ‘n slow.

          • NeelyCam
          • 8 years ago

          Lol @ guttermind. +1

    • Arclight
    • 8 years ago

    [quote<]Support for DDR3 memory speeds up to 2133MHz is apparently in, however.[/quote<] Is that gonna solve anything? Regarding the leaked slides, I take them with a mountain of salt. I'll pass judgement after I see the products reviewed by reputable sites once the NDA expires.

      • Yeats
      • 8 years ago

      [quote<]Is that gonna solve anything?[/quote<] Referring to what? It's just a typical incremental change, surely not intended to "solve anything".

        • Arclight
        • 8 years ago

        Referring to the quote above my question, which states that the integrated memory controller will support DDR3 at speeds up to 2133MHz.

        What I was trying to imply is that the BD architecture didn’t show any real improvements when using officially supported 1866MHz DIMMs compared to slower 1600MHz DIMMs, so will 2133MHz RAM make a huge impact on performance? More likely it won’t. After all, SB’s memory controller officially supports only 1333MHz RAM and it still beats BD.

        TL;DR: BD’s support for high-frequency RAM was not something that needed improving; they had other, far more important issues they should have addressed, like cache latencies and so on.

          • Goty
          • 8 years ago

          It’s more a requirement for the GPU than the CPU for Trinity.

            • OneArmedScissor
            • 8 years ago

            Yes, that would be for the desktop version, which should be able to step up to a decent resolution now, requiring a lot more bandwidth.

          • Yeats
          • 8 years ago

          I know what you meant, and I agree that 2133MHz isn't going to have much of an effect. I'm simply saying that the addition of official support for 2133MHz RAM isn't intended to "solve" anything; it's just a typical bump that comes with the territory of overall chip improvement. You are asking a question – "Is that gonna solve anything?" – for a circumstance that isn't [i<]intended[/i<] to "solve" anything.

      • Ryhadar
      • 8 years ago

      On that same point, Trinity also lacks L3 cache so maybe the faster memory will help it in areas where it could have used some L3 cache.

        • OneArmedScissor
        • 8 years ago

        Since there are only two low clocked modules, L3 cache would actually increase memory latency, not the other way around. It would also hurt laptop battery life because high level shared caches can’t (yet) be gated.

        AMD themselves said the L3 doesn’t accomplish anything even for the desktop Bulldozer. It’s really there for multi-socket servers, which have up to 16 cores/8 modules per socket, and need to go out of their way to avoid piling up very slow memory misses as the CPU sockets talk back and forth.

        Intel used Sandy Bridge’s L3 for the GPU, but that seemed to be a stop gap while they were still using an outdated GPU. Ivy Bridge’s more powerful GPU has its own cache, as is typical. The advantage there is really the ring bus.

        • sschaem
        • 8 years ago

        Intel gets better performance out of plain DDR3-1333 than AMD does out of its Bulldozer L3 cache…

        So it’s not a big miss.

          • khands
          • 8 years ago

          Hell, that might help.

          • Waco
          • 8 years ago

          Which is amazingly sad in many many ways.

      • mczak
      • 8 years ago

      It won’t do much, if anything, for the CPU, but the GPU will benefit.
      Not that it’s really a big deal, since budget CPUs and high-end memory aren’t a good match. The best you can hope for (at least in OEM systems) is probably DDR3-1600, and if you’re really thinking about spending $$$ on 2133MHz memory to make that IGP faster, you should probably save that money and opt for a discrete GPU instead…
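      For a rough sense of what the memory-speed bump buys the IGP, here is a minimal peak-bandwidth sketch, assuming the usual dual-channel, 64-bit-per-channel DDR3 configuration (the slide doesn't spell out the memory topology, so treat these as ballpark numbers):

```python
# Back-of-the-envelope peak bandwidth for dual-channel DDR3.
# Assumes two 64-bit channels; real sustained bandwidth will be lower.
def ddr3_peak_gbs(transfer_rate_mts, channels=2, bus_width_bits=64):
    """Peak bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

for speed in (1333, 1600, 1866, 2133):
    print(f"DDR3-{speed}: ~{ddr3_peak_gbs(speed):.1f} GB/s peak")

# DDR3-1600 works out to ~25.6 GB/s and DDR3-2133 to ~34.1 GB/s, so roughly a
# third more headroom for an IGP that shares that bandwidth with the CPU cores.
```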

    • OneArmedScissor
    • 8 years ago

    The most useful information is that it states 4MB of L2 cache. That would mean the CPU cores should be smaller than quad-core Llano, and if the proportions are accurate, despite being faster, the GPU isn’t really much bigger (though it ends up a larger proportion of the entire chip).

    Maybe they can have decent yields this time, and even better battery life.

      • chuckula
      • 8 years ago

      [quote<]The most useful information is that it states 4MB of L2 cache. That would mean the CPU cores should be smaller than quad-core Llano, [/quote<] How exactly? In the current design of Bulldozer there is exactly the same amount of L2 cache per core that has been announced for Trinity, and Llano has smaller L2 caches than Bulldozer. Do you mean L3 cache instead?

        • OneArmedScissor
        • 8 years ago

        Llano has 1MB of L2 cache per core, for 4MB of L2 total. Since that isn’t any larger, you can compare the size of the rest of the CPU cores apples to apples. Two Bulldozer/Piledriver/whatever modules are smaller and less complicated than four Llano cores.

      • codedivine
      • 8 years ago

      Well, a little smaller, but don’t expect miracles. Each Llano core (including L2) was about 17-18mm^2. 4MB of L2 in Trinity suggests the same 2MB of L2 per module as in Bulldozer, and each Bulldozer module was around 31mm^2. Thus, Trinity’s CPU cores should possibly be 62mm^2 total, and Llano’s quad-cores were about 70mm^2 total. So a little smaller, but not massively so (a quick arithmetic check follows below).

      The big question is how well they have laid out the chip and how they have designed the uncore. Bulldozer is particularly bad at this, with more than half the area (and more than half the transistors) dedicated to the uncore 🙁

      Presumably they don’t have that L3 cache in Trinity and hopefully they have learnt their lesson about the rest of the uncore too.
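      A quick check of the arithmetic above, as a sketch only; the per-core and per-module areas are the rough estimates quoted in the comment, not official die measurements:

```python
# Rough CPU-block area comparison using the estimates quoted above.
BULLDOZER_MODULE_MM2 = 31    # ~31 mm^2 per Bulldozer module, incl. its 2MB L2 (estimate)
LLANO_CORE_MM2 = 17.5        # ~17-18 mm^2 per Llano core, incl. its 1MB L2 (estimate)

trinity_cpu_mm2 = 2 * BULLDOZER_MODULE_MM2   # two Piledriver modules, 4MB L2 total
llano_cpu_mm2 = 4 * LLANO_CORE_MM2           # four Llano cores, 4MB L2 total

print(f"Trinity CPU block: ~{trinity_cpu_mm2} mm^2")
print(f"Llano CPU block:   ~{llano_cpu_mm2:.0f} mm^2")
print(f"Difference:        ~{(1 - trinity_cpu_mm2 / llano_cpu_mm2) * 100:.0f}% smaller")

# ~62 mm^2 vs. ~70 mm^2: a little over 10% smaller, which matches the
# "a little smaller, but not massively so" conclusion above.
```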

        • OneArmedScissor
        • 8 years ago

        [quote<]Thus, Trinity’s CPU cores should possibly be 62mm^2 total, and Llano’s quad-cores were about 70mm^2 total. So a little smaller, but not massively so.[/quote<] Right, but you have to consider that the small amount of saved "space" is a very rare instance where the most complicated transistors are being simplified. Chips are normally cut down by lopping off repetitive circuits like cache and GPU blocks, while the CPU becomes ever more complex over time, so this is a bit apples and oranges.

        Llano's manufacturing issue was [i<]allegedly[/i<] the CPU cores. With four of them in one monster of a chip that was meant as a mass-produced, low-power, mainstream laptop CPU - and on a brand-new manufacturing process - I find that believable. They stacked the deck against themselves. Intel also has similarly complex Sandy Bridge quad-cores for laptops, but they're a higher-priced, higher-power niche, made on a well-tested process.

        A dual-module Trinity is not quite a dual-core, but it is at least more of a middle-of-the-road approach, which probably makes more sense for AMD. Their market is smaller, and their sub-$100 segment will be covered by a large range of Bobcat chips.

        • Vulk
        • 8 years ago

        It was a server chip. Most of that uncore was interconnect for multi-socket setups. Without all the crossbar support for multiple HT interconnects, I’d expect it to be a lot better. But that’s just my two cents.

          • khands
          • 8 years ago

          I wonder why they didn’t cut a lot of that out for the desktop versions; it’s not like they haven’t been shipping Interlagos to at least a few partners for a while now…

    • chuckula
    • 8 years ago

    [quote<]but the production schedule teases a 17W version that's supposed to sample late this year and be ready for production in February. Ultrabooks with decent graphics, anyone?[/quote<] That's the part to look for. The question is not whether a fully operational Trinity will have better graphics than Llano or Ivy Bridge (it will). The real question is: does the version of Trinity that runs at 17 watts still have much better graphics than Ivy Bridge at 17 watts? A very interesting question that we'll find the answer to next year.

      • khands
      • 8 years ago

      I’m pretty sure Ivy Bridge will still be below Llano, though not by much (maybe 10% tops). I expect Intel will still be playing catch-up in the last area where it has to catch up.

      • NeelyCam
      • 8 years ago

      So true.
