AMD to base its first Fusion chip on a 40nm TSMC process?

Could AMD enlist the help of Taiwanese contract foundry TSMC to make its first Fusion hybrid processor? That rumor surfaces every now and again, and TG Daily is the latest to revive it. Grounding its report in claims from “industry sources,” the news site says TSMC will produce the microprocessor/graphics processor chimera using 40nm process technology.

TG Daily refers to the chip as Shrike, although that’s actually what AMD called its 2009 mobile platform when discussing its roadmap last year. AMD then said the Shrike platform would feature Swift, a 45nm chip with three K10-class processor cores, one GPU core, and an integrated DDR3 memory controller. TG Daily alleges that the CPU part will in fact include two cores, while the GPU part will be based on AMD’s next-gen RV800-series architecture.

Whatever Swift ends up containing, AMD would need to architect it with bulk silicon in mind for TSMC to be able to handle manufacturing. That could mean making some serious tweaks to the CPU cores, since AMD currently produces all of its microprocessors using silicon-on-insulator tech. Still, TG Daily says AMD will extend its reliance on TSMC as time goes on—supposedly, the Taiwanese foundry will produce AMD’s first CPU based on the next-gen Bulldozer architecture using 32nm SOI tech. Perhaps that’s all part of AMD’s elusive asset-smart plan.

Comments closed
    • dragmor
    • 11 years ago

    Everyone is overreacting. The first Shrike will be dual-die: a Phenom X2 CPU and a GPU on the same package connected via an HT link, and if we are lucky, with a RAM chip as well.

      • Joel H.
      • 11 years ago

      Dragmor,

      I suppose it’s possible that AMD could move a GPU on package instead of on-die, but doing so would almost entirely negate the benefit of putting the two together. If AMD wants to tie an integrated GPU to its CPU, it can already do that with an existing HT link. The latency cut from moving the GPU on package would be minimal.

      Edit: Did some digging around, and this is most definitely an unsettled topic. There are some good arguments for on-package construction, but some of AMD’s diagrams (and other information) could be used to argue the other way. It still seems to me that on-die is a better option than on-package, but the former may not be an option due to manufacturing costs.

      As for an on-die RAM chip, again, that’s a potentially huge cost for virtually no reason. AMD isn’t trying to integrate a high-end solution here, it’s trying to cut costs and drive prices lower.

      Think about it. AMD can’t integrate a GPU-flavored frame buffer on die, because no one wants to pay for 8MB – 16MB of RAM in an integrated GPU solution. The company could theoretically package the frame buffer off-die, but that narrows the bandwidth and slows the speed to the point that there’d be no benefit compared to the DDR2 sitting right over there in DIMM 0 & 1.
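
      To put rough numbers on that, here's a minimal back-of-the-envelope sketch (the dual-channel DDR2-800 figure fits the era; the narrow 16-bit off-die link is a purely hypothetical illustration, not a real AMD design point):

      #include <stdio.h>

      /* Peak bandwidth in GB/s: transfer rate (MT/s) x bus width (bytes) x channels. */
      static double peak_bw_gbs(double mtps, int bus_bytes, int channels) {
          return mtps * 1e6 * bus_bytes * channels / 1e9;
      }

      int main(void) {
          /* Dual-channel DDR2-800 in DIMM 0 & 1: 800 MT/s x 8 bytes x 2 channels. */
          printf("Dual-channel DDR2-800: %4.1f GB/s\n", peak_bw_gbs(800, 8, 2));
          /* A hypothetical narrow 16-bit off-die frame-buffer link at the same rate. */
          printf("16-bit off-die link:   %4.1f GB/s\n", peak_bw_gbs(800, 2, 1));
          return 0;
      }

      At 12.8 GB/s versus 1.6 GB/s, a narrow off-package frame buffer really would be no faster than the system memory it is meant to bypass, which is the crux of the argument above.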

      • Stranger
      • 11 years ago

      That doesn’t make sense to me. If you’re not going to put it on the same die, you might as well stick it on the NB and skip adding an extra interface to the CPU; in addition, you don’t have to make a multi-chip package.

        • dragmor
        • 11 years ago

        I’m sure they will move to one die eventually, just not at first. What’s TSMC’s highest-clocking chip? Less than 1GHz? TSMC is all about bulk, not performance. Besides, it would take years to move a CPU from AMD’s process and redesign it for TSMC.

        The convergence at the moment is all about cost (later it will be about speedups using the GPU). It’s going to need a new socket no matter what they do. The first move will be a GPU on the CPU package. This will allow them to sell cheap notebooks and desktops without a northbridge (since the southbridge will contain everything they need). One less chip = a smaller three- or four-layer motherboard, fewer traces, less cost (due to no NB), and less power draw.

    • eitje
    • 11 years ago

    Do you think it’s REQUIRED to have 4 arms, if you’re able to stop time? It’s something that’s always bothered me.

      • ImSpartacus
      • 11 years ago

      I’m not seeing exactly wtf you mean…

        • ludi
        • 11 years ago

        It’s something that’s always bothered him.

    • ish718
    • 11 years ago

    Huh, what’s that?
    “next-gen Bulldozer architecture using 32nm SOI tech”

    “AMD has said it will be their first substantially new CPU core since the Athlon 64 and the first complete re-design from the ground up since the K7 architecture.”

    “Bulldozer’s architecture is expected to provide the foundation for all future AMD CPUs. Bulldozer will feature a deeper instruction pipeline, new instructions for media processing, and performance in high-performance computing clusters. It will also have HT3.0, support for DDR3, and AMD’s G3MX memory extender technology to boost data and bandwidth available to the processor. At first, Bulldozer will be a 45 nm chip, but later reduced to 32 nm. Power usage is expected to be between 10 and 100 Watts.”

    Interesting…

    http://www.techfuzz.com/roadmaps/2009.aspx#Bulldozer

      • moritzgedig
      • 11 years ago

      Isn’t that totally outdated?
      The way I understood it, Bulldozer was replaced by Barcelona.
      At first AMD wanted to change to a 2^n-scalable architecture right from the dual-core. Then they realized the step was too big and the time of deepening pipelines had passed. So they stopped the Bulldozer architecture and did the Barcelona core.

    • wingless
    • 11 years ago

    Remember, the two Phenom cores that will be paired with the GPU are K10.5s. Judging from preliminary Deneb results, we may even see higher IPC out of each of those cores. We’ll have two WORTHY cores paired with a GPU core to get the job done.

    Once GPGPU apps find their feet, all of this will really take off. Having a CPU and motherboard northbridge capable of GPGPU would make affordable workstations a real option. The more workloads we can offload to the GPU, the merrier (see the sketch below this comment for what that offload looks like). Unfortunately, this also means that the success of Fusion is closely tied to the success and adoption rate of GPGPU.

    PS: What happened to Torrenza?
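
    To make the offload idea concrete, here is a minimal GPGPU sketch in CUDA (NVIDIA’s toolkit; AMD’s own stack at the time was Brook+/CAL, so treat this strictly as an illustration of the pattern, with hypothetical names and sizes):

    #include <stdio.h>
    #include <stdlib.h>

    /* Each GPU thread adds one element; a CPU would loop over all n instead. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = (float)i; hb[i] = 2.0f * i; }

        /* Copying inputs to the GPU is the overhead that on-die integration would shrink. */
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[42] = %.1f (expect %.1f)\n", hc[42], 3.0f * 42);
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

    The point of a Fusion-style part is that those host-to-device copies stop being a trip across the chipset; the more of an app’s hot loops look like that kernel, the bigger the win from an integrated GPU.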

    • StashTheVampede
    • 11 years ago

    Instead of the aggressive X4 + GPU, they are going to go with X2 + GPU for the initial release. Once they get their feet wet with this, they’ll be able to tweak the number of CPU/GPU cores per die.

      • tfp
      • 11 years ago

      Because of die size and cost, along with no real need, this is for low-end stuff. I don’t expect very high clocks; SOI is supposed to be why AMD clocks so well…

        • StashTheVampede
        • 11 years ago

        As we see, most of the time high clocks aren’t necessary for most people. It’s all about cost and keeping it down. OEMs that want the high end will go for it, period.

          • tfp
          • 11 years ago

          Yeah, the exact reason why they don’t need four cores… People don’t need them, and it keeps the cost down.

          The high end will not have embedded graphics, or at least will not use them other than for the desktop. Also, the high end might need four cores where most machines do not.
