AMD’s Demers reveals details of next GPU architecture

At the Fusion Developer Summit here in Bellevue, Washington this morning, AMD Graphics CTO Eric Demers made some interesting revelations about his company’s next graphics processor architecture. While he didn’t talk about specific products, he did say this new core design will find its way into all future AMD products that include GPUs over the next few years.

There were diagrams aplenty (you’ll find some in the image gallery below), and I expect our own Scott Wasson will have more in-depth commentary to provide soon. Demers made some key points very clear, though.

For one, AMD’s GPUs will break free from the shackles of fixed-function designs: the next AMD GPU architecture will apparently have full support for C, C++, and other high-level languages. Making that possible has involved some re-architecting of the main processing units inside the GPU, which will now be "scalar coprocessors" (similar to the vector supercomputers of the 1980s, Demers said). The new units will mix and match elements of multiple instruction, multiple data (MIMD); multiple instruction, single data; and simultaneous multi-threading (SMT) designs. Gone will be the very long instruction word (VLIW) approach of past AMD GPUs.
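To make the VLIW point concrete, here is a deliberately simplified illustration in plain C++ (my own sketch, not shader code or anything from AMD’s presentation). A VLIW4/VLIW5 unit only reaches peak throughput when the compiler can pack four or five independent operations from a single thread into each instruction word; a chain of dependent operations leaves most of those slots empty, which is exactly the case a scalar/SIMD design handles more gracefully by interleaving other threads:

```cpp
// Illustrative only: ordinary C++, standing in for a shader's math.
// Every operation below depends on the previous result, so a VLIW
// compiler has nothing else from this thread to pack alongside it,
// and most slots in each 4- or 5-wide instruction word go unused.
float dependent_chain(float a, float b, float c) {
    float x = a * b;   // op 1
    float y = x + c;   // needs op 1's result
    float z = y * y;   // needs op 2's result
    return z - a;      // needs op 3's result
}
// A scalar/SIMD machine issues these one per lane per cycle and hides
// the dependency latency by switching to other threads in flight,
// which is the efficiency argument for moving away from VLIW.
```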

Now, AMD doesn’t sound bent on taking Direct3D and OpenGL to an early grave. Demers said he thinks developers will continue to use existing APIs. I’m guessing the real appeal of full C++ support will be for GPU compute tasks, not game programming.

Another point of note is the next-gen GPU architecture’s support for x86-64 memory addressing, which will enable the unification of memory address space across the CPU and GPU. According to Demers, this change will, among other things, eliminate the "glitching" players might sometimes experience when games load textures as they go over the crest of a hill. Developers will be able to use "true virtual memory," Demers noted.
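For a sense of what "true virtual memory" buys you, here is a CPU-side analogy using ordinary POSIX demand paging (my own sketch; the file name is made up, and this is not AMD’s API). The GPU equivalent Demers describes would let texture data be paged in on demand from a shared x86-64 address space rather than bulk-copied into dedicated video memory up front:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Hypothetical file standing in for a large texture atlas.
    int fd = open("huge_texture.bin", O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    // Map the whole file into the address space; nothing is read from disk yet.
    void* p = mmap(nullptr, static_cast<size_t>(st.st_size), PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
    const unsigned char* data = static_cast<const unsigned char*>(p);

    // Only the pages we actually touch are faulted in on demand, which is the
    // behavior a unified CPU/GPU address space would extend to the GPU.
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096) sum += data[i];
    std::printf("mapped %lld bytes lazily, checksum %lu\n",
                static_cast<long long>(st.st_size), sum);

    munmap(p, static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}
```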

Demers’ keynote was an interesting postscript to yesterday’s big-ticket item, the announcement of Microsoft’s C++ AMP, which extends C++ with support for heterogeneous processors. I heard at least two speakers at the Fusion Developer Summit emphasize that GPUs and CPUs are in no danger of merging. However, GPUs are well on their way toward becoming generic parallel coprocessors. We’ve seen past architectures (such as Nvidia’s Fermi) make strides in that direction, and AMD’s next-gen GPUs look set to tread further down that path, as well.
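As a taste of the programming model being alluded to, here is the canonical element-wise vector addition written with C++ AMP, roughly as Microsoft later shipped it in Visual Studio (a minimal sketch for illustration, not code from AMD’s or Microsoft’s presentations):

```cpp
#include <amp.h>
#include <vector>

// Adds two vectors on whatever accelerator (GPU) the C++ AMP runtime picks.
void vector_add(const std::vector<float>& a, const std::vector<float>& b,
                std::vector<float>& c) {
    using namespace concurrency;
    const int n = static_cast<int>(a.size());

    array_view<const float, 1> av(n, a);   // wraps host data for the accelerator
    array_view<const float, 1> bv(n, b);
    array_view<float, 1> cv(n, c);
    cv.discard_data();                     // no need to copy c's old contents over

    // restrict(amp) marks the lambda as compilable for the accelerator.
    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];
    });
    cv.synchronize();                      // copy results back to the host vector
}
```

The appeal for GPU compute is that the data-parallel kernel is ordinary C++ in the same source file, with the runtime handling data movement between host and accelerator.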

Comments closed
    • Krogoth
    • 8 years ago

    [url<]http://files.redux.com/images/ccd64dce0cd555719d85f69b9424d6c4/raw[/url<] /thread

    • Arclight
    • 8 years ago

    [quote<]Now, AMD doesn't sound bent on taking Direct3D and OpenGL to an early grave. Demers said he thinks developers will continue to use existing APIs. I'm guessing the real appeal of full C++ support will be for GPU compute tasks, not game programming.[/quote<] I don't understand why they push such architectures for home users. We want video cards for video playback and games, not for GPGPU; I don't want them to allocate silicon real estate for other purposes. I understand the use, but we aren't/shouldn't be the target for such features. They can do whatever they want for the professional market, but for us they just compromise performance. I didn't like it when Nvidia did it and I don't like it now that AMD wants to do it as well. Make different architectures for different markets (this should have been done from the very beginning instead of rebadging normal video cards) and stop throwing unnecessary features down our collective throats.

    • michael_d
    • 8 years ago

    This shift in GPU architecture corresponds to the APU approach, where the CPU and GPU share memory. Plus, GPU C++ support is an advantage over Nvidia’s CUDA, since C++ is the most common language developers use.

    Does Nvidia support C++?

    In the long run AMD might end up a big winner, once they put a powerful CPU and GPU on the same die.

    • VILLAIN_xx
    • 8 years ago

    BLAST. This won’t be ready before BF3 comes out. Sigh.

    🙁

    • anotherengineer
    • 8 years ago

    So is this new architecture going to be in the ‘southern islands’ gpu that is supposed to be released in the ‘near’ future?

    Or is this for further down the road beyond southern islands??

    Oh NeelyCam you are the master baiter!!

      • NeelyCam
      • 8 years ago

      I’ve never seen a -50 before.. it’s pretty cool!

    • kamikaziechameleon
    • 8 years ago

    Hmmm… interesting. I’m on the crest of getting a 6970, and this is just interesting news. Good for them.

    • TaBoVilla
    • 8 years ago

    but CAN IT RUN CRYSIS?

    sorry, had to do it, no new gpu thread is complete without asking this =P

    • ronch
    • 8 years ago

    They’re really serious about melding a CPU and GPU together. Many years from now, if all these heterogeneous computing initiatives are successful, we’ll all look back and realize that the time we live in today is probably one of the most exciting turning points in the history of computing. And three companies made it possible: Intel, AMD and Nvidia.

    • JMccovery
    • 8 years ago

    I just read this over at PCPer: [url<]http://www.pcper.com/reviews/Graphics-Cards/AMD-Fusion-System-Architecture-Overview-Southern-Isle-GPUs-and-Beyond[/url<], and they are saying that the Southern Islands architecture will be the first iteration of FSA.

      • dragosmp
      • 8 years ago

      Very informative. Looks like this architecture is really what they meant by Fusion when AMD & ATI merged.

      On topic though, FSA doesn’t look any more like Nvidia’s architecture than Xeon is like Opteron. FSA is hugely more complex than Nvidia’s current MIMD, which will certainly evolve by the time AMD releases an FSA-based GPU.

        • JoshMST
        • 8 years ago

        When first presented, I thought it was looking more like NVIDIA’s architecture, but the more you look at it and understand… it is very, very different. We still don’t know a lot of the details about how it works, how work flows inside the CU, or even how wide the four vector units are (e.g., can each do a vec4 product plus a scalar?). I have seen musings that it is essentially a 16-wide unit (4 x vec4 per clock), but then we start talking waves in flight, what exactly a wavefront is, and a dozen other questions right off the bat.

        We will get more info the closer we get to Q4, but we were told very directly that we will see the first FSA being released this year.

    • HisDivineOrder
    • 8 years ago

    Makes sense, especially for AMD.

    It’ll take longer to get to Cell-like computing, but it’ll come eventually. A few high performance cores, a lot of parallel low performance cores, with no line that says, “This is a GPU, this is a CPU and by their powers combined they are APU!” Instead, you’ll just have a processor that has different cores that share a common, singular memory space.

    This will finally and completely do away with the nonsense that is dedicated video memory, which has needlessly been driving up the cost of 3D gaming for years. Eventually, it’ll be nice.

    Meanwhile, I think Intel too sees this future, but I don’t think nVidia cares in the long run. They’ll probably have left the x86 Windows market by the time such dreams come to us. They’re betting the farm on Project Denver to keep them going.

    This does help AMD to blend CPUs and GPUs together, though. And for them, that’s the bet they made would happen, and so far, they’re right on the money. Even Llano with its crappy Phenom II-based CPU architecture is an amazing product because of the level of gaming performance you get for the low cost and low power utilization. Imagine what Trinity will be able to do with its Bulldozer-like CPU design and a 6xxx-series GPU design built with the above in mind (hopefully).

    I think Intel is going to be throwing a lot of money catching up to the value proposition that AMD will be offering next year.

      • maxxcool
      • 8 years ago

      Reading the 1ST paragraph made me think “wonder twin powers! Activate!”

    • xeridea
    • 8 years ago

    Nice to see. Should mean better performance in compute-heavy tasks in certain games/settings. I just hope it doesn’t suffer the same severe inefficiency for games as Fermi does.

    • tejas84
    • 8 years ago

    So AMD have admitted that Nvidia’s scalar approach to GPUs, from G80 all the way through to Fermi, was the best way to build a GPU.

    Nvidia should be feeling pretty happy at the vindication of their GPUs by their competitor. VLIW is too inefficient for GPUs.

    Nvidia have offered all this stuff with GF100/GF110 already and AMD is only now getting it.

      • Game_boy
      • 8 years ago

      AMD have had higher performance per mm^2 since the 4870. Does that not mean their architecture was more efficient design-wise?

        • swaaye
        • 8 years ago

        Higher potential performance, but it has been more difficult to tap into that performance. I mean, Cayman has monster potential performance numbers, but it isn’t soundly destroying Fermi across the board, now is it?

          • Goty
          • 8 years ago

          I think the point is that AMD does so well with a much smaller die.

            • swaaye
            • 8 years ago

            I’ve been under the impression that the primary reason for ATI’s perceived greater performance per area is that even Cayman is still designed primarily for graphics. Fermi is a chip that is more of a mix of GPGPU and graphics ideologies and the result is less efficiency per transistor for graphics. NV has added many transistors that don’t do a thing for graphics. Most people judge these chips based on their gaming performance and that does not directly equate to similar levels of GPGPU performance.

            However, Cayman’s VLIW4 is a direct result of their desire to improve GPGPU performance. That’s what just about every review says and I believe I read that Eric Demers stated that as well. I read somewhere that they essentially admitted that VLIW5 was better for graphics (over on B3D, I think).

            But it’s obviously difficult to judge ATI against NV for GPGPU because OpenCL and DirectCompute haven’t really happened yet and Stream went almost nowhere while CUDA is somewhat popular.

            • mczak
            • 8 years ago

            IIRC he didn’t quite say that VLIW5 was better for graphics, but that it was about the same – that is, if graphics were the only concern, there would have been no incentive to switch to VLIW4.
            It’s somewhat surprising, though, that AMD switched to VLIW4 for just this one chip.

            • Silus
            • 8 years ago

            And isn’t that a hint that VLIW4 wasn’t exactly what they wanted (if the performance wasn’t enough to get there)? Cayman, for AMD, is the same as GT200 was for NVIDIA. Although it seems that GT200 was at least in more cards than Cayman will ever be.

            • swaaye
            • 8 years ago

            There has been some talk, again over on B3D, about how weakly Cayman seems to scale with the extra shader group being enabled on 6970 vs. 6950. So maybe the architecture hit diminishing returns and that is why they are moving to this rework presented here. Maybe the writing was on the wall back with Cypress.

            • cegras
            • 8 years ago

            [quote<]Although it seems that GT200 was at least in more cards, than Cayman will ever be.[/quote<] Now there's a conjecture that is based upon nothing.

            • swaaye
            • 8 years ago

            I read that Trinity, the Bulldozer + GPU CPU, will have a VLIW4 GPU. I’m not sure if that was a rumor or if AMD presented info about it at some point.

            • Silus
            • 8 years ago

            Precisely! Thumbs up!

            • cegras
            • 8 years ago

            Right, but

            [quote<]So AMD have admitted that Nvidia's Scalar arch approach to GPU's with G80 all the way through to Fermi was the best way to build a gpu.[/quote<] Is not a real statement of the issue. The correct statement is that nvidia's approach is better ... for building a massively parallel general purpose processor, and not a GPU.

        • xeridea
        • 8 years ago

        I second that. For games, Fermi is extremely inefficient. Dies are like 3-4x larger, for like 10% better performance… and suck an insane amount of power.

          • mczak
          • 8 years ago

          That isn’t quite true. Overall AMD still has somewhat better perf/area but the advantage isn’t dramatic.

            • Goty
            • 8 years ago

            Perf/sq. mm is actually quite significantly in favor of AMD. If we’re generous and call the lead the GTX 580 has over the 6970 15%, you’d get AMD having a 16-17% performance/sq. mm advantage over NVIDIA at the stated 389 and 520 sq. mm die sizes.
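            A quick check of that arithmetic (my own working, taking the 6970 as the 1.0 performance baseline and granting the GTX 580 the stated 15% lead):

            $$\frac{1.00/389\ \mathrm{mm^2}}{1.15/520\ \mathrm{mm^2}} = \frac{520}{389 \times 1.15} \approx 1.16$$

            which works out to roughly a 16% perf/mm² edge for the 6970, consistent with the figure quoted above.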

            • mczak
            • 8 years ago

            Depends on the chips you compare. Since 6970 is pretty much right in between GTX 560Ti (fastest GF114) and GTX 580 (fastest GF110) you could just as well compare it to GTX 560Ti. In this case you get like 15% more performance for the HD6970 for ~9% larger die size which isn’t all that much of an advantage anymore. And then you have to keep in mind the HD6970 relies on (significantly) faster memory to be actually faster than the GTX560Ti.
            I’ll admit though the lower end chips still have a die size advantage (Barts, Juniper, Turks and Redwood) in the neighborhood of 20% though they too rely on more memory bandwidth.

          • Silus
          • 8 years ago

          3-4x larger ? 389 mm2 vs 520 mm2 is 3-4 times larger ? LOL

        • Silus
        • 8 years ago

        Higher performance in what ? Games ? Maybe, but you forget that NVIDIA’s GPUs are used across all the markets they are in (like HPC and professional) and computing wise, NVIDIA’s GPUs are far more powerful than AMD’s. AMD’s GPUs are only better theoretically. In practice, they are inferior overall.

        And it’s laughable that this “perf/mm2” is always used in favor of AMD. When you go out and buy a GPU, do you ask for the GPU size in mm2 or something ? And here I thought that features and performance were the most important things about a GPU purchase…AMD fans do it differently I guess.

          • khands
          • 8 years ago

          The importance in perf/mm2 is what it means in power consumption, heat dissipation, and the physical limits of the die itself (how large it could possibly be before it is no longer feasible). I’ll agree that the last is more a theoretical bonus than a practical one though.

            • NeelyCam
            • 8 years ago

            All true, but the main reason why perf/mm2 is beneficial is the associated perf/cost ratio which has a major impact on profits.

          • Goty
          • 8 years ago

          It’s an architectural discussion, but I guess that’s a bit too advanced for you. Also, if AMD’s GPUs are inferior overall, why are they so much faster than NVIDIA in many of the BOINC projects (e.g. MilkyWay@Home) and in whatever algorithm the bitcoin generator uses? Oh, that’s right, because that “theoretical” performance can be tapped quite easily with the right programming.

            • Silus
            • 8 years ago

            Ah, the typical fanboy attack: “You don’t know what you’re talking about” 🙂

            Also, “quite easily with the right programming” yet where are all the Radeons in HPC or in any relevant number in the professional market ? That’s right, it’s either non-existent or very small in number. Must be because no one knows what they are talking about…it’s so much better, but no one uses it!

          • cegras
          • 8 years ago

          You’re right, price/perf is the only real metric, and AMD does well in this regard. For example, anand shows the 6850 as being a much better buy than the 460.

      • can-a-tuna
      • 8 years ago

      Fermi sucked in every way. In case you didn’t notice.

      • NeelyCam
      • 8 years ago

      I’m sorry, but don’t you know it’s illegal on TR to say anything good about NVidia/Intel or anything bad about AMD? I mean, are you not paying attention..?!

      You deserve your -100.

      • Silus
      • 8 years ago

      Oh come on! You know that by saying that, you’ll get dozens of thumbs down (as I’m sure I will too), because the hordes of AMD fanboys will not accept anyone saying that AMD is copying another company’s direction 🙂

      • Lans
      • 8 years ago

      With this, I would say AMD is admitting Nvidia’s scalar approach is better going forward; it does not say anything about the present or past. Nvidia did not and does not dominate every benchmark, and even then it is usually not a runaway (>> 2x), and there are some benchmarks that favor AMD, so that is a clear indication VLIW is perfectly suited for current graphics-focused GPUs.

      There is no merit in having the right approach at the wrong time! 😉

      • dpaus
      • 8 years ago

      Only a -20?!? Bow before Master NeelyCam, you poseur!

    • Rakhmaninov3
    • 8 years ago

    DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS!

      • Krogoth
      • 8 years ago

      [url<]http://www.youtube.com/watch?v=KMU0tzLwhbE[/url<]

        • l33t-g4m3r
        • 8 years ago

        That’s got a catchy tune; it sticks in your head.

    • Bensam123
    • 8 years ago

    “eliminate the “glitching” players might sometimes experience when games load textures as they go over the crest of a hill. Developers will be able to use “true virtual memory,” Demers noted.”

    I think this only really applies to really slow GPUs that take a lot of time to load textures. I thought they were talking about the lag time between when something is hidden (not drawn) and when you supposedly see it (drawn). Overly aggressive Z-culling results in lag between when you ‘should’ see something and when it really appears. Usually this happens in very fast moments, like rounding a corner (with two people both approaching the corner fast) or on projectiles.

      • bcronce
      • 8 years ago

      The Z-culling you’re talking about is done at the CPU level by the application. GPUs do per-pixel, per-frame Z-culling, so there should never be a “load time” before seeing someone appear.
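      To illustrate the distinction (a toy software sketch of my own, not real driver or GPU code): the per-pixel test the GPU applies is re-resolved from scratch every frame, whereas the laggy culling described above is application-level visibility logic running on the CPU.

      ```cpp
      #include <vector>
      #include <limits>

      // Toy per-pixel depth ("Z") test, the kind of culling a GPU redoes every
      // frame: a fragment is kept only if it is closer than whatever is already
      // stored at that pixel, so there is no "load time" before an object appears.
      struct DepthBuffer {
          int width, height;
          std::vector<float> depth;

          DepthBuffer(int w, int h)
              : width(w), height(h),
                depth(static_cast<size_t>(w) * h,
                      std::numeric_limits<float>::infinity()) {}

          // Returns true if the fragment at (x, y) with depth z survives.
          bool test_and_write(int x, int y, float z) {
              float& stored = depth[static_cast<size_t>(y) * width + x];
              if (z < stored) {   // closer than anything drawn so far this frame
                  stored = z;
                  return true;    // color would be written
              }
              return false;       // occluded; fragment discarded
          }
      };
      ```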

    • NeelyCam
    • 8 years ago

    [quote<]The new units will mix and match elements of multiple instruction, multiple data (MIMD); multiple instruction, single data; and [b<]simultaneous multi-threading (SMT)[/b<] designs. [/quote<] Power gating, Turbo, and now this... Yet another thing AMD has to copy from Intel. Can't blame them, though... with their already-low R&D budgets shrinking, you kind of have to "borrow" ideas from others.

      • Game_boy
      • 8 years ago

      Were Intel the first to implement SMT in any processor of any kind?

        • willmore
        • 8 years ago

        Intel got it from DEC when they got their chunk of IP at the funeral. They also got a bunch of process technology and design people. And StrongARM, poor, poor StrongARM.

      • Kurotetsu
      • 8 years ago

      When did Intel implement that in a GPU again?

        • NeelyCam
        • 8 years ago

        When did that matter? SMT is SMT, just like a power gate is a power gate, regardless of whether it’s power-gating a CPU or a GPU.

      • OneArmedScissor
      • 8 years ago

      Snake oil on the northbridge!

        • maxxcool
        • 8 years ago

        Snakeoil not pantz hot jajajaja!!

      • Sencapri
      • 8 years ago

      Intel did not invent “SMT”; they implemented the concept and designed it into a CPU. And AMD did not copy Intel by designing an SMT graphics processing unit (GPU), lol.

      • Chun¢
      • 8 years ago

      You’re right. They should stop using x86 and make AMD64 a completely different architecture.

        • NeelyCam
        • 8 years ago

        If you’re thinking ARM or MIPS, I disagree. x86 is one of the few advantages AMD has over the mass of ARM licensees, and not using that advantage would be a waste of an opportunity.

      • Palek
      • 8 years ago

      [Admiral Ackbar] IT’S A TRAP! [/Admiral Ackbar]

        • NeelyCam
        • 8 years ago

          Too late 😉

          • Palek
          • 8 years ago

          Yes, and apparently pointing out that you’re just trolling deserves a negative vote, too.

            • NeelyCam
            • 8 years ago

            Yes it does – this is the TRAMD forum, after all. (+1 to get you back to 0)

            I’m sort of proud of my -30… I haven’t had that much in such a short time before. I think it’s now safe to say “successful troll was successful”

      • cegras
      • 8 years ago

      Not this crap again.

        • NeelyCam
        • 8 years ago

        Yes, this crap again. I’ll repeat it as many times as AMD fanboi fools stated how Intel had to copy 64bit, IMC and HyperTransport.

          • cygnus1
          • 8 years ago

          i voted you down, just because i want to see how low it will go. but, good job 🙂

            • Jahooba
            • 8 years ago

            I negated your down-vote, just because I can 🙂

            • cygnus1
            • 8 years ago

            and i voted you up! i really do like the pointless voting buttons, they’re fun

          • maxxcool
          • 8 years ago

          lol… well… they did. [url<]http://en.wikipedia.org/wiki/X86-64[/url<]

            • NeelyCam
            • 8 years ago

            I know. 🙂 I’m just pointing out that both “sides” are doing this, not just the “evil” Intel.

            My personal opinion is that all this is perfectly fine; good ideas [i<]should[/i<] be copied to make things better and more competitive... patent protection is a tricky concept, though - eliminating it completely would make copying easier, but it would also affect innovation negatively (if everyone's just gonna copy your great ideas, why spend the money to come up with them..)

      • JohnC
      • 8 years ago

      “Ze downvotes! Zey do nothing!”

      • sirmonkey
      • 8 years ago

      Lol, poor arrogant guy who knows absolutely nothing.. welcome to the industry, where everything is borrowed or stolen without either company thanking the other. It’s not like Intel and Nvidia don’t steal anything..

      The IMC in Intel’s i7 is a complete copy of AMD’s IMC, which AMD has been using since Socket 754.
      The QPI technology is an advanced version of the HyperTransport technology used by AMD since the Socket 754 days.
      Nvidia’s GDDR5 use on Fermi was funded by AMD, since no one else had the balls to pay for its development, especially Nvidia. Yet when ATI/AMD proved it worked, guess what? Nvidia jumped on the bandwagon. Nvidia’s CUDA core technology is based on something AMD created years before it.

      But now, because AMD is using a similar form of power gating (not even close to the same as Intel’s) and a turbo feature (that isn’t anywhere close to the same as Intel’s), you sit there trying to talk out of your butt about something you know nothing about. This is a case where you shouldn’t talk unless you know what the heck you are talking about. Next time, think about what you are posting before doing it and save yourself the embarrassment.

        • Goty
        • 8 years ago

        You forgot to mention that NVIDIA still hasn’t been able to develop a competent memory controller that supports GDDR5, leading them to resort to wider buses to compensate for their inability to run anything at its rated speed.

        • NeelyCam
        • 8 years ago

        lol poor monkey, you don’t seem to have any idea of what I’m going for with my comment. HOWEVER, when you start talking about power gating, I’m interested. Why would you imply Intel’s and AMD’s power gating schemes are so different?

        It’s public info that Intel’s Turbo is superior to AMD’s: Intel’s is based on the actual temperature reading while AMD’s is based on some convoluted idea of past activity that doesn’t directly relate to temperature (sounds like Intel has patents on the “true” temperature-based turbo). But AFAIK power gates are pretty simple stuff (transistors on or off) – what do you believe is so different between Intel and AMD power gates?

        (Both QPI and HyperTransport are copies of previous high-speed interconnect proposals in many ways – a lot of the technologies behind them were developed by Rambus. So no – AMD doesn’t “own” the idea of a high-speed serial link. And if you think AMD somehow owns the ideas behind GDDR5, you are out of your league here – you should head back to the farm league, boy.)

          • khands
          • 8 years ago

          ^This is why you don’t feed the trolls.

            • Goty
            • 8 years ago

            Because they’re morons?

            • NeelyCam
            • 8 years ago

            No; because if you feed the trolls with incorrect and generally fanboyish comments, you might get your head bit off with facts.

            • Goty
            • 8 years ago

            So wait, we’re the trolls and you’re the fanboy? I must have missed something…

            • NeelyCam
            • 8 years ago

            You did miss something. You thought the troll(s) were morons, but actually fanboys are the morons here.

          • maxxcool
          • 8 years ago

          random comment here

            • NeelyCam
            • 8 years ago

            Your counter-argument is rather weak.. FYI: I’m neither racist nor inbred.

            [url<]http://en.wikipedia.org/wiki/Racism[/url<] [url<]http://en.wikipedia.org/wiki/Inbreeding[/url<]

            • maxxcool
            • 8 years ago

            comment 2

            • NeelyCam
            • 8 years ago

            Well, I didn’t say “farm, boy” – I said “farm league, boy”. By “boy” I meant young, inexperienced people who need to play in the farm league to gain some experience before they can play in the NHL. I used ‘monkey’ because of sirmonkey’s screen name.

            Any racism in my comment is completely unintentional, and in this case, caused by the reader reading it wrong. Trust me – I’m not racist, homophobic, xenophobic or anything else like that. I accept everybody.

            • maxxcool
            • 8 years ago

            ok! gotcha i see that.. apologies! i will retract it…

            • JoshMST
            • 8 years ago

            Everyone except somebody who likes AMD! 😛

            • NeelyCam
            • 8 years ago

            Even I have my limits 😛

          • willyolio
          • 8 years ago

          So AMD doesn’t “own” the idea of a high-speed serial link but Intel “owns” the idea of factory overclocking and partial shutdowns for power savings. gotcha.

            • NeelyCam
            • 8 years ago

            Correct. Intel invented power gates and turbo. AMD did not invent high-speed serial links.

      • can-a-tuna
      • 8 years ago

      Stupidest comment of the day award goes here. You beg us to set you a new minus record.

        • NeelyCam
        • 8 years ago

        Ok, and which part of my comment is factually incorrect, if I may ask?

          • Triskaine
          • 8 years ago

          All of it, of course.

            • NeelyCam
            • 8 years ago

            Of course.

          • cegras
          • 8 years ago

          Can you provide any evidence? You realize the burden of proof is on you. Please show us that there existed no prior art to turbo, power gating, and SMT before the commercial introduction of these technologies by intel.

          By corollary, the underlying logic of your comment is flawed, because intel is just as guilty of ‘copying’ from AMD as AMD is of ‘copying’ from intel.

            • NeelyCam
            • 8 years ago

            To respond to your first part: no – the burden of proof on me only covers proving that AMD copied Intel (this is trivial – it’s clear AMD is using very similar Power Gating and Turbo schemes as Intel, and the new announcement on SMT points to AMD also copying that approach). I don’t need to prove that prior art didn’t exist. The burden of proof that prior art exists (in order to invalidate the claim that Intel “owns” the idea of PowerGate/Turbo/SMT) belongs to those defending AMD in this case. Standard US patent stuff.

            To respond to the second part, Intel allegedly copying something from AMD in the past is irrelevant to this case. This case is focused on if AMD copied something from Intel or not. Does AMD plead guilty to using a scheme similar to the Power Gating scheme used in Intel’s Nehalem (and beyond) class of CPUs, and a scheme similar to the Turbo scheme used in Intel’s Nehalem (and beyond) class of CPUs?

            • cegras
            • 8 years ago

            No on both counts. We are not talking about US patents, only your hurt sensibilities, since intel themselves have not sued AMD yet for IP breach. Your first argument is incomplete because to prove AMD copied intel, you must prove intel is the originator of all three of these concepts and that no prior art came before.

            Secondly your hurt sensibilities are ironic in the sense that intel copies AMD as much as AMD copies intel. It’s sad because you wish to raise intel to some sort of pedestal over AMD, but in the end it turns out they are both just as unoriginal.

            • NeelyCam
            • 8 years ago

            One cannot [b<]prove[/b<] without doubt that no prior art existed without a complete analysis of everything that has been invented in the world since the beginning of time. It is completely unreasonable to require that I complete such a practically impossible task to settle this debate. A much more feasible approach is to require you to show that prior art did exist - you can do this with a single example. [b<]Sorry, but the burden of proof really needs to be on you.[/b<] Until you find an example of prior art, we need to assume that AMD copied Intel instead of someone else. On the second part, it would be ironic only if my sensibilities were in hurt as you assumed, but you assumed wrong. My original post was a response to various AMD fanboy comments about Intel copying AMD, and how Intel's behavior is evil, shameful, unacceptable and wrong, and that Intel needs to be ridiculed for doing that. My post is pointing out exactly what you are saying here that both parties copy stuff, suggesting that AMD fanboys should realize AMD doesn't really deserve the pedestal any more than Intel does.

            • cegras
            • 8 years ago

            [quote<] It is completely unreasonable to require that I complete such a practically impossible task to settle this debate.[/quote<] Don't make claims without evidence to back them up. [quote<]A much more feasible approach is to require you to show that prior art did exist - you can do this with a single example. Sorry, but the burden of proof really needs to be on you. [/quote<] This shows a complete misunderstanding of how statements, debate, and logic works. [quote<]Until you find an example of prior art, we need to assume that AMD copied Intel instead of someone else.[/quote<] Do you know anything about architecture cycles? I don't know much, and I suspect you do not know much either. Until you do, and therefore know the timeline with regards to conception of the original idea, let's not throw around unverifiable statements like copying. [quote<]My original post was a response to various AMD fanboy comments about Intel copying AMD[/quote<] lol what, none of the parent posts even mention copying. You have a persecution complex.

            • NeelyCam
            • 8 years ago

            [quote<][quote<]It is completely unreasonable to require that I complete such a practically impossible task to settle this debate.[/quote<] Don't make claims without evidence to back them up.[/quote<] My claim was that AMD copied Intel (or someone else) on Power Gate and Turbo, and there's plenty of evidence to back that up. Your claim seems to be that it's possible Intel copied someone else first on Power Gate and Turbo, and you show zero evidence of this. But somehow you think I should go out there and start searching for undeniable proof that Intel did NOT copy someone else first. No. My claim stands. Your claim has nothing to back it up. YOU go find evidence. [quote<][quote<]A much more feasible approach is to require you to show that prior art did exist - you can do this with a single example. Sorry, but the burden of proof really needs to be on you.[/quote<] This shows a complete misunderstanding of how statements, debate, and logic works.[/quote<] No - sorry, but it actually shows your lack of understanding of how logic works. See my comment above. [quote<][quote<]Until you find an example of prior art, we need to assume that AMD copied Intel instead of someone else.[/quote<] Do you know anything about architecture cycles? I don't know much, and I suspect you do not know much either. Until you do, and therefore know the timeline with regards to conception of the original idea, let's not throw around unverifiable statements like copying.[/quote<] Why do you think "architecture cycles" matter here? Are you claiming that AMD and Intel both got the idea of Turbo and Power Gates somehow independently around the same time, and Intel just happened to implement it faster? [quote<][quote<]My original post was a response to various AMD fanboy comments about Intel copying AMD[/quote<] lol what, none of the parent posts even mention copying. You have a persecution complex.[/quote<] None in this thread, but plenty in previous threads. Previous AMD fanboy comments have made them easy targets, and this piece of news gives a troll like me ammunition to attack them.

            • Fighterpilot
            • 8 years ago

            “None in this thread, but plenty in previous threads. Previous AMD fanboy comments have made them easy targets, and this piece of news gives a troll like me ammunition to attack them”.

            Admitting you are trolling the TR forums for your own amusement triggers the SW Ban hammer.
            Have a nice day.

      • dpaus
      • 8 years ago

      Bumping this up for more voting, because I’m curious to see if the TR Code Monkeys anticipated negative votes running to 3 digits….

        • cygnus1
        • 8 years ago

        I recall from the beta testing, they coded it for many digits. can’t recall how many exactly though

          • NeelyCam
          • 8 years ago

          I wonder if it fails at 256…

          EDIT: since it covers both negative and positive numbers, maybe it fails at 127 or 128 (the limits of two’s complement for 8 bits)..?

          Near 63/64 now… and there isn’t much news over the weekend. Let’s see what happens.

            • dpaus
            • 8 years ago

            …or Y2K?

            • TaBoVilla
            • 8 years ago

            or IPv4?

      • indeego
      • 8 years ago

      Dude, I’m jealous. -58! You are a God amongst trolls. /me bows.

        • NeelyCam
        • 8 years ago

        I’m humbled. I didn’t expect anything like this.. I expected something like -10 max..

        I guess when the negs hit a critical mass of a sort, everyone wants to chip in. This is kind of interesting to me; a social study on “human behavior in social networking environment in presence of challenges to predisposition”. If I started trolling in a more clinical way, I could turn this into a Ph.D. thesis…

      • BoBzeBuilder
      • 8 years ago

      Loser of losers.

        • NeelyCam
        • 8 years ago

        Not following…

      • TaBoVilla
      • 8 years ago

      bet we can do -100 by this point. Where’s SSK by the way? He’s probably still doing his Africa pro-social charity work or somethin’

    • AMDguy
    • 8 years ago

    I think this is just formal confirmation of the continuation of what AMD has been doing for the last several years. They’ve been moving incrementally in the direction of advanced GPU compute abilities, and this is just more of the same.

    It’s good to hear AMD is continuing with this policy, and it’s a logical move since it not only opens up the GPU to more non-graphics compute tasks, but it also opens the GPU to more gaming-related compute tasks, like physics.

    So in a way this is not really news. We’ll find out which iterations of the new architecture get which new compute features, which is the way it’s been for the last several years anyway.

    • phez
    • 8 years ago

    I thought accessing system memory was too slow to be used like this?

    • crsh1976
    • 8 years ago

    This is the next-gen architecture and not necessarily the next line of video cards (Radeon 7xxx, unless they change the naming scheme), correct?

      • Game_boy
      • 8 years ago

      Yes. I’m sure he wouldn’t have said “years” if he meant the 28nm stuff that’s already taped out (and waiting for TSMC to get its process up to launch).

    • BobbinThreadbare
    • 8 years ago

    Would letting the GPU run C++ code let developers code down to the metal more than they can now? Could we theoretically see optimization approaching console levels?

      • Game_boy
      • 8 years ago

      No, there will still be too many types of hardware and combinations of hardware to allow developers to target a specific hardware profile and optimise for it like they do consoles.

      Developers could optimise heavily for the HD 6950 (say), but it would run terribly on other AMD cards, and not at all on Nvidia, Intel or pre-R600 AMD cards. Which would mean instant failure.

      No, consoles are here to stay like they have been since the 1980s. And it doesn’t matter because graphics aren’t what make a game fun. Even Nintendo has fallen to the omg-HD and 3D graphics hype recently, and that’s sad.

        • lilbuddhaman
        • 8 years ago

        To be fair, Nintendo NEEDED to update. HD is becoming the norm nowadays in ANY format.

        Even down to the smart phone, which is coming to the point of Wii level graphics. On a phone.

          • Game_boy
          • 8 years ago

          Wii and DS sold amazingly and profitably, with their highest-selling games (Wii Sports, Fit and NSMB Wii) having PS1-era graphics or less. This shows the majority of the market are not purchasing games for graphics, and Nintendo could have made more money by keeping up that strategy. It also allowed much smaller development budgets, enabling bigger margins.

          The PS3, X360 and PSP went for strategies relying on HD graphics and a necessarily smaller number of triple-A games. Well, MS and Sony are still in net negative earnings for their whole careers (MS lost more on the Xbox than they’ve made with the 360 yet, Sony lost more on the PS3 than they made on the PS1+PS2 combined), and third parties have had terrible financials for a few years now.

          In short, the HD strategy was money losing, while making games with low graphics but mass appeal was profitable. Why are Nintendo adopting the losing strategy of their competitors? Why are they refusing to make 2D Mario (NSMB Wii and DS: 20m copies) when 3D Mario sold much worse (SMG: 8m, SMG2: 6m, SM64DS: 6m).

            • xeridea
            • 8 years ago

            PS1-level graphics? PS1 graphics were pure crap; the N64 was SOOO much better. It was probably similar to the Xbox. “HD” graphics don’t necessarily look better; some are kind of an eyesore to look at.

            • Game_boy
            • 8 years ago

            No. I mean Wii Sports, Wii Fit and NSMB Wii did not require more than PS1 graphics to do.

            The Wii is clearly about Xbox level if you look at SMG. But SMG did not drive nearly as many console sales, unit sales, or profits as NSMB Wii. Why did it get the sequel and the 3D iteration on 3DS while NSMB gets nothing?

            • BobbinThreadbare
            • 8 years ago

            It’s a little difficult to compare games sales numbers like that because Wii Sports was included with the console for so long.

            People might have been buying the console to get Wii Sports, or something else and just happened to get it.

            • Game_boy
            • 8 years ago

            Look at the Japan figures where it wasn’t bundled. 32% attach rate, about the same as the other two games.

        • BobbinThreadbare
        • 8 years ago

        I don’t think consoles are going away (but I do think there is going to be a convergence of PCs and consoles).

        Let me ask a slightly different question. Would this allow for more optimization than just using DX or OpenGL APIs, or will it just be for running different kinds of code?

          • Arag0n
          • 8 years ago

          With Windows 8, for me there’s only one piece missing, and following the Microsoft line, I wouldn’t be surprised if by 2020 this scenario is real:

          You have your phone in your pocket, and once you arrive home you plug it into your desk base and it “transforms into your desktop”. Later you go to bed and bring along a touch display that is wirelessly connected to your desktop, which you can use as a tablet to play games or surf the net. Then in the living room you have a dummy docking system, also wirelessly connected to your “desktop” base, with connectivity for Kinect and Xbox 360 controllers and hooked up to the TV: that’s your game system. The next day you go back to work, and your phone, dropped into a case, becomes your laptop.

          In all of this you only own one device and multiple dummy extensions.

          Your laptop, desktop, console and phone are the same device. You may think it’s crazy right now, but look at how the phone market has developed. And think about this article and GPU capabilities for general processing. If you could harness the processing capabilities of a processor like the quad-core ARM Tegra 3, it would be near or over the highest Core i7… and I will assume that a Core i7 is all you need for computing now and 8 years later…

      • bcronce
      • 8 years ago

      C style code is only useful for computing, not rendering. There are many many aspects of the GPU that don’t care about compute power. We are moving towards a compute only setup though. I could see GPUs becoming nothing but massive number crunchers with a video output.

        • BobbinThreadbare
        • 8 years ago

        Thanks.

    • maxxcool
    • 8 years ago

    This sounds like suicide. Yes, I get the drive to get to a truly unified GPU/CPU that an OS would not see any difference in, where all the code would go to the proper execution unit for accelerated processing….

    but develop that on the side..

    siphon R&D costs from retail..

    build, distribute ala-intel developer kits and get feedback…

    Don’t just completely destroy and alter your #1 or #2 cash cow on a gamble. Remember Nvidia’s CineFX? It was a train wreck… it ran normal DirectX games horribly because of the generality of the core and instruction set, and it put them back almost 2 entire product cycles.

    This just sounds horrible. As always I will hold final judgement for retail reviews… but… wow….. I have a BAD feeling about this.

      • Game_boy
      • 8 years ago

      If you read the slides, a key motivation is to make the drivers simpler to code. You might lose 5% in theoretical performance by going more general with the shader units, but that will be more than made up by not being so uneven in driver optimisation. It won’t be amazing in the five games people bench and then suck at everything else like current cards.

      Nvidia’s already done this and worse, like adding an L2 cache to Fermi. It cost them quite a bit in area and power but they don’t seem to be dead yet.

      • dragosmp
      • 8 years ago

      This isn’t quite the same as Nvidia’s CineFX. That was an evolution from a simpler architecture in GF4 to a more complex one in GF5. This looks more like a complicated 5/4-way VLIW-to-scalar transition; in a way, as Game_boy said, this should make things easier for the driver team, but only after the first drivers are out – the beginning is always tough.

        • maxxcool
        • 8 years ago

        Yeah, you are spot on. I just feel that they are in a more precarious position than normal, with them mid-launch of new mobile parts, new thin client parts, and soon-to-be new server and high-end desktop parts. It just seems crazy to me to revamp pretty much everything at once… too much opportunity to fail, and that makes me a sad panda.

    • l33t-g4m3r
    • 8 years ago

    When does this come out? If anything, this announcement makes me want to hold out on making any new purchases. I was considering a 6970 to replace my 470, but not as much now.

      • Stargazer
      • 8 years ago

      Probably going to take a while.
      I’m pretty sure they’re not talking about Southern Islands (it’s the next gen cards, but not the next gen *architecture*).

      • xeridea
      • 8 years ago

      I would just wait for Southern Islands then wait and see what this brings. It is most likely not coming anytime soon.

    • flip-mode
    • 8 years ago

    Last time the Radeon had a dramatic architecture change was R600. Hopefully this will go better.

      • Waco
      • 8 years ago

      I miss my 2900XT 1 GB. That thing ran hot and loud but damn was it fast without FSAA. 😛 Kept it all the way till I got my 4870X2 (which I also still have).

      • BobbinThreadbare
      • 8 years ago

      Once the 3000 series came out things looked pretty good.

        • swaaye
        • 8 years ago

        Yeah, R600 was time-to-market critical: get a DX10 part out the door ASAP. They really fast-tracked that tech and it was all sorts of non-optimal. RV770 was a real pipe cleaner, with its massive increase in density, much smarter use of memory bandwidth and better texturing hardware.

    • gbcrush
    • 8 years ago

    Oh man, how soon is this new architecture coming out?

    I wonder if I can make my Radeon 7500 last until then 😀

      • xeridea
      • 8 years ago

      That so reminds me of my 9800 Pro which totally dominated everything else at the time (other than the XT), played any game smooth as glass, and held its own for several years.

        • SebbesApa
        • 8 years ago

        I’m with you on that!

        • gbcrush
        • 8 years ago

        I’m hanging my head in shame.

        I was trying to remember the first ATI card I purchased. I kept remembering 7, something, 7 something. Thanks to your post, I remember now that it was a 9700 pro (RAWK!).

        The 7500 was…heck, I don’t know where that came from 🙂

          • khands
          • 8 years ago

          The [i<]very first[/i<] ATI cards were the 7000 series, we're about to go full loop.

            • Elsoze
            • 8 years ago

            The first of that labelling, yes, but there were ATI cards before the original 7000 series 😉

            • cynan
            • 8 years ago

            Yes. I’ve heard they were all the rage.

            • A_Pickle
            • 8 years ago

            YOU DON’T EVER GO FULL LOOP

    • TheEmrys
    • 8 years ago

    Can’t wait to read the more in-depth article.

    • Damage
    • 8 years ago

    Tasty. And a little bit Fermi?

      • thesmileman
      • 8 years ago

      I was going to say these sound like the majority of things Nvidia has added with their Fermi line.

      I just hope they don’t have the same problems and delays Nvidia had with Fermi when changing the core architecture.

        • Game_boy
        • 8 years ago

        I’m waiting for Charlie’s article where he calls this a revolution that will earn AMD billions and cure cancer.

      • Triple Zero
      • 8 years ago

      A little bit Fermi you say? As in implementing a major graphics architecture update on a new process technology? (GF100 on 40 nm). As I recall that didn’t work out so well for NVIDIA. Apparently AMD has decided to pay no heed to NVIDIA’s recent experience.

      Just sayin’…

      EDIT: damn typos!
