Rumor: AMD may pull Vega GPU forward for an October launch

AMD roadmaps from the Capsaicin event in March showed its mid-range Polaris GPUs launching in mid-2016, while the presumably higher-end Vega GPU was supposed to follow sometime in 2017. Rumors flying around the web now indicate that AMD has advanced Vega's launch to mid-October after Nvidia's GTX 1080 announcement.

A lot of sites are reporting the news, but TR did some deep digging to figure out what exactly is going on. We traced the source of these rumors to a couple of posts in German site 3DCenter's forums, where user R.I.P. says he expects the new chips to launch alongside Battlefield 1 in October and cites a post on SemiAccurate claiming the new GPUs have already been shown behind closed doors.

Those are slightly shaky grounds on which to report this news at all, but popular tweaking software AIDA64 was updated with a new beta version today, and the release notes list "preliminary GPU information for AMD Vega 10 (Greenland)," an addition that might lend some credence to the rumor. AMD's roadmap suggests Vega will be equipped with HBM2 RAM. The expense of that RAM may indicate that Vega will be a higher-end part that serves as AMD's answer to the Pascal GPU in the GTX 1080. If those rumors are true, we'll just have to wait until October and see what AMD has in store.

Comments closed
    • vargis14
    • 3 years ago

    If Polaris's best version isn't at the higher end of the mid-power cards and doesn't at least come close to a GTX 1070 in performance, I think AMD will hemorrhage potential sales. The late-ish release is bad enough.

    • ronch
    • 4 years ago

    I guess this somehow dampens expectations that Polaris was meant to steal the performance crown from Pascal.

      • ImSpartacus
      • 4 years ago

      Who had these expectations?

      We’ve had several official amd sources that effectively confirmed the strategic placement of their 2016 Polaris lineup months ago. If necessary, I’m happy to list them out (as I’ve done like three times on tr alone).

      Polaris simply doesn’t have a high end part and absolutely no one should be surprised.

        • ronch
        • 4 years ago

        Oh, I just knew someone would react right away to my post. 🙂

        Look man, don’t kid yourself. We’ve been stuck with 28nm for SO LONG that I bet more than a roomful of people were hoping Polaris would be a high-end part. After all, we want Nvidia and AMD to bring it all out after years and years of being stuck. The arrival of 14/16nm is like opening the floodgates, prompting the two companies to put out their best, to have the best product on the market.

        After all, weren’t people all over the place wondering which to get, Polaris or Pascal? Weren’t people advising those who were/are itching to upgrade their graphics card to wait and see how Polaris and Pascal will do against each other? Again, that’s EACH OTHER, not what they’d do to each company’s previous generation.

          • JonnyFM
          • 4 years ago

          A lot of us don’t buy the fastest card on the market. A lot of us buy the best card at a particular price or power consumption point. Nvidia’s current Pascal offerings are off the table for me because even the 1070 takes too much juice and way too much money.

            • ronch
            • 4 years ago

            I fully understand AMD’s decision to target the midrange market first because that’s where the meat is, but I really believe they also need a halo product to improve their brand perception as a purveyor of not just great-value products, but the BEST products. IIRC Lisa Su herself (or some other AMD guy) once said that they don’t wanna be seen as a value player anymore. They have been perpetually perceived as the cheaper alternative and they want people to ditch that thought. They no longer wanna be the Suzuki of the semiconductor world; they wanna be one of the more prestigious and more respected names that doesn’t play on pricing or excuses to sell their stuff. And how can you do that?

            Nvidia understands this, and I bet AMD does too, which is why they’re now scrambling to push Vega’s launch forward because they seem to have miscalculated their competitor’s next move with Pascal. That’s kinda strange though, since it’s quite easy to predict Nvidia: they usually go big and all-out. Why didn’t AMD do that in the first place too? Polaris is probably a modular design that’s easy to copy and paste to make the biggest die and pixel-pusher possible on a given process node.

            • _ppi
            • 4 years ago

            Most people have a budget. When you’re buying a car, what brands are you looking at?

            If all AMD does now is release cards that own the <$350 space (at least till nVidia moves in), they will do perfectly fine. And it will give them a chance to dodge nVidia.

            AMD’s strategy is also much more conservative in that they start with small chips, which will have lower defect rates and all. Once the technology matures, they can release bigger chips.

            • Ninjitsu
            • 3 years ago

            [quote<] When you're buying a car, what brands are you looking at? [/quote<] [spoiler<]Maruti-Suzuki![/spoiler<]

          • DoomGuy64
          • 3 years ago

          Dude, you’re annoying. Tone it down please.

          Polaris and Pascal are products competing in different market segments. Esp polaris 11. Completely ridiculous to compare them, unless they’re being offered at the same price point. Vega wasn’t “pulled forward” either. The gpu roadmap clearly shows it being late 2016 or early 2017.

          I also find it funny that it’s taken this long for Nvidia to release an 8GB midrange card, while AMD has been offering them since the 390. It’s so obvious Nvidia has been playing its customers with bare-minimum memory configurations and missing DX12 features for planned obsolescence. I’m glad I skipped Maxwell for Hawaii because of that, as future games are going to start using more than 4GB of RAM and supporting async. 980 Ti users got it worse than 970 users, that’s for damn sure. I’ll never buy a Ti product again after the 780 driver fiasco, and I’d recommend everyone else do the same. It’s not worth it in the slightest.

    • chuckula
    • 4 years ago

    Here’s another rumor, and it’s nowhere near as positive: [url<]http://www.guru3d.com/news-story/polaris-validation-failed-might-launch-in-october-now.html[/url<] I'm not saying I believe the Polaris failed validation & October launch part without more substantive proof. However, I'm inclined to believe the rumors that Polaris won't be on display at Computex since it's in less than 3 weeks and there would be plenty of buzz already about public displays of Polaris by this point.

      • Mat3
      • 4 years ago

      The article says: “…some reports say Polaris 10 can’t hit 850 MHz reliably…”

      Sounds like a load of crap to me.

        • chuckula
        • 4 years ago

        Send your samples over to TR for an early review.

      • Leader952
      • 3 years ago

      If AMD has to do a respin on the silicon, then the October time frame lines up.

    • Chrispy_
    • 4 years ago

    Well, the biggest thing to happen to desktop graphics in the last few years is adaptive sync, and right now the Nvidia cards are at a huge disadvantage with their overpriced, potentially obsolete tech.

    The tech may even be superior to the VESA standard, but their refusal to conform to standards makes Vega’s release even more interesting for people waiting primarily on a screen upgrade and holding off until the silly G-Sync/Freesync war ends.

      • ImSpartacus
      • 4 years ago

      Yep, that’s me. My 290 does fine at 1440p60 and probably will continue to do so for another year or two.

        • killadark
        • 4 years ago

        + Been rather satisfied with my 290 also, and I kinda got a good chip: it does 1170 on the core, so lucky me 😀

      • the
      • 4 years ago

      The big thing here isn’t AMD but rather Intel’s adoption of FreeSync. We have yet to see Intel chips ship that support it. That is when nVidia will feel real pressure to adopt the VESA standard.

        • HisDivineOrder
        • 4 years ago

        I agree. As long as Intel’s support is just promises in the wind, AMD’s supporting an “open standard” means little. It might as well be proprietary like nVidia’s if no one else is using it.

    • Tristan
    • 4 years ago

    Vega won’t help AMD. The GTX 1080 is the perf king, and Vega may reach that level and even surpass it, but at the expense of a large chip and HBM2, probably with water cooling and high energy consumption. The situation is similar to CPUs, where big, power-hungry Bulldozers can’t match Intel cores on perf and power.

      • Ninjitsu
      • 4 years ago

      Yes, and all the benchmarks agree. Every single one of them. It is true.

      [spoiler<]That was sarcasm.[/spoiler<]

        • BurntMyBacon
        • 4 years ago

        It’s true. Every benchmark I’ve seen says the same thing.

        Like this one: [url<]http://www.Not.actually.a.link.com[/url<] and this one: [url<]http://www.Nothing.to.see.here.net[/url<] and this one: [url<]http://www.Are.you.getting.the.point.yet.org[/url<]

          • ronch
          • 4 years ago

          Hey! I clicked on those links and it says the sites can’t be reached.

          /smirk

            • Mr Bill
            • 3 years ago

            still under NDA 😉

      • jts888
      • 4 years ago

      Nvidia isn’t really competing with AMD at this point, they’re competing with their installed Maxwell base.

      The 1080 clearly isn’t really 1.5x or whatever stronger than 980 Ti in every regard, but they can further that perception by pushing software vendors to cater to Pascal’s exact balance of resources in a way that leaves Maxwell bottlenecked.

      I’m not exactly a fan of Nvidia’s M.O. on this, but it’s a lot better to shift the industry towards being more general compute heavy than just choking everything with absurd levels of tessellation like last generation.

        • the
        • 4 years ago

        It isn’t even that. The big thing to get people to upgrade this generation is advent of consumer VR. Pascal has a few optimizations specifically for it that’ll give it a clear edge over Maxwell and previous generations of cards.

      • BurntMyBacon
      • 4 years ago

      [quote<]The situation is similar to CPUs, where big, power-hungry Bulldozers can’t match Intel cores on perf and power.[/quote<]

      Bulldozer/Piledriver were built on a 32nm process. Ivy Bridge/Haswell were built on 22nm finfet processes. Broadwell/Skylake were built on 14nm finfet processes. Just moving to a new node without making many architectural changes will buy you a smaller die, and if done correctly it will also buy you faster, lower-voltage transistors. That gets you smaller, faster, cooler, and/or lower power depending on how you choose to use the gains. Architecture can make up for a lot of this, but there is a reason Intel tries to stay ahead on the process side of the house. Also, according to Intel, finfet is worth about a process node by itself.

      Piledriver is still competing (well, existing) with Skylake as AMD’s "high end." This puts it at a 2 or 3 effective process node disadvantage depending on how you look at it. Bulldozer is at a 1 or 2 effective node disadvantage to Ivy Bridge. By contrast, both Vega and Pascal are fabricated on roughly equivalent 14nm/16nm finfet processes. I don’t see this as even close to the same situation as Bulldozer. Besides, if nVidia is already pushing the maximum silicon size that can be fabricated with GP100, how exactly is AMD going to get "big" by comparison?

        • Tristan
        • 4 years ago

        Nehalem on 45nm and Sandy Bridge on 32nm are much better than Bulldozer.

      • Matrikz
      • 3 years ago

      Tristan, you do realize that both Polaris and Vega are on 14nm dies, NOT the current 28nm dies? Same goes for the Zen APUs and CPUs.

      • Matrikz
      • 3 years ago

      Also, if reports are correct, the RX 480 is barely scratching 100W (even though its TDP is 150W).

      • JayJayGolden
      • 3 years ago

      …You do realise that with the RX 480, AMD is suddenly doing really well on the power usage front.
      And I also hope you realise that Bulldozer and its nephews were big and power hungry because Intel monopolised the more modern and advanced production technologies, so AMD was stuck on 32nm for a long time, up until recently when they successfully sued Intel?

    • kuttan
    • 4 years ago

    I think AMD underestimated Pascal GTX 1080 performance, expecting Polaris 10 to compete just fine. A roughly five-month delay for AMD's reply to the GTX 1080 is disappointing to people who are eager to buy a high-end GPU this year. But for those looking for a mid-range graphics card, Polaris 10 seems like a good little chip, consuming a tiny amount of power, likely with good 1440p performance, and costing around $300.

      • muxr
      • 4 years ago

      > I think AMD underestimated Pascal GTX1080

      I find this hard to believe, honestly. Initial impressions, Nvidia themselves stating 25% faster than the 980 Ti, and the Founders Edition pricing really aren’t what I would call an unexpected giant leap, especially considering the shrink to finfet.

      The clock speeds are something that caught everyone off guard, so that’s the one area where I’d say Nvidia made a huge leap. But if those rumors of a 10-25% performance boost over the 980 Ti at $699 are true, it’s more likely that AMD has now realized they can charge more for Vega and can afford to start production sooner, even if yields are still low for the bigger Vega chips.

        • nanoflower
        • 4 years ago

        I agree that the performance boost of the 10×0 family shouldn’t be a surprise to anyone. The solution Nvidia has to speed up VR might be a surprise, but that solution only works for that market. Holding off the release of Polaris for five to six months isn’t enough time to do anything significant in terms of improving performance, so I see this rumor as completely wrong. Perhaps it came about because of the rumor that Vega’s release date was moving up, and someone thought the whole Polaris line would release in October?

        • JonnyFM
        • 4 years ago

        Now this makes a lot of sense. Yield determines cost, and launch is when cost and planned price point meet. If AMD has suddenly learned that they can get away with a higher price point, that gives them an opportunity to move the schedule up. Bonus: having their halo product out months before Nvidia does.

    • sluggo
    • 4 years ago

    Vega? Unless AMD can figure out how to make it burn a quart of oil per week, they should think about renaming it.

      • Meadows
      • 4 years ago

      I don’t think stars work by burning oil.

        • JustAnEngineer
        • 4 years ago

        A family in itself:
        [url<]https://www.youtube.com/watch?v=f-w4g90EnTU[/url<]

      • JMccovery
      • 4 years ago

      Am I the only one that got this reference?

        • Pitabred
        • 4 years ago

        Nope. I think the audience here is mostly younger, though

          • JMccovery
          • 4 years ago

          Thing is, I was born way after the Vega was 86’d, but I did know people that (painfully) owned one.

            • nanoflower
            • 4 years ago

            I didn’t recognize the Vega name but just the reference to burning a quart of oil a week made it obvious it was referring to a car.

    • clocks
    • 4 years ago

    Wasn’t Vega always a possibility in 2016? Look at the PowerPoint graph above: it clearly shows Polaris mid-year and Vega in Q4 of 2016. This rumor therefore doesn’t seem all that shocking.

    See this:
    [url<]https://youtu.be/nZ7VCOgMZ9I?t=10m58s[/url<]

      • cygnus1
      • 4 years ago

      I read that graph as Polaris Q4 2016 or even Q1 2017 and Vega as mid 2017

        • BurntMyBacon
        • 3 years ago

        Depends on whether the dates mark the beginning of the year or the midpoint of the year. I read it as the beginning, as that jibes better with my mathematics courses, and I’m more of a math person than a marketing person. That said, I’m pretty sure it wasn’t the math guys who made this chart.

          • cygnus1
          • 3 years ago

          It definitely takes a different skillset to read between the marketing lines. I generally just go with the worst possible interpretation. My logic is, if it’s not the worst case, they would have explicitly pointed it out. Vagueness is so they can get hopes up but still have wiggle room.

    • Welch
    • 4 years ago

    I sure hope this is true, but I have serious doubts. It would give AMD the perfect time frame for those Christmas video card splurges and would match up with a Zen release. I always thought it a bit odd that they were holding out for a Q1 release; it’s not like they have seas of last-gen cards they can peddle easily for the holiday season.

    • anotherengineer
    • 4 years ago

    “Rumor: AMD may pull Vega GPU forward for an October launch”

    I think we should start calling it Radeon Technologies Group?

      • nanoflower
      • 4 years ago

      Or DRTG for Damage’s Radeon Technologies Group.

    • Meadows
    • 4 years ago

    I don’t think AIDA64 is tweaking software as much as it is diagnostics and benchmarking.

    • UnknownZA
    • 4 years ago

    And just in time for Pascal Titan to be released.

      • muxr
      • 4 years ago

      Followed by a 1080 ti right after.

    • muxr
    • 4 years ago

    When it comes to a die shrink, it’s always smart to wait and see what the other company does. H2 2016 is going to be pretty exciting.. Pascal, Big Pascal, Polaris, Vega and Zen.. all dropping around the same time.

      • Ninjitsu
      • 4 years ago

      Big Pascal probably won’t, Vega is a rumour.

        • muxr
        • 4 years ago

        edit:

        actually never mind.

      • Voldenuit
      • 4 years ago

      [quote<]When it comes to a die shrink, it's always smart to wait and see what the other company does. H2 2016 is going to be pretty exciting.. Pascal, Big Pascal, Polaris, Vega and Zen.. all dropping around the same time.[/quote<] No idea why you're being downvoted, that's sensible advice for consumers. Unless ppl think you mean AMD waiting for nvidia, or vice versa. Which, given the development time, foundry deals, AIB deals, stockpile building, RAM procurement etc involved in getting a new part to market, is not something that can be turned on a dime. And I don't think you meant the latter.

        • muxr
        • 4 years ago

        Yup, I meant it as genuine consumer advice from the shopper’s perspective.

        • ImSpartacus
        • 4 years ago

        I think the problem was that Big Pascal, Vega, and Zen weren’t rumored to drop until 2017, making his statement misleading (though obviously Vega was on the fence before, and now there’s a chance it might launch in 2H 2016).

      • Piiitabyte
      • 4 years ago

      Screw whoever down-voted your post. +1 from me.

    • Bensam123
    • 4 years ago

    We’ll have to wait till October? Polaris is launching before then… I assume we’ll have some idea of what Vega is when Polaris launches.

      • kalelovil
      • 4 years ago

      Probably not. It would be a pretty bad marketing move: ‘Here’s Polaris, but hear about these additional features our product in 4 months will have’

    • tviceman
    • 4 years ago

    Nvidia’s release of GP104 does not in any way impact Vega’s development. If Vega were ready in October, AMD wouldn’t have artificially delayed it until January with their roadmaps.
    Stupid rumor is stupid.

    • Ninjitsu
    • 4 years ago

    From HWiNFO:
    [quote<]
    - [b<]Enhanced AMD Polaris support.[/b<]
    - Changed MVDDC to VDDCI on AMD Bonaire, Hawaii, Tonga.
    - Added support of SMSC SCH5627 and SCH5636 HW monitors.
    - [b<]Added NVIDIA GeForce GTX 1080.[/b<]
    [/quote<]

    Back in Feb:

    [quote<]
    - Added preliminary support of AMD Ellesmere, Baffin, Greenland.
    [/quote<]

    • PrincipalSkinner
    • 4 years ago

    I just upgraded my imaginary GTX 1080 to a nonexisting Vega 10.
    Very happy with power consumption.

      • sweatshopking
      • 4 years ago

      +1

        • EndlessWaves
        • 3 years ago

        +√-1

      • ronch
      • 4 years ago

      I would think your imaginary 1080 consumes 0W while your nonexistent Vega 10 also eats 0W. You’re happy to get the same power consumption? Doesn’t seem like it was worth upgrading for.

        • PrincipalSkinner
        • 4 years ago

        Saying more would breach NDA.

          • rephlex
          • 4 years ago

          Non-existent disclosure agreement?

        • tipoo
        • 4 years ago

        He said he’s happy with power consumption, that’s all. The concept of power consumption itself, not the consumption of the cards.

      • Wirko
      • 4 years ago

      You mean, you don’t feel overly exhausted after the operation?

      • The Egg
      • 3 years ago

      What are you doing with the old GTX 1080? I’ll give you $900 in monopoly money.

        • sweatshopking
        • 3 years ago

        I didn’t know you lived in Canada

    • Visigoth
    • 4 years ago

    Doesn’t really matter. Anything powerful they can muster will be crushed by GP100.

    • brucethemoose
    • 4 years ago

    This raises the question: why would it be delayed in the first place? Lack of cash to launch it sooner?

      • ImSpartacus
      • 4 years ago

      HBM is in short supply and we are on new processes.

        • muxr
        • 4 years ago

        Nvidia asking $699 for a 980 replacement could have something to do with it too. Means AMD can afford to produce Vega on lesser yields.

          • ImSpartacus
          • 4 years ago

          Yeah, I’m wondering if AMD wants to get in on those margins. If these rumors are true, then I bet AMD is figuring that Nvidia will still be selling the 1080 at effectively $700 in October, but that by January 2017 (Vega’s original date), $600 1080s would gain sufficient availability.

          I bet AMD and Nvidia are counting on some pent-up demand that would “release” all at once after these launches. So they can’t sit around and expect demand to be strong in the middle of 2017.

            • Pitabred
            • 4 years ago

            I’m part of that demand. I’ve been planning to buy a new GPU this fall for a while now. My dual 280Xs are getting long in the tooth, and I want to replace them with a single GPU that’ll be faster than the two together; the 980/980 Ti just hasn’t been enough of a step up to be worth it. Watching this next gen with great interest, from both manufacturers.

      • kalelovil
      • 4 years ago

      Perhaps AMD were hoping to sell through their remaining Fiji supplies in the intervening time, but Pascal’s positioning has made Fiji a very difficult sell.

        • ImSpartacus
        • 4 years ago

        I think that was their exact rationale.

        Fiji didn’t exactly sell well, and it’s not the kind of product that you’d want to fire-sale.

        There’s probably a good reason why Nvidia stopped gm200 production and amd doesn’t want to be left holding the bag.

      • faramir
      • 4 years ago

      HBM2 availability (well, lack thereof).

      Allegedly (!) AMD’s sole HBM2 source, SK Hynix, won’t be launching HBM2 mass production before the second half of 2016, whereas NVidia’s other source, Samsung, has already launched HBM2 production and should have mass-production quantities by mid-2016 (= soon).

        • the
        • 4 years ago

        There was a press release from Samsung on the 19th of January this year stating they just started HBM2 mass production. The implication with that is that nVidia should be receiving their first few batches of HBM2 stacks right around now for bonding to the interposer.

        This lines up with nVidia shipping the first DGX-1 systems in July, as they still have several more steps to produce the final product, as well as needing additional testing/validation to meet that deadline.

      • HisDivineOrder
      • 4 years ago

      I imagine it’s the difference between doing a 7970 GHZ Edition-style launch and a 7970-style launch.

      One launches in quantity. The other promises wide availability but actually delivers availability only for Twitch streamers and Twitter magnets.

      Because, remember, AMD doesn’t think reviewers are important.

    • Kretschmer
    • 4 years ago

    Does it really matter when a paper launch occurs?

    Edit: Oh, the source of this is a random forum poster’s guess based on another forum poster’s fabrication. Must be a slow news day.

    I’d be surprised if Vega was available to build with on March 31, 2017, let alone October 1, 2016. AMD’s release surprises are rarely positive, and they’re operating with fewer and fewer resources.

      • ImSpartacus
      • 4 years ago

      Part of me agrees that this is just slow-news-day behavior. However, the other part of me appreciates tr tracing all of these sources and so cleanly explaining the rumor. That’s a luxury we don’t often have with rumors so I think it’s important to state that appreciation.

        • RAGEPRO
        • 4 years ago

        Hey, we appreciate that you appreciate it. It took me a bit to track down, especially since I don’t speak German. I do speak Internet, though.

    • jts888
    • 4 years ago

    sheer madness.

    everyone gets hyped over 1080 before any real benchmarks hit the streets, and a week later rumors surface about “the next big thing” shipping a quarter or so early?

    I wasn’t planning on upgrading until both 1080/1070 and 490(X)? reviews were out, but this will make me wait a little longer if the gossip sticks.

    Maybe a rumor about consumer GP100/GP102 next???

      • DancinJack
      • 4 years ago

      This is how the industry works. Not sure why you’re so up in arms about it.

        • jts888
        • 4 years ago

        It’s mostly hyperbole, but it does feel like being whipsawed around after the Polaris announcement conceding no HBM2 cards this calendar year.

        I was all set for this year to be about low-wattage versions of last year’s cards.

        I’m still skeptical about the 1080’s potential given its sub-980 Ti bandwidth, so 384-bit GDDR5X or anything HBM2 coming out makes me happier.

          • Ninjitsu
          • 4 years ago

          Bandwidth on its own means nothing, ask AMD.

            • jts888
            • 4 years ago

            Yes, but if the 1080 has a more “ideal” balance of compute to bandwidth, surely the 980Ti/Titan X were defective designs by straying so far from that mark?

            • Ninjitsu
            • 4 years ago

            Nope, they’re using different architectures (Maxwell vs Pascal). They probably didn’t need much more, or they would have gained very little performance for disproportionately more power. If I remember correctly, the 980 and 980 Ti were stretching GDDR5 to its limits.

            Fiji (Fury cards) was a “defective” (rather, unbalanced) design that had tons of bandwidth and expensive HBM that didn’t lead to much of a performance increase over Hawaii/Grenada, as some of the design was unchanged (same number of ROPs, for example).

            The question that chip designers have to answer is: “can the core be fed data fast enough for most/all expected workloads?” – if yes, then excess bandwidth means little to nothing.

            • ImSpartacus
            • 4 years ago

            Yeah, I think that’s why Vega 10 is rumored to only have 4096 SPs (from the AMD engineer’s Linkedin). A mixture of architectural improvements, process-fueled clock speed increases and a more balanced overall design could pretty easily combine to make a formidable GPU even if the SP count is static.

            • jts888
            • 4 years ago

            Fiji bottlenecks aside, it remains to be seen exactly what workloads see the advertised improvements moving from Maxwell to Pascal.

            I would be shocked if the new ALU throughput and additional special acceleration logic actually make any game released at Maxwell’s peak dramatically faster under Pascal.

            I get more of the feeling of the shift to Maxwell, where suddenly Kepler cards were bottlenecked in every new TWIMTBP title by tessellator throughput.

            • auxy
            • 4 years ago

            Even if Pascal had absolutely no architectural refinements, it still runs an insanely high clock rate. If you increase your clock rate by nearly 50% and don’t get a large performance improvement, you have done something very, very wrong. (*´ω｀*)

            • jts888
            • 4 years ago

            A modern GPU might have 3 or 4 MB of L2 cache but need to touch several gigabytes of memory for every frame rendered, so bandwidth for VRAM matters in a way it normally doesn’t for host memory.

            I’m sure that HairWorks 2.0 or whatever will be chock full of middleware shaders with higher compute:bandwidth consumption, but I think most observers are strongly overestimating GP104’s benefit to legacy software.

            (unless you think that TWIMTBP shaders were just leaving excess bandwidth on the floor for some reason)

      • ImSpartacus
      • 4 years ago

      We just got a new process. Were you expecting Nvidia & AMD to just patiently drop new products? They want to fill out their lineups as fast as feasibly possible.

    • chuckula
    • 4 years ago

    A more extensive list from that AIDA announcement:

    preliminary GPU information for AMD Polaris 10 (Ellesmere)
    preliminary GPU information for AMD Polaris 11 (Baffin)
    preliminary GPU information for AMD Vega 10 (Greenland)
    preliminary GPU information for nVIDIA GeForce GTX 1080 (GP104)

    So… there’s wiggle room for interpretation.

    Oh, and while some versions of Vega undoubtedly do use HBM2, Vega is also the general name of a GPU architecture, and I wouldn’t be shocked if there’s some flavor of Vega that won’t be locked to HBM2 support. So there’s that.

      • brucethemoose
      • 4 years ago

      A separate GDDR5(X) bus wide enough for Vega would take up a lot of die space, wouldn’t it? Hawaii’s IMC is huge compared to Fiji’s, and I don’t think they can use the same pins like the 4870/4850 did.

        • chuckula
        • 4 years ago

        [quote<]A separate GDDR5(X) bus wide enough for Vega would take up a lot of die space, wouldn't it?[/quote<] Sure it would. That version of Vega wouldn't have to worry about the HBM2 controller, though, because it wouldn't be the "big" Vega. There are two different silicon flavors of Polaris, so why can't the same thing happen for Vega? I'm not saying that there's 100% a non-HBM Vega coming; we don't have enough information. I'm just saying that it's never been written in stone that Vega is [b<]only[/b<] an HBM2 GPU architecture.

          • brucethemoose
          • 4 years ago

          The last time AMD made a “variant” of a GPU design was with the 4890, right? Even the 4870/4850 used the same silicon.

          Polaris 10 and 11 are different-sized GPUs, as far as I can tell. Making two GPU designs (Vega HBM and Vega GDDR5X) that are identical bar the memory controller is something AMD and Nvidia don’t do very often… All that extra R&D for taping out two similar chips at once might not be worth it.

          Now, if there really are 2 sizes of Vega, that’s a different story.

            • ImSpartacus
            • 4 years ago

            Vega 10 & Vega 11 have been confirmed. There are two Vegas.

            [url=http://www.anandtech.com/show/10145/amd-unveils-gpu-architecture-roadmap-after-polaris-comes-vega<][quote<]Meanwhile AMD has also confirmed the number of GPUs in the Vega stack and their names. Weโ€™ll be seeing a Vega 10 and a Vega 11.[/quote<][/url<] We don't know which is the bigger (or even if one is bigger than the other), but we know that there are two of them and they are called Vega 10 and Vega 11. And if you believe the rumors that Polaris 10 only has 2304 SPs (via various leaks, consistent with confirmed performance) and Vega 10 has 4096 SPs (from AMD Engineer's LinkedIn), then there's a lot of room in there for a Vega 11 at around 3072 SPs.

            • brucethemoose
            • 4 years ago

            Ah, I missed that bit of info.

            Yeah, one will probably ship with a GDDR5X bus while the bigger one gets HBM.

        • ImSpartacus
        • 4 years ago

        Not really. A 384-bit bus like Tahiti’s would provide 480 GB/s with only 10 Gbps GDDR5X, more than any other consumer GPU except Fiji (512 GB/s).

        If they get ridiculous with a 512-bit bus and 10 Gbps, or 384-bit and 12 Gbps, then we have plenty until at least 2018.
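
        The peak-bandwidth figures in this thread follow directly from bus width times per-pin data rate. A minimal sketch of that arithmetic in Python, using only the illustrative configurations mentioned above (not confirmed specs for any Vega part):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# Hypothetical GDDR5X configurations from the comment above.
configs = [
    ("384-bit @ 10 Gbps GDDR5X", 384, 10.0),  # ~480 GB/s, a Tahiti-width bus
    ("512-bit @ 10 Gbps GDDR5X", 512, 10.0),  # ~640 GB/s
    ("384-bit @ 12 Gbps GDDR5X", 384, 12.0),  # ~576 GB/s
]

for name, width, rate in configs:
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")
```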

      • ImSpartacus
      • 4 years ago

      If speculation is true and Vega 10 sports 4096 SPs while Vega 11 is something closer to 3072 SPs, then 10 Gbps GDDR5X across a 384-bit bus would be plenty of bandwidth (480 GB/s) for Vega 11. That would effectively be Fiji-tier bandwidth, which we already know to be a generous amount.

      Hell, there has been speculation that it’s so generous that even Vega 10 wouldn’t need more than 2 stacks of HBM2 (512 GB/s), which is exactly what Fiji had. Put four 8Gb die per stack and you’ve got a very marketable 8GB of VRAM. And limiting yourself to only 2 stacks would probably shrink the interposer, which helps on costs.
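
      The same arithmetic applies per stack for HBM. A rough sketch assuming the commonly cited figures of a 1024-bit interface per stack, roughly 1 Gbps per pin for first-generation HBM, and up to 2 Gbps per pin for HBM2 (illustrative numbers only, not confirmed Vega specs):

```python
# Per-stack HBM bandwidth in GB/s = (1024-bit interface / 8) * per-pin data rate in Gbps.
def hbm_stack_bandwidth_gbs(data_rate_gbps: float, interface_bits: int = 1024) -> float:
    return interface_bits / 8 * data_rate_gbps

# Fiji: four first-generation HBM stacks at about 1 Gbps per pin.
fiji_gbs = 4 * hbm_stack_bandwidth_gbs(1.0)            # 4 * 128 = 512 GB/s

# Hypothetical two-stack HBM2 configuration at 2 Gbps per pin.
two_stack_hbm2_gbs = 2 * hbm_stack_bandwidth_gbs(2.0)  # 2 * 256 = 512 GB/s

# Capacity: four 8 Gb dies per stack -> 4 GB per stack; two stacks -> 8 GB of VRAM.
capacity_gb = 2 * 4 * 8 / 8

print(f"Fiji (4x HBM1):       {fiji_gbs:.0f} GB/s")
print(f"2x HBM2 (2 Gbps/pin): {two_stack_hbm2_gbs:.0f} GB/s, {capacity_gb:.0f} GB VRAM")
```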

        • BurntMyBacon
        • 4 years ago

        [quote<]... 10 Gbps GDDR5X across a 382-bit bus would be plenty of bandwidth ...[/quote<] [joke]I think there is a bigger problem than bandwidth with that bus width. Namely, how do you divide it up. 384bit, on the other hand, should do nicely.[/joke]

          • ImSpartacus
          • 4 years ago

          You haven’t heard of their new 191-bit controllers? Get with the times, dude.

            • BurntMyBacon
            • 4 years ago

            Guess I’m getting old.

      • Welch
      • 4 years ago

      Yeah, didn’t they state that both Polaris and Vega were GDDR5X/HBM2 compatible to allow for hitting specific price points? I’d think in the $500 range, they would find it hard to justify the use of HBM2. At $700+, the HBM2 makes more sense and the cost would be justified.

    • NeelyCam
    • 4 years ago

    Lol. So this came from a S|A forum member.

      • Milo Burke
      • 4 years ago

      The last bastion of truth in the universe!

        • chuckula
        • 4 years ago

        The source of the ULTIMATE PROPHECY:
        [quote<]I would ask the question in a more general sense. Will GPUs exist in 5 years. The answer there would be no.[/quote<] The Prophet Demerjian, May 11, 2010 [six years ago to the day!!] Thank you for proving to us that these GPUs are all an illusion sage one! [url<]http://www.semiaccurate.com/forums/showpost.php?p=48497&postcount=10[/url<]

          • DancinJack
          • 4 years ago

          w.o.w.

            • maxxcool
            • 4 years ago

            B.O.B.

          • faramir
          • 4 years ago

          Demented Charlie … nothing to see there folks, move along!

        • Pez
        • 4 years ago

        That’s WCCFTech 😉

      • bwcbiz
      • 4 years ago

      Slow news day. Waiting for the nVidia embargo dates to expire is painful for everyone…

        • Milo Burke
        • 4 years ago

        Particularly when: [url<]https://techreport.com/news/30096/report-founders-edition-geforce-cards-will-be-first-to-ship?post=978066[/url<]

        • nanoflower
        • 4 years ago

        I hope it’s the May 17th date that has been rumored. A week is enough time for people to travel home and do a thorough test of the 1080 and write it up.

      • jihadjoe
      • 4 years ago

      JeffK needs to review more stuff.
