AMD says its Vega cards will launch “over the next couple of months”

At its Financial Analyst Day last week, AMD created a buzz by announcing the availability of its Radeon Vega Frontier Edition pro graphics card, set to launch late next month. In subsequent discussions, the company remained cagey about when it expected consumer cards powered by Vega GPUs to arrive. AMD CEO Lisa Su shed slightly more light on this issue today at J.P. Morgan's Global Technology, Media, and Telecom Conference.

In an interview with J.P. Morgan analyst Harlan Sur, Su said that would-be Vega customers will see "the enthusiast gaming platform, the machine learning platform, [and] the professional graphics platform very soon thereafter," referencing the second-half-of-June launch window for the Frontier Edition. Su went on to say that Vega cards across all of those product lines would be launching "over the next couple of months."

That statement touched off a firestorm of speculation on Reddit today after a commenter misconstrued Su's vague statement as a definite confirmation that the Radeon RX Vega would arrive "a couple months" after the Vega Frontier Edition. In reality, Su simply established a launch window for the range of Vega-powered products, which so far comprises the Radeon Vega Frontier Edition, the Radeon Instinct MI25 accelerator, and at least one Radeon RX Vega consumer card.

Given the counsel-approved vagueness of Su's statement, any Vega product could launch at any time over the next couple of months, and that's all we know. More definite information, if there is any to be shared, will presumably have to wait for Computex next week.

Comments closed
    • moose17145
    • 3 years ago

    I see a lot of people bashing AMD for betting on HBM2 and it not working out as they had hoped, while talking about how NVidia has been rockin’ it… but I feel that a lot of people are forgetting the GeForce FX series…

    FX was, indeed, a while in the past now… but what I am getting at is, NV made a gamble on an architecture, and it didn’t really work out for them the way they were hoping.

    Intel made a bet on Netburst back in the day… it did not work out for them in the end (although the Northwood cores on the 130nm process really were not that bad for their time; I would know, I had one… it turned into a pretty good workhorse of a CPU for a good 7 years before a PSU failure took the system out).

    And for the love of god let’s not even get into 3DFX… (so sad… what could have been…)

    Where I am going with this… ALL tech companies have to make gambles early in the game while a product is still in the early stages of development and then hope it works out. AMD is no different… it is easy to talk smack today knowing what we know… but what did things look like 3 years ago when this decision was likely made (maybe even further back than that)?

    Honestly, I still find AMD to be ten times more impressive and interesting than either Intel or NVidia… really… think about it… they are standing toe to toe against not just Intel, but also NVidia… it really kinda is a case of David and TWO goliaths… and AMD, somehow, manages to always get within 90-98% of either of those two in the price brackets they choose to compete in. Given the limited resources and money AMD has compared to either of those two, you have to admit, even if they do not have the BEST product every time… it is pretty impressive… and oftentimes they do have the better product when it comes to price vs. performance. Not saying they do not have total flops for products, and that they have not made bad decisions in the past (which EVERY company has had…), but I feel that a lot of times people are really hard on AMD without seeing what they have achieved with such limited resources and what they have given the world that has enabled so much of what we have today (for instance look at their patent portfolio, which even Intel makes EXTENSIVE use of).

    It sucks seeing product delays and the like… I get it… but… let’s just see where this goes is all I am saying… maybe the wait will end up being worth it. For example… what if the Athlon 64 had been delayed a month way back in the day because they had to work out a few things… back then everyone would have been going “OMG THE END IS NEAR FOR AMD!!!” But we look back today and could pretty safely say “yea okay… that month delay was worth it… they spanked Intel something good lololol”

      • Pancake
      • 3 years ago

      Good comments and reminders of the past. Yes, the FX was a horror-show. A Homer-mobile of epic proportions full of weird bits stuck together…

      But it’s important not to anthropomorphise what are just large businesses doing what they do in the glorious battlefield called capitalism. There are no heroes or villains.

      Critical opinions can be made without being labelled as “bashing”. AMD has a lot to answer for with new releases that don’t make money.

      If you look at my comment history I correctly predicted the RX480 would be a commercial mediocrity if not an outright failure. As a product it was completely finessed by the 1060/70. With nowhere to go but lower prices to move product, and with high production costs, it resulted in no profits or even losses, when profits are what’s required to drive new product development. You have to be critical of that. So, great. There was a sales uptick. Staff are fed, rent paid and the lights are kept on. But AMD is living from pay cheque to pay cheque. Not good.

      Combine that with AMD’s first foray into HBM with Fury and all the uncomfortable rumblings we hear about HBM2 and what can we predict? I expect consumer Vega to be smashed by 1080 while costing more to manufacture and have a whole host of driver issues. There may be an underclocked Vega Nano which will perform slightly below a 1070 to find an opportunity in that niche of performant half-sized cards. Fury redux.

      Is that incompetence? Well, let’s not go there. But I know who’s immensely competent. Insanely great master strategist. The man called Jen-Hsun Huang.

      • hansmuff
      • 3 years ago

      I have been buying Intel and NVIDIA for years now. My last AMD CPU was a ThunderBird 1.4GHz, my last AMD GPU was a 5850. They were both purchases I think of fondly, but the 5850 then had more driver problems than I liked so I jumped to the NV camp for a while, and am now on a 1070 GPU with a 7700K CPU.

      I’m very happy with AMD’s performance on Zen. That’s not an architecture that seems stretched thin or somehow just hitting some mark. It performs very well for the first iteration. They got MCM packages working, HEDT in the works and coming out soon.. this is impressive.

      Their more recent drivers seem to be very much improved over the 5850 days, and I’ll be happy to give it another shake. I want a 4K solution next, one that performs well and has “fine wine technology” in it along with good drivers. I’m sure AMD can deliver that.

      On the CPU front I just want a lot of cores at decent clock and IPC and I’m sure Zen+/2 whatever their next iteration is, will deliver in spades.

      Good times, indeed.

      • Kretschmer
      • 3 years ago

      AMD has taken the HBM gamble twice in a row, and lost the market each time.

      While I’m impressed with what AMD can do on a shoestring budget, they haven’t been competitive at the high end for years, now. Going back to Netburst and the GeForce FX line is really odd in 2017. This is a very different market.

    • ronch
    • 3 years ago

    I hope Vega will quickly be available with GDDR5 support. HBM2 would be cool but if it isn’t really ready for the mainstream yet then I could live with GDDR5. AMD surely knows a thing or two about the meat of the market, right?

    • Demetri
    • 3 years ago

    If you haven’t noticed, we’re in the middle of another cryptocurrency surge (Ethereum) that is most profitable when mined with AMD GPUs. Prices for Polaris10 based cards have skyrocketed, and supply is drying up. If this keeps up until RX Vega is released, gamers may not be able to get their hands on it for months.

    I’m thinking it might be a good time to sell my RX 470 for more than I paid, and wait and see what happens over the summer.

      • strangerguy
      • 3 years ago

      Crypto has always been a “damned if you do, damned if you don’t” game for AMD. Miners do no favors to AMD in terms of pricing and supply volatility, but AMD can’t be picky anymore and needs to sell every card at every opportunity to maintain their much smaller market share. NV wisely ignored crypto right from the beginning.

      • Krogoth
      • 3 years ago

      It is another Crypto-currency mania that will burst hard in the near future. I doubt Vega will be lucrative for it by the time it comes in volume.

        • Firestarter
        • 3 years ago

        For most people reading this, buying the crypto directly will be a better choice than mining anyway, assuming you want to get your grubby mittens on some. The people actually making money mining are the ones with very cheap electricity.

    • USAFTW
    • 3 years ago

    Meanwhile:
    [url<]https://www.techpowerup.com/233589/could-this-be-the-nvidia-titan-volta[/url<]

    • Tristan
    • 3 years ago

    no need to rush, they are in the middle of the teasing phase

    • Jigar
    • 3 years ago

    AMD should delay Vega till they have Navi ready, release both as a complete new line-up, Vega taking the mid-range and Navi taking the high end… Start over fresh, AMD.

      • K-L-Waster
      • 3 years ago

      During which time, the competition is presumably doing nothing at all?

      If you wait too long to release, NVidia will have a full line of Volta out and established in the market place.

      • ronch
      • 3 years ago

      AMD can’t afford to stand still, releasing nothing, but Polaris seems to be merely a stopgap, so they need to release true next-gen parts, not rebrands or tweaked versions of the same architecture. It’s true Polaris has a non-zero IPC and efficiency advantage over earlier GCN parts, but it’s also obvious they need to do much better on the efficiency front, not just so they can crank clocks higher but also to get more mobile design wins.

    • ronch
    • 3 years ago

    Can’t Vega work with other memory types like GDDR5? I’m sure AMD can easily swap out the memory controller block. Chips are pretty modular these days, after all. And they really need to overhaul their lineup. I’m sure they’d love to if they could without spreading themselves too thin.

      • synthtel2
      • 3 years ago

      Easy and cheap might not be the same thing. It probably wouldn’t be too much trouble to make that switch in whatever software they’re using for layout, but the price of getting another chip set up for production is a different story.

      • the
      • 3 years ago

      They could, but it would require the creation of new masks for the foundries, new packaging, a larger die, and it would have to go through validation again. In other words, the act of swapping out the logic block isn’t difficult, but that doesn’t change the other aspects of processor design and manufacturing, which are still going to be expensive.

        • ronch
        • 3 years ago

        Yeah but it’s gonna be a mainstream part. It’ll sell by the boatloads. Surely AMD can make room for that?

          • synthtel2
          • 3 years ago

          That gets into quite a bit of Vega 11 speculation. My bet is still on half of Vega 10, for 32-ish CUs, 32 ROPs, and one stack HBM. 32 CUs and 256-bit GDDR5X should also be a perfectly valid option with similar performance. Doing both isn’t much of an option, as the cost to start production on the new chip will dwarf that of just using HBM everywhere. Which variant gets picked seems to mainly be a tradeoff between an earlier release date and better perf/W. I interpret the continued delay (and V10-first launch pattern) to mean that they think the perf/W advantage is more valuable.

          I can think of two cases where the perf/W would be particularly useful. First, Apple would love it – ’nuff said. Also, if they think they can start to approach Nvidia’s perf/W, that last little bit may start to be particularly valuable. If they’ve got performance equivalent to a 130W Nvidia chip, dropping power use from 160W to 130W strikes me as more valuable than a 190W to 160W change. Obviously Vega has to be awesome for this to make any sense, but considering the DSBR and other big architecture changes, I do still think it’s worth a bit more optimism than most people are giving it.

          It’s also possible that they wouldn’t be able to launch much earlier with GDDR5X. There’s a lot of new tech here, and I’m sure their driver team is keeping very busy. Maybe the GDDR5X version could have been ready to launch right now, but with drivers at 80% of the quality they’re trying for. That could erase a lot of hardware advantage in one fell swoop, at least in the eyes of launch-day reviews.
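
          For a rough sense of how those bandwidth options compare, here is a back-of-the-envelope sketch in Python (the bus widths and per-pin rates below are assumptions for illustration, not confirmed Vega 11 specs):

          # bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8
          def bandwidth_gbs(bus_width_bits, gbps_per_pin):
              return bus_width_bits * gbps_per_pin / 8

          options = {
              "one HBM2 stack, 1024-bit @ ~2.0 Gbps":  bandwidth_gbs(1024, 2.0),  # ~256 GB/s, RX 480-class
              "256-bit GDDR5X @ ~10 Gbps":             bandwidth_gbs(256, 10.0),  # ~320 GB/s, GTX 1080-class
              "two HBM2 stacks, 2048-bit @ ~1.9 Gbps": bandwidth_gbs(2048, 1.9),  # ~486 GB/s, Vega 10-class
          }
          for name, bw in options.items():
              print(f"{name}: {bw:.0f} GB/s")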

    • Delta9
    • 3 years ago

    I just read a short article on another site that claimed AMD is using Infinity Fabric in Vega. This makes things so much more interesting. Obviously this means AMD could use multiple smaller GPU dies on a single chip. Considering how huge these chips are getting, this would give them the opportunity to stitch together higher-yield small dies to create a monster chip. It would also allow them to scale their lineup from top to bottom without making 3-4 different variations of chips. The potential use for this interconnect in CPUs, GPUs, and APUs makes all the sense in the world.

      • synthtel2
      • 3 years ago

      They seem to be hyping something like that up for Navi. They’ve tried before to focus on the midrange and use Crossfire for the high-end, with predictable results at halo-level, but this should work a lot better than Crossfire.

        • Delta9
        • 3 years ago

        “Previously Raja Koduri has stated that with changes to Moore’s Law and the sheer difficulty of manufacturing large GPU products, the industry needed to ‘get past Crossfire’ and that the ‘economies of the smaller die’ would become much more important.” That is what the article above states about the direction they are going.

          • ptsant
          • 3 years ago

          The bet on multiple small dies may or may not pay off.

          Indeed, AMD insists on multi-GPU cards and may have a much better multi-GPU strategy. On the other hand, nVidia bet on huge monolithic dies (Volta!) that they can (and do) sell at obscene prices.

          I am curious to see how this plays out, but AMD really doesn’t need another strategic mistake.

            • Beahmont
            • 3 years ago

            AMD’s problem if it bets the farm on NUMA strategies is that others can also implement NUMA strategies and achieve similar benefits. If AMD can’t win on a chip-to-chip basis, they are only buying themselves an increasingly small window of time at increasingly large cost.

            • ptsant
            • 3 years ago

            Well, in AMD’s defense, they were the first to launch crossfire over PCIe and also have much more experience in “dual cards” (295X2, Pro Duo etc). nVidia doesn’t care or need to do this, but that doesn’t mean it’s trivial. Especially on the software side of things (scheduling, synchronisation etc).

            What I’m saying is that IF for some reason nVidia is severely constrained by process tech (chips can’t get much larger than Volta without a process change), then they will have to catch up to AMD.

      • NTMBK
      • 3 years ago

      I want to see if they can use Infinity Fabric to communicate with a Ryzen CPU, over the PCIe slots. The “lanes” on a Ryzen CPU are already configurable into different modes (Infinity Fabric for multi-socket in servers, or extra PCIe)… could they be configured at runtime? Detect that both ends of the PCIe link are capable of communicating with Infinity Fabric protocol, and switch over.

    • torquer
    • 3 years ago

    *insert comment that doesn’t gloss over AMD’s struggles or artificially inflate their successes*

      • southrncomfortjm
      • 3 years ago

      *insert witty comeback that supports most of your arguments, but points out a small, mostly negligible flaw that I just couldn’t help but point out. Also insert opinion asserted as a fact.*

    • TheSeekingOne
    • 3 years ago

    Mainstream Vega is [b<]not ready yet[/b<] because HBM memory supply is not sufficient for mainstream Vega. They could've used a 384-bit wide memory bus for Vega, which would result in 384 GB/s of memory bandwidth coupled with Samsung's GDDR5 memory chips. If their claims about Vega's delta color compression improvements are true, then 384 GB/s of bandwidth should be more than plenty for a GTX1080-caliber Vega product. The problem with AMD is that it's run by highly incompetent people. The most notable one to mention is that diversity hire, Raja Koduri.

      • brucethemoose
      • 3 years ago

      The “problem” is they had no idea HBM2 would be in short supply years ago.

      They made a bet, and lost. That’s not really incompetence.

        • Krogoth
        • 3 years ago

        Nvidia faces the same issues with their GP100 and GV100 cards, but the difference is that Nvidia made them low-volume, high-margin products catered to the professional market, so the supply issues aren’t as big of a deal.

        • TheSeekingOne
        • 3 years ago

        HBM is still quite a bit more expensive than GDDR to use. You don’t just make such risky bets when you’re losing money and market share. Using a wider memory bus (10 32-bit memory controllers instead of 8) would’ve made much more sense if Vega has delta color compression performance similar to Pascal, which is a claim AMD has made in many of their presentations that we’ve seen to date. After all, the GTX1080 operates on 320 GB/s of memory bandwidth.

        let’s be honest here: AMD, with their Chinese design team, can’t compete with Nvidia. They just can’t pull off what Nvidia has been doing since Maxwell.

          • Redocbew
          • 3 years ago

          Glad to see you’re not just another hater.

          • sweatshopking
          • 3 years ago

            How many times does this a-hole get to insult ethnic groups before somebody does something about it?

            • TheSeekingOne
            • 3 years ago

            Whom did I insult? Didn’t AMD lose both Jim Keller and Phil Rogers when they appointed Raja Koduri to lead their graphics division? Was it worth it?

            edit:
            Confirmed: sweatshopking is a raging SJW.

            • cegras
            • 3 years ago

            Why do you have to point out ‘diversity hire’ and ‘chinese team’ when it would be enough to simply say that the leadership and design isn’t cutting it?

            • TheSeekingOne
            • 3 years ago

            [quote<]Why do you have to point out 'diversity hire' and 'chinese team' when it would be enough to simply say that the leadership and design isn't cutting it? [/quote<] Because these are facts. This dangerous trend of hiring more "minorities" at all costs needs to stop. Companies should hire people based on qualification and experience, not race and gender.

            • Redocbew
            • 3 years ago

            Please keep your politics to yourself. Not trying to be a back seat moderator, but you can state your own personal opinion as fact inside the R&P section of the forum just as easily as you can here on the front page. It’s just noise to the rest of us who would rather not sift through this crap when looking to discuss the technology itself.

            • Krogoth
            • 3 years ago

            The practice has more to do with reducing labor costs. This isn’t the 1960s-1970s anymore.

            • Anonymous Coward
            • 3 years ago

            I bet this guy Raja has a lot more value than yourself. Trolls are pathetic.

            • Jeff Kampman
            • 3 years ago

            For the record, TheSeekingOne has been banned for this string of comments. We don’t welcome this kind of language on The Tech Report.

            • K-L-Waster
            • 3 years ago

            Hear hear.

            • Waco
            • 3 years ago

            Thank you!

            • Redocbew
            • 3 years ago

            That’s the sound of a troll being flattened by the ban hammer as surely as if they were caught in a herd of…

            … wait for it …

            BUFFALO!

            • chuckula
            • 3 years ago

            Take that Esperanto!

            Oh wait.

            • davidbowser
            • 3 years ago

            No love for Esperanto? Kia domaĝo. (What a shame.)

            • techguy
            • 3 years ago

            Talk about heavy-handed. For all you know he’s an engineer who lost his job to an H1B visa holder who does the work for half the pay. I think in such a circumstance he would be justified in holding the opinion he espoused.

            • torquer
            • 3 years ago

            I don’t know what he originally said since things got edited a few times, but I question banning someone for holding a perhaps unpopular and/or politically incorrect view. If he used outright offensive language and then edited it out later, then I’ll withdraw my concern.

            Otherwise, do we *really* want to applaud censorship? Where do we draw the line? I totally agree politics should stick to a more appropriate forum, but be careful what speech you cheerfully silence. Today it may be a troll, tomorrow it might be you. That is why ugly speech is what requires protection, not beautiful speech.

            • Redocbew
            • 3 years ago

            In this case, yes we do. It’s been a number of years since I had any “official” duties in an online forum, and again, I don’t want to be a back seat moderator, but that dude had to go.

            In general, saying “censorship is bad” is a bit like saying “there are no stupid questions”. It’s a nice idea and a good general rule that works great with most reasonable people. However, not everyone on the Internet is reasonable.

            • torquer
            • 3 years ago

            Still doesn’t change the fact that it’s a slippery slope.

            We still let neo-nazis parade in this country. We let the Westboro Baptist Church protest soldier funerals. We let them because we have to let them. Freedom of speech has its limits of course, but those limits generally concern public safety (yelling fire in a theater), not offense. Saying “censorship is bad” may come across as naive, but no more so than some unspoken assumption that freedom of speech means you are protected from offense.

            Then again it’s a privately owned website so everyone here has to decide whether a fairly vague and moving bar of what is considered “offensive enough” is acceptable to the readership or not.

            • Anonymous Coward
            • 3 years ago

            I dispute this “fact” you assert, that this banning represents a slippery slope.

            The aggressor in this case was throwing insults against a guy and two ethnicities. These insults furthered no conversation. It’s perfectly possible to have called the targets unqualified without asserting that it had anything to do with race. It’s also perfectly possible to have mature conversations about the effects of biasing hiring in favor of select groups, and about the effects of hiring cheap outsourced labor.

            Insults are worse than mere noise, they actually prevent otherwise reasonable people from cooperating.

            • torquer
            • 3 years ago

            If you’d read my original post, I stated I didn’t read what dude had said originally as it was edited multiple times before I saw it. Furthermore I stated my free speech statements were based on the now current content of his posts, not whatever may have been said and removed prior to me reading it.

            • Anonymous Coward
            • 3 years ago

            Do you assume my failure to agree with you is because I have not digested your comments? I think it’s more likely that I read your comments, and continued to disagree.

            Specifically, I don’t think this banning event is part of a slippery slope.

            • torquer
            • 3 years ago

            I’m not assuming anything, nor do I give the furry crack of a rat’s behind whether you or anyone else agrees with me or not. Simply clarifying my point because your response suggested you were challenging something I didn’t actually say. A common thing on the internet.

            Similarly, it is a common argument tactic to muddy the waters of an original point. My thesis simply concerns what constitutes “offensive language” sufficient for a warningless ban of a commenter. I realize everyone here took this “diversity hire” comment as some wildly racist statement. Maybe it was, maybe it wasn’t. Maybe this guy is the biggest jerk on the face of the Earth and maybe he’s not. We don’t know, we will likely never know.

            Whether this guy deserved to get a warningless ban is up to everyone to figure out for themselves. I’m sure those who disagreed with his statements will cheer and say “hell yeah!” For me, I always find it a little icky to see voices being silenced, even those with whom I vehemently disagree.

            “With the first link, the chain is forged. The first speech censured…the first thought forbidden…the first freedom denied – chains us all, irrevocably.”

            • Airmantharp
            • 3 years ago

            Supporting comment of the day.

            • superjawes
            • 3 years ago

            WBC and neo-nazis are only protected on public property. No private entity is required to give them a platform. It’s not a matter of “acceptable” or “offensive enough” wrt readers. It’s a matter of TR deciding what kind of content–including comments–appears on their website.

            In this case TR decided that they did not want that kind of person here, so the hammer fell. It’s not the first time I’ve seen something like this happen, and I doubt it will be the last.

            • torquer
            • 3 years ago

            As I already stated, very clearly I think, this is TR’s website and they make the rules. As far as the readership making a decision, that means page views. This is a business after all, and as in all things if people are happy or unhappy with the product/management of said business they will vote with their wallets/clicks/page views/donations/whatever.

            So yes, my statement is factually correct. Everyone currently here has chosen to patronize this business and everyone makes that decision anytime they patronize any business. Should that change, they can always move on and patronize another.

            • nerdrage
            • 3 years ago

            Freedom of speech does not apply to speech conducted within private entities. TR can censor whatever they want, for any reason or no reason at all. Freedom of speech [b<]ONLY[/b<] applies to [i<]government[/i<] censorship.

            • adampk17
            • 3 years ago

            This is true. But private entities usually won’t want to stray too far away from the government standard.

            • Froz
            • 3 years ago

            That’s simply not correct. Business practice is far from that. The typical forum or Facebook page of any product is moderated, and there is no freedom of speech there. Which doesn’t mean people can’t say what they want, but I doubt any corporation would leave up a post showing hate speech etc. See how companies reacted to YouTube showing their ads next to questionable content videos:

            [url<]https://www.theverge.com/2017/3/24/15053990/google-youtube-advertising-boycott-hate-speech[/url<] (that's just the first result from Google)

            • adampk17
            • 3 years ago

            Not wanting your advertisement to run in front of a potentially offensive video on YouTube is not the same thing as banning what that offensive video says. Not even close.

            • adampk17
            • 3 years ago

            Agreed 100%.

            You don’t have to like what people say but you absolutely must defend their right to say it. Anything else is a slippery slope. This goes for offensive comments as well.

            • Froz
            • 3 years ago

            That’s the way it works in US, but not in (most of) Europe and we still have freedom and democracy. Absolute freedom of speech is not only impossible to achieve, but is not even something we should really try to aspire to. Your freedom to wave your hands ends where my nose begins. It’s similar with freedom of speech.

            • adampk17
            • 3 years ago

            Right, there are a few limitations, such as the always mentioned yelling “fire” in a crowded theater. But the limits should be kept as minimal as possible. Simply offending people should not be a consideration.

            • TheMonkeyKing
            • 3 years ago

            You forget that this is not street corner, USA. This is a business, public or private makes no difference. Censorship here does not equate to the US government standard for public speech.

            The business here retains the right to use, modify or remove any material on their website, regardless of whether it comes from themselves or from our commentary.

            • destroy.all.monsters
            • 3 years ago

            No they wouldn’t. Follow the money – that’s management. You don’t blame the guy that has your old job – he’s trying to put bread on the table too. You blame the person responsible.

            • techguy
            • 3 years ago

            A bit of reading comprehension goes a long way… He did blame the management.

            • destroy.all.monsters
            • 3 years ago

            Only those of the races they purportedly disliked.

            • K-L-Waster
            • 3 years ago

            He blamed the “diversity hire”.

            It’s entirely possible to criticize Koduri’s management without getting into his ethnicity.

            • NeelyCam
            • 3 years ago

            I find it curious that he would call Koduri a “diversity hire”… Indians/East Asians are definitely NOT a “minority” in engineering.

            Global competition will squeeze salaries and positions in the USA. There is a continuously increasing amount of talent in India, Korea, China, Taiwan etc., and cost of living may be significantly lower in those places than in the USA. Software/hardware engineering work can be done pretty much anywhere, and companies will naturally move the work where the cost is lower.

            I’ve been looking at this debate about american jobs and H1B visas etc. My view is that I think American engineers need to accept the fact that global competition has caught up with them, and the trend towards layoffs and offshoring and lower salaries will only continue. They can cry foul (which won’t do much), unionize (and see how far that gets them), or accept and adjust.

            It’s hard to compete with that engineer in India who can do the job as well, or almost as well, for half the salary.

            • Anonymous Coward
            • 3 years ago

            Even if a person has found a justification for such crude racism as to call a highly placed professional a “diversity hire”, that person should still keep his mouth shut about their opinion.

            A person who gets chased out of a job because others are cheaper has just experienced the free market and capitalism doing its magic, vote left next time, or get ready to compete.

            • tipoo
            • 3 years ago

            And if that’s what he said, that would be one matter. Instead he went on racial tirades without any such explanation.

            • destroy.all.monsters
            • 3 years ago

            Thank you Jeff.

            • Srsly_Bro
            • 3 years ago

            Chuckula next?

            • Waco
            • 3 years ago

            Yeah, you’re going to last long here.

            • K-L-Waster
            • 3 years ago

            You’re seriously suggesting that Keller and Rogers left *because* of Kodouri? (Never mind that there were a few years of overlap with the three in the company…)

            • destroy.all.monsters
            • 3 years ago

            I can’t tell if you’re a troll or you’re really this ignorant.

            Slandering people because of their race is unacceptable anywhere. You don’t have to be an SJW to have common decency.

            Koduri has largely turned his division around, hired Scott in order to help them do better, and the division is doing spectacularly well for a company that is dwarfed by both of its major competitors.

            Please don’t tell me “it was all Rory Read and the white people before him (and Su and Koduri had nothing to do with it)”.

          • brucethemoose
          • 3 years ago

          You do know it takes years to make a GPU, right? And that the engineers aren’t psychic?

          They can’t just look at the shipping 1080 and say “oh, GDDR5X works, let’s do that!”. When they were designing this thing, there was an apparent need for memory bandwidth and power savings. HBM, the thing they already invested a ton of R&D into, made sense in high end products.

          • Krogoth
          • 3 years ago

          You realize that every major semiconductor company uses multi-national talent for their silicon designs and has been doing so for the last few [b<]decades[/b<]? HBM is the future. Nvidia already invested into it. The only difference is that they decided to opt for pure compute stuff first and wait until it matures before throwing it onto their traditional GPU stuff.

          • anotherengineer
          • 3 years ago

          psssst guess what

          Jensen Huang (Chinese: 黃仁勳; pinyin: Huáng Rénxūn; born February 17, 1963)

          [url<]https://en.wikipedia.org/wiki/Jensen_Huang[/url<]

        • Pancake
        • 3 years ago

        If you’re running a business and make a bad business decision, that is incompetence.

        Edit: Why the downvotes for stating what is common sense? Lots of salty AMD fans arguing against logic? I only speak as a business owner who lives and dies by its sweet syrup coated decisions.

          • Krogoth
          • 3 years ago

          Hindsight is 20/20 my friend.

          Business is all about risk assessment. Incompetence only happens if you made a very risky business decision with no back-up or recourse in advance in the event things don’t pan out.

            • willmore
            • 3 years ago

            Which is exactly why I don’t see why the consumer version of Vega with GDDR5/5X shouldn’t ship as soon as possible. Those chips aren’t going to get more valuable with time, and they’re going to lose more value if they hold back the launch artificially until after HBM2 cards can ship (vs the value those HBM2 cards might suffer for having come second to the GDDR cards–something I don’t know how one could easily quantify).

            • Pancake
            • 3 years ago

            Which seems to be how things are panning out. Using a new technology like HBM isn’t finding out at the last minute that things don’t work out. They would have been working closely with their partners throughout the development process.

            • Krogoth
            • 3 years ago

            That’s why AMD stuck with GDDR5 for the Polaris line-up (their actual bread and butter). They wanted to iron out HBM before throwing it onto their mainstream platforms. Nvidia did the same thing; the only difference is that Nvidia decided to make HBM exclusive to their pure compute designs and stick with GDDR for their traditional GPU designs.

            • Pancake
            • 3 years ago

            Well, sure. This is all speculation without facts anyway. For all we know Vega will be in good supply, high yielding and make AMD tonnes of cash. We’ll soon find out!

            • ptsant
            • 3 years ago

            Yes, Polaris was “plan B”. A shame, really, because the chip is great and the basic design could serve for much more. A bigger Polaris would be easily competitive with the 1070 and could probably fit in the 225W (1x8pin) power envelope with some tuning. Something like 3072 shaders and 8GB of faster GDDR5 would do a nice job.

            That would be much more reasonable than the shameful overvolted/overclocked RX580 rebrand.

            • ermo
            • 3 years ago

            But wouldn’t it need to have a wider memory bus too, then? Something like a 128 bit interface for each 1024 shaders and 16 ROPs?

            That adds up to 3072 shaders and 48 ROPs with 6/12 GB of RAM if we assume 2/4 GB per 128 bit interface.
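
            To make that scaling explicit, a quick sketch of the arithmetic (the per-block ratios are the assumption above, not actual AMD specs):

            # hypothetical building block: 1024 shaders, 16 ROPs, one 128-bit GDDR5 interface, 2 or 4 GB
            blocks = 3
            shaders  = 1024 * blocks             # 3072
            rops     = 16 * blocks               # 48
            bus_bits = 128 * blocks              # 384-bit bus
            vram_gb  = (2 * blocks, 4 * blocks)  # 6 GB or 12 GB
            bw_gbs   = bus_bits * 8 / 8          # ~384 GB/s at 8 Gbps GDDR5
            print(shaders, rops, bus_bits, vram_gb, bw_gbs)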

            Anyway, it would’ve been a nice RX 490/X or RX 590/X series card, but I’m not sure that it’d have been easy to fit in under the Vega line of cards unless you wanted to keep clocks fairly low to better differentiate the scaled down Vega products.

            • ptsant
            • 3 years ago

            It doesn’t need to fit under the Vega line, if the Vega line is nonexistent. That is exactly the situation right now.

            Vega would have made sense against Titan/Quadro/Tesla cards. Maybe also against the 1080Ti. Unfortunately, as Jeff noted in another post, AMD could not afford full development of two architectures, but a little bit more effort on the Polaris line would have probably covered the performance range of the 1070 with an RX490/590 and left the rest for Vega.

            Now, they are stuck with $200 cards and rebrands and a mythical HBM2 product that may or may not materialize in relevant quantities at some point. Epic fail.

            BTW, I agree about your 3072/384-bit suggestion, although I was thinking that 3072 shaders with 8GB of 10 Gbps RAM (instead of 8 Gbps) should probably cut it without having to redesign the memory controller.

            • MathMan
            • 3 years ago

            It’s not necessarily a mistake to make a chip with HBM2.

            What’s puzzling is that they decided to only use two stacks at a time when it should already have been clear that a 384-bit GDDR5X solution would offer the same amount of BW.

            With BW off the table, there aren’t that many good reasons to still use HBM2.

            • Waco
            • 3 years ago

            …except years of development.

            • psuedonymous
            • 3 years ago

            Apart from the sunk cost fallacy, if it is more expensive to use HBM2 than to use GDDR5X (or GDDR6) for the same bandwidth and capacity, then regardless of how much money AMD has (or has not, remember HBM was developed by a working group that also included Nvidia, along with SK Hynix, Samsung, and many others) pumped into HBM development, it makes no sense to throw more money at the problem just to use a more expensive part. If you follow the logic of “but HBM3 will finally be measurably better!” then why not simply save the money you would otherwise use on rolling out HBM2 and spend that same money on speeding up HBM3 development?

            • Waco
            • 3 years ago

            You’re missing the point. The design has many years in it, they can’t just spin another with a few months notice when supplies aren’t what they hoped for.

            Simplistic analysis of semiconductor product design aside, you’re thinking on timescales that no company can work on.

            EDIT: Further, while you *can* achieve the same bandwidth with GDDR5X, you can’t do it for the same cost or power budget.

            • MathMan
            • 3 years ago

            > EDIT: Further, while you *can* achieve the same bandwidth with GDDR5X, you can’t do it for the same cost or power budget.

            Do you really believe that a 384-bit GDDR5X interface is more expensive than a 2 stack HBM2 solution?

            • Waco
            • 3 years ago

            If you consider PCB design, power, and space usage immaterial, no.

            With all of those together? An HBM design could easily be cheaper in total cost, assuming prices aren’t insane simply because of supply problems.

            • psuedonymous
            • 3 years ago

            [quote<]The design has many years in it, they can't just spin another with a few months notice when supplies aren't what they hoped for.[/quote<]One of AMD's selling-points for GCN was its modularity, including the memory controller (the Polaris unveil even had multiple slides just to tout modularity). AMD has had two years (more, since they've been working on it for longer than that) since the RX Fury to spin a parallel SKU of VEGA with a GDDRx memory controller alongside the HBM2 controller after lessons learnt with the expense of HBM1. Either they bet on HBM2 suddenly becoming a lot cheaper with little to indicate why, or they didn't want to even spend the overhead of having two die variants (e.g. GP100 & GP102).

            • Waco
            • 3 years ago

            HBM2’s issue isn’t cost – it’s availability. The cost between the two is not massively different if you factor in the reduced board and power costs.

            • ptsant
            • 3 years ago

            Well, Fudzilla was saying that 8GB HBM2 costs $160, which is huge. I can’t say if this is accurate, but the equivalent in GDDR5 would probably be in the low tens of dollars ($40?).

            Cost is not an issue in pro/prosumer. Cost is very much an issue against the 1080, which is the most likely competition.

            • Waco
            • 3 years ago

            Cost is an issue for all markets.

            In terms of raw cost they are not that far apart. Scarcity is at play here.

          • southrncomfortjm
          • 3 years ago

          Bad decisions happen all the time, not due to incompetence, but due to market fluctuations and other factors. Shoot, you can agree to sell hard drives at $100 in 6 months, then have a tsunami come through and wipe out half your production capability, meaning those drives now cost you $200 to make. Bad decision? In hindsight sure, but not due to incompetence.

            • Pancake
            • 3 years ago

            So a tsunami and “market fluctuations” is the best you got? Really? How are those things even remotely comparable to the development of a technology that is many years in the making?

            Prediction: Vega will be a failure for AMD as chronic production issues and overly high production costs will render them uncompetitive against NVidia (again). HBM will eventually mature and at that point NVidia will swoop in, buy the majority of manufacturing capacity and make a pile-o-cash as usual.

            • southrncomfortjm
            • 3 years ago

            Here’s an idea: take your hyper aggressive comment somewhere else.

            • Pancake
            • 3 years ago

            You don’t get to raise strawman arguments like “tsunami” without blowback.

            If you think my comments are hyper-aggressive I think you must have had a very difficult childhood. Sorry. Lay off the kombucha and yoga. Or maybe have more. I don’t know how that works.

            • southrncomfortjm
            • 3 years ago

            Got more? Or are you done now?

        • ptsant
        • 3 years ago

        Taking such a risky bet was, in itself, a strategic mistake. Sure, AMD has probably accumulated more experience with HBM than nVidia, but they are now unable to launch the most profitable parts because of supply issues.

        I can’t really say whether Raja knew or could have known, but I see that things are definitely not going in the right direction on the GPU front. The distance between nVidia and AMD is growing, not diminishing. Someone has to assume responsibility.

        • Mat3
        • 3 years ago

        It’s brand new tech, complicated and difficult to manufacture and not proven for high volumes yet. You don’t need hindsight to see there could very well be problems with it. They had options: keep working on Vega but also do a built-up Polaris with more CUs, ROPs, and a wider memory bus to compete performance-wise with the 1070 at least. Or, maybe play it safer and design the initial Vega for four stacks instead of two (then the stacks are only two high and would need to run at only 500 MHz to get the same bandwidth as Fiji). But with HBM2, they plowed ahead, clearly just hoping for the best, and look at the mess they’re in. Vega is so late it’s ridiculous. Not officially, but we all know it.

      • ronch
      • 3 years ago

      Would it be possible that the memory controller block of Vega can easily be swapped for, say, a GDDR5 controller? Chips are pretty modular these days.

        • Voldenuit
        • 3 years ago

        It won’t be as easy as swapping out memory types on the PCB for a GP104, because of the interposer needed for HBM2.

        I mean, you could make a new chip with the same architecture and a different memory controller, but it’d probably end up being a new die with reused IP blocks.

      • maxxcool
      • 3 years ago

      Wow….

      • setbit
      • 3 years ago

      So, you’re saying that some HR rep looked at the technical staff at a semiconductor company and said, “Uh oh, not enough Indian dudes.”

      • tipoo
      • 3 years ago

      Koduri, with 20 years of experience, working during AMD’s best eras, then Apple’s, which he left with a phenomenal SoC team, was a “diversity hire” for AMD. Right. No bias in that statement.

    • tsk
    • 3 years ago

    It’s too late to come out in Q3-Q4; one can already assume that Volta will handily beat Vega. It’s sad for AMD really. I have a FreeSync monitor and will personally stick with team red until Nvidia decides to support adaptive sync. I just hope AMD can stay somewhat competitive, and not fall behind in the GPU market now that they are finally closing the gap on Intel’s performance lead.

      • DPete27
      • 3 years ago

      + All of this^

        • Topinio
        • 3 years ago

        Ditto. I’m running a 290X still, because Fury X wasn’t enough of an upgrade and there’s been nothing better from AMD since. NVIDIA has priced itself out of a sale by not offering a card that plays with my monitor — the cost of a 1080 and replacing a 144 Hz VRR 2560×1440 display is prohibitive.

          • Voldenuit
          • 3 years ago

          Yep. I bought a ROG Swift refurb for about $440, but a friend was able to buy a brand new LG w Freesync ultrawide (albeit 75 Hz) for $279.

      • TwoEars
      • 3 years ago

      Yupp.

      • Billstevens
      • 3 years ago

      Pretty much this. I hate being forced into one brand to get the most out of an $800 monitor I don’t want to replace. The truth is, most of the time even second or third rate performance with FreeSync is better than top performance without FreeSync or G-Sync.

      So at least this time Vega gets my money as long as it smokes a Fury X and treads water in the high end pond.

    • chuckula
    • 3 years ago

    If you read between the lines carefully in some of Raj’s statements you’ll see that the main reason for the extended launch cycle is that HBM2 is presenting some challenges… meaning it’s probably not in plentiful supply right now even though Vega is probably otherwise ready to launch.

    Just like Nvidia, AMD appears to be pushing the HBM2 parts out to the very highest end first, but they don’t have a consumer-level fallback part that can use more plentiful GDDR, and thus the wait.

      • tay
      • 3 years ago

      Is a GDDR5X Vega coming? I don’t have time to parse through AMD’s slide ware. They release a lot of slides. Wish they were as good with releasing products.

        • Beahmont
        • 3 years ago

        It doesn’t appear so at this time. HBM and GDDR5X or GDDR6 require different memory controllers. AMD has given no indication yet they have the spare silicon for two memory controllers on the top end Vega.

        A little Vega might have both controllers or only a GDDR5X or GDDR6 controller, but AMD has also given no indication that Vega’s power budget can take the hit that GDDR5X or GDDR6 come with and still be competitive.

        AMD’s graphics µArches have not been the most power efficient things in the past, and that colors expectations. That being said, there are so many changes to Vega from Fiji and Hawaii that there really are no firm answers to be had, just speculations based on past performance.

          • brucethemoose
          • 3 years ago

          Depends on which “past”

          VLIW cards were very power/area efficient compared to Tesla/Fermi. GCN and Kepler were quite close. Things really started to diverge once Maxwell came out (and AMD basically stood still), which is pretty recent.

            • ronch
            • 3 years ago

            Didn’t Maxwell come out in 2014? I think it was the 750Ti. Crazy how it’s been that long since AMD lagged behind the efficiency curve.

          • ImSpartacus
          • 3 years ago

          There’s a decent chance that Vega 11 could use GDDR6 (it won’t use GDDR5X, that’s as dead as HBM1).

          [url<]https://forum.beyond3d.com/posts/1981965/[/url<]

          AMD has had people working on a GDDR6 controller since 2015. Considering historical moves, they probably have it in a releasable state and are waiting for memory availability. Since Navi is effectively delayed until 2019, Vega 11 could very well be released in early 2018 with some sort of GDDR6 setup.

          If that DOESN'T happen, then Vega 11 is effectively guaranteed to use only 1 stack of HBM2. That would provide as much bandwidth as a 480 currently gets, so Vega 11 would perform roughly like a 480 (probably a bit better). And honestly, if Vega 10's 64 CU-2 stack config is well proportioned (I assume it is), then a single stack Vega 11 could very well only have 32 CUs - exactly half a Vega 10. Considering higher 1.6+GHz clocks, it would probably slightly beat a 480 while using slightly less power.

          However, if it uses GDDR6, that opens a ton more options. SK Hynix's 12 Gbps GDDR6 in a 256-bit bus would be more bandwidth than what a 1080 gets (and roughly 75% of Vega 10). Therefore, something like 48 CUs would be entirely reasonable and it'd be competing with the 1070 (at least).
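
          As a rough sanity check on those numbers, a quick sketch (GCN's 64 shaders per CU and 2 FLOPs per shader per clock are fixed; the Vega 11 CU counts and clocks are the guesses above, not confirmed specs):

          # peak FP32 throughput in TFLOPS for a GCN part
          def gcn_tflops(cus, clock_ghz):
              return cus * 64 * 2 * clock_ghz / 1000

          print(gcn_tflops(36, 1.266))  # RX 480 reference: ~5.8 TFLOPS
          print(gcn_tflops(32, 1.6))    # speculated 1-stack-HBM2 Vega 11: ~6.6 TFLOPS
          print(gcn_tflops(48, 1.6))    # speculated 256-bit GDDR6 Vega 11: ~9.8 TFLOPS (a GTX 1070 is ~6.5)

          # memory side: 256-bit bus x 12 Gbps GDDR6 / 8 = 384 GB/s,
          # vs 320 GB/s on a GTX 1080 and 256 GB/s on an RX 480 or one ~2 Gbps HBM2 stack.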

      • Krogoth
      • 3 years ago

      Which is why they released Polaris series first before Vega. The big money in discrete cards is $199-249 market. Don’t expect to see a Volta-based refresh from Nvidia in that segment until 2018.

        • Voldenuit
        • 3 years ago

        And yet the $400 1070 outsells the 480 3:1, and with better margins on every card sold. AMD ignored the upper midrange for far too long, and designing a high end card that may or may not scale down (HBM2 is expensive and risky) was not a good strategic play.

          • Krogoth
          • 3 years ago

            The 1070 has sold as many units as the 480 at this point (globally) and has completely saturated its corner of the market. The 1070’s main competition was its predecessor, the 970, which outsold both the 480 and the 1070 combined.

          The 1070, 1060 and 480 weren’t enough of an improvement to persuade 970 and 290/390 users to upgrade.

            IMO, the 970 is the new 8800GT. It will end up being the baseline for system specs for the next five years like its 8800GT predecessor.

            • Voldenuit
            • 3 years ago

            Steam hardware survey (which is not confined to the US afaik) has 1070 user base at 3.39% and 480 at 1.10%. That’s not even close to parity.

            • Krogoth
            • 3 years ago

            Steam’s survey is not a good metric for global sales since it has a massive high-end gaming bias in its sampling. It is primarily North America with a smaller Western European/Australian following.

            • strangerguy
            • 3 years ago

            AMD fanboys: Steam HW stats are biased against AMD because of *insert conspiracy theory they can’t prove*

            • Voldenuit
            • 3 years ago

            I mean, the GeForce 750, 960, 1060, 730, 760, 950 and 1050 Ti *all* outnumber the RX 480 on steam, so *obviously* steam has a massive high end bias. /facepalm.

            • Krogoth
            • 3 years ago

            The sales figures and revenue from the graphics divisions speak volumes more than a random survey that only exposes a portion of the total gaming market.

            • chuckula
            • 3 years ago

            [quote<]The sales figures and revenue from the graphics divisions speak volumes more than a random survey that only exposes a portion of the total gaming market.[/quote<] Yeah, and Nvidia -- who doesn't even sell a real CPU -- posted double the revenue of AMD's entire operation last quarter. On top of that, it's not just Nvidia that's bigger, it's that the unit at AMD that sells GPUs reported a loss last quarter. If the Rx 480/580 are already selling in the OMG PROFITS segment of the GPU market, then things aren't going that well for AMD.

            • ImSpartacus
            • 3 years ago

            The 970 (and 290) literally ARE a baseline. They are the VR min spec for Oculus and Vive.

    • DancinJack
    • 3 years ago

    Of course Reddit misconstrued the statement and blew up. Ugh. I simply can not stand that place.

      • chuckula
      • 3 years ago

      You will never find a more wretched hive of scum and villainy. We must be cautious.

        • Bumper
        • 3 years ago

        lol.

        • Beahmont
        • 3 years ago

        4Chan sees your challenge and accepts!

          • DancinJack
          • 3 years ago

          bleh, seriously. I honestly wish someone would accidentally /d their servers and backups.

            • derFunkenstein
            • 3 years ago

            stupid people gotta congregate somewhere.

            • K-L-Waster
            • 3 years ago

            Couldn’t they congregate for a long walk off a short pier?

          • brucethemoose
          • 3 years ago

          Also Yahoo Answers.

          And YouTube comments.

          And… Well, Reddit is not that bad all things considered.

        • Concupiscence
        • 3 years ago

        I approve of the Star Wars nod, but Infinitechan makes Reddit look like Club Penguin…

          • RAGEPRO
          • 3 years ago

          I always thought the Mos Eisley cantina seemed alright. I’ve been to dive bars in real life that were a whole lot scummier.

            • Redocbew
            • 3 years ago

            The music was better also.

      • Krogoth
      • 3 years ago

      Reddit has nothing on /b/ and tumblr.

      • TwoEars
      • 3 years ago

      Are you saying someone is wrong on the internet?

        • Redocbew
        • 3 years ago

        Obligatory:

        [url<]https://xkcd.com/386/[/url<]

        And since we're talking about a "machine learning platform":

        [url<]https://xkcd.com/1838/[/url<]

    • Kretschmer
    • 3 years ago

    Twist: the 295X2 is rebranded as the first Vega release.

      • swaaye
      • 3 years ago

      Give it some funky space name, shrink it to 16nm and throw in 32 GB RAM and it’ll sell.

    • chuckula
    • 3 years ago

    Dayum, if it was just a little bit later then you guys could have accurately attacked me when I said that Vega wasn’t going to launch in October.

    Of course, I said that in 2016, but I made the mistake of just saying “it’s not launching in October guys!” Next time I’ll remember to be more literal.

      • lycium
      • 3 years ago

      Do you ever post about something other than what you said and when?

        • chuckula
        • 3 years ago

        Of course.
        All of my correct predictions.

        The more important question is: How many people here have accounts that are only used to post childish whines about factually accurate statements I make?

        I can think of at least two, and that’s not counting sockpuppets.
