Nvidia purchases Mellanox for $6.9B

Well, Mellanox did finally get bought, but I bet that's not the buyer you were banking on seeing in that headline. Yes indeed, the jolly green giant has purchased the datacenter networking vendor for $6.9 billion American greenbacks. For those following along at home, that's about 15% more than the $6 billion Intel was purportedly offering.

Nvidia is almost a household name, but some folks may not be all that familiar with Mellanox. I wasn't, for one. Mellanox is a company that provides high-speed network adapters, switches, and interconnects to datacenter customers. On the face of it, that makes it a curious addition to the Nvidia stable, until you consider the latter company's heavy focus on the high-performance computing market.

Nvidia already offers some turnkey machine learning compute servers, like the $399,999 DGX-2. With Mellanox's expertise, the company could start engineering and selling whole clusters of such machines, offering a staggeringly simple path to massive multi-machine processing power for anyone looking to build their own Daedalus or SkyNet. Nvidia says it expects the acquisition to complete by the end of the year.
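
For a sense of what that multi-machine processing power looks like at the software level, here is a minimal sketch of the kind of collective operation a library like NCCL runs across GPUs; on a real cluster the same call spans nodes, with NCCL pushing traffic over an InfiniBand or RoCE fabric such as Mellanox's. This is a single-node, single-process illustration only, and the buffer size and 16-GPU cap are arbitrary; a true multi-node job would use ncclCommInitRank plus an out-of-band bootstrap (typically MPI), which is omitted here.

```c
// Illustrative sketch: sum a buffer across every GPU in one box with NCCL.
// Error handling omitted for brevity; link against -lnccl -lcudart.
#include <nccl.h>
#include <cuda_runtime.h>

int main(void) {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > 16) ndev = 16;            // a DGX-2 carries 16 GPUs

    ncclComm_t comms[16];
    float *buf[16];
    cudaStream_t streams[16];
    const size_t count = 1 << 20;        // 1M floats per GPU

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaMalloc((void **)&buf[i], count * sizeof(float));
        cudaStreamCreate(&streams[i]);
    }
    ncclCommInitAll(comms, ndev, NULL);  // NULL devlist = devices 0..ndev-1

    // In-place all-reduce: every GPU ends up with the element-wise sum.
    ncclGroupStart();
    for (int i = 0; i < ndev; i++)
        ncclAllReduce(buf[i], buf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        ncclCommDestroy(comms[i]);
        cudaFree(buf[i]);
    }
    return 0;
}
```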

Comments closed
    • pikaporeon
    • 6 months ago

    Nice.

    • gerryg
    • 6 months ago

    Wondering what this means to Intel. If they were unable to win the bid, and they needed the technology and talent at Mellanox, then now what? If they couldn’t develop the skills and technology in-house, or just not fast enough because they didn’t spot the need soon enough, then their only options are to wait until they’ve developed something, or go buy somebody else. Or not play in that market directly. Hmm….

      • cygnus1
      • 6 months ago

      Intel has nothing networking-wise that can compete above 10G. Mellanox is so far ahead of them it’s a joke. Intel is going to have to either buy Broadcom or someone, or spend a boatload licensing IP from one of the actual network device leaders.

        • gerryg
        • 6 months ago

        Which they should probably do. They finally poached some talent to attempt to improve their integrated graphics and might catch up there after years and years of being behind. Guess they’re spending so much time and energy on that they fell behind in networking…

          • gerryg
          • 6 months ago

          Guys, tongue was firmly in cheek there, not serious.

        • Waco
        • 6 months ago

        Um…you mean except for their 100 Gbps RDMA fabric? Omni-Path? Remember, they bought QLogic back in the day to get their InfiniBand stuff.

        Not that I love it, but we have a few couple-thousand-node clusters with it.

          • cygnus1
          • 6 months ago

          Yeah, I know about Omni-Path, the fabric with the smallest market share, the one Intel has trouble giving away to customers.

            • Waco
            • 6 months ago

            Parsing this response with your above comment just caused a segmentation fault in my brain.

            • cygnus1
            • 6 months ago

            Lol, I just don’t respect it. Omni-Path tops out at 100Gb, right? Mellanox has 400Gb parts coming. I don’t work with supercomputers or even HPC clusters, so I’m mostly talking about stuff I’ve never even seen. I’ve worked with 40Gb Ethernet and InfiniBand, so Mellanox parts that can run both just seem much better thought out.

            • Waco
            • 6 months ago

            HPC is my day job. OPA *currently* tops out at 100 Gb just like Mellanox. That’s all I can say.

            • gerryg
            • 6 months ago

            I’m sorry you work for HP. But great to have a job!

            • Waco
            • 6 months ago

            I’m sorry you think I work for HP? 😛

            • gerryg
            • 6 months ago

            LOL! I’ve just been in one of those moods.

      • chuckula
      • 6 months ago

      It was a simple calculation.

      Since we’re clearly going bankrupt this year, we didn’t want to buy Mellanox only for AMD to get it all for pennies on the dollar later. We figured we’d just make Nvidia overpay as one last middle finger on our way to bankruptcy court.

    • ronch
    • 6 months ago

    Oh geez Nvidia beat me to it. I was just about to make an offer of $7B. Oh well, maybe I’ll just buy Apple.

      • MOSFET
      • 6 months ago

      Probably need to bring closer to $7T.

        • ronch
        • 6 months ago

        Can’t afford $7T. I only have billions, I’m just a poor boy.

    • Ummagumma
    • 6 months ago

    Choosing between Intel buying you out versus Nvidia buying you out is like choosing which way you want to die. No matter what you choose you know that you are going to die.

      • Anonymous Coward
      • 6 months ago

      Is nVidia so bad? I don’t recall them buying things just to watch them burn to the ground via incompetence, neglect, politics or whatever. I can however think of some things eaten by Microsoft or Oracle that didn’t seem to blossom too much.

        • stefem
        • 6 months ago

        They aren’t, but it looks like they’ve become a target for every complaint lately. They’ve always had a concrete project behind any acquisition they made and they put a lot of effort into them. Of course they don’t always succeed (like with Icera, as they were pushed out of that market), but they never bought a company just to keep it from becoming a competitor or to kill a technology that could favor a competitor (unlike other companies).
        It’s an interesting acquisition as it’s the biggest one they’ve ever made, and it will be the first where the acquired company keeps its name and appears as a separate entity (usually NVIDIA keeps the technology and operations of the acquired company but drops the name).

    • Waco
    • 6 months ago

    It’s better than Intel owning them, at least.

    But if I had to pick the top two toxic players in the HPC market…it’d be Intel and Nvidia, in that order.

      • NovusBogus
      • 6 months ago

      Better them than Oracle.

        • Waco
        • 6 months ago

        True that. Oracle would just buy them, realize they can’t make money by driving prices up without added value, and then kill all support. 😛

        • Mr Bill
        • 6 months ago

        [url=https://www.youtube.com/watch?v=L57ZyM8YV30]Better Than Ezra?[/url]

      • synthtel2
      • 6 months ago

      In what ways is Intel that toxic? You would definitely know and I believe you, but I haven’t heard of anything too dire myself.

        • Waco
        • 6 months ago

        I can’t go into any real detail, unfortunately. I wish I could.

        Perhaps in a few years someone else will leak the details and I’ll fill in my experiences without breaking any rules.

          • synthtel2
          • 6 months ago

          Thanks for mentioning it anyway.

          • davidbowser
          • 6 months ago

          I’m not going to agree with Waco’s sentiment about Intel, but rather with his vagueness. I work in the tech industry (VMware, Google, and a bunch of startups) and there is far too much stuff that we would get fired for talking about. I almost never comment about things that I have direct history with, unless it is already public knowledge.

      • chuckula
      • 6 months ago

      Of course you’ll be upthumbed for dumping on Intel and Nvidia in one post, but for all the AMD fanboys out there I have one simple challenge: Go ahead and strip out all the 100% open source code that Intel — as the #1 contributor to Linux and a massive contributor across the board to HPC software infrastructure — has given away for free and watch your precious Epyc chips fail to even begin a boot-up sequence, much less run your little compute cluster.

      People whine about Intel, but then forget that Intel is the company that pretty much single-handedly democratized HPC and made it so you can run literally the same HPC software on a NUC as on a multi-million dollar compute cluster. Without the x86 infrastructure that is now used on the vast majority of supercomputers, I can guarantee you we’d still be in the dark ages of proprietary OS systems that require you to work within the tiny box of a proprietary vendor just like in the bad old days of not that long ago.

      For those of you delusional enough to think that magic happy gumdrop AMD will be your fairy godmother and save you: AMD only exists in the HPC or enterprise space because Intel made a massive ecosystem and AMD benefits from the x86 license. Don’t believe me? Go back and look at how fast Lisa Su put a bullet in AMD’s miracle server ARM chips. I can guarantee it happened within nanoseconds of getting back a halfway decent result from testing Zen.

        • synthtel2
        • 6 months ago

        I upthumbed him back to zero not because I have any reason of my own to think Intel is toxic, but because he’s in a good position to know about that and I value his opinion on it.

        • Waco
        • 6 months ago

        I don’t have high hopes for AMD in the HPC space either. For IO systems? Sure. Not core compute though.

        I appreciate your attempt to denigrate my comment, though. If I didn’t care about my job and about not going to prison, I’d be able to put a lot more behind my comments.

          • chuckula
          • 6 months ago

          Why shouldn’t I “denigrate” a comment that’s LITERALLY equivalent to saying: AMD SUCKS! But I’m not going to give any reasons for WHY AMD sucks, I just do it to get upthumbs because I know that targeting the right company with emotionalistic vitriol will get me easy upthumbs from people with the right prejudices and bigotries!

          Notice how my post that is filled with facts is downthumbed but literally nobody – including you – has a factual counter argument.

          I’ll pay any of you here $1000 to strip out literally every single open source contribution from Intel to Linux and other open source projects, get your precious Epyc cluster running anyway, and then show them curb-stomping Cascade Lake at... I dunno... let’s say some standard CFD workloads. Go ahead.

            • Waco
            • 6 months ago

            I don’t post for upthumbs, but okay. Perhaps you think I’m an AMD fanboy or something? I have very good reasons for disliking the green and blue teams, but I am bound by many things in going much further than that.

            For consumer parts I have little ill will towards either…but when they screw with my day job, it’s a little hard not to get frustrated with some of the things they’ve pulled that either directly or indirectly cost me a great deal of time and pain professionally.

            I know that won’t make you happy, but please realize I post out of frustration for very real reasons.

            • anotherengineer
            • 6 months ago

            Ya, I don’t know what’s up with him. He seems to take it personally, or has a personal vendetta against people’s personal opinions, whether they are fanboys or not. I’ve stopped reading his comments as best I can, and try to avoid replying to them.

            • Waco
            • 6 months ago

            He clearly knows what he’s talking about sometimes (heck, most of the time)…I just don’t get the personality switch for certain subjects.

            • synthtel2
            • 6 months ago

            [quote]AMD SUCKS! But [...][/quote]
            That’s really not an accurate analogue at all. If literally everyone on the internet were untrustworthy unless they could cite sources, it would be, but it’s merely very difficult to have a good reputation, not impossible.
            [quote]but literally nobody - including you - has a factual counter argument.[/quote]
            Is it so tough to believe that a factual counter-argument might exist but not be shareable? A whole lot of info falls in that category, and if you completely and immediately discard anything you can’t prove, you’re missing out on a lot... unless of course your mental models don’t have room for grains of salt, but I think you’re capable of that kind of reasoning when you want to be.

        • Anonymous Coward
        • 6 months ago

        The entire course of history is defined by people and organizations that do this or that first, but it’s pretty ridiculous to imagine that history would have stopped if those people had done something else instead.

        If not Intel, some other company would have done pretty much the same things. They were just one of many organizations in a complex market ready to reward the right products (or in the case of Intel, perhaps we can say the [i]least bad[/i] option).

    • NTMBK
    • 6 months ago

    I, for one, welcome our new HPC overlords

    • chuckula
    • 6 months ago

    So this is about $1 billion more than AMD paid to get ATi back in the day. [Make that about $1.3 billion: [url]https://www.sec.gov/Archives/edgar/data/2488/000119312509036235/d10k.htm[/url]] And frankly I think this buyout will be a bigger deal in the long run given the markets where Mellanox plays. It’s pretty clear that Nvidia has aspirations beyond just providing accelerator cards that slot into other people’s systems. I think they saw on-package Omni-Path links and decided they want the same thing for their own supercomputer setups.

      • NovusBogus
      • 6 months ago

      NV isn’t stupid. They no doubt realize that GPU gaming demand will fall off when the cards are ‘good enough’ relative to software requirements, as has happened with CPUs, and they’re gonna need some cards to play in the future. Huge profits and happy investors make for a good time to prepare for that.

      • psuedonymous
      • 6 months ago

      Nvidia already has NVLink for that. This seems more like a play to have arrays of GPUs talk to each other without needing a pesky host CPU to be involved in the first place. Install your racks of GPU accelerators, integrate them into your existing mesh, then dispatch jobs to them from elsewhere on the network.

        • Waco
        • 6 months ago

        They would just need a little SoC to handle driver interaction with the cards, then use PCIe point-to-point between the NIC and the GPUs to bypass memory bandwidth limitations. You’d basically just need a PCIe switch, a small CPU, and a bunch of NICs and GPUs to make a full GPU compute node.
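
        A minimal CUDA sketch of the GPU-to-GPU half of that idea: peer-to-peer copies that cross the PCIe switch without being staged in host memory. (The NIC-to-GPU leg described above would go through GPUDirect RDMA and the verbs API, which isn't shown; the two-GPU numbering and buffer size below are purely illustrative.)

        ```c
        // Sketch: direct GPU-to-GPU transfers over PCIe, no host staging.
        // Assumes two GPUs hanging off the same PCIe root complex or switch.
        #include <cuda_runtime.h>
        #include <stdio.h>

        int main(void) {
            int p2p01 = 0, p2p10 = 0;
            const size_t bytes = 64 << 20;      // 64 MiB test buffer
            void *buf0 = NULL, *buf1 = NULL;

            cudaDeviceCanAccessPeer(&p2p01, 0, 1);
            cudaDeviceCanAccessPeer(&p2p10, 1, 0);
            if (!p2p01 || !p2p10) {
                printf("P2P not available between GPU 0 and GPU 1\n");
                return 1;
            }

            cudaSetDevice(0);
            cudaDeviceEnablePeerAccess(1, 0);   // GPU 0 may access GPU 1's memory
            cudaMalloc(&buf0, bytes);

            cudaSetDevice(1);
            cudaDeviceEnablePeerAccess(0, 0);
            cudaMalloc(&buf1, bytes);

            // Device-to-device copy across the PCIe fabric, bypassing host RAM.
            cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
            cudaDeviceSynchronize();

            cudaFree(buf1);
            cudaSetDevice(0);
            cudaFree(buf0);
            return 0;
        }
        ```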

        • chuckula
        • 6 months ago

        NVLink doesn’t do everything Nvidia needs it to do and it’s never going to be more than a niche technology anyway (don’t expect your Epyc chips to implement NVLink anytime soon).

        Of course Nvidia has PCIe support too, but PCIe (even faster PCIe that’s not limiting bandwidth) doesn’t do the job either.

        The real thing that Nvidia wants is expertise in building interconnects that are not only fast but also provide memory coherency. That’s why the Gen-Z, CCIX, and new CXL standards exist, and Nvidia needs the expertise to implement those types of solutions.

      • K-L-Waster
      • 6 months ago

      Another benefit for them is that it means they can’t be frozen out. If all of the high-speed interconnect vendors got swallowed up by their competitors, they would be at risk of losing access to cutting-edge connectivity.

      Buying a vendor is one way to make sure you always have a source.

      • cynan
      • 6 months ago

      If you account for inflation, AMD paid more for ATI.

      Edit: Actually, not quite. If you go by the popular figure that AMD paid $5.4 billion for ATI, that’s more like $6.7 billion in today’s dollars (but then that’s a $1.5 billion difference, not $1 or $1.3 billion). I think we can safely call it roughly equivalent.

        • chuckula
        • 6 months ago

        If you account for business sense, AMD paid too much for ATi.

      • cynan
      • 6 months ago

      A larger deal in the long run? Not from the perspective that AMD might have been out of business by now without ATI – though it’s hard to make counterfactual comparisons based on what-ifs.

        • K-L-Waster
        • 6 months ago

        AMD likely would not have gotten the console business without ATI.

        OTOH, they wouldn’t have had an extra $5B+ in debt to deal with. And by remaining focused on the CPU business they might not have ended up so far behind for so long.

        Not to mention who knows what ATI would have ended up doing if they hadn’t been bought….

        • chuckula
        • 6 months ago

        What exactly has been the “omg synergy” of AMD buying ATi again? And don’t give me some crap about how graphics “saved” AMD. That’s like saying the Titanic hitting the iceberg was a NOBEL PRIZE LEVEL GENIUS IDEA because the consequences of the collision meant that there was a plank to save Rose… uh yeah, how about a better idea: NOT RUNNING INTO THE ICEBERG IN THE FIRST PLACE [although it killed off Leonardo DeCrappio so maybe the iceberg idea wasn’t completely wrong].

        Don’t believe me? GPUs are [b]DYING[/b] right? Ngreedia has no real CPU play right? (that's actually pretty much true, BTW). Ok... name the company that failed to embrace AMD's magic-genius THE FUTURE IS FUSION line and that is at least twice AMD's size.... that's right, Nvidia kids! I'm not even bringing up Intel since they were already bigger than AMD, the real kick to the nuts is that "big bad evil" Ngreedia back in 2006 at the time of the "holy union" between AMD & ATi was smaller than BOTH AMD and ATi individually. How did the "future" treat AMD and ATi's "fusion" there?

        What people who still to this day say THE FUTURE IS FUSION while only ever paying attention to AMD products that literally have zero GPU transistors fail to realize is that the "revolutionary" idea that only AMD among any company on earth had to put a CPU and a GPU together wasn't revolutionary or innovative. See Sandy Bridge launching first and Timna from 12 years before Llano launched. Hector "I lost my job over insider trading!" Ruiz wasn't any smarter than some shlub back in 1905 saying "OMG THESE NEWFANGLED CARS ARE GOING TO BE A THING! I AM A GENIUS AND THE ONLY PERSON ON EARTH TO THINK THIS". Except other people did. Remember HSA? That's about all it is, a memory.

        THE FUTURE IS FUSION![?] Ok sure. Where the hell is the GPU "chiplet" that's supposed to be in RyZen 2 for all that "fusion" to happen? Where the hell is that 50 TFlop GPU that's embedded with unrivaled connectivity at the heart of Epyc 2 instead of a M0ar Chiplet strategy? Or are we running around calling a 4-core APU a miracle in 2019? Funny, back in 2017 I seem to recall 4-core APUs from Intel being called "obsolete" because AMD released a chip with 0 transistors devoted to graphics. Funny how literally the only exciting product from AMD in 2019 devotes exactly ZERO transistors to the GPU.... and we're supposed to buy into the line that AMD needs to have our slavish worship because their low-end product line has integrated graphics just like everybody else?

          • Krogoth
          • 6 months ago

          iGPUs are the future for mainstream personal computing needs, though. AMD just lit the fire back in the mid-to-late 2000s while Intel did all of the heavy lifting for it. AMD again is just raising the bar while Intel puts in just enough R&D to keep up.

          The 2020s will be the decade when discrete GPUs become niche and mainstream-value SKUs start to evaporate from etailer/B&M shelves.

          That’s why Nvidia has been trying to move away from gaming GPUs as their bread and butter since Fermi. They know that sooner or later iGPUs are going to become “good enough” for the masses and kiddie gamers.

          • cynan
          • 6 months ago

          Don’t know how my comment provoked this. I simply stated that having competitive GPUs may have kept AMD afloat prior to Ryzen - not a particularly novel sentiment. Sure, they [i]could[/i] have devoted that 5.5 billion to CPU R&D and [i]could[/i] have been focused and disciplined enough to come up with an Intel-competitive CPU years earlier, but as I stated, it’s impossible to know what would have been. What ended up happening, empirically, is that having the ability to get the console contracts and field some competitive GPU SKUs (and compute products) seems to have been instrumental in tiding them over until Ryzen. What-ifisms and all that.

          If AMD does get a large enough process advantage over Intel, maybe that will allow them to expand their iGPU offering for Ryzen 3 or 4... (As you are aware, they do offer iGPUs in their lower-tier Ryzen 2 SKUs.) They obviously need Ryzen to be competitive as a CPU first to regain mindshare as a CPU maker.

    • Growler
    • 6 months ago

    [quote]for anyone looking to build their own Daedalus or SkyNet.[/quote]
    We're going to need lots of power for [url=https://shadowrun.fandom.com/wiki/Echo_Mirage]Echo Mirage[/url].

      • RAGEPRO
      • 6 months ago

      Wiz. I was gonna go for the Matrix at first but I was worried people would connect it with the films. Nice post, chum.

    • Krogoth
    • 6 months ago

    This is another part of Nvidia’s long-term exit strategy of moving away from gaming GPUs as their bread and butter.

      • chuckula
      • 6 months ago

      For once you make an accurate post, and that’s why they downthumb you?

      But yes, this is all about the data center.

        • Krogoth
        • 6 months ago

        It’s also about seizing the HPC market. That’s why Intel is getting in on the dGPU action. They are trying to make their own GPGPUs while keeping up with AMD RTG’s iGPU initiatives.

        • cynan
        • 6 months ago

        That data center? Whatever. It’s obviously about development of a new proprietary super-speedy SLI bridge. And this time ACROSS MULTIPLE COMPUTERS.

        Nvidia have been tight-lipped about the top secret project so far, though an insider at the company has purportedly stated: “Don’t worry. The user experience will be seamless.”

      • anotherengineer
      • 6 months ago

      It’s about their long-term plan to get to the same profits as Intel and Apple.

      Then buy out AMD and anyone else along the way, then get the government to re-approve the x86 license, then slowly go toe-to-toe with Intel before finally merging and becoming the CEO of the most profitable company in the WORLD!!!

      OK, well, maybe just the first line.

      • gerryg
      • 6 months ago

      Is it? Or is Nvidia betting that game streaming over the internet is the wave of the future, and that data centers with GPU farms hosting games over fast networking are the way to go? And jeepers, it’s not as if you can’t use that same farm/cloud of GPUs with fast networking to do data science, rendering, simulation, etc. This could be a play for the cloud, plain and simple.

    • anotherengineer
    • 6 months ago

    and
    [url]https://www.techpowerup.com/253490/nvidia-ceases-support-for-3dvision-mobile-kepler[/url] if that even matters

      • stefem
      • 6 months ago

      Guessing at why they’d move mobile Kepler parts to legacy-only support. Is something new coming on the driver front?

      • K-L-Waster
      • 6 months ago

      If it means I don’t have to keep disabling the 3D Vision driver when I update, I’m all for it.

      (Does anyone actually own a 3D Vision device?)

    • johnrreagan
    • 6 months ago

    High-end machines like HPE’s Superdome Flex use Mellanox’s InfiniBand controllers for the fabric, at something like a 13 GB/sec (yes, bytes, not bits) data rate. [url]https://www.hpe.com/us/en/servers/superdome.html[/url]

      • stefem
      • 6 months ago

      Yep, I’m wondering how Zak didn’t know about Mellanox

        • RAGEPRO
        • 6 months ago

        I just have absolutely zero interaction with that market, heh. I’m a gamer and enthusiast; I’ve never even been in the room with a rack-mount or fiber connection. The only Xeons I’ve ever seen are older chips people bought cheap for overclocking abuse, haha.

          • chuckula
          • 6 months ago

          It’s a rarefied market, but it is very lucrative.

          The last time I was looking at hardware from Mellanox was when Infiniband was still at 10 Gbit speeds. They have also made a name for themselves in 100Gbit and higher Ethernet hardware.

          • stefem
          • 6 months ago

          [quote]I’ve never even been in the room with a rack-mount or fiber connection[/quote]
          Man, I feel really sorry for you... 😉
