AMD makes hardware-based GPU virtualization a reality

AMD is jumping into the virtualized GPU market. At VMworld, the red team showed off what it claims is the first hardware-based GPU virtualization solution, AMD Multiuser GPU (PDF). 

Up to 15 users can share a single Multiuser GPU, though per-user performance will decrease as more guests are provisioned on a card. AMD says those users will benefit from more predictable performance regardless of how busy the host graphics card is. The company also claims that the Multiuser GPU is more secure than software-based virtualization, since each user gets its own slice of the GPU's memory. The net result, according to AMD corporate vice president and general manager Sean Burke, is that "each user is provided with the virtualized performance to design, create and execute their workflows without any one user tying up the entire GPU."

Multiuser GPU is built on Single Root I/O Virtualization (SR-IOV), an open standard created by the PCI Special Interest Group. AMD's virtualized GPUs have the same features as their single-user brethren, including OpenCL 2.0, DirectX 12, and OpenGL 4.4 support. The solution requires VMware vSphere or ESXi 5.5 or newer, and it supports remote access solutions like Horizon View, Citrix XenDesktop, and Teradici's Workstation Host Software. The company hasn't announced specs, pricing, or availability for the Multiuser GPU. 
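For the curious, here's a minimal sketch of what the SR-IOV mechanism looks like from the host side, using Linux's generic sysfs interface rather than AMD's vSphere/ESXi stack. The PCI address and the 15-VF count are illustrative assumptions, since AMD hasn't published device details:

```python
# Minimal sketch of enabling SR-IOV virtual functions (VFs) on a Linux host.
# The PCI address below is hypothetical; AMD's product targets ESXi, so this
# only illustrates the generic kernel-side SR-IOV mechanism.
from pathlib import Path

GPU = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical GPU address

def enable_vfs(count: int) -> None:
    """Ask the kernel to carve the device into `count` virtual functions."""
    total = int((GPU / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"device supports at most {total} VFs")
    (GPU / "sriov_numvfs").write_text("0")       # must reset before changing
    (GPU / "sriov_numvfs").write_text(str(count))

if __name__ == "__main__":
    enable_vfs(15)  # one VF per guest, matching AMD's stated 15-user ceiling
```

Each virtual function then shows up as its own PCI device that a hypervisor can hand to a different guest, which is what gives each user an isolated slice of the card.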

Ben Funk

Sega nerd and guitar lover

Comments closed
    • Kougar
    • 4 years ago

    So VMware only, no support for Hyper-V?

    • Leader952
    • 4 years ago

    AMD is late to the party again.

    I remember a quote from AMD some years ago that they would wait for the market to develop before producing products addressing said market.

    The problem with that logic is that the competitor who made and developed that market is entrenched and new players seem to never get any traction.

      • HisDivineOrder
      • 4 years ago

      I’m sure AMD’ll do the token amount of support to make it seem like they have a shot, to keep investors from revolting and make the Board look like it’s doing things. They don’t seem to care if those “things” actually work out, though. They just care about the appearance of doing them.

      Sometimes, I think AMD gets paid by the hour.

    • ronch
    • 4 years ago

    So is this gonna help AMD earn money or is this just another sideshow project?

    • Terra_Nocuus
    • 4 years ago

    Makes me think of that monster Puget Systems build ([url=https://www.pugetsystems.com/labs/articles/Multi-headed-VMWare-Gaming-Setup-564/]link[/url]) where you could switch between a single 12c/24t Xeon with quad CrossFire and four separate VMs, each with 4c and a single GPU.

    • Flatland_Spider
    • 4 years ago

    Come on, AMD, just bake the multiuser support into all of your cards, and don’t make people buy the workstation cards.

      • Thrashdog
      • 4 years ago

      I agree in principle… but what the hell are you planning to do on your home desktop that requires the ability to simultaneously accelerate 3D graphics workloads in multiple VMs? I’m genuinely curious.

        • Waco
        • 4 years ago

        A LAN party rig with 4 heads would be badass.

          • wimpishsundew
          • 4 years ago

          I haven’t seen a LAN party since the dial-up or slow broadband days. Red Alert and Quake were so much better with it. But I don’t see a point for it now with the party system for online gaming. And I’m not sure I want to virtualize my GPU for it.

            • Thrashdog
            • 4 years ago

            My old college friends and I still do it from time to time. It’s fun to eat finger food together and shout at each other, and until we started playing D&D, we didn’t have many other excuses to get together.

        • meerkt
        • 4 years ago

        Won’t this help with proper full usage of 3D in a single VM? Legacy 3D gaming in a guest OS.

        • chuckula
        • 4 years ago

        I can see having 3D acceleration in one VM like using Windows with 3D acceleration on top of a Linux host, but then requiring multiple VMs to split up a GPU is getting into a pretty esoteric area.

        Maybe for large terminal servers? But at that point you have professional admins running the show and it’s not for home use. Plus, no matter how you cut it, slicing up a GPU’s resources among multiple users isn’t going to deliver insane performance when even super-expensive GPUs can barely do 4K properly.

          • Flatland_Spider
          • 4 years ago

          Splitting up resources isn’t great, but it should be enough to enable hardware acceleration for multiple VMs.

        • Flatland_Spider
        • 4 years ago

        [quote]simultaneously accelerate 3D graphics workloads in multiple VMs[/quote]

        That's pretty much it. Or OpenCL workloads. It's about the flexibility. The GPU is one of the last things that can't be shared, and a lot of stuff could make use of it.

        • TheMonkeyKing
        • 4 years ago

        Now all my personalities can play at the same time!

      • Ninjitsu
      • 4 years ago

      Hey, they have to make money too. lol.

      • Mr Bill
      • 4 years ago

      I’m having difficulty seeing how this is useful. Say you buy a $3000 AMD FirePro W9100. I can see why you would not want to put that card in every workstation. You put it in one PC, or maybe several in a single server, and then everyone in the office can share it for their 3D CAD programs? So this is an accurate drawing resource but not a fast (as in high-FPS) resource? Is the virtualization over the network or is it over a specialised cable? There are six Mini DisplayPort outputs on this card. Do you do your job on the card and then see it wherever the monitors happen to be located?

      edit: difficulty

        • Flatland_Spider
        • 4 years ago

        Right now, graphics cards can be assigned to a VM via PCI passthrough, but that means only one VM gets to use the card. (http://wiki.xen.org/wiki/Xen_PCI_Passthrough) This isn’t a problem when there are as many graphics cards in the computer as there are VMs that need them, but I think four is about the max number of x16 PCIe slots I’ve seen on a motherboard.

        However, if there are multiple VMs that each need a little bit of graphics power, they’re stuck between wasting GPU cycles and using more CPU cycles. Right now, it’s an all-or-nothing approach, which isn’t great.

        With this technology, the GPU could be split up between multiple VMs that need a small amount of GPU cycles. Mainly compositing desktops and videos.

        In your scenario, it would be a VDI solution where the end user is accessing a VM via a terminal. VMware has had some demos of their VDI stuff, which runs over Ethernet. The problem with a VDI setup is that a lot of software takes advantage of hardware acceleration these days, and the fallback is to use the CPU to render what is normally offloaded to the GPU. It’s not a great situation for the end-user experience, or for the sysadmins trying not to overload boxes.

        While this is mainly targeted at the VDI market, it can be useful outside of that. Testing software with OpenGL, OpenCL, DirectX, and DirectCompute code in a VM is now more of an option. One Multiuser GPU will provide 15 VMs for testing. Granted, the performance will be cut down, but 15 is more than 1.
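For readers who haven't set up passthrough before, here's a minimal sketch of the all-or-nothing approach described above, using the libvirt Python bindings against a KVM host; the domain name and PCI address are hypothetical:

```python
# A sketch of whole-GPU PCI passthrough via libvirt: the entire card is
# handed to one VM, and no other guest can touch it until it's detached.
import libvirt

# Hypothetical PCI address of the GPU (see `lspci` for real values).
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("gpu-guest")  # hypothetical VM name
# Persistently attach the physical GPU to this one guest.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```

With SR-IOV, each virtual function could be attached the same way but to a different guest, which is exactly the gap AMD's card is meant to fill.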

    • Buzzard44
    • 4 years ago

    “The solution requires VMware vSphere or ESXi 5.5 or newer”

    Awww…I like VMWare products, but it’s so flippin’ expensive that it’s not feasible outside of work.

    KVM support plz! Then it’d be pretty sweet to run a hypervisor on your desktop and have a multi-head gaming station without a dedicated GPU passed to each VM via IOMMU. Lots of games don’t max out a big modern GPU at 1080p, so this would be awesome. Step in the right direction, though.

      • Bauxite
      • 4 years ago

      If it’s SR-IOV, then it will probably work with any hypervisor once drivers are released. A lot of network and disk controllers work with large VM deployments under the same concept.

      • chuckula
      • 4 years ago

      This isn’t the first time that GPU virtualization has been done, but it looks like it’s getting (very slightly) closer to being standardized. There’s still way too much black magic and too many weird hardware requirements needed to get it working right, but I’d like to see it improve in the future.

        • xeridea
        • 4 years ago

        It is becoming more standard because it isn’t being done by Nvidia, who laughs in the face of standards.

          • chuckula
          • 4 years ago

          OK, you got one big upvote for a snarky post, but don’t let it go to your head.

          Incidentally, don’t get too high & mighty with an announcement of an upcoming AMD product that supports virtualization when Nvidia is more than willing to sell you a working solution today: [url]http://blogs.nvidia.com/blog/2015/07/15/nvidia-grid-vmware-horizon-6/[/url]

            • xeridea
            • 4 years ago

            Just saying, there is a common trend where Nvidia releases a proprietary solution and AMD releases a more open solution that often ends up working better anyway, moving the industry forward rather than vendor-locking it. Nvidia having Grid first won’t matter in a few years if there is a standard for GPU virtualization. When was the last time Nvidia released something open source, or contributed to an open standard?

            • chuckula
            • 4 years ago

            There is a common trend where AMD is late to the game but AMD fanboys pretend that AMD is literally the only company that has ever produced innovation.

            Here’s a hint: AMD is finally coming around to implement open standards that have been in place for YEARS.

            Here’s a nice slide presentation... from Intel in [b][i]2011[/i][/b]... about SR-IOV: [url]http://www.intel.com/content/www/us/en/pci-express/pci-sig-sr-iov-primer-sr-iov-technology-paper.html[/url]

            You also make the FALSE assumption that just because AMD aped the name of a pre-existing standard in its marketing material, Nvidia somehow doesn’t support the standard. Here’s a paper... published 6 months ago... describing just how Nvidia hardware ALREADY WORKS WITH SR-IOV: [url]http://grids.ucs.indiana.edu/ptliupages/publications/15-md-gpudirect%20(1).pdf[/url]

            • xeridea
            • 4 years ago

            It’s not fanboyism, it’s dislike of a company trying as hard as it can to hold the industry back in an attempt to vendor-lock everyone. Nvidia “innovates” in the same way that Apple does: by despising competition and vendor-locking everything.

            Nvidia refuses to support Adaptive-Sync, but they will be forced to now that they are outvoted by AMD AND Intel, and who would want to buy a vendor-locked monitor? Nvidia has always loathed OpenCL because it isn’t vendor-locked; AMD has always had better support for it. Nvidia refused to support Mantle, but they will be forced to via Vulkan, which they also don’t like because all they care about is DX11 for graphics (as shown by their terrible DX12 benchmark results). Nvidia has their GameWorks garbage that runs poorly, even on their own hardware, for effects that are better achieved with open-source code.

            • chuckula
            • 4 years ago

            Yeah, I’ll believe that any of that is sincere after you publicly and forcefully scold AMD for vendor-locking their own hardware to Windows via abysmal Linux support and for failing to get FreeSync working with Linux.

            Yeah, that’s right: the supposedly “proprietary” G-Sync works with Linux and has worked with Linux for some time, while the supposedly “standards-compliant” FreeSync has received literally zero Linux support from AMD.

            • renz496
            • 4 years ago

            [quote]Nvidia refused to support Mantle[/quote]

            Please don't kid yourself. Complaining that Nvidia refused to support Mantle is pointless when AMD had no intention of truly opening the API in the first place. Intel asked for it, and AMD kept giving the "beta" excuse while commercial games were able to run Mantle just fine. At one point AMD even tried to use Intel's interest in Mantle as a marketing bullet point to convince game developers (only) to sign on to the Mantle program (hence the quick statement from Intel afterward to clarify its position).

            And don't try to mix Vulkan into this. In one of Richard Huddy's interviews, AMD clearly wanted Mantle to continue to exist alongside other graphics APIs. He even specifically mentioned that Mantle 2.0 would come out by the time MS officially launched DX12.

            [quote]but they will be forced to via Vulkan, which they also don't like[/quote]

            Prove this. If anything, it's Nvidia that has been much faster than AMD to support the latest OpenGL specs, as with OGL 4.4 and 4.5. That's why I wouldn't be surprised if Nvidia ends up with better Vulkan support than AMD in the future.

      • Beelzebubba9
      • 4 years ago

      ESXi has had a free license tier since vSphere 5.0, so you can download and install it without cost or legal issues.

        • Flatland_Spider
        • 4 years ago

        ESXi has had a free tier forever, but VMware started cutting features out of the free management client with 5.0 to force everyone to buy a vCenter license.

          • Beelzebubba9
          • 4 years ago

          Yeah, and they added most of the features back in with 5.1/5.5. 5.0 had the absurd RAM limitations, but those were removed in 5.1, and the free ESXi 5.5/6.0 supports the majority of features one would need in a single-host, non-production environment, and should support a full-fledged PoC of AMD’s GPU virtualization.

          Beyond that I can’t find fault in VMware for, you know, charging money for their product. It’s kind of their core business, you know. 🙂

            • Flatland_Spider
            • 4 years ago

            VMware trying to get people to buy licenses to use their hardware is not what I’m talking about at all. I’m talking about VMware starting to nerf the management tools that come with the free version of vSphere 5.

            I have no problem with them charging for software that is quite good. It’s when they start nickel and diming me that I start to have a problem.

            • Beelzebubba9
            • 4 years ago

            VMware doesn’t sell hardware, so I’m not sure what you’re talking about there? Also, what changed between 4.1 and 5.x regarding host management?

            I was of the impression 4.1 had the same requirements for a vCenter license that 5.x/6.x does. There were a couple of shitty choices, like the DRAM restrictions in ESXi 5.0 Free, but those were removed in 5.1 IIRC to help stem the tide of Xen/KVM in OpenStack deployments. If you said 5.5 or 6.0, which depend on vCenter server to run the web UI, then I’d see your point. But at least as of 5.5, the standalone client handled all functions for a standalone host, and the web UI was only required for vCenter and greater levels of functionality.

            • Flatland_Spider
            • 4 years ago

            [quote]If you said 5.5 or 6.0, which depend on vCenter server to run the web UI, then I'd see your point. But at least as of 5.5, the standalone client handled all functions for a standalone host, and the web UI was only required for vCenter and greater levels of functionality.[/quote]

            That's exactly what I'm talking about. The standalone management tools are frozen in their functionality, and new features were only being added to the web UI. I also wasn't happy with how they moved to ESXi as the main product, which killed the command line ESX had. Specifically, CentOS 7 was only available as an option in the vCenter web UI.

            I switched companies shortly after vSphere 5.5 was starting to be rolled out, so that's the only example I had time to find.

      • Forge
      • 4 years ago

      Google up GVT-g. You just need Intel to release a GPU worth using it with.

    • DPete27
    • 4 years ago

    How is this different than Nvidia’s Grid 2.0?

    Add: For reference, "[url=https://techreport.com/news/28939/grid-2-0-doubles-up-on-users-and-performance-with-maxwell-gpus]Each Grid server can use multiple GPUs, with up to 16 users per GPU[/url]"

      • Buzzard44
      • 4 years ago

      In the second PDF link, at the bottom of the second page, AMD presents a table contrasting it with Grid.

      According to AMD, AMD’s solution:
      - Hardware-based
      - Supports OpenCL 2.0
      - Stable/predictable performance
      - Dedicated share of local memory per user, for increased security
      - Up to 15 users per physical GPU

      According to AMD, Nvidia’s solution:
      - Software-based
      - Doesn’t support OpenCL 2.0
      - Unstable and/or unpredictable performance
      - No dedicated share of local memory per user, for reduced security
      - Up to 8 users per physical GPU

        • Thrashdog
        • 4 years ago

        I think the GRID cards have a couple of options. The high-end ones will only support 4 users apiece (because they assume you’ll want more horsepower per user if you’re paying for the big iron), but the low-end ones will do 16 users.

        What I really want to see is the ability to oversubscribe the cards and have them dynamically allocate resources to users based on demand, as in the sketch below. It’s a rare day that more than a few people at a time need all the power a GPU will give them.
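To make that concrete, here's a toy sketch of demand-based oversubscription versus fixed per-user slices. It's purely illustrative; neither vendor has published a scheduler API like this:

```python
# Toy model of demand-based GPU sharing versus fixed per-user slices.
# Hypothetical: neither AMD nor Nvidia exposes an allocator like this.

def allocate_shares(demands):
    """Split one GPU among users in proportion to current demand.

    `demands` maps user -> requested fraction of the GPU (0.0-1.0).
    If total demand fits, everyone gets what they asked for; otherwise
    shares are scaled down proportionally (oversubscription).
    """
    total = sum(demands.values())
    if total <= 1.0:
        return dict(demands)
    return {user: d / total for user, d in demands.items()}

# Ten light users plus one heavy user: the heavy user soaks up headroom
# that a fixed 1/15th slice would leave idle.
demands = {"user%d" % i: 0.05 for i in range(10)}
demands["render-box"] = 0.90
print(allocate_shares(demands))
```

A production scheduler would also need guaranteed minimums per user, which is closer to what AMD's fixed slices provide.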

          • Usacomp2k3
          • 4 years ago

            That’s my thinking as well. Which makes AMD’s case of “each user gets its own slice of the GPU’s memory” seem like a step in the wrong direction.

            • Thrashdog
            • 4 years ago

            Funny to think that after all these years we’re beginning to go right back to time-sharing on a mainframe.

            • the
            • 4 years ago

            And we’re going back to the terminal philosophy with everything now being handled ‘in the cloud’.

            • maxxcool
            • 4 years ago

            YUP!!!

          • wimpishsundew
          • 4 years ago

          AMD’s solution has three different options for the number of users per card. But AMD also said that how many users share it, and how resources are allocated, is up to the administrator.

          They didn’t describe it in detail, but AMD’s approach sounds no different from VPS hosting. There’s probably some type of dynamic resource allocation, but you’re guaranteed at least X amount when needed.

        • chuckula
        • 4 years ago

        The most important thing: according to the market, Nvidia’s solution is actually available.

        As for OpenCL: Seriously, nobody cares.

        As for being “Software based”: I have no idea what that means. Does that mean that the “Nvidia Grid” uses no Nvidia hardware and you can just run it with any generic processor?

        “has unstable and/or unpredictable performance”
        I thought that was AMD’s primary feature?

        “does not have a dedicated share of local memory for reduced security”

        Fancy way of saying that AMD has to statically allocate memory.

          • xeridea
          • 4 years ago

          [quote]As for OpenCL: Seriously, nobody cares.[/quote]

          Photoshop, Luxmark, Bitcoin miners...

          [quote]As for being "Software based": I have no idea what that means. Does that mean that the "Nvidia Grid" uses no Nvidia hardware and you can just run it with any generic processor?[/quote]

          It's like how old-school PC VMs had no hardware CPU support, so they were limited. Hardware-based virtualization allows more control, and 100% isolation between VMs, which is critical for the cloud. Grid is software, and vendor-locked.

          [quote]"has unstable and/or unpredictable performance"

          I thought that was AMD's primary feature?[/quote]

          Nvidia tends to have better DX11 drivers, but AMD has better OpenCL, DX12, and Vulkan support. AMD workstation cards also work better than you may think. DX11 driver optimizations will be obsolete soon, and newer APIs and compute support will be what matters.

          [quote]"does not have a dedicated share of local memory for reduced security"

          Fancy way of saying that AMD has to statically allocate memory.[/quote]

          It really is a fancy way of saying that AMD has full hardware support for VMs, and this is how VMs should be done if you care about security. Also, in theory, hardware GPU VMs would run faster due to not having software overhead, and they'd have full API support.

            • chuckula
            • 4 years ago

            [quote<]Photoshop, Luxmark, Bitcoin miners...[/quote<] OK, you are either really dumb or a shill.

            • xeridea
            • 4 years ago

            Are you insisting that no one using GPUs in the cloud uses OpenCL? Nvidia doesn’t care about OpenCL because it limits their CUDA vendor lock, but OpenCL is very popular, even if you don’t believe it. Saying something enough times with nothing to back it up doesn’t make it true.

            • chuckula
            • 4 years ago

            [quote]Are you insisting that no one using GPUs in the cloud uses OpenCL?[/quote]

            Yeah, pretty much. The rest is just marketing BS that you lap up because you obviously have never had a real job dealing with large networks, and throwing around words like "cloud" makes me even less impressed.

            Incidentally, even "GPUs in the cloud" -- which are pretty much all Nvidia GPUs -- can support OpenCL; it's just that Nvidia hasn't gotten around to full OpenCL 2.0 support yet. Once again, who cares: I'd rather have Nvidia hardware with OpenCL support at the spec level of most real-world software AND CUDA support (which is what everyone uses anyway) than AMD with OpenCL (kinda) and zero CUDA support of any kind.

          • the
          • 4 years ago

          [quote]As for OpenCL: Seriously, nobody cares.[/quote]

          Sony Vegas and some of the other video editing software I work with use OpenCL. Granted, everything is OpenCL 1.0 or 1.1, but I suspect that OpenCL 2.0 could be used in my workflow.

            • chuckula
            • 4 years ago

            So you are using Sony Vegas in a multi-user virtualized setting that requires OpenCL 2.0?

            Remember the context here.

            • the
            • 4 years ago

            Ironically, I recently proposed that as a ‘worst case’ scenario, since the VM clusters available don’t have any sort of hardware GPU acceleration. So to support the workload I’d be throwing at it, either technologies like GRID would have to be installed throughout the cluster (expensive), or my desktop instance would get to monopolize a beefy VM host (also expensive) for rendering while negatively impacting performance for others on the same host. It was wisely decided that a dedicated system would be the best course of action in terms of both performance and cost.

            • Deanjo
            • 4 years ago

            Vegas also supports CUDA, and it tends to have better performance than AMD’s OpenCL acceleration.

            • the
            • 4 years ago

            True but unfortunately, the laptop I have only has subpar Intel integrated graphics.

            On the bright side, I can use Quick Sync with some renderers which is awesome.

        • the
        • 4 years ago

        The weird thing is that I thought nVidia could partition a running context on their GPUs on a per-SMX/SMM basis. Which context gets which SMX could very well be managed by software akin to a hypervisor running on the GPU. If that is the case, it really isn’t that big of a deal that the resource allocation is done via software, since each context does get dedicated hardware (though there is always the security aspect to consider). Essentially, this virtualization technique breaks down to: number of SMX/SMM units = max number of concurrent users. Overcommitting the number of users would require context switches, which would cause a drop in performance beyond just having a single SMX/SMM cluster assigned. AMD does have the ability to quickly switch between running contexts, so there is an advantage here, but how large it is depends on the workload and user count.

        nVidia has restricted the number of concurrent contexts on their consumer cards, with only the Tesla brand being able to divide down to one context per SMX. I’m not sure how the Quadro lineup fits into this (whether the context count is fully unlocked like Tesla’s), but older models at least were not as locked down as the consumer GeForce cards.

        • Ninjitsu
        • 4 years ago

        GRID 2.0, however, supports up to 32 users with the M60, but that’s a dual-GPU card, so per GPU it’s 16.

        AMD is comparing against first-gen GRID.

        Maxwell has OpenCL 2.0 support, so I’m not sure why GRID 2.0 wouldn’t support it, though GRID 2.0 does support CUDA now (and Linux).

    • sweatshopking
    • 4 years ago

    MY DREAMS COME TRUE!

      • chuckula
      • 4 years ago

      You have weird dreams.

      • the
      • 4 years ago

      A keyboard with only caps lock keys?
