AMD introduces Vega to VDI with Radeon Pro V340

AMD has long touted the benefits of its server graphics cards for virtual desktop infrastructure (VDI), and it's updating its Radeon Pro lineup's ability to deliver desktops over the network with the Radeon Pro V340 graphics card. The V340 takes a pair of Vega 10 GPUs with 56 compute units enabled apiece, yokes 16 GB of HBM2 RAM to each one, and slaps them on a single card capable of handling up to 32 VMs with 1 GB of graphics memory each.
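
The per-guest memory math is straightforward: each Vega 10 GPU brings 16 GB of HBM2, so the framebuffer per guest simply depends on how many guests share a GPU. Here's a quick illustrative sketch in Python (the guest counts shown are assumptions for the example, not AMD's published list of supported partitionings):

```python
# Illustrative only: how the V340's 16 GB-per-GPU HBM2 pool divides across
# different guest counts. The counts below are assumptions for the example.
HBM2_PER_GPU_GB = 16
GPUS_PER_CARD = 2

for guests_per_gpu in (1, 2, 4, 8, 16):
    framebuffer_gb = HBM2_PER_GPU_GB / guests_per_gpu
    total_guests = guests_per_gpu * GPUS_PER_CARD
    print(f"{total_guests:2d} guests per card -> {framebuffer_gb:g} GB each")
```

The 16-guests-per-GPU case is the article's headline configuration: 32 VMs per card with 1 GB of graphics memory each.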

The Radeon Pro V340 does its thing with AMD's Multiuser GPU, or MxGPU, technology. MxGPU uses what the company calls hardware-based virtualization to claim more consistent, deterministic performance than VDI approaches that depend on software management. AMD notes that no single user can tie up one of its virtualized graphics cards, and that (at least in theory) guests can't snoop on one another's portions of graphics memory, either. The company also touts MxGPU's freedom from recurring licensing fees.
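
MxGPU's hardware-based approach is built on PCIe SR-IOV, so each guest sees its own virtual function of the physical GPU. As a rough illustration only, here's what provisioning virtual functions can look like through a Linux host's standard sysfs interface; the PCI address below is a placeholder, and AMD's own host driver and tooling are the supported path:

```python
# Rough sketch of SR-IOV virtual-function provisioning on a Linux host.
# The PCI address is a placeholder; AMD's host driver and tools are the
# supported way to carve an MxGPU card into guest-visible functions.
from pathlib import Path

GPU_BDF = "0000:03:00.0"  # hypothetical PCI address of one of the V340's GPUs
dev = Path("/sys/bus/pci/devices") / GPU_BDF

total_vfs = int((dev / "sriov_totalvfs").read_text())
print(f"{GPU_BDF} advertises up to {total_vfs} virtual functions")

# Request 16 guests on this GPU (which would mean 1 GB of HBM2 apiece on a V340).
requested = min(16, total_vfs)
(dev / "sriov_numvfs").write_text(str(requested))
```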

Compared to the company's last dual-GPU VDI outing, the FirePro S7150 x2, the V340 doesn't increase the maximum number of guests that can reside on a single board. That said, its Vega chips should offer much better performance overall and per slice, along with a larger frame buffer per user, compared to the Tonga chips that were virtually subdivided inside the S7150 x2. VDI administrators who need pro-grade graphics in a virtual environment can check out the V340 at VMworld in Las Vegas through August 30.

Comments closed
    • R-Type
    • 1 year ago

    VDI is such a rip-off outside of the thinnest of use cases.

      • chuckula
      • 1 year ago

      Keep your clients thin!

    • techguy
    • 1 year ago

    Depending on price, I could see going this route instead of 2x Tesla P40 for an upcoming VDI refresh for our engineering school.

    • ptsant
    • 1 year ago

    I don’t quite understand the use case for this. Is it for people who run GPU compute on the cloud? Aren’t these people better off having multiple GPUs per server?

    Seems like a very very niche product to me. Elucidate me…

      • chuckula
      • 1 year ago

      This isn’t for GPU compute [at least not as an intended market]. This is for running a bunch of VMs that actually have 3D acceleration using shared resources from a single physical GPU.

      • Usacomp2k3
      • 1 year ago

      For example, at our company, we have ~50 engineers who each have Quadros in their PCs. In theory, those could be downgraded to simpler GPUs, with all rendering set up to happen remotely in the server room. Personally I like the ones that scale the resources with the number of connected clients instead of hard-coding them (i.e., if only 2 people are working on it, they each get 50% of the resources).

        • shank15217
        • 1 year ago

        These are virtual PCI functions; what you're describing is a software layer. Virtual PCI functions can only be set to take static resources during driver initialization. You also want consistent performance. Your idea is great in theory but doesn't work in real scenarios.

      • dragontamer5788
      • 1 year ago

      There are various professional products that benefit greatly from GPU acceleration. For example, let's say a Blender rig. You can use Cycles' GPU rendering to raytrace your results with OpenCL.

      However, 90% of the time, that capability will sit unused on an individual’s machine. So it is more efficient for an organization to pool computers together using thin clients + virtualization.

      So maybe you put 8 users on a single machine, and that single machine has 4x Radeon Pro V340. When one user hits the “render” button, there will effectively be 8x Vega 56 cards working on that one person's problem.

      Consider a central machine that's a dual EPYC 7601 setup (64 cores / 128 threads) with 4x Radeon Pro V340, and each user gets a $200 Dell Wyse thin client.

      Now, all 8 users could hit render at the same time, at which point the OS would load-balance and split it up so that each user has 1x Vega 56 for their render. So you're never worse off than 8x machines with 8x Vega 56 cards, 8x motherboards, and 8x CPUs.

      But it can be better! Because if 4 people don't show up at work, the 4 other users basically get double performance (since you can “use their GPU resources” during your renders).
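
      The pooling arithmetic above is easy to model. Here's a toy sketch in Python, purely illustrative (and optimistic: as shank15217 notes elsewhere in the thread, real MxGPU virtual functions are partitioned statically rather than rebalanced on the fly):

      ```python
      # Toy model of the scenario above: 4x Radeon Pro V340 = 8 GPU "slices",
      # divided evenly among whoever is actually rendering at the moment.
      # Purely illustrative; real MxGPU virtual functions are assigned statically.
      TOTAL_GPU_SLICES = 8

      def slices_per_renderer(active_users: int) -> float:
          """How many GPU slices each active user would get under even sharing."""
          return TOTAL_GPU_SLICES / active_users if active_users else 0.0

      for users in (8, 4, 1):
          print(f"{users} user(s) rendering -> {slices_per_renderer(users):g} GPU(s) each")
      ```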

        • gerryg
        • 1 year ago

        But is the allocation of virtual GPU resources dynamic based on load/users, or is it configured and essentially static? I.e., if somebody doesn't log in, do the resources remain reserved? The smarts of the management software doing things dynamically are key to your scenario working; you don't want to have to reconfigure and (virtually?) reboot the machine to reallocate resources on an hourly or daily basis as the number of users changes. Worst case, somebody has to log out and log back in with a fresh session to get the resources, but if they do that and somebody else shows up ready to work, that person won't have resources and is locked out.

          • shank15217
          • 1 year ago

          Part of a cluster of machines, usually. In that cluster you can have different allocations.

    • Ryhadar
    • 1 year ago

    I think it would be so cool to have this for home use. Maybe limit it to 4 concurrent users and then have a high end gaming PC that serves the whole house.

      • tipoo
      • 1 year ago

      I guess I'm not sure why that would appeal over game streaming to any point in your house. GPUs still aren't fantastic at task switching, so even three other very low-load users could probably disproportionately hit your frame times.

    • MileageMayVary
    • 1 year ago

    How about just 2 VMs with 16GB each, one for me and one for the wife!

      • shank15217
      • 1 year ago

      You can do that; it's up to 32 virtual functions per card.

    • chuckula
    • 1 year ago

    ~~Pizza Pizza~~ I mean, Vega Vega.

      • Krogoth
      • 1 year ago

      #PoorTuring
