Nvidia Quadro vDWS brings greater flexibility to virtualized pro graphics

Pascal Teslas play host to Quadro virtues

Nvidia's Quadro pro graphics cards and their accompanying drivers will be familiar to anybody who's ever had to get down to business in CAD or CAE applications, but hulking workstations Kensington-locked to a desk are an increasingly poor fit for the way businesses and employees work these days. For ease of management and security, more and more businesses are delivering applications or entire virtual desktops over the network to lightweight client computing devices, and it's just not possible to cram an entire Quadro GP100 into an ultrabook or tablet.

Nvidia's GRID GPU virtualization software has long offered businesses the benefits of Quadro driver certifications and optimizations through its Virtual Workstation per-user license, but the flexibility of Maxwell Tesla accelerators for virtual workstations was limited by that architecture's lack of preemption support. Virtual users who needed to run CUDA applications with GRID, for example, had to be allocated an entire Tesla card to get their work done—a constraint that runs counter to making the most of one's virtual GPU acceleration resources by sharing them among as many users as possible.

All that changes today with Nvidia's introduction of its Quadro Virtual Data Center Workstation software, which will run on Pascal-powered Tesla accelerators for the first time. Quadro vDWS, as Nvidia abbreviates it, could offer system administrators a much more flexible way of provisioning workstation power to remote users. Because the Pascal architecture supports preemption in hardware, a Tesla accelerator with a Pascal GPU can be sliced and diced as needed to support users whose performance needs vary, but who all need Quadro drivers.

For example, a Pascal Tesla might be configured to offer one virtual user 50% of its resources, while two other users might receive 25% each. That quality-of-service management wasn't possible with virtual workstations on Maxwell accelerators. All of those users can run CUDA applications or demanding graphics workloads without taking over the entire graphics card.
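As a rough illustration of that 50%-25%-25% split, here's a minimal sketch of fractional provisioning with guaranteed shares. To be clear, this is not Nvidia's actual API: the class and method names below are hypothetical, and the real vGPU manager assigns fixed profile types rather than arbitrary percentages.

```python
# Illustrative sketch only: models carving a Pascal Tesla into guaranteed
# fractional slices. All names here are hypothetical, not Nvidia's API.

class PascalTesla:
    def __init__(self, name, framebuffer_gb):
        self.name = name
        self.framebuffer_gb = framebuffer_gb
        self.free_fraction = 1.0   # share of the GPU not yet assigned
        self.slices = {}           # user -> guaranteed fraction of the GPU

    def provision(self, user, fraction):
        """Reserve a guaranteed fraction of the GPU for one virtual user."""
        if fraction > self.free_fraction:
            raise ValueError(f"only {self.free_fraction:.0%} of {self.name} left")
        self.free_fraction -= fraction
        self.slices[user] = fraction
        # Return the framebuffer this slice would carry along with it.
        return fraction * self.framebuffer_gb

p40 = PascalTesla("Tesla P40", framebuffer_gb=24)
print(p40.provision("designer", 0.50))   # 12.0 GB slice
print(p40.provision("analyst-1", 0.25))  # 6.0 GB slice
print(p40.provision("analyst-2", 0.25))  # 6.0 GB slice
```

The point of the sketch is the guarantee: once a fraction is reserved, no other user can claim it, which is the quality-of-service behavior hardware preemption makes practical on Pascal.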

Quadro vDWS will also support every Tesla accelerator going forward. Nvidia says that in the past, only a select range of Maxwell cards could be used for virtual workstation duty. Customers' existing Maxwell accelerators will still work with Quadro vDWS, but Pascal Teslas offer potentially exciting new flexibility beyond guaranteed quality of service. The Tesla P40, for example, joins 24GB of memory with a single GPU. That large pool of RAM could help administrators virtualize users whose data sets simply couldn't fit on older products. Past GRID-compatible Maxwell Teslas could only boast as much as 8GB of RAM per GPU.

Quadro vDWS also isn't limited to the highest-performance applications and most demanding users. Since system administrators can use vDWS with any Pascal Tesla, they can better tailor hardware to the needs of a given set of users. Nvidia believes performance-hungry users will still want access to the beefy P40 or the new Tesla P6 for blade servers, but less-demanding users at a branch office could still get the Quadro software support of vDWS through slices of the entry-level P4 accelerator.

Along with Quadro vDWS, Nvidia is introducing updates to its GRID virtual PC, or vPC, service. The updated vPC can now take advantage of Pascal Teslas, as well. Nvidia has been touting the advantages of GPU acceleration for virtual desktop infrastructure for some time, and it only expects common productivity applications' demand for GPU acceleration to increase, making the increased performance of Pascal Teslas more appealing.

The specs of the new Tesla P6

On top of their increased performance, Pascal Teslas could also increase the user density per accelerator. Nvidia says the same Tesla P40 we noted for vDWS work can host 24 1GB vPC slices for accelerating these increasingly demanding virtual desktop apps, and the newly-introduced Tesla P6 accelerator for blade servers offers 16GB of memory, up from the M6's 8GB. The company says recent developments in server hardware will drive up density per rack unit, too. Nvidia says vendors like Cisco will be able to cram as many as four P6s into a blade with Intel's Skylake Xeons playing host, as just one example.
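The density figures above reduce to simple framebuffer arithmetic. A quick sketch, using only the numbers from this article and assuming one vPC user per 1GB slice:

```python
# Back-of-the-envelope user density from the figures in the article.

P40_FRAMEBUFFER_GB = 24
VPC_SLICE_GB = 1  # the 1GB vPC slice size cited for the P40

users_per_p40 = P40_FRAMEBUFFER_GB // VPC_SLICE_GB
print(users_per_p40)  # 24 vPC users per Tesla P40

# Nvidia's Cisco blade example: up to four Tesla P6 cards per blade.
P6_FRAMEBUFFER_GB = 16
P6_PER_BLADE = 4

users_per_blade = (P6_FRAMEBUFFER_GB * P6_PER_BLADE) // VPC_SLICE_GB
print(users_per_blade)  # 64 vPC users per fully-loaded blade
```

Actual density would depend on the vGPU profiles an administrator chooses; larger slices for heavier users trade directly against user count.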

GRID's management software is also getting an update today. System administrators will have finer-grained tools to examine the applications users are running in their VMs in this release, and if a particular Chrome tab or other accelerated application is causing performance issues on the user's slice of the virtualized GPU, the administrator will be able to remotely zap the offender. Of course, the best outcome is one where the user never has to call the help desk at all. To that end, Nvidia says better monitoring tools within the guest VM will allow users to identify troublesome applications themselves and kill them from within their VMs independently, too.

I've been following Nvidia's GRID for some time now, and this latest round of updates sounds like a significant gain in flexibility, user density, and performance for the product. System administrators with Pascal Teslas in their server farms can begin taking advantage of these new capabilities on that hardware September 1.

Tags: Graphics, Software, Servers, Workstations
