
Nvidia Quadro vDWS brings greater flexibility to virtualized pro graphics

Renee Johnson

Nvidia’s Quadro pro graphics cards and their accompanying drivers will be familiar to anybody who’s ever had to get down to business in CAD or CAE applications, but hulking workstations Kensington-locked to a desk seem less and less representative of how businesses and employees work these days. For ease of management and security, more and more businesses are delivering applications or entire virtual desktops over the network to lightweight client computing devices, and it’s just not possible to cram an entire Quadro GP100 into an ultrabook or tablet.

Nvidia’s GRID GPU virtualization software has long offered businesses the benefits of Quadro driver certifications and optimizations through its Virtual Workstation per-user license, but the flexibility of Maxwell Tesla accelerators for virtual workstations was limited by the architecture’s lack of preemption support. Virtual users who needed to run CUDA applications with GRID, for example, had to be allocated an entire Tesla card to do their work, a constraint that runs counter to making the most of one’s virtual GPU acceleration resources by sharing them among as many users as possible.

All that changes today with Nvidia’s introduction of its Quadro Virtual Data Center Workstation software, which will run on Pascal-powered Tesla accelerators for the first time. Quadro vDWS, as Nvidia abbreviates it, could offer system administrators a much more flexible way of provisioning workstation power to remote users. Because the Pascal architecture supports preemption in hardware, a Tesla accelerator with a Pascal GPU can be sliced and diced as needed to support users whose performance needs vary, but who all need Quadro drivers.

For example, a Pascal Tesla might be configured to offer one virtual user 50% of its resources, while two other users might receive 25% each. That quality-of-service management wasn’t possible with virtual workstations on Maxwell accelerators. All of those users can run CUDA applications or demanding graphics workloads without taking over the entire graphics card.
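To make that arithmetic concrete, here’s a minimal Python sketch of that kind of fractional provisioning. It only models the quality-of-service split; the slice_card helper is our own illustration rather than Nvidia’s vGPU management interface, and the 24GB figure is the Tesla P40 framebuffer size discussed below.

```python
# Illustrative sketch of fractional vGPU provisioning -- not Nvidia's actual
# vGPU management API. The 24GB total matches the Tesla P40 discussed below.

P40_FRAMEBUFFER_GB = 24

def slice_card(total_gb, fractions):
    """Split a card's framebuffer into per-user slices by fraction.

    Refuses to oversubscribe the card, mirroring the fixed
    quality-of-service guarantee described above.
    """
    if sum(fractions) > 1.0:
        raise ValueError("requested slices oversubscribe the GPU")
    return [round(total_gb * f, 1) for f in fractions]

# One user at 50%, two users at 25% each, as in the example above.
print(slice_card(P40_FRAMEBUFFER_GB, [0.50, 0.25, 0.25]))  # [12.0, 6.0, 6.0]
```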

Quadro vDWS will also support every Tesla accelerator going forward. Nvidia says that in the past, only a select range of Maxwell cards could be used for virtual workstation duty. Customers’ existing Maxwell accelerators will still work with Quadro vDWS, but Pascal Teslas offer potentially exciting new flexibility beyond guaranteed quality of service. The Tesla P40, for example, joins 24GB of memory with a single GPU. That large pool of RAM could help administrators virtualize users whose data sets simply couldn’t fit on older products. Past GRID-compatible Maxwell Teslas could only boast as much as 8GB of RAM per GPU.

Quadro vDWS also isn’t limited to the highest-performance applications and most demanding users. Since system administrators can use vDWS with any Pascal Tesla, they can better tailor hardware to the needs of a given set of users. Nvidia believes performance-hungry users will still want access to the beefy P40 or the new Tesla P6 for blade servers, but less-demanding users at a branch office could still get the Quadro software support of vDWS through slices of the entry-level P4 accelerator.

Along with Quadro vDWS, Nvidia is introducing updates to its GRID virtual PC, or vPC, service. The updated vPC can now take advantage of Pascal Teslas, as well. Nvidia has been touting the advantages of GPU acceleration for virtual desktop infrastructure for some time, and it only expects common productivity applications’ demand for GPU acceleration to increase, making the increased performance of Pascal Teslas more appealing.


The specs of the new Tesla P6

On top of their increased performance, Pascal Teslas could also increase the user density per accelerator. Nvidia says the same Tesla P40 we noted for vDWS work can host 24 1GB vPC slices for accelerating these increasingly demanding virtual desktop apps, and the newly introduced Tesla P6 accelerator for blade servers offers 16GB of memory, up from the M6’s 8GB. The company says recent developments in server hardware will drive up density per rack unit, too. Nvidia says vendors like Cisco will be able to cram as many as four P6 cards into a blade with Intel’s Skylake Xeons playing host, as just one example.
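For a sense of scale, here’s a quick back-of-the-envelope density calculation from those figures. The per-blade total is simply our own multiplication of Nvidia’s numbers, not a vendor-published spec.

```python
# Back-of-the-envelope vPC density math from the figures above. The per-blade
# total is our own arithmetic from Nvidia's numbers, not a vendor spec.

VPC_SLICE_GB = 1  # the 1GB vPC profile cited for the Tesla P40

cards = {
    "Tesla P40": {"framebuffer_gb": 24, "cards_per_blade": None},
    "Tesla P6":  {"framebuffer_gb": 16, "cards_per_blade": 4},  # Nvidia's Cisco example
}

for name, card in cards.items():
    users_per_card = card["framebuffer_gb"] // VPC_SLICE_GB
    line = f"{name}: {users_per_card} x {VPC_SLICE_GB}GB vPC users per card"
    if card["cards_per_blade"]:
        line += f", up to {users_per_card * card['cards_per_blade']} per blade"
    print(line)
# Tesla P40: 24 x 1GB vPC users per card
# Tesla P6: 16 x 1GB vPC users per card, up to 64 per blade
```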

GRID’s management software is also getting an update today. System administrators will have finer-grained tools to examine the applications users are running in their VMs in this release, and if a particular Chrome tab or other accelerated application is causing performance issues on the user’s slice of the virtualized GPU, the administrator will be able to remotely zap the offender. Of course, if a user doesn’t have to call a help desk, that’s the best possible outcome. Nvidia says better monitoring tools within the guest VM will allow users to identify troublesome applications themselves and kill them from within their VMs independently, too. 
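As a rough illustration of what that in-guest workflow could look like, here’s a sketch built on the generic pynvml and psutil packages rather than GRID’s own monitoring tools, whose exact interfaces Nvidia hasn’t detailed. The memory threshold and the decision to terminate on sight are purely illustrative.

```python
# Rough in-guest sketch of "find the hungry app and kill it" using generic
# packages (pynvml, psutil) -- not GRID's own monitoring tooling.
import pynvml
import psutil

MEMORY_LIMIT_MB = 2048  # hypothetical threshold for a "troublesome" process

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # the VM's virtual GPU

# List compute processes using the (v)GPU and their memory footprints.
# Graphics clients can be enumerated similarly with
# nvmlDeviceGetGraphicsRunningProcesses().
for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
    used_mb = (proc.usedGpuMemory or 0) / 1024**2
    p = psutil.Process(proc.pid)
    print(f"{p.name()} (pid {proc.pid}) is using {used_mb:.0f} MB of GPU memory")
    if used_mb > MEMORY_LIMIT_MB:
        p.terminate()  # zap the offender from inside the VM

pynvml.nvmlShutdown()
```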

I’ve been following Nvidia’s GRID for some time now, and this latest round of updates sounds like a significant gain in flexibility, user density, and performance for the product. System administrators with Pascal Teslas in their server farms can begin taking advantage of these new capabilities on that hardware September 1.
