The latest NVIDIA Virtual Compute Server software, paired with NVIDIA A100 GPUs, boosts performance for AI and data science workloads on virtualized infrastructure.
From AI to VDI, NVIDIA virtual GPU products provide employees with powerful performance for any workflow.
vGPU technology helps IT departments easily scale the delivery of GPU resources, and allows professionals to collaborate and run advanced graphics and computing workflows from the cloud.
Now, NVIDIA is expanding its vGPU software features with a new release that supports the NVIDIA A100 Tensor Core GPU with NVIDIA Virtual Compute Server (vCS) software. Based on NVIDIA vGPU technology, vCS enables AI and compute-intensive workloads to run in virtual machines (VMs).
With support for the NVIDIA A100, the latest NVIDIA vCS delivers significantly faster performance for AI and data analytics workloads.
Powered by the NVIDIA Ampere architecture, the A100 GPU provides strong scaling for GPU compute and deep learning applications running in single- and multi-GPU servers.
Engineers, researchers, students, data scientists and others can now tackle compute-intensive workloads in a virtual environment, accessing the most powerful GPU in the world through virtual machines that can be securely provisioned in minutes. As NVIDIA A100 GPUs become available in vGPU certified servers from NVIDIA’s partners, professionals across all industries can accelerate their workloads with powerful performance.
Additional new features of the NVIDIA vGPU September 2020 release include:
Multi-Instance GPU (MIG) with VMs: MIG expands the performance and value of the NVIDIA A100 by partitioning each GPU into as many as seven instances. Each MIG instance is fully isolated, with its own high-bandwidth memory, cache and compute cores. By combining MIG with vCS, enterprises can take advantage of the management, monitoring and operational benefits of hypervisor-based server virtualization, running a VM on each MIG partition.
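As a rough illustration of how a MIG partition is created on the host, the sketch below uses the `nvidia-smi mig` subcommands. It assumes GPU index 0 and profile ID 19 (commonly the 1g.5gb profile on an A100-40GB); actual IDs and sizes vary by driver version, so list the supported profiles first.

```shell
# List the GPU instance profiles this A100 supports (IDs/sizes vary by driver).
nvidia-smi mig -lgip

# Enable MIG mode on GPU 0 (assumed index; takes effect after a GPU reset).
nvidia-smi -i 0 -mig 1

# Create seven 1g.5gb GPU instances (profile ID 19 here is an assumption)
# and a default compute instance on each (-C).
nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting GPU instances.
nvidia-smi mig -lgi
```

With vCS, each of the resulting instances can then be assigned to a separate VM through the hypervisor's vGPU management tooling.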
Heterogeneous Profiles and OSes: Because MIG supports differently sized instances, heterogeneous vCS profiles can be used on a single A100 GPU, allowing VMs of various sizes to run side by side. Additionally, with VMs running on NVIDIA GPUs with vCS, heterogeneous operating systems can run on one A100 GPU, with different Linux distributions running simultaneously in different VMs.
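A mixed-size layout can be requested in a single command, again via `nvidia-smi mig`. The profile IDs below (9 for 3g.20gb, 14 for 2g.10gb, 19 for 1g.5gb) are typical for an A100-40GB but are driver-dependent assumptions; confirm them with `nvidia-smi mig -lgip` before use.

```shell
# Carve one A100 into a heterogeneous mix: one 3g.20gb, one 2g.10gb,
# and two 1g.5gb instances, each with a default compute instance (-C).
# Profile IDs 9, 14 and 19 are assumed; they vary by driver version.
nvidia-smi mig -i 0 -cgi 9,14,19,19 -C
```

Each instance can then back a VM of a matching vCS profile size, and those VMs may run different Linux distributions independently.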
GPUDirect Remote Direct Memory Access: Now supported with NVIDIA vCS, GPUDirect RDMA enables network devices to access GPU memory directly, bypassing CPU host memory, decreasing GPU-to-GPU communication latency and completely offloading the CPU in a virtualized environment.