The cGPU service is a container sharing technology based on kernel-level virtual GPU isolation, developed by Alibaba Cloud. It allows multiple containers to share a single GPU while keeping their workloads isolated and secure, which improves GPU utilization and reduces costs.

Why choose the cGPU service?

  • More open

    cGPU is compatible with standard open source solutions such as Kubernetes and NVIDIA Docker.

  • Simpler

    You do not need to re-compile AI applications or replace the Compute Unified Device Architecture (CUDA) library. Reconfiguration is not required after the CUDA and NVIDIA CUDA Deep Neural Network (cuDNN) libraries are upgraded.

  • More stable

    API operations at the CUDA layer and undocumented operations in cuDNN change frequently and can be unstable. The underlying operations on NVIDIA devices, on which cGPU relies, are more stable and convergent.

  • Complete isolation

    cGPU isolates the GPU memory and computing power allocated to each container, so that containers do not affect each other.

  • No limit on instances

    cGPU can be used on various GPU-accelerated instance types, such as GPU-accelerated bare metal instances, virtualized instances, and vGPU-accelerated instances.

cGPU architecture


When multiple containers run on a single physical GPU and the GPU resources are isolated among the containers, the utilization of GPU hardware resources improves.

cGPU uses a server kernel driver developed by Alibaba Cloud to provide virtual GPU devices for containers. The driver isolates GPU memory and computing power without compromising performance, and ensures that GPU hardware resources are fully used for training and inference. You can run commands to configure the virtual GPU devices in containers.
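As a minimal sketch of what such a configuration command might look like, the following `docker run` invocation allocates a slice of GPU memory to a container through environment variables. The variable names `ALIYUN_COM_GPU_MEM_DEV` and `ALIYUN_COM_GPU_MEM_CTR`, the image tag, and the memory sizes are illustrative assumptions; consult the cGPU user guide for the exact parameters supported by your driver version.

```shell
# Hypothetical example: start a container that may use 4 GiB of GPU
# memory on a physical GPU with 16 GiB in total.
#   ALIYUN_COM_GPU_MEM_DEV - total memory of the physical GPU, in GiB (assumed name)
#   ALIYUN_COM_GPU_MEM_CTR - memory quota visible to this container, in GiB (assumed name)
docker run -d --runtime=nvidia \
  -e ALIYUN_COM_GPU_MEM_DEV=16 \
  -e ALIYUN_COM_GPU_MEM_CTR=4 \
  nvidia/cuda:11.4.3-base-ubuntu20.04 sleep infinity

# Inside the container, tools such as nvidia-smi would then report only
# the 4 GiB quota rather than the full 16 GiB of the physical GPU.
```

Because the quota is enforced by the kernel driver, the application inside the container needs no changes: CUDA allocations beyond the quota simply fail as if the device had less memory.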