This topic describes the features of vGPU-accelerated instance families of Elastic Compute Service (ECS) and lists the instance specifications of each instance family.

sgn7i-vws, vGPU-accelerated instance family with shared CPUs

Features:
  • This instance family uses the third-generation SHENLONG architecture to provide predictable and consistent ultra-high performance. It uses fast path acceleration on the chip to improve storage performance, network performance, and computing stability by an order of magnitude, so data can be stored and models can be loaded more quickly.
  • Instances of the sgn7i-vws instance family share CPU and network resources to maximize the utilization of underlying resources. Each instance has exclusive access to its memory and GPU memory to ensure data isolation and high performance.
    Note If you want to use exclusive CPU resources, select the vgn7i-vws instance family.
  • This instance family comes with the NVIDIA GRID vWS license to provide certified graphics acceleration capabilities for Computer Aided Design (CAD) software to meet the requirements of professional graphic design. Instances of this instance family can be used as lightweight GPU-accelerated compute-optimized instances to reduce the costs of small-scale AI inference tasks.
  • Compute:
    • Uses NVIDIA A10 GPUs that have the following features:
      • Innovative Ampere architecture
      • Support for acceleration features such as vGPU, RTX, and the TensorRT inference engine, which cover a wide range of business workloads
    • Uses 2.9 GHz Intel® Xeon® Scalable (Ice Lake) processors that deliver an all-core turbo frequency of 3.5 GHz.
  • Storage:
    • Is an instance family in which all instances are I/O optimized.
    • Supports enhanced SSDs (ESSDs), standard SSDs, and ultra disks.
      Note For more information about the performance of cloud disks, see EBS performance.
  • Network:
    • Supports IPv6 addresses.
    • Provides high network performance to match the large computing capacity of the instances.
  • Applicable scenarios:
    • Concurrent AI inference tasks that require high-performance CPUs, memory, and GPUs, such as image recognition, speech recognition, and behavior identification
    • Compute-intensive graphics processing tasks that require high-performance 3D graphics virtualization capabilities, such as remote graphic design and cloud gaming
    • 3D modeling on Ice Lake processors in fields such as animation and film production, cloud gaming, and mechanical design
Instance types
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Baseline/burst bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs
ecs.sgn7i-vws-m2.xlarge | 4 | 15.5 | NVIDIA A10 * 1/12 | 24 GB * 1/12 | 1.5/5 | 500,000 | 4 | 2
ecs.sgn7i-vws-m4.2xlarge | 8 | 31 | NVIDIA A10 * 1/6 | 24 GB * 1/6 | 2.5/10 | 1,000,000 | 4 | 4
ecs.sgn7i-vws-m8.4xlarge | 16 | 62 | NVIDIA A10 * 1/3 | 24 GB * 1/3 | 5/20 | 2,000,000 | 8 | 4
ecs.sgn7i-vws-m2s.xlarge | 4 | 8 | NVIDIA A10 * 1/12 | 24 GB * 1/12 | 1.5/5 | 500,000 | 4 | 2
ecs.sgn7i-vws-m4s.2xlarge | 8 | 16 | NVIDIA A10 * 1/6 | 24 GB * 1/6 | 2.5/10 | 1,000,000 | 4 | 4
ecs.sgn7i-vws-m8s.4xlarge | 16 | 32 | NVIDIA A10 * 1/3 | 24 GB * 1/3 | 5/20 | 2,000,000 | 8 | 4
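
The fractions in the GPU and GPU memory columns describe the slice of a physical A10 that each instance type receives. For a quick sense of the resulting GPU memory budget (for example, 1/12 of a 24 GB A10 is 2 GB), the following minimal Python sketch works through the arithmetic. The instance-to-fraction mapping is copied from the table above for illustration, not read from any API.

```python
from fractions import Fraction

# Physical NVIDIA A10 memory, as listed in the table above.
A10_MEMORY_GB = 24

# vGPU slice per sgn7i-vws instance type, copied from the table above.
GPU_SHARE = {
    "ecs.sgn7i-vws-m2.xlarge":   Fraction(1, 12),
    "ecs.sgn7i-vws-m4.2xlarge":  Fraction(1, 6),
    "ecs.sgn7i-vws-m8.4xlarge":  Fraction(1, 3),
    "ecs.sgn7i-vws-m2s.xlarge":  Fraction(1, 12),
    "ecs.sgn7i-vws-m4s.2xlarge": Fraction(1, 6),
    "ecs.sgn7i-vws-m8s.4xlarge": Fraction(1, 3),
}

for instance_type, share in GPU_SHARE.items():
    memory_gb = float(A10_MEMORY_GB * share)
    print(f"{instance_type}: {share} of an A10 -> {memory_gb:g} GB GPU memory")
```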

vgn7i-vws, vGPU-accelerated instance family

Features:
  • This instance family uses the third-generation SHENLONG architecture to provide predictable and consistent ultra-high performance. It uses fast path acceleration on the chip to improve storage performance, network performance, and computing stability by an order of magnitude, so data can be stored and models can be loaded more quickly.
  • This instance family comes with the NVIDIA GRID vWS license to provide certified graphics acceleration capabilities for CAD software to meet the requirements of professional graphic design. Instances of this instance family can also be used as lightweight GPU-accelerated compute-optimized instances to reduce the costs of small-scale AI inference tasks.
  • Compute:
    • Uses NVIDIA A10 GPUs that have the following features:
      • Innovative Ampere architecture
      • Support for acceleration features such as vGPU, RTX, and the TensorRT inference engine, which cover a wide range of business workloads
    • Uses 2.9 GHz Intel® Xeon® Scalable (Ice Lake) processors that deliver an all-core turbo frequency of 3.5 GHz.
  • Storage:
    • Is an instance family in which all instances are I/O optimized.
    • Supports ESSDs, standard SSDs, and ultra disks.
      Note For more information about the performance of cloud disks, see EBS performance.
  • Network:
    • Supports IPv6 addresses.
    • Provides high network performance to match the large computing capacity of the instances.
  • Applicable scenarios:
    • Concurrent AI inference tasks that require high-performance CPUs, memory, and GPUs, such as image recognition, speech recognition, and behavior identification
    • Compute-intensive graphics processing tasks that require high-performance 3D graphics virtualization capabilities, such as remote graphic design and cloud gaming
    • 3D modeling on Ice Lake processors in fields such as animation and film production, cloud gaming, and mechanical design
Instance types
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs
ecs.vgn7i-vws-m4.xlarge | 4 | 30 | NVIDIA A10 * 1/6 | 24 GB * 1/6 | 3 | 1,000,000 | 4 | 4
ecs.vgn7i-vws-m8.2xlarge | 10 | 62 | NVIDIA A10 * 1/3 | 24 GB * 1/3 | 5 | 2,000,000 | 8 | 6
ecs.vgn7i-vws-m12.3xlarge | 14 | 93 | NVIDIA A10 * 1/2 | 24 GB * 1/2 | 8 | 3,000,000 | 8 | 6
ecs.vgn7i-vws-m24.7xlarge | 30 | 186 | NVIDIA A10 * 1 | 24 GB * 1 | 16 | 6,000,000 | 12 | 8
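
To confirm which vGPU slice is actually visible inside a running instance, you can query the guest driver. The following is a minimal sketch that shells out to nvidia-smi once a GRID driver is installed on the instance; the exact profile name and memory values reported depend on the vGPU configuration and driver release, so treat the output format as illustrative.

```python
import subprocess

def query_vgpu():
    """Print the vGPU device name, memory, and driver version visible
    inside the instance. Requires nvidia-smi from the GRID guest driver."""
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.strip().splitlines():
        name, memory, driver = (field.strip() for field in line.split(","))
        print(f"vGPU profile: {name} | memory: {memory} | driver: {driver}")

if __name__ == "__main__":
    query_vgpu()
```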

vgn6i, vGPU-accelerated instance family

Features:
  • If you want your vgn6i instance to support graphics features such as Open Graphics Library (OpenGL), you must purchase a GRID license from NVIDIA. After the instance is created, you must manually install a GRID driver and activate the license (see the verification sketch after this list).
  • Compute:
    • Uses NVIDIA T4 GPUs.
    • Uses vGPUs.
      • Supports the 1/4 and 1/2 compute capacity of NVIDIA Tesla T4 GPUs.
      • Supports 4 GB and 8 GB of GPU memory.
    • Offers a CPU-to-memory ratio of 1:5.
    • Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors.
  • Storage:
    • Is an instance family in which all instances are I/O optimized.
    • Supports only standard SSDs and ultra disks.
  • Network:
    • Supports IPv6 addresses.
    • Provides high network performance to match the large computing capacity of the instances.
  • Applicable scenarios:
    • Real-time rendering for cloud games
    • Real-time rendering for augmented reality (AR) and virtual reality (VR) applications
    • AI (deep learning and machine learning) inference for elastic Internet service deployment
    • Educational environments for deep learning
    • Modeling and experimentation environments for deep learning
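
As noted in the first feature bullet above, graphics features on vgn6i require a GRID license that you activate after installing the GRID driver. The sketch below scans nvidia-smi -q output for licensing-related fields so you can check activation from inside the instance; the exact field names (for example, "License Status") vary between GRID driver releases, so the string filter is only illustrative.

```python
import subprocess

def grid_license_lines():
    """Return the licensing-related lines from `nvidia-smi -q` output."""
    result = subprocess.run(["nvidia-smi", "-q"],
                            capture_output=True, text=True, check=True)
    return [line.strip() for line in result.stdout.splitlines()
            if "License" in line or "vGPU Software" in line]

if __name__ == "__main__":
    for line in grid_license_lines():
        print(line)
```
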
Instance types
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs | Private IP addresses per ENI
ecs.vgn6i-m4.xlarge | 4 | 23 | NVIDIA T4 * 1/4 | 16 GB * 1/4 | 3 | 500,000 | 2 | 4 | 10
ecs.vgn6i-m8.2xlarge | 10 | 46 | NVIDIA T4 * 1/2 | 16 GB * 1/2 | 4 | 800,000 | 4 | 5 | 20

vgn5i, vGPU-accelerated instance family

Features:
  • If you want your vgn5i instance to support graphics features such as OpenGL, you must purchase a GRID license from NVIDIA. After the instance is created, you must manually install a GRID driver and activate the license.
  • Compute:
    • Uses NVIDIA P4 GPUs.
    • Uses vGPUs.
      • Supports the 1/8, 1/4, 1/2, and 1/1 compute capacity of NVIDIA Tesla P4 GPUs.
      • Supports 1 GB, 2 GB, 4 GB, and 8 GB of GPU memory.
    • Offers a CPU-to-memory ratio of 1:3.
    • Uses 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) processors.
  • Storage:
    • Is an instance family in which all instances are I/O optimized.
    • Supports only standard SSDs and ultra disks.
  • Network:
    • Supports IPv6 addresses.
    • Provides high network performance to match the large computing capacity of the instances.
  • Applicable scenarios:
    • Real-time rendering for cloud games
    • Real-time rendering for AR and VR applications
    • AI (deep learning and machine learning) inference for elastic Internet service deployment
    • Educational environments for deep learning
    • Modeling and experimentation environments for deep learning
Instance types
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs | Private IP addresses per ENI
ecs.vgn5i-m1.large | 2 | 6 | NVIDIA P4 * 1/8 | 8 GB * 1/8 | 1 | 300,000 | 2 | 2 | 6
ecs.vgn5i-m2.xlarge | 4 | 12 | NVIDIA P4 * 1/4 | 8 GB * 1/4 | 2 | 500,000 | 2 | 3 | 10
ecs.vgn5i-m4.2xlarge | 8 | 24 | NVIDIA P4 * 1/2 | 8 GB * 1/2 | 3 | 800,000 | 2 | 4 | 10
ecs.vgn5i-m8.4xlarge | 16 | 48 | NVIDIA P4 * 1 | 8 GB * 1 | 5 | 1,000,000 | 4 | 5 | 20
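
If you script capacity planning across these vGPU families, you can retrieve the same specifications programmatically. Below is a minimal sketch that assumes the legacy aliyun-python-sdk-core and aliyun-python-sdk-ecs packages and the DescribeInstanceTypes API; the family ID, credential placeholders, and response field names are taken from the API reference and should be treated as assumptions to verify against your SDK version.

```python
import json

from aliyunsdkcore.client import AcsClient
from aliyunsdkecs.request.v20140526.DescribeInstanceTypesRequest import (
    DescribeInstanceTypesRequest,
)

# Placeholders: supply your own AccessKey pair and region.
client = AcsClient("<access_key_id>", "<access_key_secret>", "cn-hangzhou")

# Query one vGPU family; "ecs.vgn5i" is used here as an illustrative family ID.
# The setter name follows the API parameter name (InstanceTypeFamily).
request = DescribeInstanceTypesRequest()
request.set_InstanceTypeFamily("ecs.vgn5i")

response = json.loads(client.do_action_with_exception(request))
for item in response["InstanceTypes"]["InstanceType"]:
    print(item["InstanceTypeId"],
          item["CpuCoreCount"], "vCPU,",
          item["MemorySize"], "GiB")
```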