Function Compute provides elastic instances and GPU-accelerated instances. This topic describes the instance types, specifications, usage notes, and usage modes of instances.

Instance types

  • Elastic instance: The basic instance type of Function Compute. Elastic instances are suitable for scenarios with burst traffic and for compute-intensive workloads.
  • GPU-accelerated instance: An instance type that uses GPUs based on the Ampere or Turing architecture. GPU-accelerated instances are mainly used in scenarios such as audio and video processing, AI, and image processing, where hardware accelerators speed up service workloads and improve processing efficiency.
    For best practices for using GPU-accelerated instances in different scenarios, see the corresponding best practice topics.
    Important
    • GPU-accelerated instances can be deployed only by using container images.
    • For the best user experience, join the DingTalk group (ID: 11721331) and provide the following information:
      • The name of the organization, such as the name of your company.
      • The ID of your Alibaba Cloud account.
      • The region where you want to use GPU-accelerated instances, such as China (Shenzhen).
      • The contact information, such as your mobile number, email address, or DingTalk account.

Instance specifications

  • Elastic instances

    The following specifications apply to elastic instances. You can select instance specifications based on your business requirements.

    • vCPU: 0.05 to 16. The value must be a multiple of 0.05.
    • Memory size (MB): 128 to 32768. The value must be a multiple of 64.
    • Maximum code package size (GB): 10
    • Maximum function execution duration (s): 86400
    • Maximum disk size (GB): 10. Valid values: 512 MB (default) and 10 GB.
    • Maximum bandwidth (Gbit/s): 5

    Note The ratio of vCPU capacity to memory capacity (in GB) must range from 1:1 to 1:4.
  • GPU-accelerated instances

    The following specifications apply to GPU-accelerated instances. You can select instance specifications based on your business requirements.

    fc.gpu.tesla.1 (card type: Tesla T4)
    • vGPU memory (GB): 1 to 16. The value must be an integer.
    • vGPU computing power (card): Automatically allocated by Function Compute; you do not need to allocate it manually. The value is calculated as vGPU memory (GB)/16. For example, if you set the vGPU memory to 5 GB, the instance can use up to 5/16 of the computing power of one card.
    • vCPU (core): 0.05 to [vGPU memory (GB)/2]. The value must be a multiple of 0.05. For more information, see GPU specifications.
    • Memory size (MB): 128 to [vGPU memory (GB) x 2048]. The value must be a multiple of 64. For more information, see GPU specifications.

    fc.gpu.ampere.1 (card type: Ampere A10)
    • vGPU memory (GB): 1 to 24. The value must be an integer.
    • vGPU computing power (card): Automatically allocated by Function Compute; you do not need to allocate it manually. The value is calculated as vGPU memory (GB)/24. For example, if you set the vGPU memory to 5 GB, the instance can use up to 5/24 of the computing power of one card.
    • vCPU (core): 0.05 to [vGPU memory (GB)/3]. The value must be a multiple of 0.05. For more information, see GPU specifications.
    • Memory size (MB): 128 to [vGPU memory (GB) x 4096/3]. The value must be a multiple of 64. For more information, see GPU specifications.

    Function Compute GPU-accelerated instances also support the following resource specifications.

    • Image size (GB): 10 for Container Registry Enterprise Edition (Standard, Advanced, and Basic editions) and Container Registry Personal Edition (free).
    • Maximum function execution duration (s): 86400
    • Maximum disk size (GB): 10
    • Maximum bandwidth (Gbit/s): 5
    Note
    • Setting the instance type to g1 is equivalent to setting the instance type to fc.gpu.tesla.1.
    • GPU-accelerated instances of the T4 type are supported in the following regions: China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Shenzhen), Japan (Tokyo), and US (Virginia).
    • GPU-accelerated instances of the A10 type are supported in the following regions: China (Hangzhou), China (Shanghai), China (Shenzhen), Japan (Tokyo), and US (Virginia).
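
The elastic-instance constraints listed earlier (vCPU in steps of 0.05, memory in steps of 64 MB, and a vCPU-to-memory ratio between 1:1 and 1:4) can be checked with a small sketch. The helper name below is hypothetical and is not part of any Function Compute SDK:

```python
# Check an elastic-instance configuration against the published constraints:
# vCPU in 0.05 steps, memory in 64 MB steps, vCPU:memory (GB) ratio 1:1 to 1:4.
# Hypothetical helper for illustration, not a Function Compute API.
def validate_elastic_instance(vcpu: float, memory_mb: int) -> list[str]:
    errors = []
    centi_vcpu = round(vcpu * 100)
    if not 0.05 <= vcpu <= 16:
        errors.append("vCPU must be between 0.05 and 16")
    # Compare in hundredths to avoid float rounding artifacts.
    if abs(vcpu * 100 - centi_vcpu) > 1e-6 or centi_vcpu % 5 != 0:
        errors.append("vCPU must be a multiple of 0.05")
    if not 128 <= memory_mb <= 32768:
        errors.append("memory must be between 128 MB and 32768 MB")
    if memory_mb % 64 != 0:
        errors.append("memory must be a multiple of 64 MB")
    memory_gb = memory_mb / 1024
    if not errors and not (vcpu <= memory_gb <= 4 * vcpu):
        errors.append("vCPU:memory (GB) ratio must be between 1:1 and 1:4")
    return errors
```

For example, `validate_elastic_instance(2, 4096)` returns an empty list because 2 vCPUs with 4 GB of memory keep the ratio at 1:2.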

GPU specifications

Specifications of fc.gpu.tesla.1:

vGPU memory (GB) | vCPU (core) | Maximum memory size (GB) | Memory size (MB)
1                | 0.05 to 0.5 | 2                        | 128 to 2048
2                | 0.05 to 1   | 4                        | 128 to 4096
3                | 0.05 to 1.5 | 6                        | 128 to 6144
4                | 0.05 to 2   | 8                        | 128 to 8192
5                | 0.05 to 2.5 | 10                       | 128 to 10240
6                | 0.05 to 3   | 12                       | 128 to 12288
7                | 0.05 to 3.5 | 14                       | 128 to 14336
8                | 0.05 to 4   | 16                       | 128 to 16384
9                | 0.05 to 4.5 | 18                       | 128 to 18432
10               | 0.05 to 5   | 20                       | 128 to 20480
11               | 0.05 to 5.5 | 22                       | 128 to 22528
12               | 0.05 to 6   | 24                       | 128 to 24576
13               | 0.05 to 6.5 | 26                       | 128 to 26624
14               | 0.05 to 7   | 28                       | 128 to 28672
15               | 0.05 to 7.5 | 30                       | 128 to 30720
16               | 0.05 to 8   | 32                       | 128 to 32768
Specifications of fc.gpu.ampere.1:

vGPU memory (GB) | vCPU (core)  | Maximum memory size (GB) | Memory size (MB)
1                | 0.05 to 0.3  | 1.3125                   | 128 to 1344
2                | 0.05 to 0.65 | 2.625                    | 128 to 2688
3                | 0.05 to 1    | 4                        | 128 to 4096
4                | 0.05 to 1.3  | 5.3125                   | 128 to 5440
5                | 0.05 to 1.65 | 6.625                    | 128 to 6784
6                | 0.05 to 2    | 8                        | 128 to 8192
7                | 0.05 to 2.3  | 9.3125                   | 128 to 9536
8                | 0.05 to 2.65 | 10.625                   | 128 to 10880
9                | 0.05 to 3    | 12                       | 128 to 12288
10               | 0.05 to 3.3  | 13.3125                  | 128 to 13632
11               | 0.05 to 3.65 | 14.625                   | 128 to 14976
12               | 0.05 to 4    | 16                       | 128 to 16384
13               | 0.05 to 4.3  | 17.3125                  | 128 to 17728
14               | 0.05 to 4.65 | 18.625                   | 128 to 19072
15               | 0.05 to 5    | 20                       | 128 to 20480
16               | 0.05 to 5.3  | 21.3125                  | 128 to 21824
17               | 0.05 to 5.65 | 22.625                   | 128 to 23168
18               | 0.05 to 6    | 24                       | 128 to 24576
19               | 0.05 to 6.3  | 25.3125                  | 128 to 25920
20               | 0.05 to 6.65 | 26.625                   | 128 to 27264
21               | 0.05 to 7    | 28                       | 128 to 28672
22               | 0.05 to 7.3  | 29.3125                  | 128 to 30016
23               | 0.05 to 7.65 | 30.625                   | 128 to 31360
24               | 0.05 to 8    | 32                       | 128 to 32768
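
The values in the two tables above follow directly from the per-card formulas in the previous section. The following sketch regenerates the limits from those formulas; the helper names are hypothetical and are only meant for cross-checking, not a Function Compute API:

```python
from fractions import Fraction

# Derive the per-vGPU limits of the two tables above from the formulas.
# Card sizes: Tesla T4 = 16 GB, Ampere A10 = 24 GB.
def tesla_limits(vgpu_gb: int) -> dict:
    assert 1 <= vgpu_gb <= 16
    return {
        "power_card": Fraction(vgpu_gb, 16),   # vGPU memory (GB) / 16
        "max_vcpu": vgpu_gb / 2,               # vGPU memory (GB) / 2
        "max_memory_mb": vgpu_gb * 2048,       # vGPU memory (GB) x 2048
    }

def ampere_limits(vgpu_gb: int) -> dict:
    assert 1 <= vgpu_gb <= 24
    return {
        "power_card": Fraction(vgpu_gb, 24),   # vGPU memory (GB) / 24
        # vGPU memory (GB) / 3, rounded down to a multiple of 0.05
        "max_vcpu": (vgpu_gb * 20 // 3) / 20,
        # vGPU memory (GB) x 4096 / 3, rounded down to a multiple of 64
        "max_memory_mb": vgpu_gb * 4096 // 3 // 64 * 64,
    }
```

For example, `ampere_limits(1)` yields a maximum of 0.3 vCPUs and 1344 MB of memory, matching the first row of the A10 table.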

Usage notes

If you want to reduce the cold start duration or improve resource utilization, you can use the following solutions.

  • Provisioned mode: the recommended solution for cold starts. You can reserve a fixed number of instances based on your resource budget, reserve resources for specific periods based on business fluctuations, or configure an auto scaling policy based on usage thresholds. Provisioned mode significantly reduces the average cold start latency of instances.
  • High concurrency on a single instance: the recommended solution for improving instance resource utilization. We recommend that you configure the instance concurrency based on the resource demands of your business. With this solution, multiple requests that are executed on one instance at the same time share the instance's CPU and memory, which improves resource utilization.

Instance modes

GPU-accelerated instances and elastic instances support the following usage modes.

On-demand mode

In on-demand mode, Function Compute automatically allocates and releases instances for functions. The billed execution duration starts when a request to execute a function arrives and ends when the request is completely processed. An on-demand instance can process one or more requests at a time. For more information, see Configure instance concurrency.

  • Execution duration when a single instance processes one request at a time (instance concurrency = 1)
    The billed execution duration starts when the request arrives at the instance and ends when the request is completely processed.
  • Execution duration when a single instance concurrently processes multiple requests (instance concurrency > 1)

    The billed execution duration starts when the first request arrives at the instance and ends when the last request is completely processed. Because concurrent requests reuse the resources of the same instance, this mode can reduce resource costs.
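
The two on-demand billing cases above can be sketched with hypothetical request timestamps in seconds; `billed_duration` is an illustrative helper, not a Function Compute API:

```python
# Billed duration for requests handled by one on-demand instance, given
# hypothetical (start, end) timestamps in seconds. With concurrency > 1, the
# billed span runs from the first request's arrival to the last request's
# completion; with a single request it reduces to that request's own duration.
def billed_duration(requests: list[tuple[float, float]]) -> float:
    return max(end for _, end in requests) - min(start for start, _ in requests)

# Three overlapping requests on one instance are billed once for the whole
# 5-second span, versus 4 + 4 + 1 = 9 seconds if each ran on its own instance.
overlapping = [(0.0, 4.0), (1.0, 5.0), (2.0, 3.0)]
```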

Provisioned mode

In provisioned mode, you allocate, release, and manage function instances yourself. For more information, see Configure provisioned instances and auto scaling rules. In provisioned mode, the billed execution duration starts when Function Compute starts a provisioned instance and ends when you release the instance. You are charged for the entire period, regardless of whether the provisioned instance processes requests.
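
The provisioned-mode billing rule above amounts to charging for the instance's whole lifetime; the following is a hypothetical sketch, not an official API:

```python
# Provisioned-mode billing: the instance is billed from the moment Function
# Compute starts it until you release it, whether or not it serves requests.
def provisioned_billed_seconds(start_time: float, release_time: float) -> float:
    if release_time < start_time:
        raise ValueError("release time must not precede start time")
    return release_time - start_time

# An instance kept warm for one hour is billed 3600 seconds even if idle.
```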

Additional information