Function Compute recommends delivering applications as container images in AI scenarios. Images in the AI and big data fields typically run into the gigabytes, so Function Compute has raised its image size limits and provides image acceleration. This topic describes the image size limits, image acceleration methods, and the public base images that are available.
Size limits on compressed images
The following table describes the size limits on compressed images. The limits vary based on the types and editions of Container Registry instances. For more information about the billing of Container Registry editions, see Billing.
| Container Registry edition | Image size limit (GB) | Billing required |
| --- | --- | --- |
| Container Registry Enterprise Edition (Standard Edition) | 10 | Yes |
| Container Registry Enterprise Edition (Pro Edition) | 10 | Yes |
| Container Registry Enterprise Edition (Basic Edition) | 10 | Yes |
| Container Registry Personal Edition | 3 | No (free) |
The image size limit for Container Registry Personal Edition is increased to 10 GB in the following regions:
China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Shenzhen), China (Hong Kong), Japan (Tokyo), Singapore, Germany (Frankfurt), US (Silicon Valley), and US (Virginia).
Use a driver-independent container image
Do not add driver-related components to your image, and make sure that your application does not depend on a specific driver version. For example, do not include libcuda.so, which provides the CUDA Driver API, in your image. This dynamic library is tightly coupled with the kernel driver version of the device; if the library does not match the driver version, exceptions may occur.
When a function instance is created, Function Compute injects driver-related user-mode components that match the platform's driver version into the container. In GPU container virtualization technologies such as NVIDIA Container Runtime, driver-specific tasks are delegated to the platform resource provider, which makes GPU container images portable across environments. The drivers used by Function Compute GPU-accelerated instances are provided by NVIDIA, and the driver versions may change over time as a result of feature iteration, new card releases, bug fixes, and driver end-of-life.
If you already use GPU container virtualization technologies such as NVIDIA Container Runtime, avoid creating images with the docker commit command. Such images capture the injected driver-related components; when you run them on the Function Compute platform, undefined behavior such as application exceptions may occur because the captured component versions do not match those on the platform.
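As a quick sanity check before pushing an image, you can scan its exported filesystem for user-mode driver artifacts such as libcuda.so. The snippet below is a minimal sketch; the helper name, the pattern list, and the export workflow in the comments are illustrative, not a Function Compute API:

```python
from pathlib import Path

# NVIDIA user-mode driver artifacts that should NOT be baked into an image
# destined for Function Compute GPU-accelerated instances (illustrative list).
DRIVER_PATTERNS = ("libcuda.so*", "libnvidia-ml.so*", "nvidia-smi")

def scan_for_driver_libs(rootfs: str) -> list[str]:
    """Return paths of driver-related files found under an exported image rootfs."""
    root = Path(rootfs)
    hits: list[str] = []
    for pattern in DRIVER_PATTERNS:
        hits.extend(str(p) for p in root.rglob(pattern))
    return sorted(hits)

# Example workflow: export the image filesystem first, e.g.
#   docker export $(docker create my-image) | tar -x -C ./rootfs
# then scan the unpacked tree:
#   print(scan_for_driver_libs("./rootfs"))
```

An empty result suggests the image is driver-independent; any hit is worth removing before deployment.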
Alibaba Cloud base images for Function Compute
To improve compatibility and performance, Function Compute GPU provides official base images and recommends that you build your business logic on top of them.
Function Compute Serverless GPU provides multiple official base images that cover mainstream machine learning frameworks and popular model platforms, such as PyTorch, TensorFlow, and PaddlePaddle. These base images are preloaded with the environments and dependencies required to run on Function Compute, so you can get started with high-performance computing in GPU scenarios without installing or configuring anything yourself. The base images also help improve the performance and reliability of applications. The following table lists the base images provided by Function Compute GPU.
| Image family | Base image address (use the registry-vpc prefix to pull over an internal network) | Image tag | Computing framework version | Python version | CUDA version | Ubuntu version |
| --- | --- | --- | --- | --- | --- | --- |
| modelscope | registry.{cn-hangzhou\|us-east-1\|ap-northeast-1}.aliyuncs.com/serverless_devs/modelscope | | N/A | 3.7 | 11.3.0 | 20.04 |
| PyTorch | registry.{cn-hangzhou\|us-east-1\|ap-northeast-1}.aliyuncs.com/serverless_devs/pytorch | 22.12-py3 | 1.14.0 | 3.8 | 11.8.0 | |
| TensorFlow | registry.{cn-hangzhou\|us-east-1\|ap-northeast-1}.aliyuncs.com/serverless_devs/tensorflow | | | | | |
| PaddlePaddle | registry.{cn-hangzhou\|us-east-1\|ap-northeast-1}.aliyuncs.com/serverless_devs/paddlepaddle | | | | | 22.04 |
| CUDA | registry.{cn-hangzhou\|us-east-1\|ap-northeast-1}.aliyuncs.com/serverless_devs/cuda | 11.8.0-devel-ubuntu22.04 | N/A | N/A | 11.8.0 | 22.04 |
Base images are available only in regions where GPU is available. The regions include China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Shenzhen), Japan (Tokyo), and US (Virginia). For more information about the region IDs that correspond to each region, see Regions.
Base images are available only to users of Container Registry Personal Edition. For users of Container Registry Enterprise Edition, this feature cannot improve performance because of data isolation between instances.
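The addresses in the table above follow one pattern: a region ID plus the serverless_devs namespace, with an optional registry-vpc endpoint for pulls over an internal network. A small helper can compose them; the function name and defaults here are illustrative, not part of any official SDK:

```python
def base_image_address(family: str, region: str, tag: str = "", vpc: bool = False) -> str:
    """Compose a pull address for a Function Compute base image.

    family: image family from the table, e.g. "pytorch" or "cuda".
    region: Alibaba Cloud region ID, e.g. "cn-hangzhou".
    vpc:    use the registry-vpc endpoint for internal-network pulls.
    """
    host = f"registry{'-vpc' if vpc else ''}.{region}.aliyuncs.com"
    ref = f"{host}/serverless_devs/{family}"
    return f"{ref}:{tag}" if tag else ref

# e.g. base_image_address("pytorch", "cn-shanghai", "22.12-py3")
#      base_image_address("cuda", "cn-hangzhou", vpc=True)
```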
Benefits of base images
Using Function Compute Serverless GPU base images provides the following benefits:
Higher compatibility
Function Compute Serverless GPU base images are optimized and tested for GPU-accelerated instances to ensure higher compatibility and stability for applications running on GPU-accelerated instances.
Better performance
Function Compute GPU-accelerated instances optimize the frameworks and data-reading paths of base images to deliver better end-to-end performance and experience. In addition, the base images include common computing libraries, such as NumPy and TensorFlow, which help you write high-performance code with ease.
Simplified build process
You can directly use Function Compute Serverless GPU base images to build your own business logic without the need to manually install and configure dependencies such as NumPy and SciPy.
Base images help you build business logic with higher performance and better compatibility, so Function Compute recommends that you use Function Compute GPU base images when you build your own business logic.
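To confirm that the libraries you rely on are indeed preinstalled in a base image, you can run a quick check inside the container before wiring up your function. This is an illustrative sketch; the library list is an example, not an inventory of any particular base image:

```python
import importlib.util

def check_preinstalled(libraries: list[str]) -> dict[str, bool]:
    """Report whether each library can be resolved in the current environment.

    Uses find_spec so nothing is actually imported (cheap and side-effect free).
    """
    return {name: importlib.util.find_spec(name) is not None for name in libraries}

# Run inside the container, for example:
#   docker run --rm <base-image> python -c "<this snippet>; print(check_preinstalled(['numpy', 'scipy', 'torch']))"
```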
How to use a base image
Base images are easy to use. You need to only reference the base image in the Dockerfile that builds your business logic. For example, to use the PyTorch base image in a GPU function in the China (Shanghai) region, you can add the following code to your Dockerfile:

```dockerfile
FROM registry.cn-shanghai.aliyuncs.com/serverless_devs/pytorch:22.12-py3
ADD . .
EXPOSE 9000
```

Function Compute provides addresses for you to conveniently pull images. For example, you can run the following command to pull a PyTorch base image:
```shell
docker pull registry.cn-shanghai.aliyuncs.com/serverless_devs/pytorch:22.12-py3
```

FAQ
Is there any difference between the base images of Function Compute and the images provided by NVIDIA?
The images are the same. You do not need to worry about compatibility issues.
If the version of the framework that I want to use is later than that of the base image provided by Function Compute, can high performance still be ensured?
Yes. Container images are layered, and an earlier and a later version of the same framework typically share most of their layers. The shared layers still benefit from acceleration, so high performance can still be ensured.
What do I do if I cannot find a required base image?
We recommend that you join the official Function Compute user group (DingTalk group: 64970014484) for technical support.
Are base images compatible with different types of GPUs?
Yes. Base images are compatible with all types of GPUs of GPU-accelerated instances in Function Compute. The on-demand mode and provisioned mode can be used as expected.