Platform for AI (PAI) provides a range of prebuilt official images with different frameworks and CUDA versions. Select a suitable image when using Deep Learning Container (DLC), Elastic Algorithm Service (EAS), or Data Science Workshop (DSW) to quickly set up an AI development environment. This topic describes the features of PAI official images and provides a list of core images.
Understand the naming convention
PAI official images follow a specific naming convention that helps you identify key details from the name. An image name typically includes the following parts. We recommend following a similar convention when creating custom images.
Sample name | Name breakdown | Module identifier |
| | Official images are tailored for different PAI services. |
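The breakdown is easiest to see when a name is split programmatically. The following is a minimal Python sketch that splits an image name such as deepspeed-training:23.06-gpu-py310-cu121-ubuntu22.04 into its parts. The field order (framework version, device type, Python version, CUDA version, operating system) is inferred from the sample names in this topic and is an assumption, not a guaranteed format; adjust it to match your own naming convention.

```python
# Minimal sketch: split a PAI-style image name into its parts.
# Assumed tag layout: <framework version>-<device>-<python>-<cuda>-<os>,
# where the CUDA field is present only in GPU images.

def parse_image_name(image: str) -> dict:
    repository, _, tag = image.partition(":")
    fields = tag.split("-")
    parsed = {
        "repository": repository,        # e.g. deepspeed-training
        "framework_version": fields[0],  # e.g. 23.06
        "device": fields[1],             # cpu or gpu
        "python": fields[2],             # e.g. py310
    }
    if parsed["device"] == "gpu":
        parsed["cuda"] = fields[3]       # e.g. cu121
        parsed["os"] = "-".join(fields[4:])
    else:
        parsed["os"] = "-".join(fields[3:])
    return parsed


if __name__ == "__main__":
    # Sample name taken from the Lingjun table below.
    print(parse_image_name("deepspeed-training:23.06-gpu-py310-cu121-ubuntu22.04"))
```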
Official image features
PAI provides prebuilt images for multiple machine learning frameworks. This section describes the official PAI images for mainstream frameworks. You can view the full list of official PAI images on the AI Computing Asset Management > Images page in the PAI console.
TensorFlow
Framework version | CUDA version (GPU instance types only) | Operating system |
TensorFlow Serving
Framework version | CUDA version (GPU only) | Operating system |
PyTorch
Framework version | CUDA version (GPU instance types only) | Operating system |
DeepRec
Framework version | CUDA version (GPU instance types only) | Operating system |
| CUDA 11.4 | Ubuntu 18.04 |
XGBoost
Framework version | CUDA version (GPU only) | Operating system |
XGBoost 1.6.0 | Not applicable; supports CPU instance types only | Ubuntu 18.04 |
Triton Inference Server
Framework version | CUDA version (GPU instance types only) | Operating system |
| | Ubuntu 20.04 |
Images for common scenarios
Lingjun Intelligent Computing Service (Serverless Edition)
Image name | Framework | Instance type | CUDA version | Operating system | Supported region | Programming language and version |
deepspeed-training:23.06-gpu-py310-cu121-ubuntu22.04 | DeepSpeed | GPU | 12.1 | Ubuntu 22.04 | China (Ulanqab) | Python 3.10 |
megatron-training:23.06-gpu-py310-cu121-ubuntu22.04 | Megatron | GPU | 12.1 | Ubuntu 22.04 | China (Ulanqab) | Python 3.10 |
nemo-training:23.06-gpu-py310-cu121-ubuntu22.04 | NeMo | GPU | 12.1 | Ubuntu 22.04 | China (Ulanqab) | Python 3.10 |
Artificial Intelligence Generated Content (AIGC)
Image name | Framework | Instance type | CUDA version | Operating system | Region | Programming language and version |
stable-diffusion-webui:3.0 | StableDiffusionWebUI 3.0 | GPU | 11.7 | Ubuntu 22.04 | | Python 3.10 |
stable-diffusion-webui:2.2 | StableDiffusionWebUI 2.2 | GPU | 11.7 | Ubuntu 22.04 | | Python 3.10 |
stable-diffusion-webui:1.1 | StableDiffusionWebUI 1.1 | GPU | 11.7 | Ubuntu 22.04 | | Python 3.10 |
stable-diffusion-webui-env:pytorch1.13-gpu-py310-cu117-ubuntu22.04 | SD-WebUI-ENV | GPU | 11.7 | Ubuntu 22.04 | | Python 3.10 |
EAS deployment
The following table lists some of the official PAI images that you can use in EAS. To view the list of all images, go to the AI Computing Asset Management > Images page in the PAI console. The image addresses in the following table use the China (Hangzhou) region as an example.
Image name | Framework | Image description | Image address |
chat-llm-webui:3.0-blade | ChatLLM-WebUI 3.0 | Provides an LLM inference service using the Blade backend. Supports both WebUI and API access. | |
chatbot-langchain:1.0 | ChatbotLangChain 1.0 | For building knowledge base Q&A applications with LangChain. | |
comfyui:0.2-api | ComfyUI 0.2 | A ComfyUI-based image that provides asynchronous APIs for Text-to-Image and Image-to-Image tasks. | |
comfyui:0.2 | ComfyUI 0.2 | A ComfyUI-based image for Text-to-Image and Image-to-Image use cases. | |
comfyui:0.2-cluster | ComfyUI 0.2 | A ComfyUI-based image for Text-to-Image and Image-to-Image use cases. | |
kohya_ss:2.2 | Kohya 2.2 | For deploying Stable Diffusion model fine-tuning applications using Kohya. | |
modelscope-inference:1.9.1 | ModelScope 1.9.1 | An EAS image for deploying models from the ModelScope library. | |
stable-diffusion-webui:4.2-cluster-webui | StableDiffusionWebUI 4.2 | A Stable Diffusion WebUI-based image for Text-to-Image and Image-to-Image tasks. Supports concurrent access by multiple users with resource isolation. | |
stable-diffusion-webui:4.2-api | StableDiffusionWebUI 4.2 | A Stable Diffusion WebUI-based image that provides asynchronous APIs for Text-to-Image and Image-to-Image tasks. | |
stable-diffusion-webui:4.2-standard | StableDiffusionWebUI 4.2 | A Stable Diffusion WebUI-based image for Text-to-Image and Image-to-Image use cases. | |
tensorflow-serving:2.14.1 | TensorflowServing 2.14.1 | Contains TensorFlow Serving and is suitable for inference services based on TensorFlow models. This image supports only CPU instances. | |
tensorflow-serving:2.14.1-gpu | TensorflowServing 2.14.1 | Deploys TensorFlow models as an inference service in GPU environments, based on the open-source TensorFlow Serving framework. | |
chat-llm-webui:3.0 | ChatLLM-WebUI 3.0 | Provides an LLM inference service using the Hugging Face backend. Supports both WebUI and API access. | |
chat-llm-webui:3.0-vllm | ChatLLM-WebUI 3.0 | Provides an LLM inference service using the vLLM backend. Supports both WebUI and API access. | |
huggingface-inference:1.0-transformers4.33 | Transformers 4.33 | An EAS image for deploying models from the Hugging Face Transformers library. | |
tritonserver:23.11-py3 | TritonServer 23.11 | Deploys various models as an inference service, based on the open-source Triton Inference Server. | |
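After you deploy a service from one of these images, the service exposes an HTTP endpoint that you call with the service token. The following is a minimal Python sketch of such a call. The endpoint URL, token, and request body are placeholders, and the actual request format depends on the image (for example, chat-llm-webui and comfyui:0.2-api define their own API schemas); take the real endpoint and token from the invocation information of your service in the PAI console.

```python
# Minimal sketch: invoke an EAS service deployed from an official image.
# SERVICE_ENDPOINT, SERVICE_TOKEN, and the payload are placeholders; copy
# the real values from your service's invocation information.
import requests

SERVICE_ENDPOINT = "http://<your-service-endpoint>"   # placeholder
SERVICE_TOKEN = "<your-service-token>"                # placeholder


def invoke(payload: dict) -> str:
    response = requests.post(
        SERVICE_ENDPOINT,
        headers={"Authorization": SERVICE_TOKEN},  # EAS service token
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    # Example request body for an LLM service; the expected schema
    # depends on the image you deployed.
    print(invoke({"prompt": "Hello"}))
```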