
Container Compute Service: training-nv-pytorch 25.08

Last Updated: Aug 20, 2025

This topic describes the release notes for version 25.08 of training-nv-pytorch.

Main features and bug fixes

Main features

  • Upgraded transformers to 4.53.3+ali.

  • Upgraded vllm to 0.10.0 and ray to 2.48.0.

Bug fixes

(None)

Contents

  • Application Scenario: Training/Inference

  • Framework: PyTorch

  • Requirements: NVIDIA Driver release >= 575

Core components

  • Ubuntu: 24.04

  • Python: 3.12.7+gc

  • CUDA: 12.8

  • perf: 5.4.30

  • gdb: 15.0.50.20240403-git

  • torch: 2.7.1.8+nv25.3

  • triton: 3.3.0

  • transformer_engine: 2.3.0+5de3e14

  • deepspeed: 0.16.9+ali

  • flash_attn: 2.7.2

  • flashattn-hopper: 3.0.0b1

  • transformers: 4.53.3+ali

  • grouped_gemm: 1.1.4

  • accelerate: 1.7.0+ali

  • diffusers: 0.34.0

  • mmengine: 0.10.3

  • mmcv: 2.1.0

  • mmdet: 3.3.0

  • opencv-python-headless: 4.11.0.86

  • ultralytics: 8.3.96

  • timm: 1.0.19

  • vllm: 0.10.0

  • flashinfer-python: 0.2.5

  • pytorch-dynamic-profiler: 0.24.11

  • peft: 0.16.0

  • ray: 2.48.0

  • accl-n: 2.27.5.14

  • megatron-core: 0.12.1

Assets

25.08

  • egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.08-serverless

VPC image

  • acs-registry-vpc.{region-id}.cr.aliyuncs.com/egslingjun/{image:tag}

    {region-id} indicates the region where your ACS is activated, such as cn-beijing and cn-wulanchabu.
    {image:tag} indicates the name and tag of the image.
Important

Currently, you can pull only images in the China (Beijing) region over a VPC.

Note

The egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.08-serverless image is intended for ACS and Lingjun multi-tenant products. It is not suitable for Lingjun single-tenant products and must not be used in Lingjun single-tenant scenarios.

Driver requirements

  • The 25.08 release is based on CUDA 12.8.0 and requires NVIDIA driver version 575 or later. However, if you are running on a data center GPU, such as a T4, you can use NVIDIA driver version 470.57 (or a later R470 release), 525.85 (or a later R525 release), 535.86 (or a later R535 release), or 545.23 (or a later R545 release).

  • The CUDA driver compatibility package supports only specific drivers. Therefore, you must upgrade all R418, R440, R450, R460, R510, R520, R530, R545, R555, and R560 drivers because they are not forward-compatible with CUDA 12.8. For a complete list of supported drivers, see CUDA application compatibility. For more information, see CUDA compatibility and upgrades.

Key features and enhancements

PyTorch compiling optimization

The compilation optimization feature introduced in PyTorch 2.0 is suitable for small-scale training on a single GPU. However, LLM training also requires GPU memory optimization and a distributed framework, such as FSDP or DeepSpeed. In these setups, torch.compile() may provide little benefit or even degrade training performance.

  • Controlling the communication granularity in the DeepSpeed framework helps the compiler obtain a complete compute graph for a wider scope of compiling optimization.

  • Optimized PyTorch:

    • The frontend of the PyTorch compiler is optimized so that compilation can continue even when a graph break occurs in a compute graph.

    • The pattern matching and dynamic shape capabilities are enhanced to optimize the compiled code.

After the preceding optimizations, E2E throughput increases by 20% when an 8B LLM is trained.
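For orientation, the generic torch.compile() entry point that these optimizations build on can be sketched as follows. This is a minimal, illustrative example using the public PyTorch API only; the image's frontend and graph-break optimizations are internal, and the "eager" backend is used here purely so the sketch runs anywhere without a compiler toolchain:

```python
import torch

# A toy model standing in for a real training network.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())

# torch.compile traces the model into a graph; backend="eager" is for
# illustration (the default inductor backend is what yields speedups).
compiled = torch.compile(model, backend="eager")

out = compiled(torch.randn(4, 16))
```

In real training, the distributed framework (FSDP or DeepSpeed) wraps the model first, which is exactly where graph breaks and the optimizations described above come into play.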

GPU memory optimization for recomputation

To forecast and analyze the GPU memory consumption of models, we run performance tests on models deployed in different clusters or configured with different parameters and collect system metrics, such as GPU memory utilization. Based on the results, we determine the optimal number of activation recomputation layers and integrate this setting into PyTorch, so users can benefit from GPU memory optimization without manual tuning. Currently, this feature is available in the DeepSpeed framework.
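The underlying mechanism is standard activation recomputation (gradient checkpointing). A minimal sketch with the generic PyTorch API is shown below; the image chooses the number of recomputed layers automatically, whereas this only demonstrates the mechanism for a single layer:

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(32, 32)
x = torch.randn(8, 32, requires_grad=True)

# Activations inside `layer` are discarded during the forward pass and
# recomputed during backward, trading extra compute for GPU memory.
y = checkpoint(layer, x, use_reentrant=False)
y.sum().backward()
```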

ACCL

ACCL is an in-house HPN communication library provided by Alibaba Cloud for Lingjun. For GPU acceleration scenarios, it provides ACCL-N, an HPN library customized based on NCCL. ACCL-N is fully compatible with NCCL, fixes several NCCL bugs, and provides higher performance and stability.

E2E performance gain evaluation

A comprehensive end-to-end (E2E) performance comparison was conducted using the cloud-native AI performance evaluation and analysis tool CNP. Mainstream open-source models and framework configurations were used and compared against the standard base image. Through ablation experiments, the contribution of each optimization component to the overall model training performance was further evaluated.

E2E performance contribution analysis of core GPU components

The following tests are based on version 25.08 and involve an E2E performance evaluation and comparative analysis for training on a multi-node GPU cluster. The comparison items are as follows:

  1. Base: NGC PyTorch Image.

  2. ACS AI Image: Base+ACCL: The image uses the ACCL communication library.

  3. ACS AI Image: AC2+ACCL: The Golden image uses AC2 BaseOS with no optimizations enabled.

  4. ACS AI Image: AC2+ACCL+CompilerOpt: The Golden image uses AC2 BaseOS with only the torch compile optimization enabled.

  5. ACS AI Image: AC2+ACCL+CompilerOpt+CkptOpt: The Golden image uses AC2 BaseOS with both torch compile and selective gradient checkpoint optimizations enabled.

(Figure: E2E training performance comparison of the five configurations above.)

Quick start

The following example shows how to pull and use the training-nv-pytorch image with Docker.

Note

To use the training-nv-pytorch image in ACS, you can select it from the Artifacts page when you create a workload in the console, or specify the image reference in a YAML file.
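For reference, a minimal workload spec referencing the image might look like the following. This is a hypothetical sketch: field values such as the pod and container names are illustrative, and any ACS-specific resource classes or annotations your workload needs are omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-demo        # hypothetical workload name
spec:
  containers:
    - name: trainer          # illustrative container name
      # Image reference from the Assets section above
      image: egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.08-serverless
```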

1. Select the image

docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:[tag]

2. Call the API to enable the compiler and recomputation GPU memory optimizations

  • Enable compilation optimization

    Use the transformers Trainer API:


  • Enable recomputation GPU memory optimization

    export CHECKPOINT_OPTIMIZATION=true

3. Start the container

The image includes a built-in model training tool named ljperf. The following steps describe how to start the container and run a training task using this tool.

LLMs

# Start and enter the container
docker run --rm -it --ipc=host --net=host  --privileged egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:[tag]

# Run the training demo
ljperf benchmark --model deepspeed/llama3-8b 

4. Usage recommendations

  • The changes in the image involve libraries such as PyTorch and DeepSpeed. Do not reinstall them.

  • In the DeepSpeed configuration, leave zero_optimization.stage3_prefetch_bucket_size empty or set it to auto.

  • The NCCL_SOCKET_IFNAME environment variable built into this image must be adjusted based on the scenario:

    • If a single pod requests 1, 2, 4, or 8 GPUs for a training or inference task, set NCCL_SOCKET_IFNAME=eth0. This is the default configuration in this image.

    • If a single pod requests all 16 GPUs on a machine for a training or inference task, it can use the High-Performance Network (HPN). In this case, set NCCL_SOCKET_IFNAME=hpn0.
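For the DeepSpeed recommendation above, a minimal config fragment would look like the following; only the relevant keys are shown, and the rest of your ZeRO configuration is unchanged:

```json
{
  "zero_optimization": {
    "stage": 3,
    "stage3_prefetch_bucket_size": "auto"
  }
}
```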
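The NCCL_SOCKET_IFNAME rule above can be encoded in launch scripts with a small helper. This helper is hypothetical (not part of the image); it simply mirrors the two cases documented here:

```python
import os

def pick_socket_ifname(gpus_per_pod: int) -> str:
    """Return the NCCL socket interface per the image's guidance:
    hpn0 when a pod takes all 16 GPUs on a machine, eth0 otherwise."""
    return "hpn0" if gpus_per_pod == 16 else "eth0"

# Example: an 8-GPU pod keeps the image default.
os.environ["NCCL_SOCKET_IFNAME"] = pick_socket_ifname(8)
```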

Known issues

(None)