This topic describes the release notes for training-nv-pytorch 25.04.
Main features and bug fix list
Main features
Base image aligned with NGC 25.03, CUDA upgraded to 12.8.1, and TransformerEngine upgraded to 2.1.
Triton adapted to 3.2.0, accelerate upgraded to 1.6.0+ali, with corresponding version features and bug fixes integrated.
vLLM upgraded to the latest community version 0.8.5, flashinfer-python upgraded to 0.2.5, Transformers upgraded to 4.51.2+ali, supporting Qwen3.
Bug fixes
None
Content
Scenarios | Training/inference
Framework | PyTorch
Requirements | NVIDIA Driver release >= 570
Core components | CUDA 12.8.1, TransformerEngine 2.1, Triton 3.2.0, accelerate 1.6.0+ali, vLLM 0.8.5, flashinfer-python 0.2.5, Transformers 4.51.2+ali
Assets
25.04
egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.04-serverless
VPC image
acs-registry-vpc.{region-id}.cr.aliyuncs.com/egslingjun/{image:tag}
{region-id} indicates the region where your ACS is activated, such as cn-beijing and cn-wulanchabu. {image:tag} indicates the name and tag of the image.
Currently, you can pull only images in the China (Beijing) region over a VPC.
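For example, pulling the 25.04-serverless image over a VPC in the China (Beijing) region would use an address like the following (an illustrative example built from the template above; substitute your own region ID and image tag):
# Pull the image over the VPC endpoint in the China (Beijing) region.
docker pull acs-registry-vpc.cn-beijing.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.04-serverless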
The egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.04-serverless image is suitable for the ACS product form and the Lingjun multi-tenant product form, but not for the Lingjun single-tenant product form. The egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:25.04 image is suitable for Lingjun single-tenant scenarios.
Driver requirements
The 25.04 release aligns with the NGC PyTorch 25.03 image (because NGC releases images at the end of each month, a Golden image can only be based on the previous month's NGC version). Therefore, the Golden-gpu driver requirements follow those of the corresponding NGC image version. This release is based on CUDA 12.8.1.012 and requires NVIDIA driver release 570 or later. However, if you are running on a data center GPU (such as T4 or any other data center GPU), you can use NVIDIA driver release 470.57 (or later R470), 525.85 (or later R525), 535.86 (or later R535), or 545.23 (or later R545).
The CUDA driver compatibility package only supports specific drivers. Therefore, users should upgrade from all R418, R440, R450, R460, R510, R520, R530, R545, R555, and R560 drivers, which are not forward compatible with CUDA 12.8. For a complete list of supported drivers, see the CUDA application compatibility topic. For more information, see CUDA compatibility and upgrades.
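To verify that a node meets the driver requirement before pulling the image, you can query the installed driver version with nvidia-smi (a quick check on the host, not part of the image itself):
# Print the installed NVIDIA driver version; it should report release 570 or later.
nvidia-smi --query-gpu=driver_version --format=csv,noheader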
Key features and enhancements
PyTorch compiling optimization
The compiling optimization feature introduced in PyTorch 2.0 is suitable for small-scale training on a single GPU. However, LLM training requires GPU memory optimization and a distributed framework, such as FSDP or DeepSpeed. Consequently, torch.compile() may provide no benefit to your training, or even degrade performance.
Controlling the communication granularity in the DeepSpeed framework helps the compiler obtain a complete compute graph for a wider scope of compiling optimization.
Optimized PyTorch:
The frontend of the PyTorch compiler is optimized so that compilation continues even when a graph break occurs in a compute graph.
The pattern matching and dynamic shape capabilities are enhanced to optimize the compiled code.
After the preceding optimizations, the E2E throughput is increased by 20% when an 8B LLM is trained.
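For reference, the standard PyTorch 2.x entry point for this feature is torch.compile(); the optimizations described above apply on top of this path. The following is a minimal sketch only (the model and training loop are placeholders, not the benchmarked 8B LLM, and no DeepSpeed or FSDP wrapping is shown):
import torch
import torch.nn as nn

# Placeholder model; in practice this would be an LLM wrapped by DeepSpeed or FSDP.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Compile the model; dynamic=True asks the compiler to handle varying input shapes.
compiled_model = torch.compile(model, dynamic=True)

for step in range(10):
    x = torch.randn(8, 1024, device="cuda")
    loss = compiled_model(x).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()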
GPU memory optimization for recomputation
We forecast and analyze the GPU memory consumption of models by running performance tests on models deployed in different clusters or configured with different parameters, and by collecting system metrics such as GPU memory utilization. Based on the results, we suggest the optimal number of activation recomputation layers and integrate the suggestion into PyTorch, so that users can easily benefit from GPU memory optimization. Currently, this feature can be used with the DeepSpeed framework.
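Conceptually, activation recomputation trades compute for memory: selected layers are re-run during the backward pass instead of storing their activations. The feature above chooses how many layers to recompute automatically; the sketch below only illustrates the underlying mechanism with the standard torch.utils.checkpoint API (the layer split shown is an arbitrary example, not the recommendation produced by the image):
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(x)

class Model(nn.Module):
    def __init__(self, num_layers=8, num_recomputed=4):
        super().__init__()
        self.layers = nn.ModuleList(Block() for _ in range(num_layers))
        # Number of leading layers whose activations are recomputed instead of stored.
        self.num_recomputed = num_recomputed

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            if self.training and i < self.num_recomputed:
                # Recompute this layer's activations in the backward pass.
                x = checkpoint(layer, x, use_reentrant=False)
            else:
                x = layer(x)
        return x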
ACCL
ACCL is an in-house HPN communication library provided by Alibaba Cloud for Lingjun. It provides ACCL-N for GPU acceleration scenarios. ACCL-N is an HPN library customized based on NCCL. It is completely compatible with NCCL and fixes some bugs in NCCL. ACCL-N also provides higher performance and stability.
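Because ACCL-N is fully NCCL-compatible, existing torch.distributed code that initializes the nccl backend should run on it without modification. The following is a minimal sketch (how ACCL-N is activated in the image, for example through environment variables, is image-specific and not shown here):
import os
import torch
import torch.distributed as dist

# Launched with torchrun; LOCAL_RANK is set by the launcher.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# A simple all-reduce; with ACCL-N active it runs over the ACCL transport.
t = torch.ones(1, device="cuda")
dist.all_reduce(t)
print(f"rank {dist.get_rank()}: {t.item()}")
dist.destroy_process_group()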
E2E performance benefit assessment
With the cloud-native AI performance assessment and analysis tool CNP, we can use mainstream open source models and frameworks together with standard base images to analyze E2E performance. Additionally, we can use an ablation study to further assess how each optimization component contributes to overall model training.
E2E performance contribution analysis of GPU core components
The following tests are based on Golden-25.04 and perform E2E performance evaluation and comparative analysis on multi-node GPU clusters. The comparison items include the following:
Base: NGC PyTorch Image
ACS AI Image: Base+ACCL: Image using ACCL communication library
ACS AI Image: AC2+ACCL: Golden image using AC2 BaseOS, without enabling any optimization
ACS AI Image: AC2+ACCL+CompilerOpt: Golden image using AC2 BaseOS, with only torch compile optimization enabled
ACS AI Image: AC2+ACCL+CompilerOpt+CkptOpt: Golden image using AC2 BaseOS, with both torch compile and selective gradient checkpoint optimization enabled

Quick Start
The following example uses only Docker to pull the training-nv-pytorch image.
To use the training-nv-pytorch image in ACS, select it on the artifact center page of the console where you create workloads, or specify the image in a YAML file.
1. Select an image
docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:[tag]
2. Call the API to enable compiling optimization and GPU memory optimization for recomputation
Enable compiling optimization
Use the transformers Trainer API:

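A minimal sketch using the torch_compile flag of the upstream transformers TrainingArguments (model and dataset setup are omitted and assumed to exist as model and train_dataset):
from transformers import Trainer, TrainingArguments

# torch_compile=True asks Trainer to wrap the model with torch.compile().
args = TrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=1,
    torch_compile=True,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()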
Enable GPU memory optimization for recomputation
export CHECKPOINT_OPTIMIZATION=true
3. Launch containers
The image provides a built-in model training tool named ljperf to demonstrate the procedure for launching containers and running training tasks.
LLM
# Launch a container and log on to the container.
docker run --rm -it --ipc=host --net=host --privileged egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/training-nv-pytorch:[tag]
# Run the training demo.
ljperf benchmark --model deepspeed/llama3-8b
4. Suggestions
Changes in the image involve the PyTorch and DeepSpeed libraries. Do not reinstall them.
Leave zero_optimization.stage3_prefetch_bucket_size in the DeepSpeed configuration empty or set it to auto.
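For example, the relevant fragment of a ZeRO-3 DeepSpeed configuration file could look like the following (other fields omitted; "stage": 3 is shown only for context):
{
  "zero_optimization": {
    "stage": 3,
    "stage3_prefetch_bucket_size": "auto"
  }
}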
Known issues
Because the image is upgraded to PyTorch 2.6, the performance benefit of recomputation memory optimization for LLM-type models is not as good as in previous images. Continuous optimization is in progress.