This topic describes the release notes for inference-nv-pytorch 25.06.
Main features and bugs fixed
Main features
vLLM is updated to v0.9.0.1.
SGLang is updated to v0.4.7.
The deepgpu-comfyui plug-in is introduced. It accelerates ComfyUI services for Wan2.1 and FLUX model inference on L20 GPUs. Compared with native PyTorch, overall performance improves by 8% to 40%.
Bugs fixed
None.
Content
| Item | inference-nv-pytorch | inference-nv-pytorch |
| --- | --- | --- |
| Image tag | 25.06-vllm0.9.0.1-pytorch2.7-cu128-20250609-serverless | 25.06-sglang0.4.7-pytorch2.7-cu128-20250611-serverless |
| Scenarios | LLM inference | LLM inference |
| Framework | PyTorch | PyTorch |
| Requirements | NVIDIA Driver release >= 570 | NVIDIA Driver release >= 550 |
| System components | | |
Assets
Publicly accessible images
egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.06-vllm0.9.0.1-pytorch2.7-cu128-20250609-serverless
egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.06-sglang0.4.7-pytorch2.7-cu128-20250611-serverless
VPC image
acs-registry-vpc.{region-id}.cr.aliyuncs.com/egslingjun/{image:tag}
{region-id} indicates the region where your ACS is activated, such as cn-beijing and cn-wulanchabu. {image:tag} indicates the name and tag of the image.
Currently, you can pull only images in the China (Beijing) region over a VPC.
The 25.06-vllm0.9.0.1-pytorch2.7-cu128-20250609-serverless and 25.06-sglang0.4.7-pytorch2.7-cu128-20250611-serverless images are applicable to ACS services and FLUX multi-tenant services, but not to FLUX single-tenant services.
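For example, assuming your ACS is activated in the China (Beijing) region, the vLLM image from this release would be pulled over the VPC as follows; substitute your own region ID and image tag:

```shell
docker pull acs-registry-vpc.cn-beijing.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.06-vllm0.9.0.1-pytorch2.7-cu128-20250609-serverless
```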
Driver requirements
For CUDA 12.8 images: NVIDIA driver release 570 or later.
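A quick way to check whether a host meets this requirement is to query the installed driver version with nvidia-smi:

```shell
# Print the installed NVIDIA driver version; for the CUDA 12.8 images in
# this release it should report 570 or later.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```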
Quick Start
The following example uses only Docker to pull the inference-nv-pytorch image and tests the inference service with the Qwen2.5-7B-Instruct model.
To use the inference-nv-pytorch image in ACS, you must select the image from the Artifact Center page of the console when you create workloads, or specify the image in a YAML file. For more information, see the related topics in the ACS documentation.
1. Pull the inference container image.

   ```shell
   docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:[tag]
   ```

2. Download an open source model from ModelScope.

   ```shell
   pip install modelscope
   cd /mnt
   modelscope download --model Qwen/Qwen2.5-7B-Instruct --local_dir ./Qwen2.5-7B-Instruct
   ```

3. Run the following command to start the container.
   ```shell
   docker run -d -t --network=host --privileged --init --ipc=host \
     --ulimit memlock=-1 --ulimit stack=67108864 \
     -v /mnt/:/mnt/ \
     egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:[tag]
   ```
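   The docker run command starts the container in detached mode. A minimal way to log on to it, sketched here assuming you look up the container ID with docker ps first, is:

   ```shell
   # Look up the ID of the running inference-nv-pytorch container.
   docker ps
   # Open an interactive shell inside it; replace <container-id> with the actual ID.
   docker exec -it <container-id> /bin/bash
   ```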
4. Run an inference test to verify the chat feature of vLLM.

   Start the server:

   ```shell
   python3 -m vllm.entrypoints.openai.api_server \
     --model /mnt/Qwen2.5-7B-Instruct \
     --trust-remote-code --disable-custom-all-reduce \
     --tensor-parallel-size 1
   ```

   Test on the client:

   ```shell
   curl http://localhost:8000/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{
       "model": "/mnt/Qwen2.5-7B-Instruct",
       "messages": [
         {"role": "system", "content": "You are a friendly AI assistant."},
         {"role": "user", "content": "Please introduce deep learning."}
       ]
     }'
   ```

For more information about how to work with vLLM, see vLLM.
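If the request fails, a quick way to confirm that the server is up is to list the served models; the /v1/models endpoint is part of the OpenAI-compatible API that vLLM exposes. This assumes the default port 8000:

```shell
# List served models; the output should include /mnt/Qwen2.5-7B-Instruct
# once the server has finished loading the weights.
curl http://localhost:8000/v1/models
```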
Known issues
The deepgpu-comfyui plug-in supports accelerating video generation based on the Wanx model only on GN8IS instances.