Container Compute Service: inference-nv-pytorch 25.03

Last Updated: May 15, 2025

This topic describes the release notes for inference-nv-pytorch 25.03.

Main features and bug fixes

Main features

  • PyTorch in the vLLM image is updated to 2.6.0.

  • vLLM is updated to v0.8.2.

  • SGLang is updated to v0.4.4.post1.

  • ACCL-N is updated to 2.23.4.12, which provides new features and bug fixes.

Bug fixes

None

Content

Image: inference-nv-pytorch (vLLM)

Tag: 25.03-vllm0.8.2-pytorch2.6-cu124-20250328-serverless

Scenarios: LLM inference

Framework: PyTorch

Requirements: NVIDIA driver release >= 550

System components:

  • Ubuntu 22.04

  • Python 3.10

  • Torch 2.6.0

  • CUDA 12.4

  • ACCL-N 2.23.4.12

  • accelerate 1.5.2

  • diffusers 0.32.2

  • flash_attn 2.7.4.post1

  • transformers 4.50.1

  • vllm 0.8.2

  • ray 2.44.0

  • triton 3.2.0

Image: inference-nv-pytorch (SGLang)

Tag: 25.03-sglang0.4.4.post1-pytorch2.5-cu124-20250327-serverless

Scenarios: LLM inference

Framework: PyTorch

Requirements: NVIDIA driver release >= 550

System components:

  • Ubuntu 22.04

  • Python 3.10

  • Torch 2.5.1

  • CUDA 12.4

  • ACCL-N 2.23.4.12

  • accelerate 1.5.2

  • diffusers 0.32.2

  • flash_attn 2.7.4.post1

  • transformers 4.48.3

  • vllm 0.7.2

  • ray 2.44.0

  • triton 3.2.0

  • flashinfer-python 0.2.3

  • sglang 0.4.4.post1

  • sgl-kernel 0.0.5
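
To verify these component versions inside a running container (for example, the vLLM image), a quick check along the following lines should work, assuming python3 and pip are on the PATH as listed above:

    # Print the versions of the key Python components.
    python3 -c "import torch; print('torch', torch.__version__)"
    python3 -c "import vllm; print('vllm', vllm.__version__)"
    pip list 2>/dev/null | grep -Ei 'transformers|accelerate|diffusers|ray|triton'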

Assets

Public image

  • egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.03-vllm0.8.2-pytorch2.6-cu124-20250328-serverless

  • egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.03-sglang0.4.4.post1-pytorch2.5-cu124-20250327-serverless

VPC image

  • acs-registry-vpc.{region-id}.cr.aliyuncs.com/egslingjun/{image:tag}

    {region-id} indicates the region where ACS is activated, such as cn-beijing or cn-wulanchabu.
    {image:tag} indicates the name and tag of the image.
Important

Currently, you can pull only images in the China (Beijing) region over a VPC.
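
For example, with ACS activated in the China (Beijing) region, the substituted pull command for the vLLM image would look like the following (a sketch; replace the region and tag with your own values):

    docker pull acs-registry-vpc.cn-beijing.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.03-vllm0.8.2-pytorch2.6-cu124-20250328-serverless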

Note

The inference-nv-pytorch:25.03-vllm0.8.2-pytorch2.6-cu124-20250328-serverless and inference-nv-pytorch:25.03-sglang0.4.4.post1-pytorch2.5-cu124-20250327-serverless images are suitable for ACS products and Lingjun multi-tenant products. They are not suitable for Lingjun single-tenant products.

Driver requirements

NVIDIA driver release >= 550
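
You can confirm the driver version on the host with nvidia-smi. For example:

    # Should report 550 or later.
    nvidia-smi --query-gpu=driver_version --format=csv,noheader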

Quick Start

The following example uses Docker to pull the inference-nv-pytorch image and the Qwen2.5-7B-Instruct model to test an inference service.

Note

To use the inference-nv-pytorch image in ACS, you must select the image on the Artifact Center page of the console when you create workloads, or specify the image in a YAML file. For more information, see the ACS documentation.

  1. Pull the inference container image.

    docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:[tag]
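    Replace [tag] with one of the tags listed in the Assets section. For example, to pull the vLLM variant:

    docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.03-vllm0.8.2-pytorch2.6-cu124-20250328-serverless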
  2. Download an open-source model from ModelScope.

    pip install modelscope
    cd /mnt
    modelscope download --model Qwen/Qwen2.5-7B-Instruct --local_dir ./Qwen2.5-7B-Instruct
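    To confirm that the download completed, list the model directory; it should contain the model configuration and weight files:

    ls /mnt/Qwen2.5-7B-Instruct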
  3. Run the following command to start the container.

    docker run -d -t --network=host --privileged --init --ipc=host \
    --ulimit memlock=-1 --ulimit stack=67108864  \
    -v /mnt/:/mnt/ \
    egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:[tag]
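    The container starts in detached mode. To open a shell inside it for the following steps, use docker exec with the container ID reported by docker ps:

    docker ps                                  # find the container ID
    docker exec -it <container-id> /bin/bash   # open an interactive shell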
  4. Run an inference test to verify the chat completion feature of vLLM.

    1. Start the server.

      python3 -m vllm.entrypoints.openai.api_server \
      --model /mnt/Qwen2.5-7B-Instruct \
      --trust-remote-code --disable-custom-all-reduce \
      --tensor-parallel-size 1
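      Before sending requests, you can confirm that the server is ready; the OpenAI-compatible API server listens on port 8000 by default and exposes a model list endpoint:

      curl http://localhost:8000/v1/models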
    2. Test on the client.

      curl http://localhost:8000/v1/chat/completions \
          -H "Content-Type: application/json" \
          -d '{
          "model": "/mnt/Qwen2.5-7B-Instruct",  
          "messages": [
          {"role": "system", "content": "You are a friendly AI assistant."},
          {"role": "user", "content": "Please introduce deep learning."}
          ]}'
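
      To extract only the generated text from the JSON response, you can pipe the output through jq, assuming jq is available in your environment:

      curl -s http://localhost:8000/v1/chat/completions \
          -H "Content-Type: application/json" \
          -d '{"model": "/mnt/Qwen2.5-7B-Instruct", "messages": [{"role": "user", "content": "Please introduce deep learning."}]}' \
          | jq -r '.choices[0].message.content'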

      For more information about how to work with vLLM, see the vLLM documentation.

Known issues

None