Container Compute Service: inference-nv-pytorch 25.02

Last Updated: May 19, 2025

This topic describes the release notes for inference-nv-pytorch 25.02.

Main Features and Bug Fixes

Main Features

  • vLLM is updated to v0.7.2.

  • SGLang v0.4.3.post2 is supported. A launch sketch follows this list.

  • DeepSeek models are supported.
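
SGLang is bundled alongside vLLM in this image. A minimal launch sketch, assuming the Qwen2.5-7B-Instruct model from the Quick Start below is already downloaded to /mnt:

    # Sketch: launch the bundled SGLang OpenAI-compatible server.
    # The model path is a placeholder; adjust flags for your model and GPUs.
    python3 -m sglang.launch_server \
        --model-path /mnt/Qwen2.5-7B-Instruct \
        --port 30000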

Bug Fixes

None

Contents

Use scenarios

LLM inference

Framework

PyTorch

Requirements

NVIDIA Driver release >= 550
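
To confirm that a host meets this requirement, you can query the installed driver version:

    # Print the NVIDIA driver version (must be 550 or later).
    nvidia-smi --query-gpu=driver_version --format=csv,noheader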

System components

  • Ubuntu 22.04

  • Python 3.10

  • Torch 2.5.1

  • CUDA 12.4

  • transformers 4.48.3

  • triton 3.1.0

  • ray 2.42.1

  • vllm 0.7.2

  • sgl-kernel 0.0.3.post6

  • sglang 0.4.3.post2

  • flashinfer-python 0.2.1.post2

  • ACCL-N 2.23.4.11
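
To verify these components inside a running container, a quick sketch (package names follow the list above):

    # Print the bundled Torch and CUDA versions, then the key inference packages.
    python3 -c "import torch; print(torch.__version__, torch.version.cuda)"
    pip list 2>/dev/null | grep -Ei 'vllm|sglang|transformers|ray|flashinfer'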

Asset

Public image

  • egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.02-vllm0.7.2-sglang0.4.3.post2-pytorch2.5-cuda12.4-20250305-serverless

VPC image

  • acs-registry-vpc.{region-id}.cr.aliyuncs.com/egslingjun/{image:tag}

    {region-id} indicates the region where your ACS is activated, such as cn-beijing.
    {image:tag} indicates the name and tag of the image.
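
    For example, with ACS activated in the China (Beijing) region and the public image listed above, the pull command would look like this (illustrative only; substitute your own region ID and tag):

      # Example substitution of {region-id} and {image:tag}.
      docker pull acs-registry-vpc.cn-beijing.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.02-vllm0.7.2-sglang0.4.3.post2-pytorch2.5-cuda12.4-20250305-serverless
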
Important

Currently, you can pull only images in the China (Beijing) region over a VPC.

Note
  • The inference-nv-pytorch:25.02-vllm0.7.2-sglang0.4.3.post2-pytorch2.5-cuda12.4-20250305-serverless image is suitable for ACS products and Lingjun multi-tenant products. It is not suitable for Lingjun single-tenant products.

  • The inference-nv-pytorch:25.02-vllm0.7.2-sglang0.4.3.post2-pytorch2.5-cuda12.4-20250305 image is suitable for Lingjun single-tenant scenarios.

Quick Start

The following example uses Docker to pull the inference-nv-pytorch image and tests an inference service with the Qwen2.5-7B-Instruct model.

Note

To use the inference-nv-pytorch image in ACS, you must select the image on the artifact center page of the console where you create workloads, or specify the image in a YAML file, as sketched below.
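
The following is a minimal, illustrative Pod sketch showing only where the image reference goes; ACS-specific scheduling and GPU resource fields are omitted, and the name inference-test is a placeholder:

    # Sketch: specify the image in a workload YAML (illustrative only).
    apiVersion: v1
    kind: Pod
    metadata:
      name: inference-test
    spec:
      containers:
        - name: inference
          image: egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.02-vllm0.7.2-sglang0.4.3.post2-pytorch2.5-cuda12.4-20250305-serverless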

  1. Pull the inference container image.

    docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:[tag]
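
    For example, with the public image tag listed above:

    # Example only: replace the tag for a different release.
    docker pull egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:25.02-vllm0.7.2-sglang0.4.3.post2-pytorch2.5-cuda12.4-20250305-serverless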
  2. Download an open source model from ModelScope.

    pip install modelscope
    cd /mnt
    modelscope download --model Qwen/Qwen2.5-7B-Instruct --local_dir ./Qwen2.5-7B-Instruct
  3. Run the following command to start the container.

    docker run -d -t --network=host --privileged --init --ipc=host \
    --ulimit memlock=-1 --ulimit stack=67108864  \
    -v /mnt/:/mnt/ \
    egslingjun-registry.cn-wulanchabu.cr.aliyuncs.com/egslingjun/inference-nv-pytorch:[tag]
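
    The container starts in the background. To open a shell in it (a sketch, assuming it is the most recently started container):

    # Attach an interactive shell; `docker ps -lq` prints the latest container ID.
    docker exec -it $(docker ps -lq) /bin/bash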
  4. Run an inference test to verify the chat completion feature of vLLM.

    1. Start the vLLM server.

      python3 -m vllm.entrypoints.openai.api_server \
      --model /mnt/Qwen2.5-7B-Instruct \
      --trust-remote-code --disable-custom-all-reduce \
      --tensor-parallel-size 1
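
      Optionally, confirm that the OpenAI-compatible endpoint is up before testing (the server listens on port 8000 by default):

      # List the served models; a JSON response indicates the server is ready.
      curl http://localhost:8000/v1/models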
    2. Test on the client.

      curl http://localhost:8000/v1/chat/completions \
          -H "Content-Type: application/json" \
          -d '{
          "model": "/mnt/Qwen2.5-7B-Instruct",  
          "messages": [
          {"role": "system", "content": "You are a friendly AI assistant."},
          {"role": "user", "content": "Please introduce deep learning."}
          ]}'

      For more information about how to work with vLLM, see the vLLM documentation.

Known Issues