Alibaba Cloud Model Studio: OpenAI compatible - Vision

Last Updated: Mar 15, 2026

Qwen-VL models are OpenAI-compatible. To migrate an existing OpenAI vision app, update three client configuration parameters:

  • base_url: https://dashscope-intl.aliyuncs.com/compatible-mode/v1
  • api_key: your Model Studio API key
  • model: a Qwen-VL model from the Supported models list below
The base_url varies by region. See Regional endpoints for other regions.

Supported models

Global

  • Qwen-VL series: qwen3-vl-plus, qwen3-vl-plus-2025-09-23, qwen3-vl-flash, qwen3-vl-flash-2025-10-15, qwen3-vl-235b-a22b-thinking, qwen3-vl-235b-a22b-instruct, qwen3-vl-32b-instruct, qwen3-vl-30b-a3b-thinking, qwen3-vl-30b-a3b-instruct, qwen3-vl-8b-thinking, qwen3-vl-8b-instruct

  • Qwen-OCR series: qwen-vl-ocr, qwen-vl-ocr-2025-11-20

International

  • Qwen-VL series

    • qwen3-vl-plus, qwen3-vl-plus-2025-12-19, qwen3-vl-plus-2025-09-23, qwen3-vl-flash, qwen3-vl-flash-2025-10-15, qwen3-vl-235b-a22b-thinking, qwen3-vl-235b-a22b-instruct, qwen3-vl-32b-instruct, qwen3-vl-30b-a3b-thinking, qwen3-vl-30b-a3b-instruct, qwen3-vl-8b-thinking, qwen3-vl-8b-instruct

    • qwen-vl-max, qwen-vl-max-latest, qwen-vl-max-2025-08-13, qwen-vl-max-2025-04-08, qwen-vl-plus, qwen-vl-plus-latest, qwen-vl-plus-2025-08-15, qwen-vl-plus-2025-07-10, qwen-vl-plus-2025-05-07, qwen-vl-plus-2025-01-25, qwen2.5-vl-72b-instruct, qwen2.5-vl-32b-instruct, qwen2.5-vl-7b-instruct, qwen2.5-vl-3b-instruct

  • QVQ series: qvq-max, qvq-max-latest, qvq-max-2025-03-25

  • Qwen-OCR series: qwen-vl-ocr, qwen-vl-ocr-2025-11-20

US

  • qwen3-vl-flash-us, qwen3-vl-flash-2025-10-15-us

Chinese mainland

  • Qwen-VL series

    • qwen3-vl-plus, qwen3-vl-plus-2025-12-19, qwen3-vl-plus-2025-09-23, qwen3-vl-flash, qwen3-vl-flash-2025-10-15, qwen3-vl-235b-a22b-thinking, qwen3-vl-235b-a22b-instruct, qwen3-vl-32b-instruct, qwen3-vl-30b-a3b-thinking, qwen3-vl-30b-a3b-instruct, qwen3-vl-8b-thinking, qwen3-vl-8b-instruct

    • qwen-vl-max, qwen-vl-max-latest, qwen-vl-max-2025-08-13, qwen-vl-max-2025-04-08, qwen-vl-max-2025-04-02, qwen-vl-max-2025-01-25, qwen-vl-max-2024-12-30, qwen-vl-max-2024-11-19, qwen-vl-plus, qwen-vl-plus-latest, qwen-vl-plus-2025-08-15, qwen-vl-plus-2025-07-10, qwen-vl-plus-2025-05-07, qwen-vl-plus-2025-01-25, qwen-vl-plus-2025-01-02, qwen2.5-vl-72b-instruct, qwen2.5-vl-32b-instruct, qwen2.5-vl-7b-instruct, qwen2.5-vl-3b-instruct, qwen2-vl-72b-instruct, qwen2-vl-7b-instruct, qwen2-vl-2b-instruct

  • QVQ series: qvq-max, qvq-max-latest, qvq-max-2025-03-25

  • Qwen-OCR series: qwen-vl-ocr, qwen-vl-ocr-latest, qwen-vl-ocr-2025-11-20, qwen-vl-ocr-2025-08-28, qwen-vl-ocr-2025-04-13, qwen-vl-ocr-2024-10-28

Regional endpoints

  • Singapore: https://dashscope-intl.aliyuncs.com/compatible-mode/v1
  • Virginia: https://dashscope-us.aliyuncs.com/compatible-mode/v1
  • Beijing: https://dashscope.aliyuncs.com/compatible-mode/v1
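When the target region is chosen at runtime, the endpoints above can be kept as a small mapping (a convenience sketch; the URLs are exactly those listed):

```python
# Region-to-endpoint mapping, copied from the Regional endpoints list above.
REGIONAL_BASE_URLS = {
    "Singapore": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
    "Virginia": "https://dashscope-us.aliyuncs.com/compatible-mode/v1",
    "Beijing": "https://dashscope.aliyuncs.com/compatible-mode/v1",
}

def base_url_for(region: str) -> str:
    """Return the compatible-mode base_url for a region name."""
    return REGIONAL_BASE_URLS[region]
```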

Send a request

All examples use the OpenAI-compatible message format. Pass images via the image_url content type in the messages array.

QVQ models support only streaming output. For QVQ models, see Visual reasoning.

For additional input methods and languages, see Visual understanding request examples.
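Besides public URLs, a local image can be passed inline in the same image_url field as a Base64 data URL (this sketch assumes the standard OpenAI data-URL convention; see Visual understanding request examples for the formats each model accepts):

```python
import base64

def image_to_data_url(path: str, mime: str = "image/jpeg") -> str:
    """Encode a local image file as a data URL for the image_url content type.
    Adjust the mime argument to match the actual file format."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

# The resulting content entry looks like:
# {"type": "image_url", "image_url": {"url": image_to_data_url("photo.jpeg")}}
```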

OpenAI SDK (Python)

Install the OpenAI Python SDK:

pip install openai

This example sends an image URL to qwen3-vl-plus with streaming enabled:

from openai import OpenAI
import os

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        # Singapore region. For other regions, see the Regional endpoints table.
        base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
    )
    completion = client.chat.completions.create(
        model="qwen3-vl-plus",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "What is this?"
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
                        }
                    }
                ]
            }
        ],
        stream=True,
        stream_options={"include_usage": True}
    )
    for chunk in completion:
        print(chunk.model_dump())

if __name__ == "__main__":
    get_response()

Sample streaming response:

{"id": "chatcmpl-31042a05-c968-4fc6-ba28-c3aa471258dc", "choices": [{"delta": {"content": "", "role": "assistant"}, "finish_reason": null, "index": 0}], "model": "qwen-vl-plus", "object": "chat.completion.chunk", "usage": null}
{"id": "chatcmpl-31042a05-c968-4fc6-ba28-c3aa471258dc", "choices": [{"delta": {"content": "This"}, "finish_reason": null, "index": 0}], "model": "qwen-vl-plus", "object": "chat.completion.chunk", "usage": null}
...
{"id": "chatcmpl-31042a05-c968-4fc6-ba28-c3aa471258dc", "choices": [{"delta": {"content": "."}, "finish_reason": "stop", "index": 0}], "model": "qwen-vl-plus", "object": "chat.completion.chunk", "usage": null}
{"id": "chatcmpl-31042a05-c968-4fc6-ba28-c3aa471258dc", "choices": [], "model": "qwen-vl-plus", "object": "chat.completion.chunk", "usage": {"completion_tokens": 230, "prompt_tokens": 1259, "total_tokens": 1489}}
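When consuming such a stream programmatically, note that the final usage chunk carries an empty choices list, so guard before indexing into it. A helper sketch over chunk dicts shaped like the sample above:

```python
def collect_stream(chunks):
    """Join delta text from streaming chunk dicts and capture usage.
    The trailing usage-only chunk has an empty choices list and is skipped."""
    parts, usage = [], None
    for chunk in chunks:
        if chunk.get("usage"):
            usage = chunk["usage"]
        if chunk.get("choices"):
            parts.append(chunk["choices"][0]["delta"].get("content") or "")
    return "".join(parts), usage
```

The same guard applies when iterating the SDK's chunk objects, whose choices list is likewise empty on the final chunk.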

LangChain (Python)

Install the langchain_openai package:

# If the following command fails, replace pip with pip3.
pip install -U langchain_openai

Non-streaming

Use invoke for a single complete response:

from langchain_openai import ChatOpenAI
import os

def get_response():
    llm = ChatOpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        # Singapore region. For other regions, see the Regional endpoints table.
        base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
        model="qwen3-vl-plus",
    )
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is this?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
                    }
                }
            ]
        }
    ]
    response = llm.invoke(messages)
    print(response.content)

if __name__ == "__main__":
    get_response()

Sample response:

{
  "content": "In the picture, a woman and her dog are interacting on the beach. The dog is sitting on the ground, extending its paw as if to shake hands or give a high five. The woman is wearing a plaid shirt and seems to be having an intimate interaction with the dog, and is smiling. The background is the ocean and the sky at sunrise or sunset. This is a heartwarming photo that shows a moment of friendship between a person and a pet.",
  "response_metadata": {
    "token_usage": {
      "completion_tokens": 267,
      "prompt_tokens": 1259,
      "total_tokens": 1526
    },
    "model_name": "qwen-vl-plus",
    "finish_reason": "stop"
  }
}
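Token accounting lives under response_metadata rather than on the content itself; a small formatting helper (field names taken from the sample above; in LangChain the dict is available as response.response_metadata):

```python
def usage_summary(response_metadata: dict) -> str:
    """Format the token_usage block from a LangChain response's
    response_metadata into a one-line summary."""
    u = response_metadata["token_usage"]
    return (f'{u["prompt_tokens"]} prompt + {u["completion_tokens"]} '
            f'completion = {u["total_tokens"]} total tokens')
```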

Streaming

Use stream to receive incremental chunks. Set stream_options to include token usage in the final chunk.

This streaming example does not apply to QVQ models. For QVQ models, see Visual reasoning.

from langchain_openai import ChatOpenAI
import os


def get_response():
    llm = ChatOpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        # Singapore region. For other regions, see the Regional endpoints table.
        base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
        model="qwen3-vl-plus",
        # With these settings, token usage information appears in the final chunk of the streaming output.
        stream_options={"include_usage": True}
    )
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is this?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
                    }
                }
            ]
        }
    ]
    response = llm.stream(messages)
    for chunk in response:
        print(chunk.json())

if __name__ == "__main__":
    get_response()

For all input parameters, see Input parameters. Set them on the ChatOpenAI object.

cURL (HTTP)

Send requests to the chat completions endpoint:

Singapore: POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions
Virginia: POST https://dashscope-us.aliyuncs.com/compatible-mode/v1/chat/completions
Beijing: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
If the API key isn't stored in DASHSCOPE_API_KEY, replace $DASHSCOPE_API_KEY in the commands below with your actual API key.
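To store the key in the environment for the current shell session (replace the placeholder with your actual key):

```shell
# Set once per session; the curl examples below read this variable.
export DASHSCOPE_API_KEY="your-api-key"
```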

Non-streaming

This example sends two images in a single request:

# Singapore region. For Virginia or Beijing, use the corresponding endpoint listed above.

curl --location 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
  "model": "qwen3-vl-plus",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What are these?"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
          }
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/tiger.png"
          }
        }
      ]
    }
  ]
}'

Sample response:

{
  "choices": [
    {
      "message": {
        "content": "In Figure 1, a woman is interacting with her pet dog on the beach. The dog raises its front paw as if it wants to shake hands.\nFigure 2 is a CG-rendered picture of a tiger.",
        "role": "assistant"
      },
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null
    }
  ],
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 2509,
    "completion_tokens": 34,
    "total_tokens": 2543
  },
  "created": 1724729556,
  "system_fingerprint": null,
  "model": "qwen-vl-plus",
  "id": "chatcmpl-1abb4eb9-f508-9637-a8ba-ac7fc6f73e53"
}
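The answer text sits at choices[0].message.content; a short extraction helper for response bodies shaped like the sample above:

```python
import json

def extract_answer(raw: str):
    """Return (assistant_text, total_tokens) from a chat.completion JSON body."""
    data = json.loads(raw)
    return data["choices"][0]["message"]["content"], data["usage"]["total_tokens"]
```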

Streaming

Set "stream": true and optionally "stream_options": {"include_usage": true} to include token usage in the final chunk:

# Singapore region. For Virginia or Beijing, use the corresponding endpoint listed above.

curl --location 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "qwen3-vl-plus",
    "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is this?"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
          }
        }
      ]
    }
  ],
    "stream": true,
    "stream_options": {"include_usage": true}
}'

Sample streaming response:

data: {"choices":[{"delta":{"content":"","role":"assistant"},"index":0,"finish_reason":null}],"object":"chat.completion.chunk","usage":null,"model":"qwen-vl-plus","id":"chatcmpl-4c83f437-303f-907b-9de5-79cac83d6b18"}

data: {"choices":[{"delta":{"content":"In the"},"finish_reason":null,"index":0}],"object":"chat.completion.chunk","usage":null,"model":"qwen-vl-plus","id":"chatcmpl-4c83f437-303f-907b-9de5-79cac83d6b18"}

...

data: {"choices":[{"delta":{"content":" a moment of friendship between a person and a pet."},"finish_reason":"stop","index":0}],"object":"chat.completion.chunk","usage":null,"model":"qwen-vl-plus","id":"chatcmpl-4c83f437-303f-907b-9de5-79cac83d6b18"}

data: {"choices":[],"object":"chat.completion.chunk","usage":{"prompt_tokens":1276,"completion_tokens":79,"total_tokens":1355},"model":"qwen-vl-plus","id":"chatcmpl-4c83f437-303f-907b-9de5-79cac83d6b18"}

data: [DONE]
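Each event line starts with data: and the stream terminates with the data: [DONE] sentinel. A minimal line-level parser, independent of any SDK:

```python
import json

def parse_sse(lines):
    """Yield chunk dicts from raw SSE lines, stopping at the [DONE] sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```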

For all input parameters, see Input parameters.

Error handling

Failed requests return an error object with code and message fields:

{
    "error": {
        "message": "Incorrect API key provided. ",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}
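Client code can surface such failures by reading the nested fields (names as in the body above):

```python
def describe_error(body: dict) -> str:
    """One-line summary of an error response body."""
    err = body["error"]
    return f'{err["code"]}: {err["message"].strip()}'
```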

For all error codes, see Status codes.