
Alibaba Cloud Model Studio: Anthropic API compatibility

Last Updated: Feb 16, 2026

Alibaba Cloud Model Studio's Qwen series models support Anthropic API-compatible interfaces. To migrate your existing Anthropic application to Alibaba Cloud Model Studio, modify the following parameters.

  • ANTHROPIC_API_KEY (or ANTHROPIC_AUTH_TOKEN): Replace this value with the Model Studio API key.

  • ANTHROPIC_BASE_URL: Replace this with the Model Studio-compatible endpoint address https://dashscope-intl.aliyuncs.com/apps/anthropic.

  • Model name (model): Replace this with a model name supported by Alibaba Cloud Model Studio, such as qwen-plus. For more information, see Supported Models.
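
The three changes above can be sketched in Python. This is a minimal sketch: the placeholder key and the model choice are illustrative, not required values.

```python
import os

# Point an Anthropic-SDK application at Model Studio by swapping these three
# values. Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
os.environ["ANTHROPIC_BASE_URL"] = "https://dashscope-intl.aliyuncs.com/apps/anthropic"
os.environ["ANTHROPIC_API_KEY"] = "YOUR_DASHSCOPE_API_KEY"

# Use a supported Qwen model name in place of your existing Claude model name.
MODEL = "qwen-plus"
```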

Important

This topic is applicable only to the International Edition (Singapore region).

Quick Integration

Text Conversation

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)
# To migrate to Model Studio: Configure the ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL environment variables, and modify the model parameter below.
# For parameter compatibility, see Anthropic API Compatibility Details.
message = client.messages.create(
    model="qwen-plus",   # Set the model to qwen-plus
    max_tokens=1024,
    # Deep thinking is supported by some models only. See the list of supported models.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    # Streaming output
    stream=True,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)
print("=== Thinking Process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Answer ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)

Supported Models

Alibaba Cloud Model Studio's Anthropic API-compatible service supports the following Qwen series models:

| Model Series | Supported Model Names (model) |
| --- | --- |
| Qwen Max (some models support thinking mode) | qwen3-max, qwen3-max-2026-01-23 (supports thinking mode), qwen3-max-preview (supports thinking mode) |
| Qwen Plus | qwen3.5-plus, qwen3.5-plus-2026-02-15, qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11 |
| Qwen Flash | qwen-flash, qwen-flash-2025-07-28 |
| Qwen Turbo | qwen-turbo, qwen-turbo-latest |
| Qwen Coder (does not support thinking mode) | qwen3-coder-plus, qwen3-coder-plus-2025-09-23, qwen3-coder-flash |
| Qwen VL (does not support thinking mode) | qwen-vl-max, qwen-vl-plus |
| Qwen (open source) | qwen3.5-397b-a17b |

For information about model parameters and billing rules, see Model List.

Detailed Steps

Activate Alibaba Cloud Model Studio

If you are accessing the Alibaba Cloud Model Studio service platform for the first time, activate it by following these steps.

  1. Log on to the Alibaba Cloud Model Studio console.

  2. If an activation prompt is displayed at the top of the page, you can activate the Alibaba Cloud Model Studio model service and claim your free quota. If the prompt does not appear, you have already activated the service.

After you activate Alibaba Cloud Model Studio for the first time, you can claim a new user free quota (valid for 90 days) for model inference services. For more information, see New User Free Quota.
Note

You will be charged if you exceed the free quota or its validity period. To prevent these charges, you can enable the Stop upon free quota exhaustion feature. The actual fees are subject to the real-time quotes in the console and the final bill.

Configure Environment Variables

To access Alibaba Cloud Model Studio's model service using the Anthropic API-compatible method, configure the following two environment variables.

  1. ANTHROPIC_BASE_URL: Set to https://dashscope-intl.aliyuncs.com/apps/anthropic.

  2. ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN: Set this to your Alibaba Cloud Model Studio API key.

    Either ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN can be used for authentication; set only one of them. This topic uses ANTHROPIC_API_KEY as an example.

macOS

  1. Run the following command in the terminal to view your default shell type.

    echo $SHELL
  2. Set the environment variables according to your shell type:

    Zsh

    # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
    echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.zshrc
    echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.zshrc

    Bash

    # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
    echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.bash_profile
    echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.bash_profile
  3. Run the following command to apply the environment variables.

    Zsh

    source ~/.zshrc

    Bash

    source ~/.bash_profile
  4. Open a new terminal and run the following commands to verify that the environment variables are set.

    echo $ANTHROPIC_BASE_URL
    echo $ANTHROPIC_API_KEY

Windows

  1. In Windows, set the base URL and API key provided by Alibaba Cloud Model Studio as environment variables using CMD or PowerShell.

    CMD

    1. Run the following commands in CMD to set the environment variables.

      REM Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
      setx ANTHROPIC_API_KEY "YOUR_DASHSCOPE_API_KEY"
      setx ANTHROPIC_BASE_URL "https://dashscope-intl.aliyuncs.com/apps/anthropic"
    2. Open a new CMD window and run the following commands to verify that the environment variables are set.

      echo %ANTHROPIC_API_KEY%
      echo %ANTHROPIC_BASE_URL%

    PowerShell

    1. Run the following commands in PowerShell to set the environment variables.

      # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
      [Environment]::SetEnvironmentVariable("ANTHROPIC_API_KEY", "YOUR_DASHSCOPE_API_KEY", [EnvironmentVariableTarget]::User)
      [Environment]::SetEnvironmentVariable("ANTHROPIC_BASE_URL", "https://dashscope-intl.aliyuncs.com/apps/anthropic", [EnvironmentVariableTarget]::User)
    2. Open a new PowerShell window and run the following commands to verify that the environment variables are set.

      echo $env:ANTHROPIC_API_KEY
      echo $env:ANTHROPIC_BASE_URL
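
On any operating system, you can also confirm from inside Python that the variables are visible to your application. This is a minimal check that only inspects the process environment; it makes no API calls.

```python
import os

# Read both variables; None means the variable is not visible to this process
# (for example, the terminal was not restarted after setting the variable).
base_url = os.getenv("ANTHROPIC_BASE_URL")
api_key = os.getenv("ANTHROPIC_API_KEY")

print("ANTHROPIC_BASE_URL set:", base_url is not None)
print("ANTHROPIC_API_KEY set:", bool(api_key))
```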

API Call - Text Conversation

cURL

curl -X POST "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${ANTHROPIC_API_KEY}" \
  -d '{
    "model": "qwen-plus",
    "max_tokens": 1024,
    "stream": true,
    "thinking": {
      "type": "enabled",
      "budget_tokens": 1024
    },
    "system": "You are a helpful assistant",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
}'

Python

  1. Install the Anthropic SDK

    pip install anthropic
  2. Code example

    import anthropic
    import os
    
    client = anthropic.Anthropic(
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        base_url=os.getenv("ANTHROPIC_BASE_URL"),
    )
    
    message = client.messages.create(
        model="qwen-plus",
        max_tokens=1024,
        stream=True,
        system="You are a helpful assistant",
        # Deep thinking is supported by some models only. See the list of supported models.
        thinking={
            "type": "enabled",
            "budget_tokens": 1024
        },
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Who are you?"
                    }
                ]
            }
        ]
    )
    
    print("=== Thinking Process ===")
    first_text = True
    for chunk in message:
        if chunk.type == "content_block_delta":
            if hasattr(chunk.delta, 'thinking'):
                print(chunk.delta.thinking, end="", flush=True)
            elif hasattr(chunk.delta, 'text'):
                if first_text:
                    print("\n\n=== Answer ===")
                    first_text = False
                print(chunk.delta.text, end="", flush=True)
    

TypeScript

  1. Install the Anthropic TypeScript SDK

    npm install @anthropic-ai/sdk
  2. Code example

    import Anthropic from "@anthropic-ai/sdk";
    
    async function main() {
      const anthropic = new Anthropic({
        apiKey: process.env.ANTHROPIC_API_KEY,
        baseURL: process.env.ANTHROPIC_BASE_URL,
      });
    
      const stream = await anthropic.messages.create({
        model: "qwen-plus",
        max_tokens: 1024,
        stream: true,
        // Deep thinking is supported by some models only. See the list of supported models.
        thinking: { type: "enabled", budget_tokens: 1024 },
        system: "You are a helpful assistant",
        messages: [{ 
          role: "user", 
          content: [
            {
              type: "text",
              text: "Who are you?"
            }
          ]
        }]
      });
    
      console.log("=== Thinking Process ===");
      let firstText = true;
    
      for await (const chunk of stream) {
        if (chunk.type === "content_block_delta") {
          if ('thinking' in chunk.delta) {
            process.stdout.write(chunk.delta.thinking);
          } else if ('text' in chunk.delta) {
            if (firstText) {
              console.log("\n\n=== Answer ===");
              firstText = false;
            }
            process.stdout.write(chunk.delta.text);
          }
        }
      }
      console.log();
    }
    
    main().catch(console.error);
    

Anthropic API compatibility details

HTTP header

| Field | Supported |
| --- | --- |
| x-api-key | Supported |
| Authorization: Bearer | Supported |
| anthropic-beta / anthropic-version | Not supported |

Basic fields

| Field | Supported | Description | Example value |
| --- | --- | --- | --- |
| model | Supported | The model name. For the supported models, see Supported Models. | qwen-plus |
| max_tokens | Supported | The maximum number of tokens to generate. | 1024 |
| container | Not supported | - | - |
| mcp_servers | Not supported | - | - |
| metadata | Not supported | - | - |
| service_tier | Not supported | - | - |
| stop_sequences | Supported | Custom text sequences that cause the model to stop generating. | ["}"] |
| stream | Supported | Enables streaming output. | True |
| system | Supported | The system prompt. | You are a helpful assistant |
| temperature | Supported | The sampling temperature. Controls the diversity of the generated text. | 1.0 |
| thinking | Supported | The thinking mode. When enabled, the model reasons before generating a reply to improve accuracy. Some models do not support this feature; see Supported Models. | {"type": "enabled", "budget_tokens": 1024} |
| top_k | Supported | The size of the candidate token set sampled during generation. | 10 |
| top_p | Supported | The probability threshold for nucleus sampling. Controls the diversity of the generated text. | 0.1 |

Because both temperature and top_p control text diversity, set only one of these parameters. For more information, see Text generation model overview.
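
As an illustration of the sampling fields above, a request that sets temperature (and therefore omits top_p) might look like the following sketch; the prompt and parameter values are placeholders.

```python
# Request sketch using the supported sampling fields. temperature and top_p
# both control diversity, so only temperature is set here.
request = {
    "model": "qwen-plus",
    "max_tokens": 1024,
    "temperature": 0.7,           # omit top_p when temperature is set
    "top_k": 10,                  # candidate-set size for sampling
    "stop_sequences": ["}"],      # stop as soon as "}" is generated
    "messages": [{"role": "user", "content": "Return a small JSON object."}],
}

# Pass the fields to the SDK: client.messages.create(**request)
```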

Tool fields

tools

| Field | Supported |
| --- | --- |
| name | Supported |
| input_schema | Supported |
| description | Supported |
| cache_control | Supported |

tool_choice

| Value | Supported |
| --- | --- |
| none | Supported |
| auto | Supported |
| any | Supported |
| tool | Supported |
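
Because all four tools subfields and every tool_choice value are supported, an existing Anthropic tool definition can be passed through unchanged. The sketch below uses a hypothetical get_weather tool; the name, schema, and prompt are illustrative only.

```python
# Hypothetical tool definition using the supported subfields:
# name, description, and input_schema (a JSON Schema object).
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "The city name."},
        },
        "required": ["city"],
    },
}

def build_tool_request(prompt: str) -> dict:
    """Build keyword arguments for client.messages.create()."""
    return {
        "model": "qwen-plus",
        "max_tokens": 1024,
        "tools": [weather_tool],
        "tool_choice": {"type": "auto"},  # none, auto, any, and tool all work
        "messages": [{"role": "user", "content": prompt}],
    }

# Usage: client.messages.create(**build_tool_request("Weather in Singapore?"))
```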

Message fields

| Field | Type | Subfield | Supported | Description |
| --- | --- | --- | --- | --- |
| content | string | - | Supported | Plain text content. |
| content | array, type="text" | text | Supported | The content of the text block. |
| content | array, type="text" | cache_control | Supported | Controls the caching behavior of this text block. |
| content | array, type="text" | citations | Not supported | - |
| content | array, type="image" | - | Not supported | - |
| content | array, type="video" | - | Not supported | - |
| content | array, type="document" | - | Not supported | - |
| content | array, type="search_result" | - | Not supported | - |
| content | array, type="thinking" | - | Not supported | - |
| content | array, type="redacted_thinking" | - | Not supported | - |
| content | array, type="tool_use" | id | Supported | The unique identifier for the tool call. |
| content | array, type="tool_use" | input | Supported | The parameter object passed when the tool is called. |
| content | array, type="tool_use" | name | Supported | The name of the tool that is called. |
| content | array, type="tool_use" | cache_control | Supported | Controls the caching behavior of this tool call. |
| content | array, type="tool_result" | tool_use_id | Supported | The ID of the tool_use block that this result corresponds to. |
| content | array, type="tool_result" | content | Supported | The result returned by the tool, usually a string or a JSON string. |
| content | array, type="tool_result" | cache_control | Supported | Controls the caching behavior of this tool result. |
| content | array, type="tool_result" | is_error | Not supported | - |
| content | array, type="server_tool_use" | - | Not supported | - |
| content | array, type="web_search_tool_result" | - | Not supported | - |
| content | array, type="code_execution_tool_result" | - | Not supported | - |
| content | array, type="mcp_tool_use" | - | Not supported | - |
| content | array, type="mcp_tool_result" | - | Not supported | - |
| content | array, type="container_upload" | - | Not supported | - |
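
Putting the supported tool_use and tool_result subfields together, the follow-up turn that returns a tool's output to the model can be structured as below. The id and result values are hypothetical; a real id is copied from the tool_use block in the model's previous response.

```python
import json

# Hypothetical tool-call id; in practice, copy it from the tool_use block
# in the assistant's previous response.
tool_use_id = "toolu_example_123"

followup_messages = [
    {"role": "user", "content": "What is the weather in Singapore?"},
    {
        # The assistant turn that requested the tool call (echoed back).
        "role": "assistant",
        "content": [{
            "type": "tool_use",
            "id": tool_use_id,
            "name": "get_weather",
            "input": {"city": "Singapore"},
        }],
    },
    {
        # The tool result, keyed to the tool_use id it answers.
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use_id,
            "content": json.dumps({"temperature_c": 31, "condition": "sunny"}),
        }],
    },
]
```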

Error codes

| HTTP status code | Error type | Description |
| --- | --- | --- |
| 400 | invalid_request_error | The request format or content is invalid. Common causes include missing required parameters or incorrect data types for parameter values. |
| 400 | Arrearage | Your account has an overdue payment and the service is suspended. Top up your account and try again. |
| 403 | authentication_error | The API key is invalid. Common causes include a missing API key in the request header or an incorrect API key. |
| 404 | not_found_error | The requested resource was not found. Common causes include a typo in the endpoint URL or a model name that does not exist. |
| 429 | rate_limit_error | Your account has reached its rate limit. Reduce your request frequency. |
| 500 | api_error | An internal server error occurred. Try again later. |
| 529 | overloaded_error | The API server is overloaded and cannot process new requests at this time. |
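
The table above can be mapped onto simple retry logic. The sketch below classifies status codes only; it assumes you catch the Python SDK's anthropic.APIStatusError and pass its status_code attribute, as shown in the usage comment.

```python
# Map the documented status codes to a coarse recovery action. Codes not in
# the table fall through to the invalid-request branch.
def classify_status(status_code: int) -> str:
    if status_code == 429:
        return "rate_limit: back off, then retry"
    if status_code in (401, 403):
        return "authentication: check your Model Studio API key"
    if status_code == 404:
        return "not_found: check the endpoint URL and model name"
    if status_code >= 500:
        return "server: transient error, retry later"
    return "invalid_request: check parameters and account status"

# Usage with the SDK:
#   try:
#       client.messages.create(...)
#   except anthropic.APIStatusError as e:
#       print(classify_status(e.status_code))
```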