
Alibaba Cloud Model Studio: Anthropic API compatibility

Last Updated: Mar 15, 2026

Migrate from Anthropic to Model Studio by updating these parameters:

  • ANTHROPIC_API_KEY (or ANTHROPIC_AUTH_TOKEN): Replace with your Model Studio API key.

  • ANTHROPIC_BASE_URL: Replace this with the Model Studio-compatible endpoint https://dashscope-intl.aliyuncs.com/apps/anthropic.

  • Model name (model): Replace with a supported model name, such as qwen-plus. See Supported models for details.

Important

This topic is applicable only to the International Edition (Singapore region).

Quick integration

Text chat

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)
# Migration: Set ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL, then update the model below.
# See the Compatibility Details section for full parameter support.
message = client.messages.create(
    model="qwen-plus",   # Set the model to qwen-plus
    max_tokens=1024,
    # Deep thinking is supported by some models only. See the supported models list.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    # Streaming output
    stream=True,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)
print("=== Thinking Process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Answer ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)

Supported models

Supported Qwen models:

| Series | Model name (model) |
| --- | --- |
| Qwen-Max (some support thinking) | qwen3-max, qwen3-max-2026-01-23 (supports thinking mode), qwen3-max-preview (supports thinking mode) |
| Qwen-Plus | qwen3.5-plus, qwen3.5-plus-2026-02-15, qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11 |
| Qwen-Flash | qwen-flash, qwen-flash-2025-07-28 |
| Qwen-Turbo | qwen-turbo, qwen-turbo-latest |
| Qwen-Coder (thinking not supported) | qwen3-coder-next, qwen3-coder-plus, qwen3-coder-plus-2025-09-23, qwen3-coder-flash |
| Qwen-VL (thinking not supported) | qwen3-vl-plus, qwen3-vl-flash, qwen-vl-max, qwen-vl-plus |

For information about model parameters and billing rules, see Models.

Detailed steps

Activate Model Studio

For first-time setup, activate Model Studio.

  1. Log on to the Model Studio console.

  2. If an activation prompt appears at the top of the page, activate Model Studio and claim your free quota. If no prompt appears, Model Studio is already activated.

After activation, claim your 90-day free quota for model inference. See Free quota for new users for details.
Note

Charges apply after you exceed the free quota or its validity period. To avoid charges, enable Free quota only. Actual fees are based on the prices displayed in the console.

Configure environment variables

Configure these environment variables for Anthropic compatibility:

  1. ANTHROPIC_BASE_URL: Set to https://dashscope-intl.aliyuncs.com/apps/anthropic.

  2. ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN: Set this to your Model Studio API key.

    Use either ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN for authentication. This guide uses ANTHROPIC_API_KEY.

macOS

  1. Run this command to check your shell type:

    echo $SHELL
  2. Set environment variables for your shell:

    Zsh

    # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
    echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.zshrc
    echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.zshrc

    Bash

    # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
    echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.bash_profile
    echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.bash_profile
  3. Apply the environment variables:

    Zsh

    source ~/.zshrc

    Bash

    source ~/.bash_profile
  4. Verify environment variables in a new terminal:

    echo $ANTHROPIC_BASE_URL
    echo $ANTHROPIC_API_KEY

Windows

  1. Set Model Studio's base URL and API key as environment variables:

    CMD

    1. Set environment variables in CMD:

      # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
      setx ANTHROPIC_API_KEY "YOUR_DASHSCOPE_API_KEY"
      setx ANTHROPIC_BASE_URL "https://dashscope-intl.aliyuncs.com/apps/anthropic"
    2. Verify in a new CMD window:

      echo %ANTHROPIC_API_KEY%
      echo %ANTHROPIC_BASE_URL%

    PowerShell

    1. Set environment variables in PowerShell:

      # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API Key.
      [Environment]::SetEnvironmentVariable("ANTHROPIC_API_KEY", "YOUR_DASHSCOPE_API_KEY", [EnvironmentVariableTarget]::User)
      [Environment]::SetEnvironmentVariable("ANTHROPIC_BASE_URL", "https://dashscope-intl.aliyuncs.com/apps/anthropic", [EnvironmentVariableTarget]::User)
    2. Verify in a new PowerShell window:

      echo $env:ANTHROPIC_API_KEY
      echo $env:ANTHROPIC_BASE_URL
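
Whichever platform you set up, a quick way to confirm the variables are visible from Python before calling the API. The `missing_env` helper is hypothetical, not part of any SDK:

```python
import os

def missing_env(names=("ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY")):
    """Return the environment variable names that are unset or empty."""
    return [name for name in names if not os.getenv(name)]

if __name__ == "__main__":
    gaps = missing_env()
    print("All set" if not gaps else f"Missing: {', '.join(gaps)}")
```

If anything is reported missing, open a new terminal (or re-run the `setx`/`source` step) and check again.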

API call - Text chat

cURL

curl -X POST "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${ANTHROPIC_API_KEY}" \
  -d '{
    "model": "qwen-plus",
    "max_tokens": 1024,
    "stream": true,
    "thinking": {
      "type": "enabled",
      "budget_tokens": 1024
    },
    "system": "You are a helpful assistant",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
}'

Python

  1. Install Anthropic SDK

    pip install anthropic
  2. Example

    import anthropic
    import os
    
    client = anthropic.Anthropic(
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        base_url=os.getenv("ANTHROPIC_BASE_URL"),
    )
    
    message = client.messages.create(
        model="qwen-plus",
        max_tokens=1024,
        stream=True,
        system="You are a helpful assistant",
        # Deep thinking is supported by some models only. See the supported models list.
        thinking={
            "type": "enabled",
            "budget_tokens": 1024
        },
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Who are you?"
                    }
                ]
            }
        ]
    )
    
    print("=== Thinking Process ===")
    first_text = True
    for chunk in message:
        if chunk.type == "content_block_delta":
            if hasattr(chunk.delta, 'thinking'):
                print(chunk.delta.thinking, end="", flush=True)
            elif hasattr(chunk.delta, 'text'):
                if first_text:
                    print("\n\n=== Answer ===")
                    first_text = False
                print(chunk.delta.text, end="", flush=True)
    

TypeScript

  1. Install Anthropic TypeScript SDK

    npm install @anthropic-ai/sdk
  2. Example

    import Anthropic from "@anthropic-ai/sdk";
    
    async function main() {
      const anthropic = new Anthropic({
        apiKey: process.env.ANTHROPIC_API_KEY,
        baseURL: process.env.ANTHROPIC_BASE_URL,
      });
    
      const stream = await anthropic.messages.create({
        model: "qwen-plus",
        max_tokens: 1024,
        stream: true,
        // Deep thinking is supported by some models only. See the list of supported models.
        thinking: { type: "enabled", budget_tokens: 1024 },
        system: "You are a helpful assistant",
        messages: [{ 
          role: "user", 
          content: [
            {
              type: "text",
              text: "Who are you?"
            }
          ]
        }]
      });
    
      console.log("=== Thinking Process ===");
      let firstText = true;
    
      for await (const chunk of stream) {
        if (chunk.type === "content_block_delta") {
          if ('thinking' in chunk.delta) {
            process.stdout.write(chunk.delta.thinking);
          } else if ('text' in chunk.delta) {
            if (firstText) {
              console.log("\n\n=== Answer ===");
              firstText = false;
            }
            process.stdout.write(chunk.delta.text);
          }
        }
      }
      console.log();
    }
    
    main().catch(console.error);
    

Compatibility details

HTTP header

| Field | Supported |
| --- | --- |
| x-api-key | Supported |
| Authorization: Bearer | Supported |
| anthropic-beta / anthropic-version | Not supported |

Basic fields

| Field | Supported | Description | Example |
| --- | --- | --- | --- |
| model | Supported | Model name. See Supported models for the list. | qwen-plus |
| max_tokens | Supported | Maximum number of tokens to generate. | 1024 |
| container | Not supported | - | - |
| mcp_servers | Not supported | - | - |
| metadata | Not supported | - | - |
| service_tier | Not supported | - | - |
| stop_sequences | Supported | Custom text sequences that stop generation. | ["}"] |
| stream | Supported | Streaming output. | true |
| system | Supported | System prompt. | You are a helpful assistant |
| temperature | Supported | Controls generation diversity. | 1.0 |
| thinking | Supported | When enabled, the model reasons before responding to improve accuracy. Not all models support this. See Supported models. | {"type": "enabled", "budget_tokens": 1024} |
| top_k | Supported | Number of candidate tokens sampled during generation. | 10 |
| top_p | Supported | Probability threshold for nucleus sampling. | 0.1 |

Set only temperature or top_p (both control diversity). See Text generation model overview.
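
Taken together, the supported basic fields map onto a request body like the following sketch. All values are illustrative; note that only temperature is set, not top_p:

```python
import json

# Illustrative request body built only from fields marked Supported above.
payload = {
    "model": "qwen-plus",
    "max_tokens": 1024,
    "stream": True,
    "system": "You are a helpful assistant",
    "temperature": 0.7,          # set temperature OR top_p, not both
    "stop_sequences": ["}"],
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [{"role": "user", "content": "Who are you?"}],
}
body = json.dumps(payload)  # what is ultimately POSTed to /v1/messages
```

The Anthropic SDKs build this body for you from the keyword arguments shown in the examples above; the dict form is useful when calling the endpoint with cURL or a plain HTTP client.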

Tool fields

tools

| Field | Supported |
| --- | --- |
| name | Supported |
| input_schema | Supported |
| description | Supported |
| cache_control | Supported |

tool_choice

| Value | Supported |
| --- | --- |
| none | Supported |
| auto | Supported |
| any | Supported |
| tool | Supported |
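
The tool fields above can be combined in a round trip. The `get_weather` tool and the `run_tool` handler below are hypothetical stand-ins; the request and response shapes follow the Anthropic Messages API fields marked as supported:

```python
import json

# Hypothetical tool definition; name, description, and input_schema are all
# marked Supported in the table above.
TOOLS = [
    {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def run_tool(name, tool_input):
    """Local stand-in for real tool execution (returns made-up data)."""
    if name == "get_weather":
        return json.dumps({"city": tool_input["city"], "temp_c": 21})
    raise ValueError(f"unknown tool: {name}")

def tool_result_message(tool_use_id, result):
    """Build the user turn that carries a tool_result block back to the model."""
    return {
        "role": "user",
        "content": [
            {"type": "tool_result", "tool_use_id": tool_use_id, "content": result}
        ],
    }

if __name__ == "__main__":
    # Requires `pip install anthropic` and the environment variables set earlier.
    import os
    import anthropic

    client = anthropic.Anthropic(
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        base_url=os.getenv("ANTHROPIC_BASE_URL"),
    )
    msg = client.messages.create(
        model="qwen-plus",
        max_tokens=1024,
        tools=TOOLS,
        tool_choice={"type": "auto"},
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    )
    # If the model requested the tool, run it locally and build the follow-up turn.
    for block in msg.content:
        if block.type == "tool_use":
            result = run_tool(block.name, block.input)
            print(tool_result_message(block.id, result))
```

In a real application you would append `tool_result_message(...)` to the conversation and call `messages.create` again so the model can use the tool output in its final answer.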

Message fields

| Field | Type | Subfield | Supported | Description |
| --- | --- | --- | --- | --- |
| content | string | - | Supported | Plain text content. |
| content | array, type="text" | text | Supported | Text block content. |
| content | array, type="text" | cache_control | Supported | Controls caching for this text block. |
| content | array, type="text" | citations | Not supported | - |
| content | array, type="image" | - | Not supported | - |
| content | array, type="video" | - | Not supported | - |
| content | array, type="document" | - | Not supported | - |
| content | array, type="search_result" | - | Not supported | - |
| content | array, type="thinking" | - | Not supported | - |
| content | array, type="redacted_thinking" | - | Not supported | - |
| content | array, type="tool_use" | id | Supported | Unique identifier for the tool call. |
| content | array, type="tool_use" | input | Supported | Parameter object passed when calling the tool. |
| content | array, type="tool_use" | name | Supported | Name of the tool being called. |
| content | array, type="tool_use" | cache_control | Supported | Controls caching for this tool call. |
| content | array, type="tool_result" | tool_use_id | Supported | The ID of the tool_use that corresponds to this result. |
| content | array, type="tool_result" | content | Supported | Result returned after tool execution, usually a string or JSON string. |
| content | array, type="tool_result" | cache_control | Supported | Controls caching for this tool result. |
| content | array, type="tool_result" | is_error | Not supported | - |
| content | array, type="server_tool_use" | - | Not supported | - |
| content | array, type="web_search_tool_result" | - | Not supported | - |
| content | array, type="code_execution_tool_result" | - | Not supported | - |
| content | array, type="mcp_tool_use" | - | Not supported | - |
| content | array, type="mcp_tool_result" | - | Not supported | - |
| content | array, type="container_upload" | - | Not supported | - |
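
cache_control is the one subfield that recurs across the supported block types (text, tool_use, tool_result). A sketch of a user message that marks a long, reusable text block for caching; the "ephemeral" cache type follows the upstream Anthropic API and is an assumption here, so verify it against current provider documentation:

```python
# Illustrative message: the long reference text is marked cacheable so repeated
# requests can reuse it; the short question block is left uncached.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Here is a long reference document: ...",
                "cache_control": {"type": "ephemeral"},  # assumed cache type
            },
            {"type": "text", "text": "Summarize the document above."},
        ],
    }
]
```

Placing the stable, cacheable block first and the varying question last maximizes the reusable prefix across requests.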

Error codes

| HTTP status code | Error type | Description |
| --- | --- | --- |
| 400 | invalid_request_error | Invalid request format or content. Common causes: missing required parameters or wrong parameter types. |
| 400 | Arrearage | The account has an overdue payment and service is suspended. Top up the account and retry. |
| 403 | authentication_error | The API key is invalid. Common causes: missing or incorrect API key in the request header. |
| 404 | not_found_error | Requested resource not found. Common causes: endpoint typo or invalid model name. |
| 429 | rate_limit_error | Rate limit reached. Reduce request frequency. |
| 500 | api_error | Internal server error. Retry later. |
| 529 | overloaded_error | API server overloaded and cannot process new requests. |
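
Of the codes above, only 429, 500, and 529 are transient; the 4xx codes indicate a problem in the request or account that retrying will not fix. A hypothetical retry helper with exponential backoff; the `send` callable stands in for the actual HTTP request:

```python
import time

RETRYABLE = {429, 500, 529}  # rate_limit_error, api_error, overloaded_error

def call_with_retry(send, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call send() (any callable returning (status_code, body)) and retry
    only the transient failures from the table above, doubling the delay
    between attempts. `sleep` is injectable for testing."""
    status, body = send()
    for attempt in range(retries):
        if status < 400:
            return status, body
        if status not in RETRYABLE:
            break  # 400/403/404: fix the request or account instead
        sleep(base_delay * (2 ** attempt))
        status, body = send()
    return status, body
```

With the official SDK, the equivalent signals surface as exceptions carrying the same status codes, so the retry decision (429/500/529 only) stays the same.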