Alibaba Cloud Model Studio's Qwen series models support Anthropic API-compatible interfaces. To migrate your existing Anthropic application to Alibaba Cloud Model Studio, modify the following parameters.
ANTHROPIC_API_KEY (or ANTHROPIC_AUTH_TOKEN): Replace this value with your Model Studio API key.
ANTHROPIC_BASE_URL: Replace this value with the Model Studio-compatible endpoint address: https://dashscope-intl.aliyuncs.com/apps/anthropic.
Model name (model): Replace this with a model name supported by Alibaba Cloud Model Studio, such as qwen-plus. For more information, see Supported Models.
This topic is applicable only to the International Edition (Singapore region).
Quick Integration
Text Conversation
import anthropic
import os
client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)
# To migrate to Model Studio: Configure the ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL environment variables, and modify the model parameter below.
# For parameter compatibility, see Anthropic API Compatibility Details.
message = client.messages.create(
    model="qwen-plus",  # Set the model to qwen-plus
    max_tokens=1024,
    # Deep thinking is supported by some models only. See the list of supported models.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    # Streaming output
    stream=True,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)
print("=== Thinking Process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Answer ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)
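If you do not need streaming output, you can omit the stream parameter and read the complete reply from the returned message object. The following is a minimal sketch, assuming the non-streaming path behaves like the standard Anthropic Messages API (thinking is omitted here because only some models support it):
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# Non-streaming call: the full reply is returned at once in message.content.
message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Who are you?"}],
)

# message.content is a list of content blocks; print the text blocks.
for block in message.content:
    if block.type == "text":
        print(block.text)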
Supported Models
Alibaba Cloud Model Studio's Anthropic API-compatible service supports the following Qwen series models:
| Model Series | Supported Model Names (model) |
| --- | --- |
| Qwen Max (some models support thinking mode) | qwen3-max, qwen3-max-2026-01-23 (supports thinking mode), qwen3-max-preview (supports thinking mode) |
| Qwen Plus | qwen3.5-plus, qwen3.5-plus-2026-02-15, qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11 |
| Qwen Flash | qwen-flash, qwen-flash-2025-07-28 |
| Qwen Turbo | qwen-turbo, qwen-turbo-latest |
| Qwen Coder (does not support thinking mode) | qwen3-coder-plus, qwen3-coder-plus-2025-09-23, qwen3-coder-flash |
| Qwen VL (does not support thinking mode) | qwen-vl-max, qwen-vl-plus |
| Qwen open source | qwen3.5-397b-a17b |
For information about model parameters and billing rules, see Model List.
Detailed Steps
Activate Alibaba Cloud Model Studio
If you are accessing the Alibaba Cloud Model Studio service platform for the first time, activate it by following these steps.
Log on to the Alibaba Cloud Model Studio console.
If an activation prompt is displayed at the top of the page, you can activate the Alibaba Cloud Model Studio model service and claim your free quota. If this prompt does not appear, you have already activated the service.
After you activate Alibaba Cloud Model Studio for the first time, you can claim a new user free quota (valid for 90 days) for model inference services. For more information, see New User Free Quota.
You will be charged if you exceed the free quota or its validity period. To prevent these charges, you can enable the Stop upon free quota exhaustion feature. The actual fees are subject to the real-time quotes in the console and the final bill.
Configure Environment Variables
To access Alibaba Cloud Model Studio's model service using the Anthropic API-compatible method, configure the following two environment variables.
ANTHROPIC_BASE_URL: Set this to https://dashscope-intl.aliyuncs.com/apps/anthropic.
ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN: Set this to your Alibaba Cloud Model Studio API key. Either variable can be used for authentication, and you only need to set one of them. This topic uses ANTHROPIC_API_KEY as an example.
macOS
You can run the following command in the terminal to view the default shell type.
echo $SHELL
You can set environment variables based on your shell type as follows:
Zsh
# Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.zshrc
echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.zshrc
Bash
# Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.bash_profile
echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.bash_profile
You can run the following command in the terminal to apply the environment variables.
Zsh
source ~/.zshrc
Bash
source ~/.bash_profile
You can open a new terminal and run the following commands to check whether the environment variables are applied.
echo $ANTHROPIC_BASE_URL
echo $ANTHROPIC_API_KEY
Windows
In Windows, you can set the base URL and API key provided by Alibaba Cloud Model Studio as environment variables.
CMD
You can run the following commands in CMD to set environment variables.
REM Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
setx ANTHROPIC_API_KEY "YOUR_DASHSCOPE_API_KEY"
setx ANTHROPIC_BASE_URL "https://dashscope-intl.aliyuncs.com/apps/anthropic"
You can open a new CMD window and run the following commands to check whether the environment variables are applied.
echo %ANTHROPIC_API_KEY%
echo %ANTHROPIC_BASE_URL%
PowerShell
You can run the following commands in PowerShell to set environment variables.
# Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
[Environment]::SetEnvironmentVariable("ANTHROPIC_API_KEY", "YOUR_DASHSCOPE_API_KEY", [EnvironmentVariableTarget]::User)
[Environment]::SetEnvironmentVariable("ANTHROPIC_BASE_URL", "https://dashscope-intl.aliyuncs.com/apps/anthropic", [EnvironmentVariableTarget]::User)
You can open a new PowerShell window and run the following commands to check whether the environment variables are applied.
echo $env:ANTHROPIC_API_KEY
echo $env:ANTHROPIC_BASE_URL
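On either operating system, you can also confirm from Python that the two variables are visible to your application before making an API call. A minimal sketch:
import os

# Check that both environment variables are set in the current process.
for name in ("ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY"):
    print(f"{name}: {'set' if os.getenv(name) else 'NOT set'}")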
API Call - Text Conversation
cURL
curl -X POST "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages" \
-H "Content-Type: application/json" \
-H "x-api-key: ${ANTHROPIC_API_KEY}" \
-d '{
"model": "qwen-plus",
"max_tokens": 1024,
"stream": true,
"thinking": {
"type": "enabled",
"budget_tokens": 1024
},
"system": "You are a helpful assistant",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Who are you?"
}
]
}
]
}'
Python
Install the Anthropic SDK
pip install anthropic
Code example
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)
message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    stream=True,
    system="You are a helpful assistant",
    # Deep thinking is supported by some models only. See the list of supported models.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)
print("=== Thinking Process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Answer ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)
TypeScript
Install the Anthropic TypeScript SDK
npm install @anthropic-ai/sdk
Code example
import Anthropic from "@anthropic-ai/sdk";

async function main() {
    const anthropic = new Anthropic({
        apiKey: process.env.ANTHROPIC_API_KEY,
        baseURL: process.env.ANTHROPIC_BASE_URL,
    });
    const stream = await anthropic.messages.create({
        model: "qwen-plus",
        max_tokens: 1024,
        stream: true,
        // Deep thinking is supported by some models only. See the list of supported models.
        thinking: {
            type: "enabled",
            budget_tokens: 1024
        },
        system: "You are a helpful assistant",
        messages: [{
            role: "user",
            content: [
                { type: "text", text: "Who are you?" }
            ]
        }]
    });
    console.log("=== Thinking Process ===");
    let firstText = true;
    for await (const chunk of stream) {
        if (chunk.type === "content_block_delta") {
            if ('thinking' in chunk.delta) {
                process.stdout.write(chunk.delta.thinking);
            } else if ('text' in chunk.delta) {
                if (firstText) {
                    console.log("\n\n=== Answer ===");
                    firstText = false;
                }
                process.stdout.write(chunk.delta.text);
            }
        }
    }
    console.log();
}

main().catch(console.error);
Anthropic API compatibility details
HTTP header
| Field | Supported |
| --- | --- |
| x-api-key | Yes |
| Authorization Bearer | Yes |
| anthropic-beta/anthropic-version | |
Basic fields
| Field | Supported | Description | Example value |
| --- | --- | --- | --- |
| model | Yes | The model name. For the supported models, see Supported Models. | qwen-plus |
| max_tokens | Yes | The maximum number of tokens to generate. | 1024 |
| container | No | - | - |
| mcp_servers | No | - | - |
| metadata | No | - | - |
| service_tier | No | - | - |
| stop_sequences | Yes | A custom text sequence that causes the model to stop generating text. | ["}"] |
| stream | Yes | Streaming output. | True |
| system | Yes | The system prompt. | You are a helpful assistant |
| temperature | Yes | The temperature coefficient. It controls the diversity of the generated text. | 1.0 |
| thinking | Yes | The thinking mode. If you enable this mode, the model reasons before generating a reply to improve accuracy. Some models do not support this feature. For more information, see Supported Models. | {"type": "enabled", "budget_tokens": 1024} |
| top_k | Yes | The size of the candidate set for sampling during generation. | 10 |
| top_p | Yes | The probability threshold for nucleus sampling. It controls the diversity of the generated text. | 0.1 |
Because both temperature and top_p control text diversity, set only one of these parameters. For more information, see Text generation model overview.
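For example, the following sketch passes temperature together with stop_sequences and leaves top_p unset. The model name and prompt are placeholders; adjust them as needed:
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# Use temperature OR top_p, not both; stop_sequences ends generation early
# when the model emits a listed string.
message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    temperature=0.3,        # lower values make the output more deterministic
    stop_sequences=["}"],   # stop as soon as "}" is generated
    messages=[{"role": "user", "content": "Describe yourself as a short JSON object."}],
)
print(message.content[0].text)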
Tool fields
tools
| Field | Supported |
| --- | --- |
| name | Yes |
| input_schema | Yes |
| description | Yes |
| cache_control | Yes |
tool_choice
| Value | Supported |
| --- | --- |
| none | Yes |
| auto | Yes |
| any | Yes |
| tool | Yes |
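The tools fields and tool_choice values above follow the standard Anthropic function-calling format. The following is a minimal sketch of a tool-call request; the get_weather tool is hypothetical and only illustrates the structure:
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# Hypothetical tool definition using the supported fields: name, description, input_schema.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}]

message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    tools=tools,
    tool_choice={"type": "auto"},  # let the model decide whether to call the tool
    messages=[{"role": "user", "content": "What is the weather in Singapore?"}],
)

# A tool call, if any, appears as a tool_use content block in the response.
for block in message.content:
    if block.type == "tool_use":
        print("Tool:", block.name, "Input:", block.input)
    elif block.type == "text":
        print(block.text)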
Message fields
| Field | Type | Subfield | Supported | Description |
| --- | --- | --- | --- | --- |
| content | string | - | Yes | Plain text content. |
| | array, type="text" | text | Yes | The content of the text block. |
| | | cache_control | Yes | Controls the caching behavior of this text block. |
| | | citations | No | - |
| | array, type="image" | - | No | - |
| | array, type="video" | - | No | - |
| | array, type="document" | - | No | - |
| | array, type="search_result" | - | No | - |
| | array, type="thinking" | - | No | - |
| | array, type="redacted_thinking" | - | No | - |
| | array, type="tool_use" | id | Yes | The unique identifier for the tool call. |
| | | input | Yes | The parameter object passed when the tool is called. |
| | | name | Yes | The name of the tool that is called. |
| | | cache_control | Yes | Controls the caching behavior of this tool call. |
| | array, type="tool_result" | tool_use_id | Yes | The ID of the tool_use block that this result corresponds to. |
| | | content | Yes | The result returned after the tool is executed. It is usually a string or a JSON string. |
| | | cache_control | Yes | Controls the caching behavior of this tool result. |
| | | is_error | No | - |
| | array, type="server_tool_use" | - | No | - |
| | array, type="web_search_tool_result" | - | No | - |
| | array, type="code_execution_tool_result" | - | No | - |
| | array, type="mcp_tool_use" | - | No | - |
| | array, type="mcp_tool_result" | - | No | - |
| | array, type="container_upload" | - | No | - |
Error codes
| HTTP status code | Error type | Description |
| --- | --- | --- |
| 400 | invalid_request_error | The request format or content is invalid. Common causes include missing required request parameters or incorrect data types for parameter values. |
| 400 | Arrearage | Your account has an overdue payment and the service is suspended. Recharge your account and try again. |
| 403 | authentication_error | The API key is invalid. Common causes include a missing API key in the request header or an incorrect API key. |
| 404 | not_found_error | The requested resource was not found. Common causes include a typo in the compatible endpoint URL or a request for a model name that does not exist. |
| 429 | rate_limit_error | Your account has reached its rate limit. Reduce your request frequency. |
| 500 | api_error | A general internal server error occurred. Try again later. |
| 529 | overloaded_error | The API server is overloaded and cannot process new requests at this time. |
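When you access the service through the Anthropic Python SDK, these HTTP errors are raised as SDK exceptions and can be handled in code. A minimal sketch:
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

try:
    message = client.messages.create(
        model="qwen-plus",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Who are you?"}],
    )
    print(message.content[0].text)
except anthropic.RateLimitError:
    # 429 rate_limit_error: reduce the request frequency and retry later.
    print("Rate limited; try again later.")
except anthropic.AuthenticationError as e:
    # authentication_error: check the API key and the base URL.
    print("Authentication failed:", e)
except anthropic.APIStatusError as e:
    # Other non-2xx responses, such as 400 invalid_request_error or 500 api_error.
    print("API returned an error:", e.status_code, e)
except anthropic.APIConnectionError as e:
    # The request never reached the server (network or endpoint issue).
    print("Connection error:", e)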