The Qwen models on Alibaba Cloud Model Studio are compatible with Anthropic APIs. To migrate your existing Anthropic applications, just modify the following parameters.
ANTHROPIC_API_KEY (or ANTHROPIC_AUTH_TOKEN): Replace the value with your Model Studio API key.
ANTHROPIC_BASE_URL: Replace the value with: https://dashscope-intl.aliyuncs.com/apps/anthropic.
model: Replace the value with a supported model name from Model Studio, such as
qwen-plus. See Supported models.
This topic is applicable only to the International Edition (Singapore region).
Getting started
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# To migrate to Model Studio, configure the ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL environment variables, and change the model parameter below.
# For parameter compatibility, see Anthropic API compatibility details.
message = client.messages.create(
    model="qwen-plus",  # Set the model to qwen-plus.
    max_tokens=1024,
    # The thinking parameter is not supported by Qwen-Max and Qwen-Coder series models.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)
# With thinking enabled, the response may start with a thinking block,
# so print only the text blocks.
for block in message.content:
    if block.type == "text":
        print(block.text)
Supported models
The Anthropic-compatible API service provided by Model Studio supports several models from the Qwen series:
| Model series | Supported model name |
| --- | --- |
| Qwen-Max (only qwen3-max-preview supports thinking mode) | qwen3-max, qwen3-max-2025-09-23, qwen3-max-preview |
| Qwen-Plus | qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11 |
| Qwen-Flash | qwen-flash, qwen-flash-2025-07-28 |
| Qwen-Turbo | qwen-turbo, qwen-turbo-latest |
| Qwen-Coder (thinking mode not supported) | qwen3-coder-plus, qwen3-coder-plus-2025-09-23, qwen3-coder-flash |
For more information about model parameters and billing rules, see Models.
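Whether to send the thinking parameter therefore depends on the model you choose. The following sketch (an illustrative helper, not part of any SDK; it simply encodes the series rules from the table above) shows one way to make that decision in code:

# Hypothetical helper: decide whether to pass the `thinking` parameter.
# Per the table above, Qwen-Max models (except qwen3-max-preview) and
# Qwen-Coder models do not support thinking mode.
def supports_thinking(model: str) -> bool:
    if model == "qwen3-max-preview":
        return True
    if model.startswith("qwen3-max") or "coder" in model:
        return False
    return True

# Example: only include `thinking` when the chosen model supports it.
params = {"model": "qwen3-coder-plus", "max_tokens": 1024}
if supports_thinking(params["model"]):
    params["thinking"] = {"type": "enabled", "budget_tokens": 1024}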
Detailed steps
Activate Model Studio
If this is your first time using Model Studio, follow these steps to activate the service.
Log on to the Model Studio console.
If a prompt asking you to activate the service is displayed at the top of the page, activate Model Studio and claim your free quota. If no such prompt is displayed, the service is already active.
After you activate Model Studio for the first time, you can claim a new user free quota for model inference, valid for 90 days. For instructions on how to claim the quota and for more details, see Free quota for new users.
Charges are incurred if your free quota is depleted or expires. To avoid these charges, you can enable the Free quota only feature. The actual fees are subject to the pricing displayed in the console and your final bill.
Set environment variables
To access the Model Studio service through the Anthropic-compatible API, set the following two environment variables.
ANTHROPIC_BASE_URL: Set this to https://dashscope-intl.aliyuncs.com/apps/anthropic.
ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN: Set this to your Model Studio API key.
Both ANTHROPIC_API_KEY and ANTHROPIC_AUTH_TOKEN can be used for access authentication. You only need to set one of them. This topic uses ANTHROPIC_API_KEY as an example.
macOS
In the terminal, run the following command to check your default shell type.

echo $SHELL

Set the environment variables based on your shell type. The commands are as follows:

Zsh

# Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.zshrc
echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.zshrc

Bash

# Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.bash_profile
echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.bash_profile

In the terminal, run the following command to apply the environment variables.

Zsh

source ~/.zshrc

Bash

source ~/.bash_profile

Open a new terminal and run the following commands to verify that the environment variables are in effect.

echo $ANTHROPIC_BASE_URL
echo $ANTHROPIC_API_KEY
Windows
On Windows, you can use either CMD or PowerShell to set the base URL and API key provided by Model Studio as environment variables.
CMD
In CMD, run the following commands to set the environment variables.

:: Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
setx ANTHROPIC_API_KEY "YOUR_DASHSCOPE_API_KEY"
setx ANTHROPIC_BASE_URL "https://dashscope-intl.aliyuncs.com/apps/anthropic"

Open a new CMD window and run the following commands to verify that the environment variables are in effect.

echo %ANTHROPIC_API_KEY%
echo %ANTHROPIC_BASE_URL%
PowerShell
In PowerShell, run the following commands to set the environment variables.
# Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
[Environment]::SetEnvironmentVariable("ANTHROPIC_API_KEY", "YOUR_DASHSCOPE_API_KEY", [EnvironmentVariableTarget]::User)
[Environment]::SetEnvironmentVariable("ANTHROPIC_BASE_URL", "https://dashscope-intl.aliyuncs.com/apps/anthropic", [EnvironmentVariableTarget]::User)

Open a new PowerShell window and run the following commands to verify that the environment variables are in effect.

echo $env:ANTHROPIC_API_KEY
echo $env:ANTHROPIC_BASE_URL
API call
cURL
curl -X POST "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages" \
-H "Content-Type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-d '{
"model": "qwen-plus",
"max_tokens": 1024,
"stream": true,
"thinking": {
"type": "enabled",
"budget_tokens": 1024
},
"system": "You are a helpful assistant",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Who are you?"
}
]
}
]
}'
Python
Install the Anthropic SDK
pip install anthropic
Code example
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    stream=True,
    system="You are a helpful assistant",
    # The thinking parameter is not supported by Qwen-Max and Qwen-Coder series models.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)

print("=== Thinking process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Response ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)
TypeScript
Install the Anthropic TypeScript SDK
npm install @anthropic-ai/sdk
Code example
import Anthropic from "@anthropic-ai/sdk";

async function main() {
  const anthropic = new Anthropic({
    apiKey: process.env.ANTHROPIC_API_KEY,
    baseURL: process.env.ANTHROPIC_BASE_URL,
  });

  const stream = await anthropic.messages.create({
    model: "qwen-plus",
    max_tokens: 1024,
    stream: true,
    // The thinking parameter is not supported by Qwen-Max and Qwen-Coder series models.
    thinking: {
      type: "enabled",
      budget_tokens: 1024
    },
    system: "You are a helpful assistant",
    messages: [{
      role: "user",
      content: [
        {
          type: "text",
          text: "Who are you?"
        }
      ]
    }]
  });

  console.log("=== Thinking process ===");
  let firstText = true;
  for await (const chunk of stream) {
    if (chunk.type === "content_block_delta") {
      if ('thinking' in chunk.delta) {
        process.stdout.write(chunk.delta.thinking);
      } else if ('text' in chunk.delta) {
        if (firstText) {
          console.log("\n\n=== Response ===");
          firstText = false;
        }
        process.stdout.write(chunk.delta.text);
      }
    }
  }
  console.log();
}

main().catch(console.error);
Compatibility details
HTTP header
| Field | Supported |
| --- | --- |
| x-api-key | ✓ |
| Authorization Bearer | ✓ |
| anthropic-beta / anthropic-version | |
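As described in Set environment variables, you can authenticate with ANTHROPIC_AUTH_TOKEN instead of ANTHROPIC_API_KEY; the Anthropic Python SDK then sends an Authorization Bearer header rather than x-api-key. A minimal sketch (it assumes ANTHROPIC_AUTH_TOKEN holds your Model Studio API key and ANTHROPIC_BASE_URL holds the base URL from this topic):

import anthropic
import os

# Bearer authentication: auth_token makes the SDK send an
# "Authorization: Bearer <token>" header instead of "x-api-key".
client = anthropic.Anthropic(
    auth_token=os.getenv("ANTHROPIC_AUTH_TOKEN"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

message = client.messages.create(
    model="qwen-plus",
    max_tokens=256,
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(message.content[0].text)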
Basic fields
| Field | Supported | Description | Example value |
| --- | --- | --- | --- |
| model | ✓ | The model name. See Supported models. | qwen-plus |
| max_tokens | ✓ | The maximum number of tokens to generate. | 1024 |
| container | ✗ | - | - |
| mcp_servers | ✗ | - | - |
| metadata | ✗ | - | - |
| service_tier | ✗ | - | - |
| stop_sequences | ✓ | Custom text sequences that cause the model to stop generating. | ["}"] |
| stream | ✓ | Streaming output. | True |
| system | ✓ | System prompt. | You are a helpful assistant |
| temperature | ✓ | The temperature, which controls the diversity of the generated text. The value must be in the range [0, 2). | 1.0 |
| thinking | ✓ | Thinking mode (not supported by Qwen-Max and Qwen-Coder series models). | {"type": "enabled", "budget_tokens": 1024} |
| top_k | ✓ | The size of the candidate set for sampling during generation. | 10 |
| top_p | ✓ | The probability threshold for nucleus sampling, which controls the diversity of the generated text. The value must be in the range [0, 1). | 0.1 |
Because both the temperature and top_p parameters control the diversity of the generated text, set only one of them. For more information, see Overview of text generation models.
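For example, a request that tunes diversity through temperature alone might look like the following (a minimal sketch; it reuses the client setup from Getting started and leaves top_p at its default):

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# Set temperature only and leave top_p at its default, so the two
# diversity controls do not conflict.
message = client.messages.create(
    model="qwen-plus",
    max_tokens=512,
    temperature=0.7,
    messages=[{"role": "user", "content": "Write a two-line poem about the sea."}],
)
print(message.content[0].text)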
Tool fields
tools
| Field | Supported |
| --- | --- |
| name | |
| input_schema | |
| description | |
| cache_control | |
tool_choice
| Value | Supported |
| --- | --- |
| none | |
| auto | |
| any | |
| tool | |
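The following sketch shows how these tool fields fit together in a request: a tool definition with name, description, and input_schema, plus a tool_choice value. The get_weather tool is a hypothetical example used only for illustration.

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# Hypothetical tool definition using the supported fields:
# name, description, and input_schema.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "The city name."}
            },
            "required": ["city"],
        },
    }
]

message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    tools=tools,
    # "auto" lets the model decide whether to call a tool;
    # {"type": "tool", "name": "get_weather"} would force this specific tool.
    tool_choice={"type": "auto"},
    messages=[{"role": "user", "content": "What is the weather in Singapore?"}],
)

# Print any tool_use blocks the model returned.
for block in message.content:
    if block.type == "tool_use":
        print(block.name, block.input)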
Message fields
| Field | Type | Subfield | Supported |
| --- | --- | --- | --- |
| content | string | - | |
| | array, type="text" | text | |
| | | cache_control | |
| | | citations | |
| | array, type="image" | - | |
| | array, type="document" | - | |
| | array, type="search_result" | - | |
| | array, type="thinking" | - | |
| | array, type="redacted_thinking" | - | |
| | array, type="tool_use" | id | |
| | | input | |
| | | name | |
| | | cache_control | |
| | array, type="tool_result" | tool_use_id | |
| | | content | |
| | | cache_control | |
| | | is_error | |
| | array, type="server_tool_use" | - | |
| | array, type="web_search_tool_result" | - | |
| | array, type="code_execution_tool_result" | - | |
| | array, type="mcp_tool_use" | - | |
| | array, type="mcp_tool_result" | - | |
| | array, type="container_upload" | - | |
Error codes
| HTTP status code | API error code | Description |
| --- | --- | --- |
| 400 | invalid_request_error | The request format or content is invalid. This can be caused by missing required parameters or incorrect data types for parameter values. |
| 401 | authentication_error | The API key is invalid. This can be caused by a missing API key in the request header or an incorrect API key. |
| 403 | permission_error | The API key does not have permission to access the specified resource. This can be caused by an account tier that does not have access to a specific model or an operation that is not allowed for the account. |
| 404 | not_found_error | The requested resource was not found. This can be caused by a misspelled endpoint path or a model name in the request that does not exist. |
| 413 | request_too_large | The request exceeds the maximum allowed size in bytes. The maximum request size for the standard API endpoint is 32 MB. |
| 429 | rate_limit_error | The account has reached its rate limit. Reduce the request frequency. |
| 500 | api_error | A general internal server error occurred. Retry the request later. |
| 529 | overloaded_error | The API server is currently overloaded and cannot process new requests. |
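In the Python SDK, these error codes surface as exceptions. A minimal sketch of handling the most common ones (the exception classes shown come from the anthropic package):

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

try:
    message = client.messages.create(
        model="qwen-plus",
        max_tokens=256,
        messages=[{"role": "user", "content": "Who are you?"}],
    )
    print(message.content[0].text)
except anthropic.AuthenticationError as e:
    # 401 authentication_error: check your Model Studio API key.
    print("Authentication failed:", e)
except anthropic.RateLimitError as e:
    # 429 rate_limit_error: reduce the request frequency and retry later.
    print("Rate limited:", e)
except anthropic.APIStatusError as e:
    # Other non-2xx responses, such as 500 api_error or 529 overloaded_error.
    print("API error:", e.status_code, e)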