Migrate from Anthropic to Model Studio by updating these parameters:
- ANTHROPIC_API_KEY (or ANTHROPIC_AUTH_TOKEN): Replace with your Model Studio API key.
- ANTHROPIC_BASE_URL: Replace with the Model Studio-compatible endpoint https://dashscope-intl.aliyuncs.com/apps/anthropic.
- Model name (model): Replace with a supported model name, such as qwen-plus. See Supported models for details.

This topic applies only to the International Edition (Singapore region).
Quick integration
Text chat
import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

# Migration: Set ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL, then update the model below.
# See the Compatibility details section for full parameter support.
message = client.messages.create(
    model="qwen-plus",  # Set the model to qwen-plus
    max_tokens=1024,
    # Deep thinking is supported by some models only. See the supported models list.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    # Streaming output
    stream=True,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)

print("=== Thinking Process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Answer ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)
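The chunk-handling loop above depends only on the shape of the streamed events, not on the network call, so its logic can be exercised offline. A minimal sketch using mock content_block_delta events (the mock objects are illustrative stand-ins, not SDK types; the event shapes follow the Anthropic streaming format):

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Split streamed content_block_delta events into thinking and answer text."""
    thinking, answer = [], []
    for chunk in chunks:
        if chunk.type == "content_block_delta":
            if hasattr(chunk.delta, "thinking"):
                thinking.append(chunk.delta.thinking)
            elif hasattr(chunk.delta, "text"):
                answer.append(chunk.delta.text)
    return "".join(thinking), "".join(answer)

# Mock events standing in for the SDK's streamed chunks.
mock_chunks = [
    SimpleNamespace(type="content_block_delta", delta=SimpleNamespace(thinking="Let me see. ")),
    SimpleNamespace(type="content_block_delta", delta=SimpleNamespace(text="I am Qwen.")),
]
print(collect_stream(mock_chunks))  # ('Let me see. ', 'I am Qwen.')
```

The same `hasattr` checks used in the example above distinguish thinking deltas from text deltas, so thinking output always precedes the answer.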
Supported models
Supported Qwen models:

| Series | Model name (model) |
| --- | --- |
| Qwen-Max (some models support thinking) | qwen3-max, qwen3-max-2026-01-23 (supports thinking mode), qwen3-max-preview (supports thinking mode) |
| Qwen-Plus | qwen3.5-plus, qwen3.5-plus-2026-02-15, qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11 |
| Qwen-Flash | qwen-flash, qwen-flash-2025-07-28 |
| Qwen-Turbo | qwen-turbo, qwen-turbo-latest |
| Qwen-Coder (thinking not supported) | qwen3-coder-next, qwen3-coder-plus, qwen3-coder-plus-2025-09-23, qwen3-coder-flash |
| Qwen-VL (thinking not supported) | qwen3-vl-plus, qwen3-vl-flash, qwen-vl-max, qwen-vl-plus |

For information about model parameters and billing rules, see Models.
Detailed steps
Activate Model Studio
For first-time setup, activate Model Studio:
- Log on to the Model Studio console.
- If an activation prompt appears at the top of the page, activate Model Studio and claim your free quota. If no prompt appears, the service is already active.

After activation, claim your 90-day free quota for model inference. See Free quota for new users for details.
Charges apply after you exceed the quota or its validity period. To avoid charges, use only the free quota. Actual fees are based on the quotes and final billing shown in the console.
Configure environment variables
Configure these environment variables for Anthropic compatibility:
- ANTHROPIC_BASE_URL: Set to https://dashscope-intl.aliyuncs.com/apps/anthropic.
- ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN: Set to your Model Studio API key. Use either ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN for authentication. This guide uses ANTHROPIC_API_KEY.
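Since either variable works for authentication, client code can resolve whichever one is set before constructing the client. A minimal sketch (the `resolve_api_key` helper is illustrative, not part of the SDK):

```python
import os

def resolve_api_key():
    """Prefer ANTHROPIC_API_KEY, falling back to ANTHROPIC_AUTH_TOKEN."""
    key = os.getenv("ANTHROPIC_API_KEY") or os.getenv("ANTHROPIC_AUTH_TOKEN")
    if not key:
        raise RuntimeError(
            "Set ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN to your Model Studio API key."
        )
    return key

# Simulate a shell where only the auth-token variable is configured.
os.environ.pop("ANTHROPIC_API_KEY", None)
os.environ["ANTHROPIC_AUTH_TOKEN"] = "sk-example"  # placeholder value
print(resolve_api_key())  # sk-example
```

Failing fast with a clear message here is easier to diagnose than the authentication error the API would return for a missing key.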
macOS
- Run this command to check your shell type:
  echo $SHELL
- Set environment variables for your shell:
  Zsh
  # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
  echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.zshrc
  echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.zshrc
  Bash
  # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
  echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.bash_profile
  echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.bash_profile
- Apply the environment variables:
  Zsh
  source ~/.zshrc
  Bash
  source ~/.bash_profile
- Verify the environment variables in a new terminal:
  echo $ANTHROPIC_BASE_URL
  echo $ANTHROPIC_API_KEY
Windows
Set Model Studio's base URL and API key as environment variables.
CMD
- Set environment variables in CMD:
  :: Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
  setx ANTHROPIC_API_KEY "YOUR_DASHSCOPE_API_KEY"
  setx ANTHROPIC_BASE_URL "https://dashscope-intl.aliyuncs.com/apps/anthropic"
- Verify in a new CMD window:
  echo %ANTHROPIC_API_KEY%
  echo %ANTHROPIC_BASE_URL%
PowerShell
- Set environment variables in PowerShell:
  # Replace YOUR_DASHSCOPE_API_KEY with your Model Studio API key.
  [Environment]::SetEnvironmentVariable("ANTHROPIC_API_KEY", "YOUR_DASHSCOPE_API_KEY", [EnvironmentVariableTarget]::User)
  [Environment]::SetEnvironmentVariable("ANTHROPIC_BASE_URL", "https://dashscope-intl.aliyuncs.com/apps/anthropic", [EnvironmentVariableTarget]::User)
- Verify in a new PowerShell window:
  echo $env:ANTHROPIC_API_KEY
  echo $env:ANTHROPIC_BASE_URL
API call - Text chat
cURL
curl -X POST "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages" \
-H "Content-Type: application/json" \
-H "x-api-key: ${ANTHROPIC_API_KEY}" \
-d '{
"model": "qwen-plus",
"max_tokens": 1024,
"stream": true,
"thinking": {
"type": "enabled",
"budget_tokens": 1024
},
"system": "You are a helpful assistant",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Who are you?"
}
]
}
]
}'
Python
- Install the Anthropic SDK:
  pip install anthropic
- Example:

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url=os.getenv("ANTHROPIC_BASE_URL"),
)

message = client.messages.create(
    model="qwen-plus",
    max_tokens=1024,
    stream=True,
    system="You are a helpful assistant",
    # Deep thinking is supported by some models only. See the supported models list.
    thinking={
        "type": "enabled",
        "budget_tokens": 1024
    },
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
)

print("=== Thinking Process ===")
first_text = True
for chunk in message:
    if chunk.type == "content_block_delta":
        if hasattr(chunk.delta, 'thinking'):
            print(chunk.delta.thinking, end="", flush=True)
        elif hasattr(chunk.delta, 'text'):
            if first_text:
                print("\n\n=== Answer ===")
                first_text = False
            print(chunk.delta.text, end="", flush=True)
TypeScript
- Install the Anthropic TypeScript SDK:
  npm install @anthropic-ai/sdk
- Example:

import Anthropic from "@anthropic-ai/sdk";

async function main() {
  const anthropic = new Anthropic({
    apiKey: process.env.ANTHROPIC_API_KEY,
    baseURL: process.env.ANTHROPIC_BASE_URL,
  });

  const stream = await anthropic.messages.create({
    model: "qwen-plus",
    max_tokens: 1024,
    stream: true,
    // Deep thinking is supported by some models only. See the list of supported models.
    thinking: {
      type: "enabled",
      budget_tokens: 1024
    },
    system: "You are a helpful assistant",
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Who are you?" }
      ]
    }]
  });

  console.log("=== Thinking Process ===");
  let firstText = true;
  for await (const chunk of stream) {
    if (chunk.type === "content_block_delta") {
      if ('thinking' in chunk.delta) {
        process.stdout.write(chunk.delta.thinking);
      } else if ('text' in chunk.delta) {
        if (firstText) {
          console.log("\n\n=== Answer ===");
          firstText = false;
        }
        process.stdout.write(chunk.delta.text);
      }
    }
  }
  console.log();
}

main().catch(console.error);
Compatibility details
HTTP header

| Field | Supported |
| --- | --- |
| x-api-key | |
| Authorization Bearer | |
| anthropic-beta/anthropic-version | |
Basic fields

| Field | Supported | Description | Example |
| --- | --- | --- | --- |
| model | | Model name. See Supported models for the list. | qwen-plus |
| max_tokens | | Maximum tokens to generate. | 1024 |
| container | | - | - |
| mcp_servers | | - | - |
| metadata | | - | - |
| service_tier | | - | - |
| stop_sequences | | Custom text sequences that stop generation. | ["}"] |
| stream | | Streaming output. | True |
| system | | System prompt. | You are a helpful assistant |
| temperature | | Controls generation diversity. | 1.0 |
| thinking | | When enabled, the model reasons before responding to improve accuracy. Not all models support this. See Supported models. | {"type": "enabled", "budget_tokens": 1024} |
| top_k | | Number of candidates sampled during generation. | 10 |
| top_p | | Probability threshold for nucleus sampling. | 0.1 |

Set only temperature or top_p (both control diversity). See Text generation model overview.
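Because temperature and top_p both control diversity, a request builder can enforce setting at most one of them. A minimal sketch (the `sampling_params` helper is illustrative, not part of the SDK):

```python
def sampling_params(temperature=None, top_p=None):
    """Return sampling kwargs, allowing at most one diversity control."""
    if temperature is not None and top_p is not None:
        raise ValueError("Set only temperature or top_p, not both.")
    params = {}
    if temperature is not None:
        params["temperature"] = temperature
    if top_p is not None:
        params["top_p"] = top_p
    return params

print(sampling_params(temperature=1.0))  # {'temperature': 1.0}
```

The returned dict can be spread into `client.messages.create(**sampling_params(...), ...)` so the constraint is checked before the request is sent.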
Tool fields
tools

| Field | Supported |
| --- | --- |
| name | |
| input_schema | |
| description | |
| cache_control | |

tool_choice

| Value | Supported |
| --- | --- |
| none | |
| auto | |
| any | |
| tool | |
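A tool definition combines the fields listed above: a name, a JSON Schema under input_schema, and an optional description. A sketch of a minimal tools payload (the get_weather tool and its schema are hypothetical examples, not a real API):

```python
import json

# Hypothetical tool definition using the fields from the tools table.
tools = [
    {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

# Passed alongside a tool_choice value, e.g. {"type": "auto"}.
request_fragment = {"tools": tools, "tool_choice": {"type": "auto"}}
print(json.dumps(request_fragment, indent=2))
```

The same dict can be passed as the `tools` and `tool_choice` arguments of `client.messages.create`.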
Message fields

| Field | Type | Subfield | Supported | Description |
| --- | --- | --- | --- | --- |
| content | string | - | | Plain text content. |
| | array, type="text" | text | | Text block content. |
| | | cache_control | | Controls caching for this text block. |
| | | citations | | - |
| | array, type="image" | - | | - |
| | array, type="video" | - | | - |
| | array, type="document" | - | | - |
| | array, type="search_result" | - | | - |
| | array, type="thinking" | - | | - |
| | array, type="redacted_thinking" | - | | - |
| | array, type="tool_use" | id | | Unique identifier for the tool call. |
| | | input | | Parameter object passed when calling the tool. |
| | | name | | Name of the tool being called. |
| | | cache_control | | Controls caching for this tool call. |
| | array, type="tool_result" | tool_use_id | | The ID of the corresponding tool_use block. |
| | | content | | Result returned after tool execution. Usually a string or JSON string. |
| | | cache_control | | Controls caching for this tool result. |
| | | is_error | | - |
| | array, type="server_tool_use" | - | | - |
| | array, type="web_search_tool_result" | - | | - |
| | array, type="code_execution_tool_result" | - | | - |
| | array, type="mcp_tool_use" | - | | - |
| | array, type="mcp_tool_result" | - | | - |
| | array, type="container_upload" | - | | - |
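The tool_use and tool_result blocks pair up through tool_use_id. A sketch of the two message turns after the model requests a tool call (the id, tool name, and values are illustrative):

```python
# The assistant's turn contains a tool_use block with a unique id.
assistant_turn = {
    "role": "assistant",
    "content": [
        {
            "type": "tool_use",
            "id": "toolu_01",                 # illustrative id
            "name": "get_weather",            # hypothetical tool
            "input": {"city": "Singapore"},
        }
    ],
}

# The next user turn returns the result via a tool_result block
# whose tool_use_id echoes the id above.
user_turn = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": assistant_turn["content"][0]["id"],
            "content": '{"temp_c": 31}',      # result serialized as a JSON string
        }
    ],
}
print(user_turn["content"][0]["tool_use_id"])  # toolu_01
```

Both turns are then appended to the messages array of the follow-up request so the model can use the result.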
Error codes

| HTTP status code | Error type | Description |
| --- | --- | --- |
| 400 | invalid_request_error | Invalid request format or content. Common causes: missing required parameters or wrong parameter types. |
| 400 | Arrearage | Your account has an overdue payment and the service is suspended. Top up and retry. |
| 403 | authentication_error | API key is invalid. Common causes: missing or incorrect API key in the request header. |
| 404 | not_found_error | Requested resource not found. Common causes: endpoint typo or invalid model name. |
| 429 | rate_limit_error | Rate limit reached. Reduce request frequency. |
| 500 | api_error | Internal server error. Retry later. |
| 529 | overloaded_error | API server is overloaded and cannot process new requests. |
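Of the codes above, 429, 500, and 529 are transient and worth retrying with backoff, while 400/403/404 indicate a request that will fail the same way again. A sketch of a retry wrapper (the `call` callable and delay values are illustrative; real code would catch the SDK's exceptions instead of inspecting tuples):

```python
import time

RETRYABLE = {429, 500, 529}  # rate_limit_error, api_error, overloaded_error

def call_with_retry(call, max_attempts=3, base_delay=1.0):
    """Retry a request whose result is a (status_code, payload) tuple."""
    for attempt in range(max_attempts):
        status, result = call()
        if status not in RETRYABLE:
            return status, result
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return status, result

# Simulated call: overloaded, then rate-limited, then successful.
responses = iter([(529, None), (429, None), (200, "ok")])
print(call_with_retry(lambda: next(responses), base_delay=0))  # (200, 'ok')
```

Non-retryable statuses are returned immediately so a malformed request or bad API key fails fast instead of being retried.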