Qwen is a series of large language models (LLMs) developed by Alibaba Cloud. Qwen models can understand and analyze natural language and multimodal data, such as images and videos. They provide services and assistance across a wide range of fields and tasks. To obtain the best results, provide clear and detailed instructions.
Try the models
You can try the Qwen models in Playground (Singapore region or Beijing region).
Scenarios
With its powerful language and multimodal data processing capabilities, Qwen provides efficient and intelligent language services for various application scenarios, including the following:
Text creation: Write stories, official documents, emails, scripts, and poems.
Text processing: Polish text and extract summaries.
Programming assistance: Write and optimize code.
Translation services: Translate text between various languages, such as English, Japanese, French, and Spanish.
Dialogue simulation: Engage in interactive conversations by having the model assume different roles.
Data visualization: Create charts to present data.
Model list
Commercial models
Text generation - Qwen
The following are the Qwen commercial models. Compared to the open-source versions, the commercial models offer the latest capabilities and improvements.
The parameter sizes of the commercial models are not disclosed.
Each model is updated periodically. To use a fixed version, you can select a snapshot version. A snapshot version is typically maintained for one month after the release of the next snapshot version.
We recommend that you use the stable or latest version, which has more lenient rate limits.
Qwen-Max
The best-performing model in the Qwen series, suitable for complex, multi-step tasks. Usage | API reference | Try it online
The Qwen-Max model does not currently support deep thinking.
International (Singapore)
Model | Version | Context window | Max input | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | ||||||
qwen3-max Currently has the same capabilities as qwen3-max-2025-09-23 | Stable | 262,144 | 258,048 | 65,536 | Tiered pricing applies. For more information, see the notes below the table. | 1 million tokens for input and output each Valid for 90 days after you activate Model Studio. | |
qwen3-max-2025-09-23 | Snapshot | ||||||
qwen3-max-preview | Preview | ||||||
Billing for the models listed above is tiered based on the number of input tokens per request. qwen3-max and qwen3-max-preview support context cache.
Input tokens per request | Input price (per 1M tokens) | Output price (per 1M tokens) |
0 < Tokens ≤ 32K | $1.2 | $6 |
32K < Tokens ≤ 128K | $2.4 | $12 |
128K < Tokens ≤ 252K | $3 | $15 |
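The tier is selected once per request from its input-token count, and both input and output are then billed at that tier's rates. The following is a minimal sketch, assuming the 32K/128K/252K boundaries mean 32,768, 131,072, and 258,048 tokens (the last matching the model's maximum input):

```python
def qwen3_max_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one qwen3-max request (Singapore region).

    The billing tier is chosen by the request's input token count;
    input and output are then both billed at that tier's per-million rates.
    """
    tiers = [
        # (input-token ceiling, $ per 1M input, $ per 1M output)
        (32_768, 1.2, 6.0),
        (131_072, 2.4, 12.0),
        (258_048, 3.0, 15.0),
    ]
    for ceiling, input_price, output_price in tiers:
        if input_tokens <= ceiling:
            return (input_tokens * input_price
                    + output_tokens * output_price) / 1_000_000
    raise ValueError("input exceeds the maximum of 258,048 tokens")
```

For example, a request with 100,000 input tokens and 1,000 output tokens falls in the second tier and costs about $0.25.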
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwen3-max Currently has the same capabilities as qwen3-max-2025-09-23 Batch calls are half price. | Stable | 262,144 | 258,048 | 65,536 | Tiered pricing applies. For more information, see the notes below the table. | |
qwen3-max-2025-09-23 | Snapshot | |||||
qwen3-max-preview | Preview | |||||
Billing for the models listed above is tiered based on the number of input tokens per request. qwen3-max and qwen3-max-preview support context cache.
Input tokens per request | Input price (per 1M tokens) | Output price (per 1M tokens) |
0 < Tokens ≤ 32K | $0.861 | $3.441 |
32K < Tokens ≤ 128K | $1.434 | $5.735 |
128K < Tokens ≤ 252K | $2.151 | $8.602 |
The latest qwen3-max model is an upgrade to the qwen3-max-preview version and is specifically enhanced for agent programming and tool calling. This official release achieves state-of-the-art (SOTA) performance in its domain and is designed for more complex agent requirements.
Qwen-Plus
A balanced model that offers performance, cost, and speed between those of Qwen-Max and Qwen-Flash. It is suitable for moderately complex tasks. Usage | API reference | Try it online | Deep thinking
International (Singapore)
Model | Version | Context window | Max input | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | ||||||
qwen-plus Currently has the same capabilities as qwen-plus-2025-07-28 Part of the Qwen3 series | Stable | 1,000,000 | Thinking mode 995,904 Non-thinking mode 997,952 The default is 262,144. You can adjust this value using the max_input_tokens parameter. | 32,768 Max chain-of-thought: 81,920 | Tiered pricing applies. For more information, see the notes below the table. | 1 million tokens for input and output each Valid for 90 days after you activate Model Studio. | |
qwen-plus-latest Currently has the same capabilities as qwen-plus-2025-07-28 Part of the Qwen3 series | Latest | Thinking mode 995,904 Non-thinking mode 997,952 | |||||
qwen-plus-2025-09-11 Part of the Qwen3 series. | Snapshot | Thinking mode 995,904 Non-thinking mode 997,952 | |||||
qwen-plus-2025-07-28 also known as qwen-plus-0728 Part of the Qwen3 series | |||||||
qwen-plus-2025-07-14 also known as qwen-plus-0714 Part of the Qwen3 series | 131,072 | Thinking mode 98,304 Non-thinking mode 129,024 | 16,384 Max chain-of-thought: 38,912 | $0.4 | Thinking mode $4 Non-thinking mode $1.2 | ||
qwen-plus-2025-04-28 also known as qwen-plus-0428 Part of the Qwen3 series | |||||||
qwen-plus-2025-01-25 also known as qwen-plus-0125 | 129,024 | 8,192 | $1.2 | ||||
Billing for qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11, and qwen-plus-2025-07-28 is tiered based on the number of input tokens per request.
Input tokens per request | Input price (Million tokens) | Mode | Output price (Million tokens) |
0 < Tokens ≤ 256K | $0.4 | Non-thinking mode | $1.2 |
Thinking mode | $4 | ||
256K < Tokens ≤ 1M | $1.2 | Non-thinking mode | $3.6 |
Thinking mode | $12 |
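Output pricing for these models depends on whether thinking mode is enabled, and chain-of-thought tokens are billed as output. The following is a minimal sketch of the Singapore tiers, assuming 256K means 262,144 tokens:

```python
def qwen_plus_cost(input_tokens: int, output_tokens: int,
                   thinking: bool = False) -> float:
    """Estimate the USD cost of one qwen-plus request (Singapore region).

    output_tokens should include chain-of-thought tokens when thinking
    mode is on, because they are billed as output.
    """
    tiers = [
        # (input ceiling, $/1M input, $/1M output non-thinking, $/1M output thinking)
        (262_144, 0.4, 1.2, 4.0),
        (1_000_000, 1.2, 3.6, 12.0),
    ]
    for ceiling, input_price, normal_out, thinking_out in tiers:
        if input_tokens <= ceiling:
            output_price = thinking_out if thinking else normal_out
            return (input_tokens * input_price
                    + output_tokens * output_price) / 1_000_000
    raise ValueError("input exceeds the 1M-token context window")
```

A request with 10,000 input tokens and 1,000 output tokens costs $0.0052 in non-thinking mode and $0.008 in thinking mode.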
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwen-plus Currently has the same capabilities as qwen-plus-2025-07-28 Part of the Qwen3 series | Stable | 1,000,000 | Thinking mode 995,904 Non-thinking mode 997,952 The default is 131,072. You can adjust this value using the max_input_tokens parameter. | 32,768 Max chain-of-thought: 81,920 | Tiered pricing applies. For more information, see the notes below the table. | |
qwen-plus-latest Currently has the same capabilities as qwen-plus-2025-07-28 Part of the Qwen3 series | Latest | Thinking mode 995,904 Non-thinking mode 997,952 | ||||
qwen-plus-2025-09-11 Part of the Qwen3 series | Snapshot | Thinking mode 995,904 Non-thinking mode 997,952 | ||||
qwen-plus-2025-07-28 also known as qwen-plus-0728 Part of the Qwen3 series | ||||||
qwen-plus-2025-07-14 also known as qwen-plus-0714 Part of the Qwen3 series | 131,072 | Thinking mode 98,304 Non-thinking mode 129,024 | 16,384 Max chain-of-thought: 38,912 | $0.115 | Thinking mode $1.147 Non-thinking mode $0.287 | |
qwen-plus-2025-04-28 also known as qwen-plus-0428 Part of the Qwen3 series | ||||||
Billing for qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11, and qwen-plus-2025-07-28 is tiered based on the number of input tokens per request.
Input tokens per request | Input price (Million tokens) | Mode | Output price (Million tokens) |
0 < Tokens ≤ 128K | $0.115 | Non-thinking mode | $0.287 |
Thinking mode | $1.147 | ||
128K < Tokens ≤ 256K | $0.345 | Non-thinking mode | $2.868 |
Thinking mode | $3.441 | ||
256K < Tokens ≤ 1M | $0.689 | Non-thinking mode | $6.881 |
Thinking mode | $9.175 |
These models support both thinking and non-thinking modes. You can switch between them using the enable_thinking parameter. In addition, the models' capabilities are significantly improved:
Reasoning capabilities: In evaluations for math, code, and logical reasoning, the model significantly outperforms QwQ and other models of similar size without a reasoning mode. It achieves top-tier performance among models of its scale.
Human preference alignment: The model shows significant improvements in creative writing, role-play, multi-turn conversation, and instruction following. Its general capabilities are significantly better than those of other models of similar size.
Agent capabilities: The model achieves industry-leading performance in both thinking and non-thinking modes and can accurately invoke external tools.
Multilingual capabilities: The model supports more than 100 languages and dialects. Its capabilities in multilingual translation, instruction understanding, and common-sense reasoning are significantly improved.
Response format: This version fixes response format issues from previous versions, such as incorrect Markdown formatting, premature truncation, and incorrect boxed output.
For the models listed above, if you enable thinking mode but no thought process is generated, you are charged based on the pricing for non-thinking mode.
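At the API layer, the mode is toggled with the enable_thinking parameter in the request body. The following is a hypothetical sketch of such a body for the OpenAI-compatible chat completions endpoint; build_chat_request is an illustrative helper, not part of any SDK, and with the official OpenAI SDK enable_thinking would be passed through extra_body:

```python
import json

def build_chat_request(prompt: str, thinking: bool) -> dict:
    """Assemble a chat completions request body for qwen-plus.

    enable_thinking is a Model Studio extension to the OpenAI schema;
    when using the OpenAI SDK, pass it via extra_body.
    """
    return {
        "model": "qwen-plus",
        "messages": [{"role": "user", "content": prompt}],
        # Streaming output is generally required when thinking mode is on.
        "stream": thinking,
        "enable_thinking": thinking,
    }

print(json.dumps(build_chat_request("Explain context caching.", thinking=True), indent=2))
```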
Qwen-Flash
The fastest and most cost-effective model in the Qwen series, ideal for simple tasks. Qwen-Flash features flexible tiered pricing, making it more cost-effective than Qwen-Turbo. Usage | API reference | Try it online | Thinking mode
International (Singapore)
Model | Version | Mode | Context window | Max input | Max chain-of-thought | Max output | Input cost | Output cost (Chain-of-thought + Output) | Free quota |
(Tokens) | (1,000 tokens) | ||||||||
qwen-flash Same capabilities as qwen-flash-2025-07-28 Part of the Qwen3 series. Batch calls are charged at half the standard price. | Stable | Thinking | 1,000,000 | 995,904 | 81,920 | 32,768 | Tiered pricing. See the description below the table. | 1 million tokens each Valid for 90 days after activating Alibaba Cloud Model Studio. | |
Non-thinking | 997,952 | - | |||||||
qwen-flash-2025-07-28 Part of the Qwen3 series. | Snapshot | Thinking | 995,904 | 81,920 | |||||
Non-thinking | 997,952 | - | |||||||
Billing for the models listed above is tiered based on the number of input tokens per request. qwen-flash supports context cache and batch calls.
Input tokens per request | Input price (Million tokens) | Output price (Million tokens) |
0 < Tokens ≤ 256K | $0.05 | $0.4 |
256K < Tokens ≤ 1M | $0.25 | $2 |
Mainland China (Beijing)
Model | Version | Mode | Context window | Max input | Max chain-of-thought | Max output | Input cost | Output cost (Chain-of-thought + Output) |
(Tokens) | (1,000 tokens) | |||||||
qwen-flash Same capabilities as qwen-flash-2025-07-28 Part of the Qwen3 series | Stable | Thinking | 1,000,000 | 995,904 | 81,920 | 32,768 | Tiered pricing. See the description below the table. | |
Non-thinking | 997,952 | - | ||||||
qwen-flash-2025-07-28 Part of the Qwen3 series | Snapshot | Thinking | 995,904 | 81,920 | ||||
Non-thinking | 997,952 | - | ||||||
Billing for the models listed above is tiered based on the number of input tokens per request. qwen-flash supports context cache.
Input tokens per request | Input price (Million tokens) | Output price (Million tokens) |
0 < Tokens ≤ 128K | $0.022 | $0.216 |
128K < Tokens ≤ 256K | $0.087 | $0.861 |
256K < Tokens ≤ 1M | $0.173 | $1.721 |
Qwen-Turbo
Qwen-Turbo will no longer be updated. We recommend Qwen-Flash as a replacement: it uses flexible tiered pricing for a more granular cost model. Usage | API reference | Try it online | Deep thinking
International (Singapore)
Model | Version | Context window | Max input | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | ||||||
qwen-turbo Currently has the same capabilities as qwen-turbo-2025-04-28 Part of the Qwen3 series | Stable | Thinking mode 131,072 Non-thinking mode 1,000,000 | Thinking mode 98,304 Non-thinking mode 1,000,000 | 16,384 Max chain-of-thought is 38,912 | $0.05 Batch calls are half price | Thinking mode: $0.5 Non-thinking mode: $0.2 Batch calls are half price | 1 million tokens for each Validity: 90 days after you activate Alibaba Cloud Model Studio |
qwen-turbo-latest Always has the same capabilities as the latest snapshot version Part of the Qwen3 series | Latest | $0.05 | Thinking mode: $0.5 Non-thinking mode: $0.2 | ||||
qwen-turbo-2025-04-28 Also known as qwen-turbo-0428 Part of the Qwen3 series | Snapshot | ||||||
qwen-turbo-2024-11-01 Also known as qwen-turbo-1101 | 1,000,000 | 1,000,000 | 8,192 | $0.2 | |||
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwen-turbo Currently has the same capabilities as qwen-turbo-2025-04-28 Part of the Qwen3 series | Stable | Thinking mode 131,072 Non-thinking mode 1,000,000 | Thinking mode 98,304 Non-thinking mode 1,000,000 | 16,384 Max chain-of-thought is 38,912 | $0.044 | Thinking mode $0.431 Non-thinking mode $0.087 |
qwen-turbo-latest Always has the same capabilities as the latest snapshot version Part of the Qwen3 series | Latest | |||||
qwen-turbo-2025-07-15 Also known as qwen-turbo-0715 Part of the Qwen3 series | Snapshot | |||||
qwen-turbo-2025-04-28 Also known as qwen-turbo-0428 Part of the Qwen3 series | ||||||
QwQ
The QwQ reasoning model is trained based on Qwen2.5 and uses reinforcement learning to significantly improve its reasoning capabilities. The model's core metrics for math and code, such as AIME 24/25 and LiveCodeBench, and some of its general metrics, such as IFEval and LiveBench, are comparable to the full-performance version of DeepSeek-R1. Usage
International (Singapore)
Model | Version | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | |||||||
qwq-plus | Stable | 131,072 | 98,304 | 32,768 | 8,192 | $0.8 | $2.4 | 1 million tokens Validity: Within 90 days after you activate Alibaba Cloud Model Studio. |
Mainland China (Beijing)
Model | Version | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost |
(Tokens) | (Million tokens) | ||||||
qwq-plus Same capabilities as qwq-plus-2025-03-05. | Stable | 131,072 | 98,304 | 32,768 | 8,192 | $0.230 | $0.574 |
qwq-plus-latest Always has the same capabilities as the latest snapshot version. | Latest | ||||||
qwq-plus-2025-03-05 Also known as qwq-plus-0305. | Snapshot | ||||||
Qwen-Long
The Qwen-Long model has the longest context window in the Qwen series. It offers balanced performance at a low cost. This model is ideal for tasks such as long-text analysis, information extraction, summarization, classification, and tagging. Usage | Try it online
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwen-long-latest Always matches the capabilities of the latest snapshot version. | Stable | 10,000,000 | 10,000,000 | 8,192 | $0.072 | $0.287 |
qwen-long-2025-01-25 Also known as qwen-long-0125. | Snapshot | |||||
Qwen-Omni
The Qwen-Omni model accepts combined inputs from multiple modalities, such as text, images, audio, and video, and generates responses in text or speech format. It provides a variety of expressive, human-like voices and supports audio output in multiple languages and dialects. You can use it in audio and video chat scenarios, such as for visual recognition, sentiment analysis, and education and training. Usage | API reference
International (Singapore)
Qwen3-Omni-Flash
Model | Version | Mode | Context window | Max input | Max chain-of-thought | Max output | Free quota |
(Tokens) | |||||||
qwen3-omni-flash Same capabilities as qwen3-omni-flash-2025-09-15 | Stable | Thinking mode | 65,536 | 16,384 | 32,768 | 16,384 | 1 million tokens (modality-agnostic) Valid for 90 days after activating Alibaba Cloud Model Studio |
Non-thinking mode | 49,152 | - | |||||
qwen3-omni-flash-2025-09-15 Also known as qwen3-omni-flash-0915 | Snapshot | Thinking mode | 65,536 | 16,384 | 32,768 | 16,384 | |
Non-thinking mode | 49,152 | - | |||||
After the free quota is used up, the following billing rules apply to input and output. Billing is the same for thinking and non-thinking modes. Audio output is not supported in thinking mode.
Qwen-Omni-Turbo (based on Qwen2.5)
Model | Version | Context window | Max input | Max output | Free quota |
(Tokens) | |||||
qwen-omni-turbo Equivalent to qwen-omni-turbo-2025-03-26 | Stable | 32,768 | 30,720 | 2,048 | 1 million modality-agnostic tokens Valid for 90 days after you activate Alibaba Cloud Model Studio |
qwen-omni-turbo-latest Always equivalent to the latest snapshot version | Latest | ||||
qwen-omni-turbo-2025-03-26 Also known as qwen-omni-turbo-0326 | Snapshot | ||||
After the free quota for the commercial model is used up, the following billing rules apply to input and output:
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output |
(Tokens) | ||||
qwen-omni-turbo Offers the same capabilities as qwen-omni-turbo-2025-03-26. | Stable | 32,768 | 30,720 | 2,048 |
qwen-omni-turbo-latest Always offers the same capabilities as the latest snapshot version. | Latest | |||
qwen-omni-turbo-2025-03-26 Also known as qwen-omni-turbo-0326. | Snapshot | |||
qwen-omni-turbo-2025-01-19 Also known as qwen-omni-turbo-0119. | ||||
The following billing rules apply to input and output:
Billing example: The cost for a request with 1,000 text input tokens, 1,000 image input tokens, 1,000 text output tokens, and 1,000 audio output tokens is: $0.000058 (text input) + $0.000216 (image input) + $0.007168 (audio output).
The Qwen3-Omni-Flash model offers significant improvements over Qwen-Omni-Turbo, which is no longer updated:
It is a hybrid thinking model that supports both thinking and non-thinking modes. You can switch between the modes using the enable_thinking parameter. By default, thinking mode is disabled. Audio output is not supported in thinking mode. In non-thinking mode, the audio output from the model has the following features:
It supports 17 voices, an increase from the 4 supported by Qwen-Omni-Turbo.
It supports 10 languages, an increase from the 2 supported by Qwen-Omni-Turbo.
Qwen-Omni-Realtime
Compared to the Qwen-Omni models, these models support audio stream input. They have a built-in Voice Activity Detection (VAD) feature that automatically detects the start and end of user speech. Usage | Client events | Server events
International (Singapore)
Qwen3-Omni-Flash-Realtime
Model | Version | Context window | Max input | Max output | Free quota |
(Tokens) | |||||
qwen3-omni-flash-realtime Its capabilities are equivalent to those of qwen3-omni-flash-realtime-2025-09-15. | Stable | 65,536 | 49,152 | 16,384 | 1 million tokens each for input and output (modality-agnostic) This quota is valid for 90 days after you activate Alibaba Cloud Model Studio. |
qwen3-omni-flash-realtime-2025-09-15 | Snapshot | ||||
After your free quota is used up, the following billing rules apply to input and output:
Qwen-Omni-Turbo-Realtime (based on Qwen2.5)
Model | Version | Context window | Max input | Max output | Free quota |
(Tokens) | |||||
qwen-omni-turbo-realtime Equivalent to qwen-omni-turbo-realtime-2025-05-08. | Stable | 32,768 | 30,720 | 2,048 | 1 million tokens each for input and output (modality-agnostic) Valid for 90 days after you activate Alibaba Cloud Model Studio. |
qwen-omni-turbo-realtime-latest Always equivalent to the latest snapshot version. | Latest | ||||
qwen-omni-turbo-realtime-2025-05-08 | Snapshot | ||||
After your free quota is used up, the following billing rules apply to input and output:
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output |
(Tokens) | ||||
qwen-omni-turbo-realtime Currently has the same capabilities as qwen-omni-turbo-2025-05-08. | Stable | 32,768 | 30,720 | 2,048 |
qwen-omni-turbo-realtime-latest Always has the same capabilities as the latest snapshot version. | Latest | |||
qwen-omni-turbo-realtime-2025-05-08 | Snapshot | |||
The following billing rules apply to input and output:
The Qwen3-Omni-Flash-Realtime model is recommended. It offers significantly improved capabilities compared to Qwen-Omni-Turbo-Realtime, which will no longer be updated. For audio output from the model:
It supports 17 voices. Qwen-Omni-Turbo-Realtime supports only 4.
It supports 10 languages. Qwen-Omni-Turbo-Realtime supports only 2.
QVQ
QVQ is a visual reasoning model that supports visual inputs and chain-of-thought outputs. It delivers superior performance in math, programming, visual analysis, creative tasks, and general tasks. Usage | Try it online
International (Singapore)
Model | Version | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | |||||||
qvq-max Equivalent to qvq-max-2025-03-25. | Stable | 131,072 | 106,496 Maximum of 16,384 tokens for a single image. | 16,384 | 8,192 | $1.2 | $4.8 | 1 million input tokens and 1 million output tokens. Valid for 90 days after you activate Alibaba Cloud Model Studio. |
qvq-max-latest Always equivalent to the latest snapshot version. | Latest | |||||||
qvq-max-2025-03-25 Also known as qvq-max-0325. | Snapshot | |||||||
Mainland China (Beijing)
Model | Version | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost |
(Tokens) | (Million tokens) | ||||||
qvq-max Offers stronger visual reasoning and instruction-following capabilities than qvq-plus, providing optimal performance for more complex tasks. Has the same capabilities as qvq-max-2025-03-25. | Stable | 131,072 | 106,496 Maximum of 16,384 for a single image. | 16,384 | 8,192 | $1.147 | $4.588 |
qvq-max-latest Always has the same capabilities as the latest snapshot version. | Latest | ||||||
qvq-max-2025-05-15 Also known as qvq-max-0515. | Snapshot | ||||||
qvq-max-2025-03-25 Also known as qvq-max-0325. | |||||||
qvq-plus Has the same capabilities as qvq-plus-2025-05-15. | Stable | $0.287 | $0.717 | ||||
qvq-plus-latest Always has the same capabilities as the latest snapshot version. | Latest | ||||||
qvq-plus-2025-05-15 Also known as qvq-plus-0515. | Snapshot | ||||||
Qwen-VL
Qwen-VL is a text generation model with visual understanding (image) capabilities. It not only performs Optical Character Recognition (OCR) but also provides further summarization and reasoning, such as extracting properties from product photos or solving problems shown in diagrams. Usage | API reference | Try it online
Qwen-VL models are billed based on the total number of input and output tokens. For more information about how image tokens are calculated, see Visual understanding.
International (Singapore)
Model | Version | Mode | Context window | Max input | Max chain-of-thought | Max output | Input cost | Output cost (Chain-of-thought + Output) | Free quota |
(Tokens) | (Million tokens) | ||||||||
qwen3-vl-plus Same capabilities as qwen3-vl-plus-2025-09-23 | Stable | thinking | 262,144 | 258,048 Maximum of 16,384 tokens per image | 81,920 | 32,768 | Tiered pricing. For more information, see the description below the table. | 1 million input tokens and 1 million output tokens Valid for 90 days after you activate Alibaba Cloud Model Studio. | |
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
qwen3-vl-plus-2025-09-23 | Snapshot | thinking | 258,048 Maximum of 16,384 tokens per image | 81,920 | |||||
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
qwen3-vl-flash Same capabilities as qwen3-vl-flash-2025-10-15 | Stable | thinking | 258,048 Maximum of 16,384 tokens per image | 81,920 | |||||
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
qwen3-vl-flash-2025-10-15 | Snapshot | thinking | 258,048 Maximum of 16,384 tokens per image | 81,920 | |||||
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
The models listed above use tiered pricing based on the number of input tokens per request. The input and output prices are the same for both thinking and non-thinking modes.
qwen3-vl-plus series
Input tokens per request | Input price (Million tokens) | Output price (Million tokens) |
0 < Tokens ≤ 32K | $0.20 | $1.60 |
32K < Tokens ≤ 128K | $0.30 | $2.40 |
128K < Tokens ≤ 256K | $0.60 | $4.80 |
qwen3-vl-flash series
Input tokens per request | Input price (Million tokens) | Output price (Million tokens) |
0 < Tokens ≤ 32K | $0.05 | $0.40 |
32K < Tokens ≤ 128K | $0.075 | $0.60 |
128K < Tokens ≤ 256K | $0.12 | $0.96 |
Mainland China (Beijing)
Model | Version | Mode | Context window | Max input | Max chain-of-thought | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | ||||||||
qwen3-vl-plus Same capabilities as qwen3-vl-plus-2025-09-23 | Stable | thinking | 262,144 | 258,048 Maximum of 16,384 tokens per image | 81,920 | 32,768 | Tiered pricing. For more information, see the description below the table. | No free quota | |
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
qwen3-vl-plus-2025-09-23 | Snapshot | thinking | 258,048 Maximum of 16,384 tokens per image | 81,920 | |||||
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
qwen3-vl-flash Same capabilities as qwen3-vl-flash-2025-10-15 | Stable | thinking | 258,048 Maximum of 16,384 tokens per image | 81,920 | |||||
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
qwen3-vl-flash-2025-10-15 | Snapshot | thinking | 258,048 Maximum of 16,384 tokens per image | 81,920 | |||||
non-thinking | 260,096 Maximum of 16,384 tokens per image | - | |||||||
The models listed above use tiered pricing based on the number of input tokens per request. The input and output prices are the same for both thinking and non-thinking modes.
qwen3-vl-plus series
Input tokens per request | Input price (Million tokens) | Output price (Million tokens) |
0 < Tokens ≤ 32K | $0.143353 | $1.433525 |
32K < Tokens ≤ 128K | $0.215029 | $2.150288 |
128K < Tokens ≤ 256K | $0.430058 | $4.300576 |
qwen3-vl-flash series
Input tokens per request | Input price (per 1M tokens) | Output price (per 1M tokens) |
0 < Tokens ≤ 32K | $0.022 | $0.215 |
32K < Tokens ≤ 128K | $0.043 | $0.43 |
128K < Tokens ≤ 256K | $0.086 | $0.859 |
Qwen-OCR
The Qwen-OCR model is designed for text extraction. Compared to the Qwen-VL model, it specializes in extracting text from images of documents, tables, exam papers, and handwriting. It can recognize multiple languages, such as English, French, Japanese, Korean, German, Russian, and Italian. Usage | API reference | Try it online
International (Singapore)
Model | Version | Context window | Max input | Max output | Unit price | Free quota |
(tokens) | (Million tokens) | |||||
qwen-vl-ocr | Stable | 34,096 | 30,000 Maximum of 30,000 tokens for a single image. | 4,096 | $0.72 | 1 million input tokens and 1 million output tokens Validity: The quota is valid for 90 days after you activate Alibaba Cloud Model Studio. |
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output | Input/output unit price |
(Tokens) | (Million tokens) | ||||
qwen-vl-ocr Offers the same capabilities as qwen-vl-ocr-2025-04-13. | Stable | 34,096 | 30,000 Maximum of 30,000 for a single image. | 4,096 | $0.717 |
qwen-vl-ocr-latest Offers the same capabilities as the latest snapshot version. | Latest | ||||
qwen-vl-ocr-2025-04-13 Also known as qwen-vl-ocr-0413. Significantly improves text recognition and includes six built-in OCR tasks and features, such as custom prompts and image rotation correction. | Snapshot | ||||
qwen-vl-ocr-2024-10-28 Also known as qwen-vl-ocr-1028. | Snapshot | ||||
Qwen-ASR
Built on the Qwen multimodal base model, this model supports features such as multilingual recognition, singing recognition, and noise rejection. Usage
International (Singapore)
Model | Version | Supported languages | Supported sample rates | Unit price | Free quota (Note) |
qwen3-asr-flash Currently an alias for qwen3-asr-flash-2025-09-08 | Stable version | Chinese (including Mandarin, Sichuanese, Minnan, Wu, and Cantonese), English, Japanese, German, Korean, Russian, French, Portuguese, Arabic, Italian, and Spanish | 16 kHz | $0.000035/second | 36,000 seconds (10 hours) Valid for 90 days after you activate Alibaba Cloud Model Studio |
qwen3-asr-flash-2025-09-08 | Snapshot version |
Mainland China (Beijing)
Model | Version | Supported languages | Supported sample rates | Unit price |
qwen3-asr-flash Alias for qwen3-asr-flash-2025-09-08 | Stable version | Chinese (Mandarin, Sichuanese, Minnan, Wu, and Cantonese), English, Japanese, German, Korean, Russian, French, Portuguese, Arabic, Italian, and Spanish | 16 kHz | $0.000032/second |
qwen3-asr-flash-2025-09-08 | Snapshot version |
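qwen3-asr-flash is billed per second of input audio rather than per token. The following is a minimal estimate using the per-second rates above; how fractional seconds are rounded is not specified here:

```python
def asr_cost(duration_seconds: float, region: str = "singapore") -> float:
    """Estimate the USD cost of transcribing audio with qwen3-asr-flash."""
    rate_per_second = {"singapore": 0.000035, "beijing": 0.000032}
    return duration_seconds * rate_per_second[region]
```

One hour of audio (3,600 seconds) costs $0.126 in the Singapore region and $0.1152 in the Beijing region.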
Qwen-Math
Qwen-Math is a language model designed for mathematical problem-solving. Usage | API reference | Try it online
This model is available only in the China (Beijing) region.
Model | Version | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwen-math-plus Same capabilities as qwen-math-plus-2024-09-19. | Stable | 4,096 | 3,072 | 3,072 | $0.574 | $1.721 |
qwen-math-plus-latest Same capabilities as the latest snapshot. | Latest | |||||
qwen-math-plus-2024-09-19 Also known as qwen-math-plus-0919. | Snapshot | |||||
qwen-math-plus-2024-08-16 Also known as qwen-math-plus-0816. | ||||||
qwen-math-turbo Same capabilities as qwen-math-turbo-2024-09-19. | Stable | $0.287 | $0.861 | |||
qwen-math-turbo-latest Same capabilities as the latest snapshot. | Latest | |||||
qwen-math-turbo-2024-09-19 Also known as qwen-math-turbo-0919. | Snapshot | |||||
Qwen-Coder
The Qwen3-Coder-Plus series models are code generation models built on Qwen3. They are powerful coding agents that excel at tool calling and environment interaction. These models can program autonomously and provide excellent coding and general-purpose capabilities. Usage | API reference | Try it online
International (Singapore)
Model | Version | Context window | Max input | Max output | Input cost (Million tokens) | Output cost (Million tokens) | Free quota |
Tokens | Per million tokens | ||||||
qwen3-coder-plus Currently equivalent to qwen3-coder-plus-2025-07-22 | Stable | 1,000,000 | 997,952 | 65,536 | Tiered pricing. See the description below the table. | 1 million input tokens and 1 million output tokens Valid for 90 days after you activate Alibaba Cloud Model Studio | |
qwen3-coder-plus-2025-09-23 | Snapshot | ||||||
qwen3-coder-plus-2025-07-22 | Snapshot | ||||||
qwen3-coder-flash Currently equivalent to qwen3-coder-flash-2025-07-28 | Stable | ||||||
qwen3-coder-flash-2025-07-28 | Snapshot | ||||||
These models use tiered billing based on the number of input tokens per request.
qwen3-coder-plus series
The prices for qwen3-coder-plus, qwen3-coder-plus-2025-09-23, and qwen3-coder-plus-2025-07-22 are as follows. The qwen3-coder-plus model supports context cache. Input text that hits the implicit cache is billed at 20% of the unit price. Input text that hits the explicit cache is billed at 10% of the unit price.
Input tokens per request | Input cost (Million tokens) | Output cost (Million tokens) |
0 < Tokens ≤ 32K | $1 | $5 |
32K < Tokens ≤ 128K | $1.8 | $9 |
128K < Tokens ≤ 256K | $3 | $15 |
256K < Tokens ≤ 1M | $6 | $60 |
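The cache discounts compose with the tiered unit price: implicit-cache hits are billed at 20% of the tier's input price and explicit-cache hits at 10%. The following is a sketch of the resulting input cost, assuming the tier is still selected by the request's total input token count:

```python
def coder_plus_input_cost(uncached_tokens: int,
                          implicit_cache_hits: int,
                          explicit_cache_hits: int,
                          tier_price_per_million: float) -> float:
    """Estimate the USD input cost of one qwen3-coder-plus request.

    Implicit-cache hits are billed at 20% of the tier's unit price,
    explicit-cache hits at 10%, and the remaining input tokens at
    the full price.
    """
    billable = (uncached_tokens
                + 0.2 * implicit_cache_hits
                + 0.1 * explicit_cache_hits)
    return billable * tier_price_per_million / 1_000_000
```

For a first-tier request ($1 per million input tokens) with 10,000 fresh tokens and 20,000 implicit-cache hits, the input cost is $0.014 instead of the $0.03 it would be with no cache hits.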
qwen3-coder-flash series
The prices for qwen3-coder-flash and qwen3-coder-flash-2025-07-28 are as follows. The qwen3-coder-flash model supports context cache. Input text that hits the implicit cache is billed at 20% of the unit price.
Input tokens per request | Input cost (Million tokens) | Output cost (Million tokens) |
0 < Tokens ≤ 32K | $0.3 | $1.5 |
32K < Tokens ≤ 128K | $0.5 | $2.5 |
128K < Tokens ≤ 256K | $0.8 | $4 |
256K < Tokens ≤ 1M | $1.6 | $9.6 |
Mainland China (Beijing)
Model | Version | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwen3-coder-plus Provides the same functionality as qwen3-coder-plus-2025-07-22. | Stable | 1,000,000 | 997,952 | 65,536 | Tiered pricing. See the description below the table. | |
qwen3-coder-plus-2025-09-23 | Snapshot | |||||
qwen3-coder-plus-2025-07-22 | Snapshot | |||||
qwen3-coder-flash Currently equivalent to qwen3-coder-flash-2025-07-28 | Stable |||||
qwen3-coder-flash-2025-07-28 | Snapshot | |||||
These models use tiered billing based on the number of input tokens per request.
qwen3-coder-plus series
The prices for qwen3-coder-plus, qwen3-coder-plus-2025-09-23, and qwen3-coder-plus-2025-07-22 are as follows. The qwen3-coder-plus model supports context cache. Input text that hits the implicit cache is billed at 20% of the unit price. Input text that hits the explicit cache is billed at 10% of the unit price.
Input tokens per request | Input cost (Million tokens) | Output cost (Million tokens) |
0 < Tokens ≤ 32K | $0.574 | $2.294 |
32K < Tokens ≤ 128K | $0.861 | $3.441 |
128K < Tokens ≤ 256K | $1.434 | $5.735 |
256K < Tokens ≤ 1M | $2.868 | $28.671 |
qwen3-coder-flash series
The prices for qwen3-coder-flash and qwen3-coder-flash-2025-07-28 are as follows. The qwen3-coder-flash model supports context cache. Input text that hits the implicit cache is billed at 20% of the unit price.
Input tokens per request | Input cost (Million tokens) | Output cost (Million tokens) |
0 < Tokens ≤ 32K | $0.144 | $0.574 |
32K < Tokens ≤ 128K | $0.216 | $0.861 |
128K < Tokens ≤ 256K | $0.359 | $1.434 |
256K < Tokens ≤ 1M | $0.717 | $3.584 |
Qwen-MT
This flagship translation model is built on Qwen3 and represents a comprehensive upgrade. It supports translation between 92 languages, including Chinese, English, Japanese, Korean, French, Spanish, German, Thai, Indonesian, Vietnamese, and Arabic. The model delivers significantly improved performance and translation quality, with enhanced support for custom glossaries, format retention, and domain-specific prompts for more accurate and natural translations. Usage
International (Singapore)
Model | Context window | Max input | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | |||||
qwen-mt-plus Part of Qwen3-MT | 16,384 | 8,192 | 8,192 | $2.46 | $7.37 | 1 million tokens per model Valid for 90 days after you activate Alibaba Cloud Model Studio |
qwen-mt-turbo Part of Qwen3-MT | $0.16 | $0.49 | ||||
Mainland China (Beijing)
Model | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | ||||
qwen-mt-plus Part of Qwen3-MT | 16,384 | 8,192 | 8,192 | $0.259 | $0.775 |
qwen-mt-turbo Part of Qwen3-MT | $0.101 | $0.280 | |||
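The glossary and language controls described above are passed as request parameters. The sketch below builds a Qwen-MT request body for the OpenAI-compatible endpoint; the `translation_options` field names (`source_lang`, `target_lang`, `terms`) follow Model Studio's translation examples, but treat them as assumptions to confirm against the current API reference.

```python
import json

# Sketch of a Qwen-MT request body for the OpenAI-compatible endpoint.
# The translation_options field names are taken from Model Studio's
# translation examples; confirm them against the current API reference.
def build_mt_request(text, source="auto", target="English", glossary=None):
    body = {
        "model": "qwen-mt-turbo",
        "messages": [{"role": "user", "content": text}],
        "translation_options": {
            "source_lang": source,
            "target_lang": target,
        },
    }
    if glossary:
        # Custom term pairs, e.g. [{"source": "通义千问", "target": "Qwen"}]
        body["translation_options"]["terms"] = glossary
    return json.dumps(body, ensure_ascii=False)

request = build_mt_request("今天天气怎么样?", target="English")
```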
Qwen data mining model
The Qwen data mining model extracts structured information from documents for use in domains such as data annotation and content moderation. Usage | API reference
Available only in the China (Beijing) region.
Model | Context window | Max input | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | |||||
qwen-doc-turbo | 131,072 | 129,024 | 8,192 | $0.087 | $0.144 | No free quota |
Qwen deep research model
The Qwen deep research model breaks down complex problems, performs inference and analysis using web search, and generates research reports. Usage | API reference
Available only in the China (Beijing) region.
Model | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Per 1,000 tokens) | ||||
qwen-deep-research | 1,000,000 | 997,952 | 32,768 | $0.007742 | $0.023367 |
Open-source models
Text generation - Qwen open-source versions
In the model names, xxb indicates the parameter size. For example, qwen2-72b-instruct indicates a parameter size of 72 billion (72B).
Alibaba Cloud Model Studio supports invoking the open-source versions of Qwen. You do not need to deploy the models locally. For open-source versions, we recommend using the Qwen3 and Qwen2.5 models.
Qwen3
qwen3-next-80b-a3b-thinking, released in September 2025, supports only thinking mode. Compared to qwen3-235b-a22b-thinking-2507, it offers improved instruction-following capabilities and more concise summaries.
qwen3-next-80b-a3b-instruct, released in September 2025, supports only non-thinking mode. It offers enhanced Chinese comprehension, logical reasoning, and text generation capabilities compared to qwen3-235b-a22b-instruct-2507.
The qwen3-235b-a22b-thinking-2507 and qwen3-30b-a3b-thinking-2507 models, released in July 2025, support only thinking mode. They are upgraded versions of qwen3-235b-a22b (thinking mode) and qwen3-30b-a3b (thinking mode).
The qwen3-235b-a22b-instruct-2507 and qwen3-30b-a3b-instruct-2507 models, released in July 2025, support only non-thinking mode. They are upgraded versions of qwen3-235b-a22b (non-thinking mode) and qwen3-30b-a3b (non-thinking mode).
The Qwen3 models, released in April 2025, support both thinking and non-thinking modes. You can switch between the modes using the enable_thinking parameter. The Qwen3 models also feature significant capability enhancements:
Inference capabilities: In evaluations for math, code, and logical reasoning, the models significantly outperform QwQ and other non-reasoning models of a similar scale. Their performance is top-tier in the industry for models of their scale.
Human preference alignment: The models show major improvements in creative writing, role assumption, multi-turn conversation, and instruction following. Their general capabilities are significantly better than other models of a similar scale.
Agent capabilities: The models deliver industry-leading performance in both thinking and non-thinking modes and can perform precise external tool calling.
Multilingual capabilities: The models support over 100 languages and dialects. They show significant improvements in multilingual translation, instruction comprehension, and common-sense reasoning.
Response format fixes: This update fixes response format issues from previous versions, such as incorrect Markdown, truncated responses, and incorrect boxed output.
The open-source Qwen3 models released in April 2025 do not support non-streaming output in thinking mode.
If an open-source Qwen3 model is in thinking mode but does not output a thought process, it is billed at the non-thinking mode rate.
Thinking mode | Non-thinking mode | Usage
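The mode switch described above is a single request parameter. The sketch below builds request bodies for both modes; note that the April 2025 open-source releases do not support non-streaming output in thinking mode, so the sketch forces streaming when thinking is enabled. In the OpenAI-compatible SDK, `enable_thinking` is typically passed through `extra_body`; treat the exact placement as an assumption to verify against the API reference.

```python
# Sketch: request bodies for a Qwen3 open-source model in each mode.
# enable_thinking switches between thinking and non-thinking mode; per the
# note above, thinking mode requires streaming output for the April 2025
# open-source releases.
def qwen3_request(prompt, thinking):
    return {
        "model": "qwen3-8b",
        "messages": [{"role": "user", "content": prompt}],
        "enable_thinking": thinking,
        # Streaming is mandatory when thinking is enabled; non-thinking
        # mode may also stream, but it is not required to.
        "stream": True if thinking else False,
    }
```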
International (Singapore)
Model | Mode | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | |||||||
qwen3-next-80b-a3b-thinking | Thinking only | 131,072 | 126,976 | 81,920 | 32,768 | $0.5 | $6 | 1 million tokens Valid for 90 days after you activate Alibaba Cloud Model Studio |
qwen3-next-80b-a3b-instruct | Non-thinking only | 129,024 | - | $0.5 | $2 | |||
qwen3-235b-a22b-thinking-2507 | Thinking only | 126,976 | 81,920 | $0.7 | $8.4 | |||
qwen3-235b-a22b-instruct-2507 | Non-thinking only | 129,024 | - | $0.7 | $2.8 | |||
qwen3-30b-a3b-thinking-2507 | Thinking only | 126,976 | 81,920 | $0.2 | $2.4 | |||
qwen3-30b-a3b-instruct-2507 | Non-thinking only | 129,024 | - | $0.8 | ||||
qwen3-235b-a22b This model and the following models were released in April 2025. | Non-thinking mode | 129,024 | - | 16,384 | $0.7 | $2.8 | ||
Thinking mode | 98,304 | 38,912 | $8.4 | |||||
qwen3-32b | Non-thinking mode | 129,024 | - | $2.8 | ||||
Thinking mode | 98,304 | 38,912 | $8.4 | |||||
qwen3-30b-a3b | Non-thinking mode | 129,024 | - | $0.2 | $0.8 | |||
Thinking mode | 98,304 | 38,912 | $2.4 | |||||
qwen3-14b | Non-thinking mode | 129,024 | - | 8,192 | $0.35 | $1.4 | ||
Thinking mode | 98,304 | 38,912 | $4.2 | |||||
qwen3-8b | Non-thinking mode | 129,024 | - | $0.18 | $0.7 | |||
Thinking mode | 98,304 | 38,912 | $2.1 | |||||
qwen3-4b | Non-thinking mode | 129,024 | - | $0.11 | $0.42 | |||
Thinking mode | 98,304 | 38,912 | $1.26 | |||||
qwen3-1.7b | Non-thinking mode | 32,768 | 30,720 | - | $0.42 | |||
Thinking mode | 28,672 | The sum of input and chain-of-thought tokens must not exceed 30,720. | $1.26 |||||
qwen3-0.6b | Non-thinking mode | 30,720 | - | $0.42 | ||||
Thinking mode | 28,672 | The sum of input and chain-of-thought tokens must not exceed 30,720. | $1.26 |||||
Mainland China (Beijing)
Model | Mode | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost |
(Tokens) | (Million tokens) | ||||||
qwen3-next-80b-a3b-thinking | Thinking only | 131,072 | 126,976 | 81,920 | 32,768 | $0.144 | $1.434 |
qwen3-next-80b-a3b-instruct | Non-thinking only | 129,024 | - | $0.574 | |||
qwen3-235b-a22b-thinking-2507 | Thinking only | 126,976 | 81,920 | $0.287 | $2.868 | ||
qwen3-235b-a22b-instruct-2507 | Non-thinking only | 129,024 | - | $1.147 | |||
qwen3-30b-a3b-thinking-2507 | Thinking only | 126,976 | 81,920 | $0.108 | $1.076 | ||
qwen3-30b-a3b-instruct-2507 | Non-thinking only | 129,024 | - | $0.431 | |||
qwen3-235b-a22b | Non-thinking | 129,024 | - | 16,384 | $0.287 | $1.147 | |
Thinking | 98,304 | 38,912 | $2.868 | ||||
qwen3-32b | Non-thinking | 129,024 | - | $0.287 | $1.147 | ||
Thinking | 98,304 | 38,912 | $2.868 | ||||
qwen3-30b-a3b | Non-thinking | 129,024 | - | $0.108 | $0.431 | ||
Thinking | 98,304 | 38,912 | $1.076 | ||||
qwen3-14b | Non-thinking | 129,024 | - | 8,192 | $0.144 | $0.574 | |
Thinking | 98,304 | 38,912 | $1.434 | ||||
qwen3-8b | Non-thinking | 129,024 | - | $0.072 | $0.287 | ||
Thinking | 98,304 | 38,912 | $0.717 | ||||
qwen3-4b | Non-thinking | 129,024 | - | $0.044 | $0.173 | ||
Thinking | 98,304 | 38,912 | $0.431 | ||||
qwen3-1.7b | Non-thinking | 32,768 | 30,720 | - | $0.173 | ||
Thinking | 28,672 | The sum of input and chain-of-thought tokens must not exceed 30,720. | $0.431 | ||||
qwen3-0.6b | Non-thinking | 30,720 | - | $0.173 | |||
Thinking | 28,672 | The sum of input and chain-of-thought tokens must not exceed 30,720. | $0.431 | ||||
QwQ (open source)
The QwQ reasoning model is trained on Qwen2.5-32B. Reinforcement learning significantly improves its inference capabilities. Its core metrics for math and code (AIME 24/25, LiveCodeBench) and some general metrics (IFEval, LiveBench) are on par with the full-power version of DeepSeek-R1, and all metrics significantly exceed those of DeepSeek-R1-Distill-Qwen-32B, which is also based on Qwen2.5-32B. Usage | API reference
This feature is only available in the China (Beijing) region.
Model | Context window | Max input | Max chain-of-thought | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | |||||
qwq-32b | 131,072 | 98,304 | 32,768 | 8,192 | $0.287 | $0.861 |
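When streaming a QwQ response, the chain of thought and the final answer arrive as separate fields. The sketch below separates them; it assumes each streamed delta can carry a `reasoning_content` field (the thought process) and a `content` field (the answer), as in Model Studio's reasoning-model examples.

```python
# Sketch: separating the chain of thought from the final answer when
# streaming a QwQ response. Assumes each delta may carry reasoning_content
# (the thought process) and content (the answer), per Model Studio's
# reasoning-model examples.
def split_stream(deltas):
    thoughts, answer = [], []
    for delta in deltas:
        if delta.get("reasoning_content"):
            thoughts.append(delta["reasoning_content"])
        if delta.get("content"):
            answer.append(delta["content"])
    return "".join(thoughts), "".join(answer)

# With simulated deltas:
thoughts, answer = split_stream([
    {"reasoning_content": "First, factor the expression. "},
    {"content": "The answer is 42."},
])
```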
QwQ-Preview
The qwq-32b-preview model is an experimental research model developed by the Qwen team in 2024. It focuses on enhancing AI reasoning capabilities, especially in math and programming. For more information about the limitations of the qwq-32b-preview model, see the QwQ official blog. Usage | API reference | Try it online
This feature is only available in the China (Beijing) region.
Model | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | ||||
qwq-32b-preview | 32,768 | 30,720 | 16,384 | $0.287 | $0.861 |
Qwen2.5
QVQ
The qvq-72b-preview model is an experimental research model developed by the Qwen team. It focuses on enhancing visual reasoning capabilities, especially in mathematical reasoning. For more information about the limitations of the qvq-72b-preview model, see the QVQ official blog. Usage | API reference
To have the model output the thinking process before the final answer, you can use the commercial version of the QVQ model.
This feature is only available in the China (Beijing) region.
Model | Context window | Max input | Max output | Input cost | Output cost |
(Tokens) | (Million tokens) | ||||
qvq-72b-preview | 32,768 | 16,384 Maximum 16,384 tokens per image | 16,384 | $1.721 | $5.161 |
Qwen-Omni
This is a new multimodal large model for understanding and generation, trained on Qwen2.5. It supports text, image, speech, and video inputs, and can generate text and speech simultaneously in a stream. The speed of multimodal content understanding is significantly improved. Usage | API reference
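A request for simultaneous text and speech output might look like the sketch below. The `modalities` and `audio` parameters, and the "Cherry" voice name, follow Model Studio's Omni examples; Omni output must be streamed. Treat the exact field names as assumptions to check against the API reference.

```python
# Sketch of a qwen2.5-omni-7b request asking for streamed text plus speech.
# The modalities/audio fields and the "Cherry" voice follow Model Studio's
# Omni examples; verify them against the current API reference.
def omni_request(text):
    return {
        "model": "qwen2.5-omni-7b",
        "messages": [{"role": "user", "content": text}],
        "modalities": ["text", "audio"],           # request text and speech
        "audio": {"voice": "Cherry", "format": "wav"},
        "stream": True,                            # Omni output must be streamed
    }
```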
International (Singapore)
Model | Context window | Max input | Max output | Free quota |
(Tokens) | ||||
qwen2.5-omni-7b | 32,768 | 30,720 | 2,048 | 1 million tokens (regardless of modality) Valid for 90 days after activating Alibaba Cloud Model Studio. |
After the free quota is used up, the following billing rules apply to inputs and outputs:
Mainland China (Beijing)
Model | Context window | Max input | Max output |
(Tokens) | |||
qwen2.5-omni-7b | 32,768 | 30,720 | 2,048 |
The billing rules for inputs and outputs are as follows:
Qwen3-Omni-Captioner
Qwen3-Omni-Captioner is an open-source model based on Qwen3-Omni. Without any prompts, it automatically generates accurate and comprehensive descriptions for complex audio, including speech, ambient sounds, music, and sound effects. It can identify speaker emotions, musical elements (such as style and instruments), and sensitive information, making it suitable for applications such as audio content analysis, security audits, intent recognition, and audio editing. Usage | API reference
This model is available only in the Singapore region.
Model | Context window | Max input | Max output | Input cost | Output cost | Free quota |
(Tokens) | (Million tokens) | |||||
qwen3-omni-30b-a3b-captioner | 65,536 | 32,768 | 32,768 | $3.81 | $3.06 | 1 million tokens Valid for 90 days after you activate Alibaba Cloud Model Studio |
Qwen-VL
This is the open-source version of Alibaba Cloud's Qwen-VL. Usage | API reference
The Qwen3-VL model offers significant improvements over Qwen2.5-VL:
Agent interaction: It operates computer and mobile phone interfaces, detects graphical user interface (GUI) elements and understands their functions, and invokes tools to perform tasks. It achieves top-tier performance in evaluations such as OS World.
Visual coding: It generates code from images or videos. For example, you can create HTML, CSS, and JavaScript code from design drafts or website screenshots.
Spatial intelligence: It supports 2D and 3D positioning and accurately determines object orientation, perspective changes, and occlusion relationships.
Long video understanding: It understands video content up to 20 minutes long and can pinpoint specific moments with second-level accuracy.
Deep thinking: It excels at capturing details and analyzing causality, achieving top-tier performance in evaluations such as MathVista and MMMU.
OCR: It supports 33 languages and performs more stably in scenarios that involve complex lighting, blur, or tilt. It also significantly improves recognition accuracy for rare characters, ancient script, and technical terms.
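To exercise the capabilities above, a Qwen3-VL request combines image and text content in one message. The sketch below uses the OpenAI-compatible multimodal message format that Model Studio supports; the image URL is a placeholder.

```python
# Sketch: an OpenAI-compatible multimodal message for a Qwen3-VL model,
# combining an image URL with a text question. The URL is a placeholder.
def vl_messages(image_url, question):
    return [{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }]

messages = vl_messages("https://example.com/chart.png",
                       "What trend does this chart show?")
```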
International (Singapore)
Model | Mode | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost (CoT + response) | Free quota |
(Tokens) | (Million tokens) | |||||||
qwen3-vl-235b-a22b-thinking | Thinking only | 126,976 | 81,920 | $0.7 | $8.4 | 1 million tokens each Valid for 90 days after Model Studio is activated. | ||
qwen3-vl-235b-a22b-instruct | Non-thinking only | 129,024 | - | $2.8 | ||||
qwen3-vl-32b-thinking | Thinking only | 131,072 | 126,976 | 81,920 | 32,768 | $0.7 | $8.4 | |
qwen3-vl-32b-instruct | Non-thinking only | 129,024 | - | $2.8 | ||||
qwen3-vl-30b-a3b-thinking | Thinking only | 126,976 | 81,920 | $0.2 | $2.4 | |||
qwen3-vl-30b-a3b-instruct | Non-thinking only | 129,024 | - | $0.8 | ||||
qwen3-vl-8b-thinking | Thinking only | 126,976 | 81,920 | $0.18 | $2.1 | |||
qwen3-vl-8b-instruct | Non-thinking only | 129,024 | - | $0.7 | ||||
Mainland China (Beijing)
Model | Mode | Context window | Max input | Max chain-of-thought | Max response | Input cost | Output cost (CoT + response) | Free quota |
(Tokens) | (Million tokens) | |||||||
qwen3-vl-235b-a22b-thinking | Thinking only | 131,072 | 126,976 | 81,920 | $0.286705 | $2.867051 | No free quota | |
qwen3-vl-235b-a22b-instruct | Non-thinking only | 129,024 | - | $1.146820 | ||||
qwen3-vl-32b-thinking | Thinking only | 131,072 | 126,976 | 81,920 | 32,768 | $0.287 | $2.868 | |
qwen3-vl-32b-instruct | Non-thinking only | 129,024 | - | $1.147 | ||||
qwen3-vl-30b-a3b-thinking | Thinking only | 126,976 | 81,920 | $0.108 | $1.076 | |||
qwen3-vl-30b-a3b-instruct | Non-thinking only | 129,024 | - | $0.431 | ||||
qwen3-vl-8b-thinking | Thinking only | 126,976 | 81,920 | $0.072 | $0.717 | |||
qwen3-vl-8b-instruct | Non-thinking only | 129,024 | - | $0.287 | ||||
Rate limiting
For more information about rate limiting for models, see Rate limiting.