Dataphin: Built-in models for intelligent applications

Last Updated: Nov 21, 2025

Dataphin offers a diverse selection of built-in large models, including those from Alibaba Cloud Model Studio and DeepSeek, to support a wide range of business requirements.

Some models support a deep thinking mode. If deep thinking cannot be disabled for a model, this is noted in the table.

| Model service provider | Model display name | Model ID | Description | Deep thinking supported |
| --- | --- | --- | --- | --- |
| Alibaba Cloud (Model Studio) | Qwen-Max | qwen-max | The best-performing model in the Qwen series. It is suitable for complex, multi-step tasks. | No |
| Alibaba Cloud (Model Studio) | Qwen-Max-Latest | qwen-max-latest | The best-performing model in the Qwen series. It is suitable for complex, multi-step tasks. Its capabilities are always the same as the latest snapshot version. | No |
| Alibaba Cloud (Model Studio) | Qwen-Plus | qwen-plus | A balanced model. Its inference performance, cost, and speed are between those of Qwen-Max and Qwen-Turbo. It is suitable for moderately complex tasks. | Yes |
| Alibaba Cloud (Model Studio) | Qwen-Plus-Latest | qwen-plus-latest | A balanced model. Its inference performance, cost, and speed are between those of Qwen-Max and Qwen-Turbo. It is suitable for moderately complex tasks. Its capabilities are always the same as the latest snapshot version. | Yes |
| Alibaba Cloud (Model Studio) | Qwen-Long | qwen-long | The model in the Qwen series with the longest context window. It offers balanced capabilities at a low cost. It is suitable for tasks such as long-text analysis, information extraction, summarization, and classification. | No |
| Alibaba Cloud (Model Studio) | Qwen-Long-Latest | qwen-long-latest | The model in the Qwen series with the longest context window. It offers balanced capabilities at a low cost. It is suitable for tasks such as long-text analysis, information extraction, summarization, and classification. Its capabilities are always the same as the latest snapshot version. | No |
| Alibaba Cloud (Model Studio) | Qwen3-32b | qwen3-32b | An open-source Qwen model. It excels in inference, human preference alignment, agent capabilities, and multilingual abilities. It has 32 billion (32B) parameters. | Yes |
| Alibaba Cloud (Model Studio) | Qwen3-235b-a22b | qwen3-235b-a22b | An open-source Qwen model. It excels in inference, human preference alignment, agent capabilities, and multilingual abilities. It has 235 billion (235B) parameters. | Yes |
| Alibaba Cloud (Model Studio) | Qwen-Coder | qwen-coder-plus | The Qwen code model. | No |
| Alibaba Cloud (Model Studio) | DeepSeek-R1 | deepseek-r1 | This high-performance version has powerful reasoning capabilities. It delivers strong performance on tasks such as mathematics, coding, and natural language inference. | Yes (deep thinking mode cannot be disabled) |
| Alibaba Cloud (Model Studio) | DeepSeek-V3 | deepseek-v3 | A self-developed Mixture of Experts (MoE) model. It excels in long-text processing, coding, mathematics, encyclopedic knowledge, and Chinese language capabilities. | No |
| Alibaba Cloud (Model Studio) | DeepSeek-V3.1 | deepseek-v3.1 | A 685B high-performance model released on August 20, 2025. It excels in long-text processing, coding, mathematics, encyclopedic knowledge, and Chinese language capabilities. | No |
| Alibaba Cloud (Model Studio) | Kimi-K2 | Moonshot-Kimi-K2-Instruct | The Kimi series models are MoE language models from Moonshot AI. They show excellent performance in cutting-edge knowledge, inference, and coding tasks. | No |
| Alibaba Cloud (Model Studio) | Qwen3-Next-80B-A3B(Thinking) | qwen3-next-80b-a3b-thinking | A new-generation open-source model based on Qwen3 that uses the thinking mode. Compared to the previous version (Qwen3-235B-A22B-Thinking-2507), it has improved instruction-following capabilities and provides more concise summaries. | Yes (thinking mode cannot be disabled) |
| Alibaba Cloud (Model Studio) | Qwen3-Next-80B-A3B(Instruct) | qwen3-next-80b-a3b-instruct | A new-generation open-source model based on Qwen3 that uses the non-thinking mode. Compared to the previous version (Qwen3-235B-A22B-Instruct-2507), it has better Chinese text understanding, enhanced logical reasoning, and improved performance on text generation tasks. | No |
| Alibaba Cloud (Model Studio) | Qwen3-235B-A22B(Thinking-2507) | qwen3-235b-a22b-thinking-2507 | A new-generation open-source model based on Qwen3 that uses the thinking mode. It is an upgraded version of qwen3-235b-a22b (thinking mode). | Yes (thinking mode cannot be disabled) |
| Alibaba Cloud (Model Studio) | Qwen3-235B-A22B(Instruct-2507) | qwen3-235b-a22b-instruct-2507 | A new-generation open-source model based on Qwen3 that uses the non-thinking mode. It is an upgraded version of qwen3-235b-a22b (non-thinking mode). | No |
| DeepSeek | DeepSeek-thinking mode | deepseek-reasoner | This high-performance version has powerful reasoning capabilities. It delivers strong performance on tasks such as mathematics, coding, and natural language inference. | Yes (deep thinking mode cannot be disabled) |
| DeepSeek | DeepSeek-non-thinking mode | deepseek-chat | A self-developed MoE model. It excels in long-text processing, coding, mathematics, encyclopedic knowledge, and Chinese language capabilities. | No |
| Alibaba Cloud (AI Stack) | Qwen3-32B | Qwen3-32B | An open-source Qwen model. It excels in inference, human preference alignment, agent capabilities, and multilingual abilities. It has 32 billion (32B) parameters. | No |
| Alibaba Cloud (AI Stack) | Qwen3-235B-A22B-Instruct-2507 | Qwen3-235B-A22B-Instruct-2507 | A high-performance language model in the Qwen series designed for complex tasks. Released in July 2025, this model is an upgraded version of Qwen3-235B-A22B and supports the non-thinking mode. It excels in inference, general capabilities, and tool calling. It is suitable for scenarios that require high precision and complex logical processing. | No |
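
The model IDs in the table are the values used when a model is invoked programmatically. This page does not describe Dataphin's own invocation path, so the following is only a minimal sketch, assuming direct access to Alibaba Cloud Model Studio's OpenAI-compatible endpoint, of how a model ID such as qwen-plus and its deep thinking switch might map onto an API request. The DASHSCOPE_API_KEY variable, the base_url value, and the enable_thinking flag are assumptions about Model Studio, not something this document defines.

```python
# A minimal sketch, not Dataphin-specific: it assumes direct access to Alibaba Cloud
# Model Studio's OpenAI-compatible endpoint and uses a model ID from the table above.
# The endpoint URL, the DASHSCOPE_API_KEY variable, and the enable_thinking flag are
# assumptions about Model Studio rather than anything defined on this page.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # hypothetical environment variable
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# qwen-plus supports deep thinking, so the mode can be toggled per request.
# Streaming is used because thinking output is returned incrementally.
stream = client.chat.completions.create(
    model="qwen-plus",  # model ID from the table
    messages=[{"role": "user", "content": "Summarize last month's sales trend."}],
    stream=True,
    extra_body={"enable_thinking": True},  # assumed flag; omit for non-thinking models
)

for chunk in stream:
    delta = chunk.choices[0].delta
    # Reasoning output (if any) and the final answer arrive as separate fields.
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="", flush=True)
    elif delta.content:
        print(delta.content, end="", flush=True)
```

For models marked "Yes (deep thinking mode cannot be disabled)", such as deepseek-r1, reasoning output is always produced, so a per-request toggle like the one shown above would not apply.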