
Qwen3.6-Max-Preview: Smarter, Sharper, Still Evolving

Following the release of Qwen3.6-Plus, we are sharing an early preview of our next proprietary model: Qwen3.6-Max-Preview.

[Banner image: Qwen3.6-Max-Preview]

Following the release of Qwen3.6-Plus, we are sharing an early preview of our next proprietary model: Qwen3.6-Max-Preview. Compared to Qwen3.6-Plus, this preview release brings stronger world knowledge and instruction following, along with significant agentic coding improvements across a wide range of benchmarks. As a preview, the model is still under active development — we are continuing to iterate and expect further gains in subsequent versions.

  • Qwen3.6-Max-Preview is the hosted proprietary model available via Alibaba Cloud Model Studio, featuring:

    • improved agentic coding capability over Qwen3.6-Plus
    • stronger world knowledge and instruction following
    • improved reliability on real-world agent and knowledge tasks
  • You can chat interactively on Qwen Studio or call via API as qwen3.6-max-preview on Alibaba Cloud Model Studio API (coming soon).

Performance

Below we present evaluations of Qwen3.6-Max-Preview against leading frontier models. Compared to Qwen3.6-Plus, the preview release delivers significant improvements in agentic coding (e.g., SkillsBench +9.9, SciCode +6.3, NL2Repo +5.0, Terminal-Bench 2.0 +3.8), stronger world knowledge (SuperGPQA +2.3, QwenChineseBench +5.3), and better instruction following (ToolcallFormatIFBench +2.8).

[Figure: Qwen3.6-Max-Preview benchmark scores against leading frontier models]

Build with Qwen3.6-Max-Preview

Qwen3.6-Max-Preview is coming soon to the Alibaba Cloud Model Studio API as qwen3.6-max-preview. In the meantime, you can try it instantly on Qwen Studio.

API Usage

This release supports the preserve_thinking feature, which preserves the thinking content from all preceding turns in messages; we recommend enabling it for agentic tasks.
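As a minimal sketch of what this looks like on the client side, the snippet below carries the assistant's reasoning trace forward in the message history between turns. The field name reasoning_content mirrors the streaming example later in this post; the overall message shape is an assumption for illustration, not a quote from the Model Studio docs.

```python
# Sketch: carry the reasoning trace across turns so preserve_thinking
# can reuse it. Field names are assumptions based on this post's example.

def append_assistant_turn(messages, answer, reasoning):
    """Append the assistant's reply, keeping its reasoning trace in the
    history for the next preserve_thinking request."""
    turn = {"role": "assistant", "content": answer}
    if reasoning:
        turn["reasoning_content"] = reasoning
    messages.append(turn)
    return messages

messages = [{"role": "user", "content": "Introduce vibe coding."}]

# After streaming a response, suppose we collected:
answer_content = "Vibe coding is ..."
reasoning_content = "The user wants a short overview ..."

append_assistant_turn(messages, answer_content, reasoning_content)
messages.append({"role": "user", "content": "Summarize that in one sentence."})
# This full history, reasoning included, is then sent on the next call
# with extra_body={"preserve_thinking": True}.
```

The point of the pattern is simply that the reasoning from earlier turns stays in messages rather than being dropped, so the model can condition on it in agentic loops.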

Alibaba Cloud Model Studio

Alibaba Cloud Model Studio supports industry-standard protocols, including chat completions and responses APIs compatible with OpenAI’s specification, as well as an API interface compatible with Anthropic.

Example code for chat completions API is provided below:

"""
Environment variables (per official docs):
  DASHSCOPE_API_KEY: Your API Key from https://modelstudio.console.alibabacloud.com
  DASHSCOPE_BASE_URL: (optional) Base URL for compatible-mode API.
    - Beijing: https://dashscope.aliyuncs.com/compatible-mode/v1
    - Singapore: https://dashscope-intl.aliyuncs.com/compatible-mode/v1
    - US (Virginia): https://dashscope-us.aliyuncs.com/compatible-mode/v1
  DASHSCOPE_MODEL: (optional) Model name; override for different models.
"""
from openai import OpenAI
import os

api_key = os.environ.get("DASHSCOPE_API_KEY")
if not api_key:
    raise ValueError(
        "DASHSCOPE_API_KEY is required. "
        "Set it via: export DASHSCOPE_API_KEY='your-api-key'"
    )

client = OpenAI(
    api_key=api_key,
    base_url=os.environ.get(
        "DASHSCOPE_BASE_URL",
        "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
    ),
)

messages = [{"role": "user", "content": "Introduce vibe coding."}]

model = os.environ.get(
    "DASHSCOPE_MODEL",
    "qwen3.6-max-preview",
)
completion = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={
        "enable_thinking": True,
        # "preserve_thinking": True,
    },
    stream=True,
    # Request a final usage chunk so the loop below can print token usage.
    stream_options={"include_usage": True},
)

reasoning_content = ""  # Full reasoning trace
answer_content = ""  # Full response
is_answering = False  # Whether we have entered the answer phase
print("\n" + "=" * 20 + "Reasoning" + "=" * 20 + "\n")

for chunk in completion:
    if not chunk.choices:
        print("\nUsage:")
        print(chunk.usage)
        continue

    delta = chunk.choices[0].delta

    # Stream and collect the reasoning trace
    if hasattr(delta, "reasoning_content") and delta.reasoning_content is not None:
        if not is_answering:
            print(delta.reasoning_content, end="", flush=True)
        reasoning_content += delta.reasoning_content

    # Content received: switch to the answer phase
    if hasattr(delta, "content") and delta.content:
        if not is_answering:
            print("\n" + "=" * 20 + "Answer" + "=" * 20 + "\n")
            is_answering = True
        print(delta.content, end="", flush=True)
        answer_content += delta.content

For more information, please refer to the API documentation.
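For the Anthropic-compatible interface mentioned above, the request shape is sketched below without sending anything over the network. The endpoint path (/v1/messages) and header names follow Anthropic's public Messages API, and the host is assumed to match the compatible-mode base URL used earlier; none of this is taken from the Model Studio docs, so verify against them before use.

```python
import json

# Assumption: same host as the compatible-mode example above.
BASE_URL = "https://dashscope-intl.aliyuncs.com"

def build_messages_request(api_key, model, user_text, max_tokens=1024):
    """Return (url, headers, body) for an Anthropic-style messages call.
    Path and headers are assumptions based on Anthropic's public API."""
    url = f"{BASE_URL}/v1/messages"
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    payload = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_text}],
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_messages_request(
    "your-api-key", "qwen3.6-max-preview", "Introduce vibe coding."
)
```

Posting that body to the returned URL with those headers (via any HTTP client, or the official anthropic SDK pointed at the same base URL) is the intended usage.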

Summary

Qwen3.6-Max-Preview is an early preview of our next proprietary model, delivering meaningful improvements over Qwen3.6-Plus in agentic coding, world knowledge, and instruction following. It achieves the top score on six major coding benchmarks — SWE-bench Pro, Terminal-Bench 2.0, SkillsBench, QwenClawBench, QwenWebBench, and SciCode — with substantial gains over its predecessor. It also demonstrates stronger knowledge (SuperGPQA, QwenChineseBench) and better instruction following (ToolcallFormatIFBench).

As a preview release, Qwen3.6-Max-Preview is still under active development. We are continuing to iterate on the model and expect further improvements in subsequent versions. We welcome community feedback and look forward to seeing what you build. Stay tuned!

Citation

Feel free to cite the following article if you find Qwen3.6-Max-Preview helpful:

@misc{qwen36_max_preview,
    title = {{Qwen3.6-Max-Preview}: Smarter, Sharper, Still Evolving},
    url = {https://qwen.ai/blog?id=qwen3.6-max-preview},
    author = {{Qwen Team}},
    month = {April},
    year = {2026}
}