Tongyi Qianwen (Qwen)

Full-range, open-source, multimodal, and multi-functional

About Qwen

Alibaba Cloud provides Tongyi Qianwen (Qwen), a series of large language models (LLMs) and multimodal large language models (MLLMs), to the open-source community. The latest Qwen2.5 models are pre-trained on up to 20 trillion tokens and then fine-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to improve alignment with user needs. The result is stronger instruction following, better understanding of structured data, and improved generation of long texts and structured outputs.

Multimodal models, including Qwen-VL (a vision-language model) and Qwen-Audio (an audio-language model), support cross-modal processing, while the latest Qwen2.5-Omni streams real-time responses to text, image, audio, and video inputs through its Thinker-Talker architecture.

Open-source Qwen models are released under the Apache 2.0 license and available on Hugging Face, ModelScope, and GitHub. You can also try, customize, and deploy Qwen models in Alibaba Cloud Model Studio.

Leading Performance in Multiple Dimensions

Qwen outperforms other open-source baseline models of similar sizes on benchmark datasets covering natural language understanding, mathematical problem-solving, coding, and more.

Easy and Low-Cost Customization

You can deploy Qwen models with a few clicks in PAI-EAS and fine-tune them on your data, whether stored on Alibaba Cloud or in external sources, to perform industry- or enterprise-specific tasks.

Applications for the Generative AI Era

You can leverage Qwen APIs to build generative AI applications for a broad range of scenarios, such as writing assistance, image generation, and audio analysis, to improve work efficiency in your organization and transform the customer experience.
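As a minimal sketch of what calling a Qwen API can look like, the snippet below builds a chat-completion request against Model Studio's OpenAI-compatible endpoint using only the Python standard library. The base URL and the `qwen-plus` model name follow Model Studio's documented OpenAI-compatible mode, but check the current documentation for your region; the `DASHSCOPE_API_KEY` environment variable is assumed to hold your Model Studio API key.

```python
# Sketch: one chat turn with a Qwen model via Alibaba Cloud Model Studio's
# OpenAI-compatible endpoint. Only the request is built eagerly; the actual
# network call (which needs a valid API key) is shown commented out.
import json
import os
import urllib.request

# International-region base URL per Model Studio docs; verify for your region.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_request(model: str, user_text: str) -> urllib.request.Request:
    """Build a chat-completion request; no network traffic happens here."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_text},
        ],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # API key is read from the environment; empty if unset.
            "Authorization": f"Bearer {os.environ.get('DASHSCOPE_API_KEY', '')}",
        },
    )

req = build_request("qwen-plus", "Draft a two-sentence product announcement.")
# To actually send the request (requires a valid API key):
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, you could equally point an existing OpenAI SDK client at the same base URL instead of hand-building requests.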

What Qwen Can Do

Try Qwen Models on Alibaba Cloud Model Studio