Dify is an open-source platform for developing large language model (LLM) applications. You can build your applications using the model APIs provided by Alibaba Cloud Model Studio.
Prerequisites
Create an API key and ensure that Model Studio has been activated.
1. Configure the model
1.1. Install the model provider
Go to the Dify Marketplace. Under Models, find TONGYI and install the latest version of the plugin.

The TONGYI plugin is not provided by Alibaba Cloud. It is maintained by the Dify team. If an error occurs when you install the latest version, try installing an earlier version.
You can also use the TONGYI plugin for the DeepSeek models from Model Studio.
1.2. Configure the API key
Click your profile picture in the upper-right corner of the page and then click Settings. Under Model Provider, find the TONGYI card and click Config.
If you use a model in an international region (Singapore), enter the API key for International Edition in API Key. Then, set Use International Endpoint to Yes.
If you use a model in the China (Beijing) region, enter the API key for mainland China Edition in API Key. Then, set Use International Endpoint to No.
If the error Invalid API-key provided occurs during API key configuration, try installing an earlier version of the Qwen plugin.

1.3. Select a model
Click Show Models on the TONGYI card. Then, turn on the switch for the model that you want to use.
If the plugin does not include the latest Qwen model, try installing the OpenAI-API-compatible plugin. In the plugin settings, set API endpoint URL to https://dashscope-intl.aliyuncs.com/compatible-mode/v1 (for the Singapore region) or https://dashscope.aliyuncs.com/compatible-mode/v1 (for the Beijing region).
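To confirm that your API key and region match the endpoint you entered, you can run a quick chat completion against the same compatible-mode URL outside Dify. The following is a minimal sketch that uses the openai Python SDK; the DASHSCOPE_API_KEY environment variable and the qwen-plus model name are only example assumptions, so replace them with your own values.

import os
from openai import OpenAI

# Minimal check of the compatible-mode endpoint outside Dify.
# Assumes DASHSCOPE_API_KEY is set and that "qwen-plus" is a model you can call.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    # Beijing region: https://dashscope.aliyuncs.com/compatible-mode/v1
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
resp = client.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(resp.choices[0].message.content)

If this call succeeds but the plugin still fails, the issue is likely in the plugin configuration rather than the API key.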

2. Get started
Dify supports several types of LLM applications. Follow the instructions for your application type.
Chatbot/Agent
1. Create a chatbot or agent
In the Studio, click Create from Blank. Then, click MORE BASIC APP TYPES and create a chatbot or agent.
2. Select a model
In the upper-right corner of the application page, select a model. For this example, under TONGYI, select qwen-plus-latest (Qwen3). Then, turn on the thinking mode and set it to True.

3. Test the conversation
Enter "Who are you?", and the model will think and then respond.

You can also use the Qwen-VL or QVQ model to ask questions about an image. After you select a vision model, a Vision switch appears on the left. Turn on the switch to upload an image in the dialog box on the right.

Chatflow/Workflow
1. Create a Chatflow or workflow
In the Studio, create and open a Chatflow or workflow.
2. Add an LLM node
Add an LLM node to the canvas. Select the node to open the editor. Then, select a model. For this example, select qwen-plus-2025-07-28 (Qwen3), turn on the thinking mode, and set it to True.

If you use the Qwen-VL or QVQ model, turn on the VISION switch for the LLM node:

3. Run the LLM node
Click Add Message. In the USER message field, enter "Who are you?". Then, click the run button in the upper-right corner of the node.
The text field returned by the LLM node contains the thinking process and the response. Use Dify's code execution node and a regular expression to extract them separately.
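For example, if the model's text output wraps the reasoning in <think>...</think> tags (the tag format is an assumption; check what your model version actually returns), a code execution node along the following lines can separate the two parts.

import re

# Sketch of a Dify code execution node (Python 3).
# Assumes the reasoning is wrapped in <think>...</think>; adjust the pattern
# if your model version uses a different format.
def main(text: str) -> dict:
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    thinking = match.group(1).strip() if match else ""
    # Everything outside the <think> block is treated as the final answer.
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return {"thinking": thinking, "answer": answer}

Map the LLM node's text output to the text input variable of the code node, and declare thinking and answer as its output variables.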
Knowledge base
1. Create a knowledge base
Create and open a knowledge base.
2. Select a data source
In this step, upload your knowledge base files.
3. Segment and clean text
In this step, configure the Embedding and Rerank models from Alibaba Cloud Model Studio. This example uses text-embedding-v4 and gte-rerank-v2. Configure the other parameters as needed.
gte-rerank-v2 is only supported in the China (Beijing) region.

You cannot select the multimodal-embedding-v1 model as the Embedding model at this time. Stay tuned for updates.
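If indexing fails, you can first check outside Dify whether your API key can reach the embedding model. The sketch below assumes that the compatible-mode endpoint also serves embeddings and uses text-embedding-v4 as in this example; switch to the Beijing endpoint if your key belongs to that region.

import os
from openai import OpenAI

# Quick check that the API key can call the embedding model.
# Assumes the compatible-mode endpoint serves embeddings for text-embedding-v4.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
emb = client.embeddings.create(model="text-embedding-v4", input="hello knowledge base")
print(len(emb.data[0].embedding))  # prints the vector dimension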
FAQ
Q1: Why do I get an error when configuring the API key in the TONGYI plugin?
A: Common reasons:
The latest version of the plugin might be unstable. Try an earlier version.
You are using an API key from a sub-workspace. Version 0.0.41 of the Qwen plugin checks the permission to call the qwen-turbo model, so add model call permission for qwen-turbo. The Qwen plugin is not officially maintained by Alibaba Cloud, and the validation policy in future versions is subject to change. We recommend using an API key from the default workspace.
The endpoint is set incorrectly. Set Use International Endpoint based on the region of your API key.
Q2: How do I use the Qwen-Omni or Qwen-OCR models?
A: These models cannot be configured directly in Dify. You can access them using the HTTP node in a Chatflow or workflow. For more information, see the curl command in the relevant documentation.
To reduce the risk of an HTTP node timeout, call the models using streaming output.
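The HTTP node sends the same request you would send with curl. As a reference for the request shape, here is a hedged Python sketch of a streaming call against the OpenAI-compatible endpoint; the model name qwen-omni-turbo and the text-only payload are assumptions, so check the model documentation for the exact fields (for example, image or audio inputs).

import json
import os
import requests

# Sketch of the streaming request that a Dify HTTP node would send.
# Assumes the OpenAI-compatible endpoint and the example model "qwen-omni-turbo".
url = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "qwen-omni-turbo",
    "messages": [{"role": "user", "content": "Who are you?"}],
    "stream": True,  # streaming reduces the risk of an HTTP node timeout
}
with requests.post(url, headers=headers, json=payload, stream=True, timeout=300) as r:
    r.raise_for_status()
    for line in r.iter_lines(decode_unicode=True):
        if line and line.startswith("data: ") and line != "data: [DONE]":
            chunk = json.loads(line[len("data: "):])
            if chunk["choices"]:
                print(chunk["choices"][0]["delta"].get("content", ""), end="")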
Q3: How do I use Wan models?
A: Dify does not provide plugins for Wan models. You can use nodes in a Dify Chatflow or workflow for text-to-image and text-to-video generation. Follow these steps:
1. Download and import the workflow template
Download our template: Wan - Text-to-Image Demo.yml or Wan - Text-to-Video Demo.yml. In Studio, click Import DSL file and select the template file that you downloaded.
2. Configure environment variables
Go to the workflow interface, open the environment variable settings, and set the value of DASHSCOPE_API_KEY to your API key.
3. Test the image generation
Click the Run button to run the workflow. For example, entering "a kitten" in the text-to-image workflow produces the following image:

The text-to-video workflow returns a video URL.
Text-to-video generation usually takes more than 5 minutes. Please be patient.
4. Publish as a tool (Optional)
To use the Wan text-to-image or text-to-video features in other LLM applications, click Publish in the upper-right corner and select Workflow as Tool.
The template uses the Singapore region's wan2.2-t2i-flash model for text-to-image and the wan2.1-t2v-turbo model for text-to-video. You can change the model in the STEP1 node. You can also change the regional API endpoint in the STEP1 and STEP3 nodes.
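If you want to adapt the template, it helps to know the request pattern that its HTTP nodes appear to implement: submit an asynchronous task, then poll it until the result is ready. The sketch below illustrates that pattern in Python; the endpoint paths, headers, and response field names are assumptions based on this pattern, so verify them against the Model Studio API reference before relying on them.

import os
import time
import requests

# Hedged sketch of the submit-then-poll pattern used for asynchronous
# text-to-image generation. Paths and field names are assumptions; verify
# them against the Model Studio API reference.
BASE = "https://dashscope-intl.aliyuncs.com"  # Beijing region: https://dashscope.aliyuncs.com
HEADERS = {"Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}"}

# Submit an asynchronous text-to-image task (roughly what STEP1 does).
submit = requests.post(
    f"{BASE}/api/v1/services/aigc/text2image/image-synthesis",
    headers={**HEADERS, "X-DashScope-Async": "enable", "Content-Type": "application/json"},
    json={"model": "wan2.2-t2i-flash", "input": {"prompt": "a kitten"}, "parameters": {"n": 1}},
)
task_id = submit.json()["output"]["task_id"]

# Poll the task until it finishes (roughly what STEP3 does), then print the URL.
while True:
    output = requests.get(f"{BASE}/api/v1/tasks/{task_id}", headers=HEADERS).json()["output"]
    if output["task_status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)
print(output.get("results", [{}])[0].get("url", "task did not succeed"))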
Q4: How do I deploy a private instance of Dify?
A: The Dify cloud service has several limitations, such as a maximum of five applications. For more information about private deployment, see the Alibaba Cloud Dify deployment solutions.