After you deploy Dify, you can use the DMS AI plug-in on the Model Provider page to integrate private model services and Alibaba Cloud Model Studio services, such as LLM, Embedding, and Rerank services, into Dify.
Integrate LLM services into Dify
Integrate private LLM services
The following models are supported:
DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, DeepSeek-R1-Distill-Qwen-32B, and DeepSeek-R1-Distill-Llama-70B.
Go to the Dify workspace. On the Studio tab, click the account name in the upper-right corner, and then click Settings.

On the Model Provider page, configure relevant parameters such as Model Name, API Key, and Endpoint URL, and then click Save.
The endpoint URL is in the http://172.17.XXX.XXX:XX/v1 format.
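Before you save the configuration, you can optionally check that the private endpoint is reachable from your environment. The following Python sketch assumes that the private DMS AI service exposes an OpenAI-compatible API under the /v1 path; the endpoint address, port, API key, and model name are hypothetical placeholders and must be replaced with the values that you enter on the Model Provider page.
```python
import requests

# Hypothetical values: replace with the endpoint URL, API key, and model name
# that you register on the Model Provider page.
ENDPOINT_URL = "http://172.17.0.10:8000/v1"   # placeholder address and port
API_KEY = "your-api-key"                      # placeholder key
MODEL_NAME = "DeepSeek-R1-Distill-Qwen-7B"    # one of the supported models

# Send a minimal chat completion request to the OpenAI-compatible endpoint.
response = requests.post(
    f"{ENDPOINT_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
If the request returns a valid completion, the same endpoint URL and API key can be used in the Dify configuration.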
Integrate the LLM services provided by Alibaba Cloud Model Studio
Before using the LLM services provided by Alibaba Cloud Model Studio, you need to enable Internet access for the VPC where the Dify instance is deployed. In this example, DeepSeek-R1 is used.
Go to the Dify workspace. On the Studio tab, click the account name in the upper-right corner, and then click Settings.
On the Model Provider page, set the Model Type parameter to LLM and configure relevant parameters such as Model Name, API Key, and Endpoint URL. Then, click Save.
Model Name: the name of the model deployed in Alibaba Cloud Model Studio.
API Key: the API key obtained from Alibaba Cloud Model Studio.
Endpoint URL: the endpoint used to connect to the Alibaba Cloud Model Studio server. The endpoint of the model services provided by Alibaba Cloud Model Studio is https://dashscope.aliyuncs.com/compatible-mode/v1.
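You can also call the compatible-mode endpoint directly to confirm that the API key and model name are valid before entering them in Dify. The following sketch uses the OpenAI Python SDK; the API key value and the model name deepseek-r1 are assumptions and must match the values shown in your Model Studio console.
```python
from openai import OpenAI

# Hypothetical API key: replace with the API key obtained from
# Alibaba Cloud Model Studio.
client = OpenAI(
    api_key="sk-xxx",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# "deepseek-r1" is assumed to be the deployed model name; use the exact
# name displayed in your Model Studio console.
completion = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion.choices[0].message.content)
```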

Integrate Embedding or Rerank services into Dify
Use private Embedding or Rerank services
The following Embedding and Rerank models are supported by DMS AI.
Model type | Model name | Maximum context size |
Embedding | bge-m3 | 8192 |
Embedding | bge-large-zh-v1.5 | 512 |
Rerank | bge-reranker-v2-m3 | 8192 |
Step 1: Register model information in DMS AI
Go to the Dify workspace. On the Studio tab, click the account name in the upper-right corner, and then click Settings.
On the Model Provider page, set the Model Type parameter to Text Embedding or Rerank and configure relevant parameters. Then, click Save.
Note: You only need to configure the Model Name, Service Provider, API Key, and Endpoint URL parameters.
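Before you register the models, you can check that the private services respond as expected. The sketch below assumes that the Embedding service exposes an OpenAI-compatible /embeddings route under the endpoint URL and that the Rerank service follows the common Jina/Cohere-style /rerank convention; the endpoint URLs and the API key are hypothetical placeholders, and the actual routes may differ for your deployment.
```python
import requests

API_KEY = "your-api-key"                      # placeholder key
EMBEDDING_URL = "http://172.17.0.11:8000/v1"  # placeholder Embedding endpoint
RERANK_URL = "http://172.17.0.12:8000/v1"     # placeholder Rerank endpoint
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Embedding check: OpenAI-compatible /embeddings route with the bge-m3 model.
emb = requests.post(
    f"{EMBEDDING_URL}/embeddings",
    headers=HEADERS,
    json={"model": "bge-m3", "input": ["Dify knowledge base test"]},
    timeout=60,
)
emb.raise_for_status()
print("embedding dimension:", len(emb.json()["data"][0]["embedding"]))

# Rerank check: assumes a Jina/Cohere-style /rerank route.
rerank = requests.post(
    f"{RERANK_URL}/rerank",
    headers=HEADERS,
    json={
        "model": "bge-reranker-v2-m3",
        "query": "What is Dify?",
        "documents": ["Dify is an LLM application platform.", "Unrelated text."],
    },
    timeout=60,
)
rerank.raise_for_status()
print("rerank results:", rerank.json())
```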

Step 2: Select the model on the knowledge base configuration page
Go to the Dify workspace.
On the Knowledge tab, click Create Knowledge.

On the knowledge base configuration page that appears, select the model that you want to integrate into Dify.
Use Embedding or Rerank services provided by AnalyticDB for PostgreSQL
Step 1: Register model information in DMS AI
Go to the Dify workspace. On the Studio tab, click the account name in the upper-right corner, and then click Settings.
On the Model Provider page, set the Model Type parameter to Text Embedding or Rerank and configure relevant parameters. Then, click Save.
Note: You only need to configure the Model Name, Service Provider, Access Key, Secret Key, Instance ID, Region ID, and Context Size parameters. For more information, see View the information about AccessKey pairs of a RAM user and Rerank.
The maximum value of the Context Size parameter is 2048.
For more information about the model parameters, see Rerank.
Step 2: Select the model on the knowledge base configuration page
Go to the Dify workspace.
On the Knowledge tab, click Create Knowledge.

On the knowledge base configuration page that appears, select the model that you want to integrate into Dify.