This topic describes the core features that XXL-JOB provides to schedule Dify workflows and walks you through the detailed steps.
Background information
Dify workflows are suitable for a wide range of business scenarios that require time-based scheduling. Sample scenarios:
Risk monitoring: Scan risk data every minute, analyze potential risk events by using large language models (LLMs), and trigger alerts in a timely manner.
Data analysis: Pull financial data on a daily basis, analyze the data by using LLMs, and provide investment ideas for investors.
Content generation: Automatically summarize daily work content and generate daily reports.
Dify workflows do not natively support time-based scheduling. You can use XXL-JOB for workflow scheduling and status monitoring.
Core features
XXL-JOB provides the following core features to schedule Dify workflows.
Dify workflows: Self-managed Dify workflows over the Internet and Alibaba Cloud Dify workflows over a virtual private cloud (VPC) are supported.
Flexible time configuration: The cron, api, fixed_rate, fixed_delay, and one_time time types, custom time zones, and custom calendars are supported (see the examples after this list).
Enterprise-level alerting and monitoring
Flexible alert policies: Job-level failure alerts, timeout alerts, success notifications, and threshold-triggered alerts at instance and application levels are supported.
Multiple message notification methods: Text messages, phone calls, webhooks, and emails are supported.
Enterprise-level observability
Scheduling dashboard: Instance-level and application-level scheduling statuses are displayed, including scheduling, success, and failure curves.
Execution history: The execution history of Dify workflows, including status, basic information, inputs and outputs, time consumption, and token consumption, is recorded.
Scheduling events: Scheduling events for each Dify workflow, including workflow-related and node-related events, are recorded.
Node tracking: The details of a single execution of a Dify workflow, including the execution results of all nodes, are recorded. Drill-down into loop nodes, iteration nodes, and conditional branch nodes is supported.
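The following examples illustrate the time types named in the preceding list. The semantics are based on general XXL-JOB conventions and are illustrative rather than authoritative:
cron: A Quartz-style expression in which the first field denotes seconds. For example, 0 0/1 * * * ? triggers the workflow at the start of every minute, which matches the risk monitoring scenario described earlier.
api: Does not trigger on a schedule. The job runs only when it is explicitly triggered, for example by an API call or a manual run.
fixed_rate: Triggers the workflow at a fixed interval, for example every 60 seconds, measured from one trigger time to the next.
fixed_delay: Triggers the workflow a fixed number of seconds after the previous execution finishes.
one_time: Triggers the workflow once at a specified point in time.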
Prerequisites
The Dify service is deployed. For more information, see Install ack-dify.
An XXL-JOB instance is created. The engine version of the instance must be later than 2.2.0.
Procedure
1. (Optional) Configure an internal endpoint of the API server for the Dify service
Log on to the Container Service for Kubernetes (ACK) console.
In the left-side navigation pane of the Clusters page, click the cluster name to access the cluster where the Dify service is deployed.
In the left-side navigation pane, choose Network > Services. Then, click Create.
In the Create Service dialog box, select SLB for Service Type, select Internal Access from the Access Method drop-down list, and then enter component and proxy in Backend, as shown in the following figure. Then, configure other parameters and click OK. A scripted equivalent of this configuration is sketched at the end of this step.

Confirm that the internal endpoint of the API server is generated.

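If you prefer to script this step instead of using the console, the following is a minimal sketch that creates an equivalent internal Service by using the official Kubernetes Python client. The Service name, namespace, and port are assumptions for illustration; the component: proxy selector mirrors the Backend setting described above, and the annotation is the ACK annotation that provisions an internal (VPC-only) SLB instance.

from kubernetes import client, config

# Load credentials from the local kubeconfig of the target ACK cluster.
config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="dify-proxy-internal",  # hypothetical Service name
        namespace="dify",            # assumed namespace of the Dify release
        annotations={
            # Provision an internal (VPC-only) SLB instance for the Service.
            "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type": "intranet"
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",              # SLB-backed Service
        selector={"component": "proxy"},  # matches the Backend setting above
        ports=[client.V1ServicePort(port=80, target_port=80)],  # assumed port
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="dify", body=svc)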
2. Create a Dify workflow
Log on to the Dify console, click Studio in the upper-right corner, and then select Create from Template to create a workflow.
Log on to the MSE console, and select a region in the top navigation bar.
In the left-side navigation pane, choose .
Click the ID of the desired instance. In the left-side navigation pane, click Task Management. On the page that appears, click Create Task. In the Create Task panel, select Dify Workflow for Job Type. For more information about how to create a job, see Task management.

Endpoint: Set this parameter to the endpoint of the API server that corresponds to the Dify workflow. You can obtain the endpoint of the API server from the upper-right corner of the API Access page after you log on to the Dify console. If you use an Alibaba Cloud self-managed Dify workflow, we recommend that you change the parameter value to an internal endpoint, as instructed in "1. (Optional) Configure an internal endpoint of the API server for the Dify service".
API Key: Set this parameter to the API key of the Dify workflow. Different workflows have different keys. You can click API Key in the upper-right corner of the API Access page to obtain it.
Input: Enter the input of the workflow in the JSON format. The input is the same as the value of inputs in Body.
The following code provides an example of the inputs parameter value:
{"input_text": "what is your name"}
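To understand what the scheduler invokes on each trigger, you can call the workflow yourself. The following is a minimal sketch based on the public Dify workflow API; the endpoint, API key, and user value are placeholders that you must replace with your own values.

import requests

ENDPOINT = "http://192.168.0.10/v1"  # placeholder: API server endpoint of the workflow
API_KEY = "app-xxxxxxxxxxxxxxxx"     # placeholder: API key of the workflow

resp = requests.post(
    f"{ENDPOINT}/workflows/run",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": {"input_text": "what is your name"},  # same JSON as the Input parameter
        "response_mode": "blocking",  # wait synchronously for the workflow to finish
        "user": "scheduler-test",     # arbitrary identifier of the caller
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"]["outputs"])  # outputs of the workflow run

If this call returns the expected outputs, you can configure the same endpoint, API key, and inputs in the job.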
3. Verify the result
Find the job that you created, and click Run once in the Operation column to manually trigger a test run.

Click More in the Operation column of the created job. In the list that appears, select Scheduling Records to view job execution records.

Click details in the Operation column of the job to view workflow information on the Basic information, Input and Output, and Track tabs.
Basic information tab

Input and Output tab

Track tab. Drill-down is supported if iteration nodes, loop nodes, and branch nodes are involved.

