
Microservices Engine: Schedule Dify workflows by using XXL-JOB

Last Updated: Jun 06, 2025

This topic describes the core features that XXL-JOB provides for scheduling Dify workflows and walks through the detailed procedure.

Background information

Dify workflows are suitable for a wide range of business scenarios that require time-based scheduling. Sample scenarios:

  • Risk monitoring: Scan risk data every minute, analyze potential risk events by using large language models (LLMs), and trigger alerts in a timely manner.

  • Data analysis: Pull financial data on a daily basis, analyze the data by using LLMs, and provide investment ideas for investors.

  • Content generation: Automatically summarize daily work content and generate daily reports.

Important

Dify workflows do not natively support time-based scheduling. You can use XXL-JOB for workflow scheduling and status monitoring.

Core features

XXL-JOB provides the following core features to schedule Dify workflows.

  • Multiple Dify deployment types: Self-managed Dify workflows accessed over the Internet and Alibaba Cloud Dify workflows accessed over a virtual private cloud (VPC) are supported.

  • Flexible time configuration: The cron, api, fixed_rate, fixed_delay, and one_time time types, custom time zones, and custom calendars are supported.

  • Enterprise-level alerting and monitoring

    • Flexible alert policies: Job-level failure alerts, timeout alerts, success notifications, and threshold-triggered alerts at instance and application levels are supported.

    • Multiple message notification methods: Text messages, phone calls, webhooks, and emails are supported.

  • Enterprise-level observability

    • Scheduling dashboard: Instance-level and application-level scheduling statuses are displayed, including scheduling, success, and failure curves.

    • Execution history: The execution history of Dify workflows, including status, basic information, inputs and outputs, time consumption, and token consumption, is recorded.

    • Scheduling events: Scheduling events for each Dify workflow, including workflow-related and node-related events, are recorded.

    • Node tracking: The information about a single execution of a Dify workflow, including the execution results of all nodes, is recorded. Drill-down into loop nodes, iteration nodes, and conditional branch nodes is supported.
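Among the time types listed above, fixed_rate and fixed_delay are easy to confuse. The following sketch illustrates the semantics these names conventionally carry (assumed here; verify the behavior against your XXL-JOB instance): with fixed_rate, the next run starts a fixed interval after the previous run started, while with fixed_delay, it starts a fixed interval after the previous run finished.

```python
def fixed_rate_starts(interval, durations):
    """Start times (seconds from t=0): next start = previous START + interval.

    The run duration does not affect the schedule.
    """
    starts, t = [], 0
    for _ in durations:
        starts.append(t)
        t += interval
    return starts

def fixed_delay_starts(interval, durations):
    """Start times (seconds from t=0): next start = previous END + interval."""
    starts, t = [], 0
    for d in durations:
        starts.append(t)
        t += d + interval
    return starts

# Three runs that each take 2 seconds, scheduled every 5 seconds:
print(fixed_rate_starts(5, [2, 2, 2]))   # [0, 5, 10]
print(fixed_delay_starts(5, [2, 2, 2]))  # [0, 7, 14]
```

For a long-running workflow, fixed_delay prevents overlapping runs, whereas fixed_rate keeps a steady cadence.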

Prerequisites

Procedure

1. (Optional) Configure an internal endpoint of the API server for the Dify service

  1. Log on to the Container Service for Kubernetes (ACK) console.

  2. In the left-side navigation pane of the Clusters page, click the cluster name to access the cluster where the Dify service is deployed.

  3. In the left-side navigation pane, choose Network > Services. Then, click Create.

  4. In the Create Service dialog box, select SLB for Service Type, select Internal Access from the Access Method drop-down list, and then enter component and proxy in Backend, as shown in the following figure. Then, configure other parameters and click OK.

    image

    image

  5. Confirm that the internal endpoint of the API server is generated.

    image
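The Service created in the console above can also be declared in YAML. The following is a minimal sketch, not a definitive manifest: the Service name, the app: dify-api selector label, and the ports are hypothetical placeholders that you must replace with the values from your own Dify deployment. The annotation that requests an internal-facing SLB instance is specific to ACK.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dify-api-internal
  annotations:
    # ACK-specific annotation: provision an internal (intranet) SLB instance.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
spec:
  type: LoadBalancer
  selector:
    app: dify-api      # hypothetical label; match the pods of your Dify API component
  ports:
    - port: 80
      targetPort: 5001 # placeholder; use the port that your Dify API server listens on
```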

2. Create a Dify workflow

  1. Log on to the Dify console, click Studio in the upper-right corner, and then select Create from Template to create a workflow.

  2. Log on to the MSE console, and select a region in the top navigation bar.

  3. In the left-side navigation pane, choose Task Scheduling > XXL-JOB Version.

  4. Click the ID of the desired instance. In the left-side navigation pane, click Task Management. On the page that appears, click Create Task. In the Create Task panel, select Dify Workflow for Job Type. For more information about how to create a job, see Task management.

    image

    1. Endpoint: Set this parameter to the endpoint of the API server that corresponds to the Dify workflow. You can obtain the endpoint in the upper-right corner of the API Access page after you log on to the Dify console. If your Dify service is self-managed on Alibaba Cloud, we recommend that you change the parameter value to an internal endpoint, as instructed in "1. (Optional) Configure an internal endpoint of the API server for the Dify service".

    2. API Key: Set this parameter to the API key of the Dify workflow. Different workflows have different keys. You can click API Key in the upper-right corner of the API Access page to obtain it.

    3. Input: Enter the workflow input in JSON format. This value corresponds to the inputs field in the request body.

      image

      The following code provides an example of the inputs parameter value.

      {"input_text": "what is your name"}
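For reference, the Input value you enter here is ultimately passed as the inputs field of a call to Dify's workflow API (POST /v1/workflows/run, per Dify's published API reference). The following sketch builds, but does not send, such a request; the endpoint, API key, and input field name are placeholders to replace with the values from your API Access page.

```python
import json
import urllib.request

def build_workflow_run_request(endpoint, api_key, inputs, user="xxl-job"):
    """Build (but do not send) the HTTP request for a blocking workflow run.

    endpoint, api_key, and the keys in inputs are placeholders; take the real
    values from the API Access page of your Dify workflow.
    """
    url = endpoint.rstrip("/") + "/v1/workflows/run"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "inputs": inputs,             # the same JSON you enter in the Input field
        "response_mode": "blocking",  # wait for the workflow result
        "user": user,                 # caller identifier required by the Dify API
    }).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = build_workflow_run_request(
    "https://api.dify.example.com",       # placeholder endpoint
    "app-xxxxxxxx",                       # placeholder API key
    {"input_text": "what is your name"},
)
print(req.full_url)  # https://api.dify.example.com/v1/workflows/run
# To actually invoke the workflow: urllib.request.urlopen(req)
```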

3. Verify the result

  1. Find the job that you created, and click Run once in the Operation column to manually perform a test.

    image

  2. Click More in the Operation column of the created job. In the list that appears, select Scheduling Records to view job execution records.

    image

  3. Click details in the Operation column of the job to view workflow information on the Basic information, Input and Output, and Track tabs.

    • Basic information tab

      image

    • Input and Output tab

      image

    • Track tab. Drill-down is supported if iteration nodes, loop nodes, and branch nodes are involved.

      image

      image