
Microservices Engine:Schedule Dify workflows by using XXL-JOB

Last Updated: Mar 10, 2026

Dify workflows do not natively support time-based scheduling. XXL-JOB on Microservices Engine (MSE) fills this gap: you can schedule Dify workflows on a recurring basis, monitor execution status, and receive alerts on failures or timeouts.

Use cases

Scheduled Dify workflows fit any scenario where a large language model (LLM) task needs to run on a recurring basis:

  • Risk monitoring: Scan risk data every minute, analyze potential threats with an LLM, and trigger alerts in real time.

  • Data analysis: Pull financial data daily, generate LLM-driven insights, and deliver them to stakeholders.

  • Content generation: Summarize the day's work and produce daily reports automatically.

Scheduling options

Both self-managed Dify workflows over the Internet and Alibaba Cloud Dify workflows over a Virtual Private Cloud (VPC) are supported.

Choose the schedule type that best fits your use case:

  • cron: Calendar-based recurring tasks. Example: 0 0 8 * * ? (every day at 08:00).

  • fixed_rate: Consistent intervals, regardless of how long each run takes. Example: every 5 minutes.

  • fixed_delay: Intervals measured from the end of the previous run. Example: 10 seconds after each run completes.

  • one_time: A single future execution. Example: run once at 2026-03-15 14:00.

  • api: On-demand triggering from an external system. Example: triggered by a CI/CD pipeline or monitoring system.

Custom time zones and custom calendars are also supported.
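The practical difference between fixed_rate and fixed_delay shows up when a run takes longer than its interval. The following Python sketch is illustrative only (it is not XXL-JOB code); it demonstrates how the next start time is derived under each policy:

```python
# Illustrative sketch of fixed_rate vs. fixed_delay semantics.
# This is NOT XXL-JOB code; it only demonstrates how the next
# start time is computed under each policy.

def next_starts(policy: str, interval: int, run_duration: int,
                first_start: int, n: int) -> list:
    """Return the first n start times (in seconds) for a job whose
    every run takes `run_duration` seconds, scheduled every
    `interval` seconds."""
    starts = []
    t = first_start
    for _ in range(n):
        starts.append(t)
        if policy == "fixed_rate":
            # Measured from the previous *start*; if a run overruns the
            # interval, the next run begins as soon as the previous ends.
            t = max(t + interval, t + run_duration)
        elif policy == "fixed_delay":
            # Measured from the end of the previous run.
            t = t + run_duration + interval
        else:
            raise ValueError(policy)
    return starts

# A 5-minute (300 s) interval where each run takes 120 s:
print(next_starts("fixed_rate", 300, 120, 0, 3))   # [0, 300, 600]
print(next_starts("fixed_delay", 300, 120, 0, 3))  # [0, 420, 840]
```

With fixed_rate the cadence stays anchored to the clock; with fixed_delay the schedule drifts by the run duration each cycle.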

Alerting and monitoring

Configure alert policies at the job, instance, or application level:

  • Failure alerts: triggered when a job execution fails.

  • Timeout alerts: triggered when execution exceeds the expected duration.

  • Success notifications: sent after successful completion.

  • Threshold-triggered alerts: based on custom metrics at the instance or application level.

Notifications can be delivered by text message, phone call, webhook, or email.
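If you route alerts to a webhook, your receiver typically reformats the notification for a downstream channel. The sketch below is a minimal, hypothetical handler: the field names (jobName, alarmType, message) are assumptions for illustration, not the documented payload; check the actual webhook format of your alert rule in the MSE console before relying on specific fields.

```python
# Hypothetical formatter for an XXL-JOB alert webhook payload.
# The field names (jobName, alarmType, message) are ASSUMPTIONS for
# illustration; verify the real payload shape in the MSE console.
import json

def format_alert(payload: dict) -> str:
    """Turn an alert payload into a one-line message for a chat channel."""
    job = payload.get("jobName", "<unknown job>")
    kind = payload.get("alarmType", "alert")
    detail = payload.get("message", "")
    return f"[{kind}] job {job}: {detail}"

raw = ('{"jobName": "dify-daily-report", '
       '"alarmType": "timeout", "message": "run exceeded 300s"}')
print(format_alert(json.loads(raw)))
# → [timeout] job dify-daily-report: run exceeded 300s
```

In practice this function would sit behind the HTTP endpoint that you register as the webhook URL in the alert policy.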

Observability

  • Scheduling dashboard: View instance-level and application-level scheduling curves for scheduled, successful, and failed executions.

  • Execution history: Review the status, basic information, inputs, outputs, time consumption, and token consumption of each Dify workflow run.

  • Scheduling events: Inspect workflow-related and node-related events for each scheduled execution.

  • Node tracking: Trace the execution result of every node in a single workflow run. Drill-down is supported for loop nodes, iteration nodes, and conditional branch nodes.

Prerequisites

Before you begin, make sure that you have:

  • An XXL-JOB instance on Microservices Engine (MSE).

  • A running Dify service, either self-managed or deployed on Alibaba Cloud, with a workflow that exposes an API key.

Step 1: (Optional) Configure an internal endpoint for the Dify API server

If your Dify service runs on Alibaba Cloud, create an internal Server Load Balancer (SLB) endpoint so that XXL-JOB can reach the Dify API server over the VPC instead of the public Internet.

  1. Log on to the Container Service for Kubernetes (ACK) console.

  2. In the left-side navigation pane of the Clusters page, click the name of the cluster where Dify is deployed.

  3. In the left-side navigation pane, choose Network > Services, and then click Create.

  4. In the Create Service dialog box, configure the following settings:

    • Set Service Type to SLB.

    • Set Access Method to Internal Access.

    • In Backend, set the label key to component and the value to proxy to select the Dify proxy pods.

    Configure the remaining parameters as needed, and then click OK.

  5. Verify that the internal endpoint is generated.

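The console steps above correspond to a Kubernetes Service manifest similar to the following sketch. The annotation shown is ACK's standard way to request an intranet SLB, but the Service name, selector labels, and port are assumptions that depend on how your Dify deployment names its proxy component, so verify them against your cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dify-proxy-internal        # hypothetical name
  annotations:
    # Request an internal (intranet) SLB instead of a public one.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
spec:
  type: LoadBalancer
  selector:
    component: proxy               # assumption: label on the Dify proxy pods
  ports:
    - port: 80
      targetPort: 80
```

Applying an equivalent manifest with kubectl produces the same internal endpoint as the console workflow.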

Step 2: Create a scheduled Dify workflow task

This step walks you through preparing a Dify workflow and then creating a scheduled task in XXL-JOB to run it.

2a. Prepare a Dify workflow

Log on to the Dify console, click Studio in the upper-right corner, and select Create from Template to create a workflow.

2b. Create a task in XXL-JOB

  1. Log on to the MSE console and select a region in the top navigation bar.

  2. In the left-side navigation pane, choose Task Scheduling > XXL-JOB Version.

  3. Click the ID of the target instance. In the left-side navigation pane, click Task Management, and then click Create Task.

  4. In the Create Task panel, set Job Type to Dify Workflow. For details on other task parameters, see Task management.


  5. Configure the Dify-specific parameters:

    • Endpoint: the endpoint of the Dify API server. If your Dify service runs on Alibaba Cloud, use the internal endpoint from Step 1. In the Dify console, go to the API Access page; the endpoint is displayed in the upper-right corner.

    • API Key: the API key for the target workflow. Each workflow has its own key. On the API Access page, click API Key in the upper-right corner.

    • Input: the workflow input in JSON format. This value corresponds to the inputs field in the request body; refer to the workflow's input schema. Example: {"input_text": "what is your name"}
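Before wiring these values into XXL-JOB, you can sanity-check the endpoint, API key, and input JSON by calling the Dify workflow-run API directly. The sketch below uses only the Python standard library; the /workflows/run path and request body follow Dify's workflow API, while the endpoint and key shown are placeholders you must replace:

```python
# Build (and optionally send) a Dify workflow-run request to verify the
# Endpoint, API Key, and Input values before creating the XXL-JOB task.
# The endpoint and API key below are PLACEHOLDERS.
import json
import urllib.request

def build_run_request(endpoint: str, api_key: str,
                      inputs: dict) -> urllib.request.Request:
    """Build the POST request for Dify's workflow-run API.
    `inputs` is the same JSON object you put in the task's Input field."""
    body = json.dumps({
        "inputs": inputs,
        "response_mode": "blocking",   # wait for the run to finish
        "user": "xxl-job-smoke-test",  # arbitrary caller identifier
    }).encode("utf-8")
    return urllib.request.Request(
        f"{endpoint}/workflows/run",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_run_request(
    "http://dify.example.internal/v1",   # placeholder endpoint
    "app-xxxxxxxxxxxx",                  # placeholder API key
    {"input_text": "what is your name"},
)
print(req.full_url)
# To actually send it: urllib.request.urlopen(req), then inspect
# data.status in the JSON response.
```

A successful blocking call returns a JSON body whose data.status field reports the run outcome, which is the same status you later see in the XXL-JOB execution records.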

Step 3: Verify the result

After you create the task, run it once manually to confirm that the Dify API connection, input parameters, and scheduling configuration all work as expected before automatic scheduling begins.

  1. On the Task Management page, find the task and click Run once in the Operation column.


  2. Click More in the Operation column and select Scheduling Records to view execution records.


  3. Click details in the Operation column of an execution record to inspect the workflow run. Three tabs are available:

    • Basic information: displays execution status, timing, and metadata.

    • Input and Output: shows the JSON input sent to the workflow and the output returned.

    • Track: traces every node in the workflow execution. If iteration nodes, loop nodes, or branch nodes are involved, click a node to drill down into its sub-executions.