The task orchestration feature of Data Management (DMS) allows you to orchestrate and schedule tasks. You can create a task flow that contains one or more task nodes to implement complex scheduling logic and improve data development efficiency.
Supported database types
- Relational databases
  - MySQL: ApsaraDB RDS for MySQL, PolarDB for MySQL, MyBase for MySQL, PolarDB for Xscale, and MySQL databases from other sources
  - SQL Server: ApsaraDB RDS for SQL Server, MyBase for SQL Server, and SQL Server databases from other sources
  - PostgreSQL: ApsaraDB RDS for PostgreSQL, PolarDB for PostgreSQL, MyBase for PostgreSQL, and PostgreSQL databases from other sources
  - OceanBase: ApsaraDB for OceanBase in MySQL mode, ApsaraDB for OceanBase in Oracle mode, and self-managed OceanBase databases
  - PolarDB for PostgreSQL (Compatible with Oracle)
  - Oracle
  - DM
  - Db2
- NoSQL database: ApsaraDB for Lindorm
- Data warehouses
  - AnalyticDB for MySQL
  - AnalyticDB for PostgreSQL
  - Data Lake Analytics (DLA)
  - MaxCompute
  - Hologres
- Object storage: OSS
Task orchestration flowchart

Procedure
- Log on to the DMS console V5.0.
- In the top navigation bar, click DTS. In the left-side navigation pane, choose Data Development > Task Orchestration.
- Create a task flow.
  - Click Create Task Flow.
  - In the Create Task Flow dialog box, specify the Task Flow Name and Description parameters and click OK.
- Create task nodes.
- In the lower part of the page, configure and view information about the task flow.
- Publish the task flow. For more information, see Publish or unpublish a task flow.
Task node types
Category | Task node type | Description | References |
---|---|---|---|
Data integration | DTS data migration | Migrates data of selected tables or all tables from one database to another. This type of node supports full data migration and can migrate both data and schemas. | Configure a DTS data migration node |
Data integration | Batch Integration | Synchronizes data between data sources. You can use this type of node in scenarios such as data migration and data transmission. | Configure a batch integration node |
Data processing | Single Instance SQL | Executes SQL statements in a specific relational database. Note: If you enable the lock-free schema change feature for the specified database instance, DMS uses this feature when you run Single Instance SQL tasks. This prevents tables from being locked. For more information, see Enable the lock-free schema change feature. | N/A |
Data processing | Cross-Database Spark SQL | Uses the Spark engine to process and transmit large amounts of data across databases. You can use this type of node for cross-database data synchronization and processing. For a rough example, see the sketch after this table. | Configure a cross-database Spark SQL node |
Data processing | DLA Serverless Spark | Configures Spark jobs based on the serverless Spark engine of Data Lake Analytics (DLA). | Create and run Spark jobs |
Data processing | Lock-free Data Change | Uses the lock-free data change feature of DMS to perform data change operations such as UPDATE and DELETE. Note: You can use this type of node only if the lock-free schema change feature is enabled for the database instance. For more information, see Enable the lock-free schema change feature. | Overview |
Data processing | DLA Spark SQL | Uses SQL statements to submit jobs to the Spark clusters of DLA. | N/A |
General operations | SQL Assignment for Single Instance | Assigns the data that is obtained by using a SELECT statement to output variables. These output variables can be used as input variables of downstream nodes. For a sketch of how an output variable can feed a downstream conditional branch, see the example after this table. | Configure an SQL assignment node |
General operations | Conditional Branch | Evaluates a conditional expression in a task flow. During the execution of a task flow, if the conditional expression of a conditional branch node evaluates to true, the subsequent tasks are run. Otherwise, the subsequent tasks are not run. | Configure a conditional branch node |
General operations | DLA one-click DW | Uploads the data in a database to Object Storage Service (OSS) to build a data warehouse by using the one-click data warehousing feature of DLA. | One-click data warehousing |
General operations | DBS Backup | Uses Database Backup (DBS) to back up data from a database to the OSS bucket provided by DBS. | DBS |
General operations | Script | Uses Database Gateway to run scripts periodically or at a specific point in time. | Configure a script node |
Status check | Check Whether Data Exists in Table After Specified Time | Checks whether incremental data exists in a table after a specific point in time. | N/A |
Status check | Lindorm File Check | Checks whether a file exists in an ApsaraDB for Lindorm instance that supports Hadoop Distributed File System (HDFS). | N/A |
Status check | SQL Status Check | Checks the status of data by using SQL statements. For example, you can check whether a class contains more than 10 boys. A sketch of this example is shown after this table. | N/A |
Status check | Audit Task | Checks the data quality of a table. After you specify a quality rule for the table and a scheduling cycle for the audit task, DMS checks the data quality of the table and generates a report. | N/A |
Status check | Check for Task Flow Dependency | Configures self-dependency for a task flow and dependencies across task flows. You can configure the task flow to depend on another task flow or a task node. | Configure a dependency check node for a task flow |
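
The following sketch illustrates how an SQL Assignment for Single Instance node and a downstream Conditional Branch node can work together. The table name, column names, and the variable name `row_count` are hypothetical, and the exact variable reference syntax may differ in your DMS version; see Configure an SQL assignment node and Configure a conditional branch node for the authoritative syntax.

```sql
-- Hypothetical SQL Assignment node query: the value of the first column
-- is captured as an output variable (assumed here to be named row_count).
SELECT COUNT(*) AS row_count
FROM orders                      -- hypothetical table
WHERE gmt_create >= CURDATE();   -- rows created today

-- Hypothetical Conditional Branch expression on a downstream node:
--   ${row_count} > 0
-- If the expression evaluates to true, the subsequent task nodes run;
-- otherwise, they are skipped.
```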
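
As a sketch of the SQL Status Check example mentioned above (checking whether a class contains more than 10 boys), the node could run a statement such as the following. The table and column names are hypothetical; whether the check passes depends on the check rule that you configure for the query result on the node.

```sql
-- Hypothetical status-check query: counts the boys in class 1.
-- A check rule such as "result > 10", configured on the node,
-- then determines whether the status check passes.
SELECT COUNT(*) AS boy_count
FROM students          -- hypothetical table
WHERE class_id = 1     -- hypothetical class identifier
  AND gender = 'male';
```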
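
As a rough sketch of what a Cross-Database Spark SQL node might run, the statement below aggregates one day of orders from a source database into a reporting table in another database. The database aliases (`source_db`, `report_db`), table names, and the `${bizdate}` time variable are hypothetical and are normally configured on the node itself; see Configure a cross-database Spark SQL node for the exact setup.

```sql
-- Hypothetical cross-database Spark SQL: read from a source database alias
-- and write into a reporting database alias configured on the node.
INSERT INTO report_db.daily_order_summary   -- hypothetical target table
SELECT order_date,
       COUNT(*)    AS order_count,
       SUM(amount) AS total_amount
FROM source_db.orders                       -- hypothetical source table
WHERE order_date = '${bizdate}'             -- hypothetical date variable
GROUP BY order_date;
```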