You can call DTS API operations to configure or query the migration, synchronization, or subscription objects of DTS tasks. This topic describes the related API operations and provides the definitions and configuration examples of these objects.
Related API operations and parameters
| API | Description |
| --- | --- |
| | Configure the migration, synchronization, or subscription objects of a DTS task. |
| | Query the migration, synchronization, or subscription objects of a DTS task. |
Definition of migration, synchronization, or subscription objects
The value of an object-related parameter is a JSON string. The following section describes object-related parameters.
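Because the parameter value is a JSON string, an object definition can be built as a native data structure and serialized before it is passed to an API operation. A minimal Python sketch; the database name and the `db_list` variable are placeholders, not part of the DTS API:

```python
import json

# Build the object definition as a plain dict, then serialize it to the
# JSON string that the object-related parameter expects.
# "dtstestdata" is a placeholder database name.
db_list = {
    "dtstestdata": {
        "name": "dtstestdata",  # name of the database in the destination instance
        "all": True,            # select the entire database
    }
}

# API operations take the value as a JSON string, not a nested structure.
db_list_param = json.dumps(db_list)
print(db_list_param)
```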
If the migration, synchronization, or subscription objects include multiple databases, refer to the following definition:
**Important** Subscription instances do not support the mapping feature. The value of the `name` parameter for a subscription instance must be the same as the name of the database or table to be subscribed to.

```
{
    "Name of database 1 to be migrated, synchronized, or subscribed to": {
        "name": "Name of database 1 in the destination instance",
        "all": true (Indicates that the entire database is migrated, synchronized, or subscribed to)
    },
    "Name of database 2 to be migrated, synchronized, or subscribed to": {
        "name": "Name of database 2 in the destination instance",
        "all": false (Indicates that the entire database is not migrated, synchronized, or subscribed to),
        "Table": {
            "Name of table A to be migrated, synchronized, or subscribed to": {
                "name": "Name of table A in the destination instance",
                "all": true (Indicates that the entire table is migrated, synchronized, or subscribed to),
                "dml_op": "DML operations to be incrementally migrated or synchronized",
                "ddl_op": "DDL operations to be incrementally migrated or synchronized"
            }
        }
    },
    "Name of database 3 to be migrated, synchronized, or subscribed to": {
        "name": "Name of database 3 in the destination instance",
        "all": true (Indicates that the entire database is migrated, synchronized, or subscribed to),
        "dml_op": "DML operations to be incrementally migrated or synchronized",
        "ddl_op": "DDL operations to be incrementally migrated or synchronized"
    }
}
```

If the migration or synchronization objects are at the column level or include filter conditions, refer to the following definition:
```
{
    "Name of database to be migrated, synchronized, or subscribed to": {
        "name": "Name of database in the destination instance",
        "all": false (Indicates that the entire database is not migrated, synchronized, or subscribed to),
        "Table": {
            "Name of table A to be migrated, synchronized, or subscribed to": {
                "name": "Name of table A in the destination instance",
                "all": false (Indicates that the entire table is not migrated, synchronized, or subscribed to),
                "filter": "id>10",
                "column": {
                    "id": {
                        "key": "PRI",
                        "name": "id",
                        "type": "int(11)",
                        "sharedKey": false,
                        "state": "checked"
                    }
                },
                "shard": 12
            }
        }
    }
}
```

If the destination database instance of the migration or synchronization objects is AnalyticDB for MySQL or AnalyticDB for PostgreSQL, refer to the following definition:
```
{
    "Name of database to be migrated or synchronized": {
        "name": "Name of database in the destination instance",
        "all": false (Fixed as false. Regardless of whether the objects are at the database or table level, if the destination instance is AnalyticDB for MySQL or AnalyticDB for PostgreSQL, this parameter is fixed as false, and you must also specify information such as the partition key of the table),
        "Table": {
            "Name of table A to be migrated or synchronized": {
                "all": true (Indicates that the entire table is migrated or synchronized),
                "name": "Name of table A in the destination instance",
                "primary_key": "id (Specifies the primary key)",
                "type": "dimension (Type of the table)"
            },
            "Name of table B to be migrated or synchronized": {
                "all": true (Indicates that the entire table is migrated or synchronized),
                "name": "Name of table B in the destination instance",
                "part_key": "id (Specifies the partition key)",
                "primary_key": "id (Specifies the primary key)",
                "type": "partition (Type of the table)",
                "tagColumnValue": "Value of the tag column"
            }
        }
    }
}
```

If you need to set an independent conflict resolution policy for synchronization objects, refer to the following definition:
**Note**
- This feature is supported only for bidirectional synchronization instances between MySQL instances or between PolarDB for MySQL clusters.
- You can set independent conflict resolution policies at the database or table level.
- For columns that are configured with an independent conflict resolution policy, the global conflict resolution policy does not take effect.
Table-level settings
```
{
    "Name of database 1 to be synchronized": {
        "name": "Name of database 1 in the destination instance",
        "all": true (Indicates that the entire database is synchronized),
        "conflict": "Task-level conflict resolution policy"
    },
    "Name of database 2 to be synchronized": {
        "name": "Name of database 2 in the destination instance",
        "all": false (Indicates that the entire database is not synchronized),
        "conflict": "overwrite",
        "Table": {
            "Name of table A to be synchronized": {
                "name": "Name of table A in the destination instance",
                "all": true (Indicates that the entire table is synchronized),
                "cdr_cmp_col": "Conflict detection column",
                "cdr_rslv_col": "Conflict resolution column",
                "resolve_method": "Table-level conflict resolution policy"
            }
        }
    }
}
```

Database-level settings
When the synchronization object is an entire database:
```
{
    "Name of database 1 to be synchronized": {
        "name": "Name of database 1 in the destination instance",
        "all": true (Indicates that the entire database is synchronized),
        "conflict": "Task-level conflict resolution policy",
        "cdr_cmp_col": "Conflict detection column",
        "cdr_rslv_col": "Conflict resolution column",
        "resolve_method": "Database-level conflict resolution policy"
    }
}
```

When the synchronization object is not an entire database:
```
{
    "Name of database 2 to be synchronized": {
        "name": "Name of database 2 in the destination instance",
        "all": false (Indicates that the entire database is not synchronized),
        "conflict": "Task-level conflict resolution policy",
        "cdr_cmp_col": "Conflict detection column",
        "cdr_rslv_col": "Conflict resolution column",
        "resolve_method": "Database-level conflict resolution policy",
        "Table": {
            "Name of table A to be synchronized": {
                "name": "Name of table A in the destination instance",
                "all": true (Indicates that the entire table is synchronized)
            }
        }
    }
}
```
If you need to configure a data integration task that writes to a data lake, refer to the following parameters and definition:
| Parameter | Description |
| --- | --- |
| write_operation | The method used to write data when a data conflict occurs. Valid values:<br>- append: retains the current data in the destination database and adds new data.<br>- overwrite: overwrites the conflicting data in the destination database.<br>- errorIfExists: the task reports an error and exits.<br>- ignore: skips the current data write operation, continues execution, and uses the conflicting data in the destination database. |
| targetType | The format of data after it is written to OSS (forced conversion). The following formats are supported: Byte, Integer, Long, Double, String, Binary, Boolean, Timestamp, and Date.<br>**Note** If this parameter is not specified, DTS automatically converts the data type from the source to a supported type. |
| etl_date | The name of the additional column (constant) to be added.<br>**Note** The values of the two etl_date parameters must be the same. |
| syntacticType | Fixed as ADD, which indicates that a column is added.<br>**Note** The value of the added column (value) can only be a constant and must be enclosed in single quotation marks (''). |
| part_key | The partition key of the destination table. This parameter has two possible values:<br>- A column to be integrated from the source.<br>- A constant column added to the destination and set as the partition key, in the format `<Key>=<Value>`. For example, dt=2025-07-07 indicates a constant column named dt with a value of 2025-07-07.<br>**Note** This parameter is required only when the destination table is a partitioned table. |
```
{
    "Name of database 1 to be integrated": {
        "all": false (Fixed as false),
        "Table": {
            "Name of table A to be integrated": {
                "all": false (Indicates that the entire table is not integrated),
                "filter": "",
                "write_operation": "Method used to write data",
                "name": "Name of table A in the destination instance",
                "column": {
                    "Name of column a to be integrated": {
                        "name": "Name of column a in the destination instance",
                        "targetType": "Type of the column in OSS"
                    },
                    ******,
                    "etl_date": {
                        "syntacticType": "ADD (Fixed as ADD)",
                        "name": "etl_date",
                        "type": "String (Fixed as String)",
                        "value": "'2025-07-08 03:30:00'"
                    }
                },
                "part_key": "dt=2025-07-07"
            }
        },
        "name": "dtstestdata (Name of database 1 in the destination instance)"
    }
}
```
| Parameter | Description |
| --- | --- |
| name | The name to which the source database, table, or column is mapped in the destination. For example, if you want to migrate a database named dtssource to a database named dtstarget, you must set the name parameter to dtstarget. |
| all | Specifies whether to select all tables or columns. Valid values:<br>- true: all tables in the database or all columns in the table are selected.<br>- false: only the specified tables or columns are selected. |
| Table | The information of the source table. |
| filter | The filter condition used to filter the data to be migrated, synchronized, or subscribed to. This parameter can only be set at the table level. For example, you can set this parameter to id>10.<br>**Note** Subscription tasks do not support setting filter conditions. |
| column | The information of the source column. |
| key | Specifies whether the column is a primary key. Valid values:<br>- PRI: the column is a primary key.<br>- An empty string: the column is not a primary key. |
| sharedKey | Specifies whether the column is a shard key. Valid values: true and false.<br>**Note** This parameter is required only when the database type of the migration or synchronization objects is Kafka. |
| type | The data type of the field. |
| state | The state of the column. If the value is checked, the column is selected for migration, synchronization, or subscription. |
| shard | The number of shards for the table to be migrated or synchronized.<br>**Note** This parameter is required only when the database type of the migration or synchronization data is Kafka. |
| dml_op | The DML operations to be incrementally migrated or synchronized.<br>**Note** To query the DML operations supported by different migration or synchronization tasks, see the configuration documents for specific tasks in Migration solutions or Synchronization solutions. |
| ddl_op | The DDL operations to be incrementally migrated or synchronized.<br>**Note** To query the DDL operations supported by different migration or synchronization tasks, see the configuration documents for specific tasks in Migration solutions or Synchronization solutions. |
| primary_key | Specifies the primary key. This parameter is available and required only when the destination instance is AnalyticDB for MySQL or AnalyticDB for PostgreSQL. |
| part_key | Specifies the partition key. This parameter is available and must be specified when the destination instance is AnalyticDB for MySQL or AnalyticDB for PostgreSQL. |
| type | The table type. When the destination instance is AnalyticDB for MySQL or AnalyticDB for PostgreSQL, you need to specify the table type of the objects to be migrated or synchronized:<br>- dimension: a dimension table.<br>- partition: a partition table. |
| tagColumnValue | The custom value of the __dts_data_source tag column. When the destination instance is AnalyticDB for MySQL, this parameter is available and must be passed in. |
| conflict | The global conflict resolution policy at the task level. This parameter must be included in each database to be synchronized, and the value must be the same. Valid values include overwrite and interrupt. |
| resolve_method | The independent conflict resolution policy at the table level (only supported for incremental synchronization). Valid values include overwrite and use_max. |
| cdr_cmp_col | The column used for conflict detection. |
| cdr_rslv_col | The column used for conflict resolution. |
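Because the task-level `conflict` value must be identical in every database entry of a synchronization object definition, a configuration can be sanity-checked before it is submitted. A minimal sketch, assuming Python; the helper name is illustrative and not part of the DTS API:

```python
import json

def check_task_conflict_policy(db_list_json: str) -> str:
    """Verify that every database entry carries the same task-level
    'conflict' value; return that value or raise ValueError on mismatch."""
    db_list = json.loads(db_list_json)
    policies = {db.get("conflict") for db in db_list.values()}
    if len(policies) != 1:
        raise ValueError(f"inconsistent task-level conflict policies: {policies}")
    return policies.pop()

# Example: both databases declare the same task-level policy.
db_list_json = json.dumps({
    "dtstestdata1": {"name": "dtstestdata1", "all": True, "conflict": "interrupt"},
    "dtstestdata2": {"name": "dtstestdata2", "all": False, "conflict": "interrupt",
                     "Table": {"customer": {"name": "customer", "all": True}}},
})
print(check_task_conflict_policy(db_list_json))  # interrupt
```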
Examples of configuring migration, synchronization, or subscription objects
Example 1: Migrate, synchronize, or subscribe to all tables in the dtstestdata database.
```
{
    "dtstestdata": {
        "name": "dtstestdata",
        "all": true
    }
}
```

Example 2: Migrate or synchronize the dtstestdata database and rename it to dtstestdata_new.
```
{
    "dtstestdata": {
        "name": "dtstestdata_new",
        "all": true
    }
}
```

Example 3: Migrate, synchronize, or subscribe to specific tables (such as customer) in the dtstestdata database.
```
{
    "dtstestdata": {
        "name": "dtstestdata",
        "all": false,
        "Table": {
            "customer": {
                "name": "customer",
                "all": true,
                "column": {
                    "id": { "key": "PRI", "name": "id", "type": "int(11)", "sharedKey": false, "state": "checked" },
                    "gmt_create": { "key": "", "name": "gmt_create", "type": "datetime", "sharedKey": false, "state": "checked" },
                    "gmt_modify": { "key": "", "name": "gmt_modify", "type": "datetime", "sharedKey": false, "state": "checked" },
                    "valid_time": { "key": "", "name": "valid_time", "type": "datetime", "sharedKey": false, "state": "checked" },
                    "creator": { "key": "", "name": "creator", "type": "varchar(200)", "sharedKey": false, "state": "checked" }
                },
                "shard": 12
            }
        }
    }
}
```

Example 4: Migrate or synchronize specific columns of tables (such as customer and order) in the dtstestdata database.
```
{
    "dtstestdata": {
        "name": "dtstestdata",
        "all": false,
        "Table": {
            "customer": {
                "name": "customer",
                "all": false,
                "column": {
                    "id": { "key": "PRI", "name": "id", "type": "int(11)", "sharedKey": false, "state": "checked" },
                    "level": { "key": "", "name": "level", "type": "varchar(5000)", "sharedKey": false, "state": "checked" },
                    "name": { "key": "", "name": "name", "type": "varchar(500)", "sharedKey": false, "state": "checked" }
                },
                "shard": 12
            },
            "order": {
                "name": "order",
                "all": false,
                "column": {
                    "id": { "key": "PRI", "name": "id", "type": "int(11)", "sharedKey": false, "state": "checked" }
                },
                "shard": 12
            }
        }
    }
}
```

Example 5: Migrate or synchronize tables (such as customer, order, and commodity) from the dtstestdata database to a destination AnalyticDB for MySQL or AnalyticDB for PostgreSQL instance.
```
{
    "dtstestdata": {
        "name": "dtstestdatanew",
        "all": false,
        "Table": {
            "order": {
                "name": "ordernew",
                "all": true,
                "part_key": "id",
                "primary_key": "id",
                "type": "partition"
            },
            "customer": {
                "name": "customernew",
                "all": true,
                "primary_key": "id",
                "type": "dimension"
            },
            "commodity": {
                "name": "commoditynew",
                "all": false,
                "filter": "id>10",
                "column": {
                    "id": { "key": "PRI", "name": "id", "type": "int(11)" }
                },
                "part_key": "id",
                "primary_key": "id",
                "type": "partition"
            }
        }
    }
}
```

Example 6: Set independent conflict resolution policies for synchronization objects.
Table-level settings
Set the global conflict resolution policy for the synchronization task objects to interrupt. Set the independent conflict resolution policy for the primary key columns, unique key columns, and name column of the customer table in the dtstestdata2 database to overwrite.

```
{
    "dtstestdata1": {
        "name": "dtstestdata1",
        "all": true,
        "conflict": "interrupt"
    },
    "dtstestdata2": {
        "name": "dtstestdata2",
        "all": false,
        "conflict": "interrupt",
        "Table": {
            "customer": {
                "name": "customer",
                "all": true,
                "cdr_cmp_col": "name",
                "cdr_rslv_col": "name",
                "resolve_method": "overwrite"
            }
        }
    }
}
```

Database-level settings
When the synchronization object is an entire database: Set the independent conflict resolution policy for the name and addr columns of all tables to be synchronized in the dtstestdata1 database to use_max.

```
{
    "dtstestdata1": {
        "name": "dtstestdata1",
        "all": true,
        "conflict": "overwrite",
        "cdr_cmp_col": "name,addr",
        "cdr_rslv_col": "name,addr",
        "resolve_method": "use_max"
    }
}
```

When the synchronization object is not an entire database: Set the independent conflict resolution policy for the name and addr columns of all tables to be synchronized in the dtstestdata2 database to use_max.

```
{
    "dtstestdata2": {
        "name": "dtstestdata2",
        "all": false,
        "conflict": "overwrite",
        "cdr_cmp_col": "name,addr",
        "cdr_rslv_col": "name,addr",
        "resolve_method": "use_max",
        "Table": {
            "person": {
                "name": "person",
                "all": true
            },
            "class": {
                "name": "class",
                "all": true
            }
        }
    }
}
```
Example 7: For a synchronization instance where the source database type is Tair/Redis, synchronize only data with the key prefix HProp from DBs named 0 and 1 (that is, the Prefixes of Keys to Be Synchronized is HProp). For the DB named 2, synchronize only data with the key prefix dts but not including dtstest (that is, the Prefixes of Keys to Be Synchronized is dts, and the Prefixes of Keys to Be Filtered Out is dtstest).

```
{
    "0": {
        "name": "0",
        "all": true,
        "filter": "[{\"condition\":\"HProp\",\"filterType\":\"white\",\"filterPattern\":\"prefix\"}]"
    },
    "1": {
        "name": "1",
        "all": true,
        "filter": "[{\"condition\":\"HProp\",\"filterType\":\"white\",\"filterPattern\":\"prefix\"}]"
    },
    "2": {
        "name": "2",
        "all": true,
        "filter": "[{\"condition\":\"dts\",\"filterType\":\"white\",\"filterPattern\":\"prefix\"},{\"condition\":\"dtstest\",\"filterType\":\"black\",\"filterPattern\":\"prefix\"}]"
    }
}
```

Example 8: Integrate the commodity table in the dtstestdata database into the destination OSS in Delta format.

```
{
    "dtstestdata": {
        "all": false,
        "Table": {
            "commodity": {
                "all": false,
                "filter": "",
                "write_operation": "overwrite",
                "name": "commodity",
                "column": {
                    "IS_VALID": { "name": "is_valid", "targetType": "String" },
                    "BuiltinArchiveDate": { "name": "builtinarchivedate", "targetType": "String" },
                    "PRODUCT_NAME": { "name": "product_name", "targetType": "String" },
                    "PRODUCT_CODE": { "name": "product_code", "targetType": "String" },
                    "etl_date": { "syntacticType": "ADD", "name": "etl_date", "type": "String", "value": "'2025-07-08 03:30:00'" }
                },
                "part_key": "dt=2025-07-07"
            }
        },
        "name": "dtstestdata"
    }
}
```
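In Example 7, the value of `filter` is itself a serialized JSON array embedded in the outer JSON string, so its inner quotation marks must be escaped. When the definition is built programmatically, encoding the rules twice produces the escaping automatically. A sketch, assuming Python:

```python
import json

# Inner filter: a JSON array of key-prefix rules, serialized to a string first.
filter_rules = [
    {"condition": "dts", "filterType": "white", "filterPattern": "prefix"},
    {"condition": "dtstest", "filterType": "black", "filterPattern": "prefix"},
]

db_list = {
    "2": {
        "name": "2",
        "all": True,
        # The rules are embedded as a string, so the outer json.dumps
        # escapes the inner quotation marks automatically.
        "filter": json.dumps(filter_rules),
    }
}

db_list_param = json.dumps(db_list)
print(db_list_param)
```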