When you call an API operation to configure a Data Transmission Service (DTS) task or query the information about a DTS task, you can specify or query the Reserve parameter. The value of the Reserve parameter is a JSON string. The Reserve parameter allows you to supplement or view the configurations of the source or destination database instance. For example, you can specify the data storage format of the destination Kafka cluster or the ID of the Cloud Enterprise Network (CEN) instance that is used to access a database instance in the Reserve parameter. This topic describes the scenarios and settings of the Reserve parameter.

Usage notes

  • You must specify the common parameters based on the type of the DTS instance and the access methods of the source and destination databases, and then specify additional parameters based on your scenario, such as the engine types of the source and destination databases.
  • If the source and destination databases of the DTS instance that you want to configure contain the same parameter settings, you need to specify these parameters in the Reserve parameter only once.
  • If you want to specify a numeric value, you must enclose the numeric value in double quotation marks (") to convert it into a string.
  • To view the parameters that were specified when an API operation was called to configure a DTS instance, perform the following operations: Log on to the DTS console and configure the DTS instance. In the Advanced Settings step, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
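The notes above can be illustrated with a minimal, hypothetical Reserve value in which the numeric value of maxRetryTime is enclosed in double quotation marks to pass it as a string:

```json
{
  "targetTableMode": "2",
  "maxRetryTime": "720"
}
```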

Related API operations

Common parameters

Specify the following parameters in the Reserve parameter based on the type of the DTS instance and the access methods of the source and destination databases.

Table 1. Data migration or synchronization instance
Parameter | Required | Description
targetTableMode | Yes | The method used to process conflicting tables. Valid values:
  • 0: performs a precheck and reports errors.
  • 2: ignores errors and proceeds.
dts.datamove.source.bps.max | No | The maximum amount of data that is synchronized or migrated per second during full data synchronization or migration. Unit: MB. Valid values: integers from 0 to 9007199254740991.
conflict | No | The conflict processing policy of the two-way data synchronization task. Valid values:
  • overwrite: If a data conflict occurs during data synchronization, the conflicting data in the destination database is overwritten.
  • interrupt: If a data conflict occurs during data synchronization, an error is reported and the data synchronization task fails. You must manually fix the conflict and resume the task.
  • ignore: If a data conflict occurs during data synchronization, the conflicting data in the destination database is retained and the data synchronization task continues.
filterDDL | No | Specifies whether to ignore DDL operations in the forward task of the two-way data synchronization task. Valid values:
  • true: does not synchronize DDL operations.
  • false: synchronizes DDL operations.
    Important: By default, the reverse task ignores DDL operations.
autoStartModulesAfterConfig | No | Specifies whether to automatically start a precheck after the DTS task is configured. Valid values:
  • none (default): does not start a precheck or subsequent operations after the DTS task is configured. In this case, you must manually start the DTS task.
  • auto: automatically starts a precheck and all subsequent operations after the DTS task is configured.
etlOperatorCtl | No | Specifies whether to configure the extract, transform, and load (ETL) feature. Valid values: Y and N.
etlOperatorSetting | No | The ETL statements. For more information, see DSL syntax.
etlOperatorColumnReference | No | The ETL operator that is dedicated to T+1 business.
configKeyMap | No | The configuration information of the ETL operator.
syncArchitecture | No | The synchronization topology. Valid values:
  • oneway: one-way synchronization.
  • bidirectional: two-way synchronization.
dataCheckConfigure | No | The data verification settings. For more information, see DataCheckConfigure parameter description.
dbListCaseChangeMode | No | The capitalization of object names in the destination database. Valid values:
  • default: uses the default capitalization policy of DTS.
  • source: uses the capitalization policy of the source database.
  • dest_upper: uses uppercase letters.
  • dest_lower: uses lowercase letters.
maxRetryTime | No | The retry time range for a failed connection to the source or destination database. Unit: minutes. Valid values: integers from 10 to 1440. Default value: 720. We recommend that you specify a value greater than 30.
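As an illustrative sketch only (the values are assumptions, not recommendations), a Reserve value for a one-way data synchronization instance might combine several of the preceding common parameters as follows:

```json
{
  "targetTableMode": "0",
  "dts.datamove.source.bps.max": "100",
  "autoStartModulesAfterConfig": "auto",
  "syncArchitecture": "oneway",
  "maxRetryTime": "720"
}
```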
Table 2. Change tracking instance
Parameter | Required | Description
vpcId | Yes | The ID of the virtual private cloud (VPC) in which the change tracking instance is deployed.
vswitchId | Yes | The ID of the vSwitch in the specified VPC.
startTime | No | The beginning of the time range to track data changes. Specify a UNIX timestamp: the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970.
endTime | No | The end of the time range to track data changes. Specify a UNIX timestamp: the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970.
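For example, a hypothetical Reserve value for a change tracking instance might specify the VPC, the vSwitch, and a start time (a UNIX timestamp in seconds, quoted as a string). The resource IDs below are placeholders:

```json
{
  "vpcId": "vpc-bp1opxu1zkhn00gzv****",
  "vswitchId": "vsw-bp10df3mxae6lpmku****",
  "startTime": "1620962769"
}
```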
Table 3. Database instance that is accessed by using a CEN instance
Parameter | Required | Description
srcInstanceId | No | The ID of the CEN instance that is used to access the source database instance. You must specify this parameter if the source database instance is accessed by using a CEN instance. Example:
{
  "srcInstanceId": "cen-9kqshqum*******"
}
destInstanceId | No | The ID of the CEN instance that is used to access the destination database instance. You must specify this parameter if the destination database instance is accessed by using a CEN instance. Example:
{
  "destInstanceId": "cen-9kqshqum*******"
}

Source database parameters

Specify the following parameters in the Reserve parameter based on the type of the source database.

Table 4. ApsaraDB RDS for MySQL instance and self-managed MySQL database
Parameter | Configuration condition | Description
privilegeMigration | The source and destination databases are ApsaraDB RDS for MySQL instances. | Specifies whether to migrate accounts. Valid values:
  • true
  • false (default)
privilegeDbList | The source and destination databases are ApsaraDB RDS for MySQL instances. | The accounts to be migrated.
definer | The source and destination databases are ApsaraDB RDS for MySQL instances. | Specifies whether to retain the original definers of database objects. Valid values: true and false.
amp.increment.generator.logmnr.mysql.heartbeat.mode | The source database is a self-managed MySQL database. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values:
  • none: does not write SQL operations on heartbeat tables to the source database.
  • N: writes SQL operations on heartbeat tables to the source database.
whitelist.dms.online.ddl.enable, sqlparser.dms.original.ddl, whitelist.ghost.online.ddl.enable, sqlparser.ghost.original.ddl, online.ddl.shadow.table.rule, and online.ddl.trash.table.rule | The DTS task is a data migration or synchronization task. The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, or an AnalyticDB for PostgreSQL instance. | Specifies whether to copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database. These six parameters must be used together.
  • Copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database:
    {
         "whitelist.dms.online.ddl.enable": "true",
         "sqlparser.dms.original.ddl": "false",
         "whitelist.ghost.online.ddl.enable": "true",
         "sqlparser.ghost.original.ddl": "false"
    }
  • Do not copy the temporary tables, but synchronize only the original DDL operations that are performed by using Data Management (DMS) in the source database:
    {
         "whitelist.dms.online.ddl.enable": "false",
         "sqlparser.dms.original.ddl": "true",
         "whitelist.ghost.online.ddl.enable": "false",
         "sqlparser.ghost.original.ddl": "false"
    }
  • Do not copy the temporary tables, but synchronize only the original DDL operations that are performed by using gh-ost in the source database:
    {
         "whitelist.dms.online.ddl.enable": "false",
         "sqlparser.dms.original.ddl": "false",
         "whitelist.ghost.online.ddl.enable": "false",
         "sqlparser.ghost.original.ddl": "true",
         "online.ddl.shadow.table.rule": "^_(.+)_(?:gho|new)$",
         "online.ddl.trash.table.rule": "^_(.+)_(?:ghc|del|old)$"
    }
    Note: You can specify the default or custom regular expressions for online.ddl.shadow.table.rule and online.ddl.trash.table.rule to filter out the shadow tables of the gh-ost tool and other tables that are not required.
isAnalyzer | The source and destination databases are ApsaraDB RDS for MySQL instances or self-managed MySQL databases. | Specifies whether to enable the migration assessment feature, which checks whether the schemas in the source and destination databases meet the migration requirements. Valid values: true and false.
srcSSL | The source database is an Alibaba Cloud database instance or a self-managed database hosted on an Elastic Compute Service (ECS) instance. | Specifies whether to encrypt the connection to the source database. Valid values:
  • 0: does not encrypt the connection.
  • 1: encrypts the connection by using SSL.
Table 5. PolarDB for MySQL cluster
Parameter | Configuration condition | Description
amp.increment.generator.logmnr.mysql.heartbeat.mode | This parameter is required in all scenarios. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values:
  • none: does not write SQL operations on heartbeat tables to the source database.
  • N: writes SQL operations on heartbeat tables to the source database.
whitelist.dms.online.ddl.enable, sqlparser.dms.original.ddl, whitelist.ghost.online.ddl.enable, sqlparser.ghost.original.ddl, online.ddl.shadow.table.rule, and online.ddl.trash.table.rule | The DTS task is a data migration or synchronization task. The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, or an AnalyticDB for PostgreSQL instance. | Specifies whether to copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database. These six parameters must be used together and take the same values as described in Table 4.
Table 6. ApsaraDB RDS for MariaDB instance
Parameter | Configuration condition | Description
srcSSL | The source database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance. | Specifies whether to encrypt the connection to the source database. Valid values:
  • 0: does not encrypt the connection.
  • 1: encrypts the connection by using SSL.
Table 7. Oracle database
Parameter | Configuration condition | Description
isTargetDbCaseSensitive | The destination database is an AnalyticDB for PostgreSQL instance. | Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false.
isNeedAddRowId | The destination database is an AnalyticDB for PostgreSQL instance, and the objects to be synchronized or migrated include tables without primary keys. | Specifies whether to set the primary keys and distribution keys of all tables that have no primary keys to the row ID. Valid values: true and false.
srcOracleType | This parameter is required in all scenarios. | The architecture type of the Oracle database. Valid values:
  • sid: non-Real Application Cluster (RAC) architecture.
  • serviceName: RAC or pluggable database (PDB) architecture.
source.column.encoding | The actual encoding format of the source data needs to be specified. | The actual encoding format. Valid values:
  • default (default)
  • GB 2312
  • GBK
  • GB 18030
  • UTF-8
  • UTF-16
  • UTF-32
Table 8. ApsaraDB RDS for SQL Server instance and self-managed SQL Server database
Parameter | Configuration condition | Description
isTargetDbCaseSensitive | The destination database is an AnalyticDB for PostgreSQL instance. | Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false.
source.extractor.type | The destination database is not a DataHub project, and an incremental migration or synchronization task needs to be configured. | The mode in which incremental data is migrated or synchronized from the SQL Server database. Valid values:
  • cdc: parses the logs of the source database for non-heap tables, and performs change data capture (CDC)-based incremental data synchronization or migration for heap tables.
  • log: parses the logs of the source database.
src.sqlserver.schema.mapper.mode | The destination database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, a PolarDB for MySQL cluster, or an AnalyticDB for MySQL cluster. | The schema mapping mode between the source and destination databases. Valid values:
  • schema.table: uses Source schema name.Source table name as the name of the destination table.
  • without.schema: uses the source table name as the name of the destination table.
    Warning: If multiple schemas in the source database contain tables that have the same name, data inconsistency may occur or the DTS task may fail.
Table 9. ApsaraDB for Redis instance, Tair instance, and self-managed Redis database
Parameter | Configuration condition | Description
srcKvStoreMode | The access method of the source database is not Alibaba Cloud Instance. | The deployment mode of the source self-managed Redis database. Valid values:
  • single: standalone deployment.
  • cluster: cluster deployment.
any.sink.redis.expire.extension.seconds | This parameter is required in all scenarios. | The extended time period for which keys migrated from the source database to the destination database remain valid. Unit: seconds. If the following commands are used, we recommend that you set this parameter to a value greater than 600 to ensure data consistency:
  • EXPIRE key seconds
  • PEXPIRE key milliseconds
  • EXPIREAT key timestamp
  • PEXPIREAT key timestampMs
any.source.redis.use.slave.node | The value of srcKvStoreMode is set to cluster. | Specifies whether to pull data from master or replica nodes if the source self-managed Redis database is deployed in a cluster. Valid values:
  • true: pulls data from replica nodes.
  • false (default): pulls data from master nodes.
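For example, a hypothetical Reserve fragment for a source self-managed Redis database deployed in cluster mode might look like this (the extension period of 900 seconds is an illustrative assumption):

```json
{
  "srcKvStoreMode": "cluster",
  "any.source.redis.use.slave.node": "true",
  "any.sink.redis.expire.extension.seconds": "900"
}
```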
Table 10. ApsaraDB for MongoDB instance and self-managed MongoDB database
Parameter | Configuration condition | Description
srcEngineArchType | This parameter is required in all scenarios. | The architecture type of the source MongoDB database. Valid values:
  • 0: standalone architecture.
  • 1: replica set architecture.
  • 2: sharded cluster architecture.
sourceShardEndpointUsername | The value of srcEngineArchType is set to 2. | The account used to log on to a shard of the source MongoDB database.
sourceShardEndpointPassword | The value of srcEngineArchType is set to 2. | The password used to log on to a shard of the source MongoDB database.
Table 11. PolarDB-X 2.0 instance
Parameter | Configuration condition | Description
amp.increment.generator.logmnr.mysql.heartbeat.mode | This parameter is required in all scenarios. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values:
  • none: does not write SQL operations on heartbeat tables to the source database.
  • N: writes SQL operations on heartbeat tables to the source database.
Table 12. PolarDB for PostgreSQL (Compatible with Oracle) cluster
Parameter | Configuration condition | Description
srcHostPortCtl | The source database is accessed by using a public IP address. | Specifies whether to select multiple data sources for the PolarDB for PostgreSQL (Compatible with Oracle) cluster. Valid values:
  • single
  • multiple
srcHostPorts | The value of srcHostPortCtl is set to multiple. | The IP addresses and port numbers of the nodes in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster, in the IP address:Port number format. Separate multiple values with commas (,).
Table 13. TiDB database
Parameter | Configuration condition | Description
amp.increment.generator.logmnr.mysql.heartbeat.mode | This parameter is required in all scenarios. | Specifies whether to delete SQL operations on heartbeat tables of the forward and reverse tasks. Valid values:
  • none: does not write SQL operations on heartbeat tables to the source database.
  • N: writes SQL operations on heartbeat tables to the source database.
isIncMigration | This parameter is required in all scenarios. | Specifies whether to migrate incremental data. Valid values: yes and no.
    Important: You can select only yes for data synchronization tasks.
srcKafka | The value of isIncMigration is set to yes. | The information about the downstream Kafka cluster of the TiDB database.
taskType | The value of isIncMigration is set to yes. | The type of the Kafka cluster. Specify this parameter based on the deployment location of the Kafka cluster. Valid values:
  • EXPRESS: a cluster connected over Express Connect, VPN Gateway, or Smart Access Gateway.
  • ECS: a self-managed cluster hosted on an ECS instance.
bisId | The value of isIncMigration is set to yes. | The ID of the ECS instance if taskType is set to ECS, or the ID of the VPC that is connected to the source database if taskType is set to EXPRESS.
port | The value of isIncMigration is set to yes. | The service port number of the Kafka cluster.
user | The value of isIncMigration is set to yes. | The account of the Kafka cluster. If authentication is disabled for the Kafka cluster, you do not need to specify this parameter.
passwd | The value of isIncMigration is set to yes. | The password of the account. If authentication is disabled for the Kafka cluster, you do not need to specify this parameter.
version | The value of isIncMigration is set to yes. | The version of the Kafka cluster.
ssl | The value of isIncMigration is set to yes. | Specifies whether to encrypt the connection to the Kafka cluster. Valid values:
  • 0: does not encrypt the connection.
  • 3: encrypts the connection by using the SCRAM-SHA-256 algorithm.
topic | The value of isIncMigration is set to yes. | The topic of the objects to be migrated or synchronized.
host | The value of taskType is set to EXPRESS. | The IP address of the Kafka cluster.
vpcId | The value of taskType is set to ECS. | The ID of the VPC in which the ECS instance resides.
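As an illustrative sketch, a Reserve fragment for a TiDB source with incremental migration enabled might nest the Kafka settings under srcKafka. The nesting and the placeholder IDs are assumptions for illustration; you can verify the exact structure by previewing the OpenAPI parameters in the DTS console as described in the usage notes:

```json
{
  "isIncMigration": "yes",
  "srcKafka": {
    "taskType": "ECS",
    "bisId": "i-bp11483kr6v1y3g4****",
    "vpcId": "vpc-bp1opxu1zkhn00gzv****",
    "port": "9092",
    "version": "1.0",
    "ssl": "0",
    "topic": "tidb-cdc-topic"
  }
}
```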

Destination database parameters

Specify the following parameters in the Reserve parameter based on the type of the destination database.

Table 14. ApsaraDB RDS for MySQL instance and self-managed MySQL database
Parameter | Configuration condition | Description
privilegeMigration | The source and destination databases are ApsaraDB RDS for MySQL instances. | Specifies whether to migrate accounts. For more information, see ApsaraDB RDS for MySQL instance and self-managed MySQL database in Source database parameters.
privilegeDbList | The source and destination databases are ApsaraDB RDS for MySQL instances. | The accounts to be migrated.
definer | The source and destination databases are ApsaraDB RDS for MySQL instances. | Specifies whether to retain the original definers of database objects.
whitelist.dms.online.ddl.enable, sqlparser.dms.original.ddl, whitelist.ghost.online.ddl.enable, sqlparser.ghost.original.ddl, online.ddl.shadow.table.rule, and online.ddl.trash.table.rule | The DTS task is a data migration or synchronization task. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. | Specifies whether to copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database. These six parameters must be used together. For more information, see Source database parameters.
isAnalyzer | The source and destination databases are ApsaraDB RDS for MySQL instances or self-managed MySQL databases. | Specifies whether to enable the migration assessment feature, which checks whether the schemas in the source and destination databases meet the migration requirements. Valid values: true and false.
triggerMode | This parameter is required in all scenarios. | The method used to migrate triggers from the source database. Valid values:
  • manual
  • auto
destSSL | The destination database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance. | Specifies whether to encrypt the connection to the destination database. Valid values:
  • 0: does not encrypt the connection.
  • 1: encrypts the connection by using SSL.
src.sqlserver.schema.mapper.mode | The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database. | The schema mapping mode between the source and destination databases. For more information, see ApsaraDB RDS for SQL Server instance and self-managed SQL Server database.
Table 15. PolarDB for MySQL cluster
Parameter | Configuration condition | Description
whitelist.dms.online.ddl.enable, sqlparser.dms.original.ddl, whitelist.ghost.online.ddl.enable, sqlparser.ghost.original.ddl, online.ddl.shadow.table.rule, and online.ddl.trash.table.rule | The DTS task is a data migration or synchronization task. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. | Specifies whether to copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database. These six parameters must be used together. For more information, see Source database parameters.
anySinkTableEngineType | This parameter is required in all scenarios. | The storage engine type of the PolarDB for MySQL cluster. Valid values:
  • innodb: the default storage engine.
  • xengine: the database storage engine for online transaction processing (OLTP).
triggerMode | This parameter is required in all scenarios. | The method used to migrate triggers from the source database. Valid values:
  • manual
  • auto
src.sqlserver.schema.mapper.mode | The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database. | The schema mapping mode between the source and destination databases. For more information, see ApsaraDB RDS for SQL Server instance and self-managed SQL Server database.
Table 16. AnalyticDB for MySQL cluster
Parameter | Configuration condition | Description
whitelist.dms.online.ddl.enable, sqlparser.dms.original.ddl, whitelist.ghost.online.ddl.enable, sqlparser.ghost.original.ddl, online.ddl.shadow.table.rule, and online.ddl.trash.table.rule | The DTS task is a data migration or synchronization task. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. | Specifies whether to copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database. These six parameters must be used together. For more information, see Source database parameters.
triggerMode | This parameter is required in all scenarios. | The method used to migrate triggers from the source database. Valid values:
  • manual
  • auto
src.sqlserver.schema.mapper.mode | The source database is an ApsaraDB RDS for SQL Server instance or a self-managed SQL Server database. | The schema mapping mode between the source and destination databases. For more information, see ApsaraDB RDS for SQL Server instance and self-managed SQL Server database.
traceDatasource | This parameter is required in all scenarios. | Specifies whether to enable the multi-table merging feature. Valid values: true and false.
tagColumnValue | You need to specify whether to customize the tag column. | Specifies whether to customize the value of the __dts_data_source tag column. Valid values:
  • tagColumnValue: customizes the tag column.
    Important: If you set this parameter to tagColumnValue, you must specify the value of the custom tag column in the DbList parameter. For more information, see Objects of DTS tasks.
  • notTagColumnValue: does not customize the tag column.
    Important: The tag column can be customized only for DTS instances that are configured after purchase.
adsSqlType | You need to select the SQL operations to be incrementally synchronized or migrated at the instance level. | The SQL operations that you want to incrementally synchronize or migrate at the instance level. Separate multiple SQL operations with commas (,). Valid values:
  • insert
  • update
  • delete
  • alterTable
  • truncateTable
  • createTable
  • dropTable
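For example, a hypothetical Reserve fragment for an AnalyticDB for MySQL destination that enables multi-table merging without customizing the tag column, and that limits incremental synchronization to DML operations, might be:

```json
{
  "traceDatasource": "true",
  "tagColumnValue": "notTagColumnValue",
  "adsSqlType": "insert,update,delete"
}
```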
Table 17. AnalyticDB for PostgreSQL instance
Parameter | Configuration condition | Description
whitelist.dms.online.ddl.enable, sqlparser.dms.original.ddl, whitelist.ghost.online.ddl.enable, sqlparser.ghost.original.ddl, online.ddl.shadow.table.rule, and online.ddl.trash.table.rule | The DTS task is a data migration or synchronization task. The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, or a PolarDB for MySQL cluster. | Specifies whether to copy the temporary tables that are generated by online DDL operations performed on source tables to the destination database. These six parameters must be used together. For more information, see Source database parameters.
isTargetDbCaseSensitive | The source database is an ApsaraDB RDS for MySQL instance, a self-managed MySQL database, an Oracle database, an ApsaraDB RDS for SQL Server instance, or a self-managed SQL Server database. | Specifies whether to enclose the names of destination objects in double quotation marks ("). Valid values: true and false.
syncOperation | You need to select the SQL operations to be incrementally synchronized or migrated at the instance level. | The SQL operations that you want to incrementally synchronize or migrate at the instance level. Separate multiple SQL operations with commas (,). Valid values:
  • insert
  • update
  • delete
  • alterTable
  • truncateTable
  • createTable
  • dropTable
  • createDB
  • dropDB
Table 18. ApsaraDB RDS for MariaDB instance
Parameter | Configuration condition | Description
triggerMode | This parameter is required in all scenarios. | The method used to migrate triggers from the source database. Valid values:
  • manual
  • auto
destSSL | The destination database is an Alibaba Cloud database instance or a self-managed database hosted on an ECS instance. | Specifies whether to encrypt the connection to the destination database. Valid values:
  • 0: does not encrypt the connection.
  • 1: encrypts the connection by using SSL.
Table 19. ApsaraDB for MongoDB instance and self-managed MongoDB database
Parameter | Configuration condition | Description
destEngineArchType | This parameter is required in all scenarios. | The architecture type of the destination MongoDB database. Valid values:
  • 0: standalone architecture.
  • 1: replica set architecture.
  • 2: sharded cluster architecture.
destinationShardEndpointUserName | The value of destEngineArchType is set to 2. | The account used to log on to a shard of the destination MongoDB database.
destinationShardEndpointPassword | The value of destEngineArchType is set to 2. | The password used to log on to a shard of the destination MongoDB database.
Table 20. ApsaraDB for Redis instance, Tair instance, and self-managed Redis database
Parameter | Configuration condition | Description
destKvStoreMode | The access method of the destination database is not Alibaba Cloud Instance. | The deployment mode of the destination self-managed Redis database. Valid values:
  • single: standalone deployment.
  • cluster: cluster deployment.
any.sink.redis.expire.extension.seconds | This parameter is required in all scenarios. | The extended time period for which keys migrated from the source database to the destination database remain valid. Unit: seconds. If the following commands are used, we recommend that you set this parameter to a value greater than 600 to ensure data consistency:
  • EXPIRE key seconds
  • PEXPIRE key milliseconds
  • EXPIREAT key timestamp
  • PEXPIREAT key timestampMs
Table 21. PolarDB for PostgreSQL (Compatible with Oracle) cluster
Parameter | Configuration condition | Description
destHostPortCtl | The destination database is accessed by using a public IP address. | Specifies whether to select multiple data sources for the PolarDB for PostgreSQL (Compatible with Oracle) cluster. Valid values:
  • single
  • multiple
destHostPorts | The value of destHostPortCtl is set to multiple. | The IP addresses and port numbers of the nodes in the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster, in the IP address:Port number format. Separate multiple values with commas (,).
Table 22. Oracle database
Parameter | Configuration condition | Description
destOracleType | This parameter is required in all scenarios. | The architecture type of the Oracle database. Valid values:
  • sid: non-RAC architecture.
  • serviceName: RAC or PDB architecture.
Table 23. DataHub project
Parameter | Configuration condition | Description
isUseNewAttachedColumn | This parameter is required in all scenarios. | The naming rules for additional columns. Valid values:
  • true: uses the new naming rules.
  • false: uses the original naming rules.
Table 24. MaxCompute project
Parameter | Configuration condition | Description
isUseNewAttachedColumn | This parameter is required in all scenarios. | The naming rules for additional columns. Valid values:
  • true: uses the new naming rules.
  • false: uses the original naming rules.
partition | This parameter is required in all scenarios. | The name of the partition of incremental data tables.
  • Valid values if isUseNewAttachedColumn is set to true:
    • modifytime_year
    • modifytime_month
    • modifytime_day
    • modifytime_hour
    • modifytime_minute
  • Valid values if isUseNewAttachedColumn is set to false:
    • new_dts_sync_modifytime_year
    • new_dts_sync_modifytime_month
    • new_dts_sync_modifytime_day
    • new_dts_sync_modifytime_hour
    • new_dts_sync_modifytime_minute
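For example, a hypothetical Reserve fragment for a MaxCompute destination that uses the new naming rules might look like the following (the assumption here is that multiple partition levels are separated with commas):

```json
{
  "isUseNewAttachedColumn": "true",
  "partition": "modifytime_year,modifytime_month,modifytime_day"
}
```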
Table 25. Elasticsearch cluster
Parameter | Configuration condition | Description
indexMapping | This parameter is required in all scenarios. | The naming rule for the index to be created in the destination Elasticsearch cluster. Valid values:
  • tb: The index name is the same as the table name.
  • db_tb: The index name consists of the database name, an underscore (_), and the table name, in that order.
Table 26. Kafka cluster
Parameter | Configuration condition | Description
destTopic | This parameter is required in all scenarios. | The topic of the migrated or synchronized objects in the destination Kafka cluster.
destVersion | This parameter is required in all scenarios. | The version of the destination Kafka cluster. Valid values: 0.9, 0.10, and 1.0.
    Note: If the version of the Kafka cluster is 1.0 or later, set this parameter to 1.0.
destSSL | This parameter is required in all scenarios. | Specifies whether to encrypt the connection to the destination Kafka cluster. Valid values:
  • 0: does not encrypt the connection.
  • 3: encrypts the connection by using the SCRAM-SHA-256 algorithm.
sink.kafka.ddl.topic | You need to specify a topic that stores the DDL information. | The topic that stores the DDL information. If you do not specify this parameter, the DDL information is stored in the topic that is specified by destTopic.
kafkaRecordFormat | This parameter is required in all scenarios. | The storage format in which data is shipped to the destination Kafka cluster. Valid values:
  • canal_json: DTS uses Canal to parse the incremental logs of the source database and transfers the incremental data to the destination Kafka cluster in the Canal JSON format.
  • dts_avro: DTS transfers data in the Avro format, a data serialization format into which data structures or objects can be converted to facilitate storage and transmission.
  • shareplex_json: DTS uses the data replication software SharePlex to read data from the source database and writes the data to the destination Kafka cluster in the SharePlex JSON format.
  • debezium: DTS uses Debezium, a change data capture tool, to stream data updates from the source database to the destination Kafka cluster in real time.
    Note: For more information, see Data formats of a Kafka cluster.
destKafkaPartitionKey | This parameter is required in all scenarios. | The policy that is used to synchronize data to Kafka partitions. Valid values:
  • none: DTS synchronizes all data and DDL statements to Partition 0 of the destination topic.
  • database_table: DTS uses the database and table names as the partition key to calculate the hash value. Then, DTS synchronizes the data and DDL statements of each table to the corresponding partition of the destination topic.
  • columns: DTS uses a table column as the partition key to calculate the hash value. By default, the primary key is used as the partition key. If a table does not have a primary key, the unique key is used. DTS synchronizes each row to the corresponding partition of the destination topic. You can specify one or more columns as partition keys to calculate the hash value.
    Note: For more information about synchronization policies, see Specify the policy for synchronizing data to Kafka partitions.
destSchemaRegistry | This parameter is required in all scenarios. | Specifies whether to use Kafka Schema Registry. Valid values: yes and no.
destKafkaSchemaRegistryUrl | The value of destSchemaRegistry is set to yes. | The URL or IP address of your Avro schema that is registered with Kafka Schema Registry.
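To tie the preceding parameters together, a hypothetical Reserve fragment for a destination Kafka cluster might look like this (the topic name is a placeholder):

```json
{
  "destTopic": "dts-demo-topic",
  "destVersion": "1.0",
  "destSSL": "0",
  "kafkaRecordFormat": "canal_json",
  "destKafkaPartitionKey": "database_table",
  "destSchemaRegistry": "no"
}
```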