When you call an API operation to configure or query a Data Transmission Service (DTS) task, you must specify or query the Reserve parameter. The Reserve parameter allows you to view the configurations of the source or destination instance or add more configurations to the DTS task. For example, you can specify the data storage format of the destination Kafka instance and the ID of the Cloud Enterprise Network (CEN) instance. This topic describes the use scenarios and sub-parameters of the Reserve parameter.

Related API operations

Description

The value of the Reserve parameter is a JSON string. You can specify the Reserve parameter in the following scenarios.

Note If you specify a numeric value, you must enclose the value in double quotation marks ("").
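For example, a Reserve value that carries the numeric value 0 encloses it in double quotation marks (the destSSL parameter is taken from the Kafka scenario below):
{
    "destSSL": "0"
}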
  • You can use the following Reserve parameters to set Processing Mode of Conflicting Tables and to adjust the rate at which full data is written for a data migration or synchronization task.
    targetTableMode (required): The processing mode of conflicting tables. Valid values:
      • 0: performs a precheck and reports errors.
      • 2: ignores errors and proceeds.
    dts.datamove.source.bps.max (optional): The amount of data that is synchronized or migrated per second during full data synchronization or migration. The value must be an integer from 0 to 9007199254740991. Unit: MB.
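    Example (the values shown are for illustration only):
    {
        "targetTableMode": "2",
        "dts.datamove.source.bps.max": "10"
    }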
  • If the database type of the destination instance is Kafka, you can use the Reserve parameter to configure the information about the Kafka instance and the storage format in which data is shipped to the Kafka instance.
    destTopic (required): The topic to which the objects migrated or synchronized to the destination Kafka instance belong.
    destVersion (required): The engine version of the destination Kafka instance. Valid values: 1.0, 0.9, and 0.10.
      Note If the engine version of the destination Kafka instance is 1.0 or later, you must set the value to 1.0.
    destSSL (required): Specifies whether to encrypt the connection to the destination Kafka instance. Valid values:
      • 0: does not encrypt the connection.
      • 3: encrypts the connection by using the SCRAM-SHA-256 algorithm.
    dest.kafka.schema.registry.url (optional): If you use Kafka Schema Registry, you must enter the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas.
    sink.kafka.ddl.topic (optional): The topic that is used to store the DDL information. If you do not specify this parameter, the DDL information is stored in the topic that is specified by the destTopic parameter.
    kafkaRecordFormat (required): The storage format in which data is shipped to the destination Kafka instance. Valid values:
      • canal_json: Canal parses the incremental logs of the source database and transfers the incremental data to the destination Kafka instance in the Canal JSON format.
      • dts_avro: Avro is a data serialization format into which data structures or objects can be converted to facilitate storage and transmission.
      • shareplex_json: The data replication software SharePlex reads the data in the source database and writes the data to the destination Kafka instance in the SharePlex JSON format.
      • debezium: Debezium is a tool that captures data changes. It supports real-time streaming of data updates from the source PolarDB for Oracle cluster to the destination Kafka instance.
      Note For more information, see Data formats of a Kafka cluster.
    destKafkaPartitionKey (required): The policy used to synchronize data to Kafka partitions. Valid values:
      • none: DTS synchronizes all data and DDL statements to Partition 0 of the destination topic.
      • database_table: DTS uses the database and table names as the partition key to calculate the hash value. Then, DTS synchronizes the data and DDL statements of each table to the corresponding partition of the destination topic.
      • columns: DTS uses a table column as the partition key to calculate the hash value. The primary key is used by default. If a table does not have a primary key, the unique key is used as the partition key. DTS synchronizes each row to the corresponding partition of the destination topic. You can specify one or more columns as partition keys to calculate the hash value.
      Note For more information about synchronization policies, see Specify the policy for synchronizing data to Kafka partitions.
    Example:
    {
        "destTopic": "dtstestdata",
        "destVersion": "1.0",
        "destSSL": "0",
        "dest.kafka.schema.registry.url": "http://12.1.12.**/api",
        "sink.kafka.ddl.topic": "dtstestdata",
        "kafkaRecordFormat": "canal_json",
        "destKafkaPartitionKey": "none"
    }
  • If the database type of the source or destination instance is MongoDB, you must use the Reserve parameter to specify the architecture type of the MongoDB database.
    srcEngineArchType (required): The architecture type of the source MongoDB database. Valid values:
      • 0: standalone architecture
      • 1: replica set architecture
      • 2: sharded cluster architecture
      Example:
      {
          "srcEngineArchType": "1"
      }
    destEngineArchType (required): The architecture type of the destination MongoDB database. Valid values:
      • 0: standalone architecture
      • 1: replica set architecture
      • 2: sharded cluster architecture
      Example:
      {
          "destEngineArchType": "1"
      }
  • If the source or destination instance is a self-managed database connected over CEN, you must use the Reserve parameter to specify the ID of the CEN instance.
    srcInstanceId (required): The ID of the CEN instance for the source instance.
      Example:
      {
          "srcInstanceId": "cen-9kqshqum*******"
      }
    destInstanceId (required): The ID of the CEN instance for the destination instance.
      Example:
      {
          "destInstanceId": "cen-9kqshqum*******"
      }
  • If the destination instance is a DataHub or MaxCompute project, you must specify the naming rules for additional columns.
    isUseNewAttachedColumn (required): The naming rules for additional columns. Valid values:
      • true: uses the new naming rules.
      • false: uses the old naming rules. In this case, make sure that disableAttachedDTSColumn is not set to true.
      Note To disable the default rules, set disableAttachedDTSColumn to true and isUseNewAttachedColumn to false.
    disableAttachedDTSColumn (optional): See the preceding note.
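    Example (this sketch uses the new naming rules; the value is for illustration only):
    {
        "isUseNewAttachedColumn": "true"
    }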
  • If the source instance is an Oracle database, you must specify the type of the database.
    srcOracleType (required): The type of the Oracle database. Valid values:
      • sid: non-RAC
      • serviceName: RAC or PDB
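    Example (the value shown is for illustration only):
    {
        "srcOracleType": "sid"
    }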
  • If the destination instance is a PolarDB for MySQL cluster, you must specify the storage engine type of the cluster.
    anySinkTableEngineType (required): The storage engine type of the PolarDB for MySQL cluster. Valid values:
      • InnoDB: the default storage engine.
      • X-Engine: an online transaction processing (OLTP) database storage engine.
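    Example (the value shown is for illustration only):
    {
        "anySinkTableEngineType": "InnoDB"
    }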
  • If the source and destination instances are MySQL instances (including ApsaraDB RDS for MySQL instances and self-managed MySQL databases), PolarDB for MySQL clusters, AnalyticDB for MySQL V3.0 databases, or ApsaraDB RDS for MariaDB V3.0 instances, you must specify the Replicate Temporary Tables When DMS Performs DDL Operations parameter.
    sqlparser.dms.original.ddl (required): If you use Data Management (DMS) to perform online DDL operations on the source database, this parameter specifies whether to migrate the temporary tables generated by those operations. Valid values:
      • true: migrates the data of temporary tables generated by online DDL operations.
        Note If online DDL operations generate a large amount of data, the data migration task may take an extended period of time to complete.
      • false: does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL data of the source database is migrated.
        Note If you set this parameter to false, the tables in the destination database may be locked.
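    Example (the value shown is for illustration only):
    {
        "sqlparser.dms.original.ddl": "true"
    }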