When you call an API operation to configure or query a Data Transmission Service (DTS) task, you may need to specify the Reserve parameter or query its value. The Reserve parameter allows you to view the configurations of the source or destination instance or add configurations to the DTS task. For example, you can specify the data storage format of the destination Kafka database or the ID of the Cloud Enterprise Network (CEN) instance. This topic describes the use scenarios and sub-parameters of the Reserve parameter.

Description

The value of the Reserve parameter is a JSON string. You can specify the Reserve parameter in the following scenarios.

Note If you specify a numeric value, you must enclose the value in double quotation marks ("").
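As a quick illustration of this rule, the following Python sketch (standard library only; destSSL and destVersion are sub-parameters described in the Kafka scenario below) serializes a Reserve value into the required JSON string, with numeric values written as quoted strings:

```python
import json

# Numeric values must appear as strings in the Reserve JSON,
# for example "0" rather than 0.
reserve = {"destSSL": "0", "destVersion": "1.0"}

# The API expects the Reserve value as a JSON string, so serialize
# the dict before passing it to the API operation.
reserve_json = json.dumps(reserve)
print(reserve_json)  # {"destSSL": "0", "destVersion": "1.0"}
```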
  • If the database type of the destination instance is Kafka, you can use the Reserve parameter to configure the information about the Kafka database and the storage format in which data is shipped to the Kafka database.
    destTopic (required): The topic to which the objects migrated or synchronized to the destination Kafka database belong.
    destVersion (required): The version of the destination Kafka database. Valid values: "1.0", "0.9", and "0.10".
    Note If the version of the destination Kafka database is 1.0 or later, you must set the value to "1.0".
    destSSL (required): Specifies whether to encrypt the connection to the destination Kafka database. Valid values:
    • "0": The connection is not encrypted.
    • "3": SCRAM-SHA-256 is used to encrypt the connection.
    dest.kafka.schema.registry.url (optional): If you use Kafka Schema Registry, the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas.
    sink.kafka.ddl.topic (optional): The topic that is used to store DDL information. If you do not specify this parameter, the DDL information is stored in the topic that is specified by the destTopic parameter.
    kafkaRecordFormat (required): The storage format in which data is shipped to the destination Kafka database. Valid values:
    • canal_json: Canal is used to parse the incremental logs of the source database and transfer the incremental data to the destination Kafka database in the Canal JSON format.
    • dts_avro: Avro is a data serialization format into which data structures or objects can be converted to facilitate storage and transmission.
    • shareplex_json: The data replication software SharePlex is used to read the data in the source database and write the data to the destination Kafka database in the SharePlex JSON format.
    • debezium: Debezium is a tool used to capture data changes. Debezium supports real-time streaming of data updates from the source PolarDB for Oracle cluster to the destination Kafka database.
    Note For more information, see Data formats of a Kafka cluster.
    destKafkaPartitionKey (required): The policy for synchronizing data to Kafka partitions. Valid values:
    • none: DTS synchronizes all data and DDL statements to Partition 0 of the destination topic.
    • database_table: DTS uses the database and table names as the partition key to calculate the hash value. DTS then synchronizes the data and DDL statements of each table to the corresponding partition of the destination topic.
    • columns: DTS uses a table column as the partition key to calculate the hash value. By default, the primary key is used as the partition key. If a table does not have a primary key, the unique key is used instead. DTS synchronizes each row to the corresponding partition of the destination topic. You can specify one or more columns as partition keys.
    Note For more information about synchronization policies, see Specify the policy for synchronizing data to Kafka partitions.
    Example:
    {
        "destTopic": "dtstestdata",
        "destVersion": "1.0",
        "destSSL": "0",
        "dest.kafka.schema.registry.url": "http://12.1.12.3/api",
        "sink.kafka.ddl.topic": "dtstestdata",
        "kafkaRecordFormat": "canal_json",
        "destKafkaPartitionKey": "none"
    }
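The destKafkaPartitionKey policies above can be illustrated with a short Python sketch. This is not DTS's internal hash function (which is not documented in this topic); it only shows how a database_table-style policy keeps all records of one table in the same partition:

```python
import hashlib

def pick_partition(database: str, table: str, num_partitions: int) -> int:
    """Illustrative only: maps a database/table pair to a partition using a
    stable hash, the way a database_table-style policy might. DTS's actual
    hash function is internal and may differ."""
    key = f"{database}.{table}".encode("utf-8")
    digest = hashlib.md5(key).hexdigest()
    return int(digest, 16) % num_partitions

# Every record of the same table maps to the same partition,
# so per-table ordering can be preserved within that partition.
p1 = pick_partition("dtstestdata", "orders", 12)
p2 = pick_partition("dtstestdata", "orders", 12)
assert p1 == p2
```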
  • If the database type of the source or destination instance is MongoDB, you can use the Reserve parameter to specify the architecture type of the MongoDB database.
    srcEngineArchType (required): The architecture type of the source MongoDB database. Valid values:
    • "0": standalone architecture
    • "1": replica set architecture
    • "2": sharded cluster architecture
    Example:
    {
        "srcEngineArchType": "1"
    }
    destEngineArchType (required): The architecture type of the destination MongoDB database. Valid values:
    • "0": standalone architecture
    • "1": replica set architecture
    • "2": sharded cluster architecture
    Example:
    {
        "destEngineArchType": "1"
    }
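A minimal Python sketch of building this Reserve value follows. The mapping from readable names to architecture codes is a hypothetical helper for illustration, not part of any SDK; the sub-parameter names and codes are taken from the table above:

```python
import json

# Hypothetical lookup table (illustration only): maps a readable
# architecture name to the code expected by srcEngineArchType and
# destEngineArchType.
ARCH_CODES = {"standalone": "0", "replica_set": "1", "sharded_cluster": "2"}

def mongodb_reserve(src_arch: str, dest_arch: str) -> str:
    """Build the Reserve JSON string for a MongoDB source and destination.
    Codes are strings, per the note that numeric values must be quoted."""
    return json.dumps({
        "srcEngineArchType": ARCH_CODES[src_arch],
        "destEngineArchType": ARCH_CODES[dest_arch],
    })

print(mongodb_reserve("replica_set", "sharded_cluster"))
```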
  • If the source or destination instance is a self-managed database that is connected over CEN, you can use the Reserve parameter to specify the ID of the CEN instance.
    srcInstanceId (required): The ID of the CEN instance for the source instance.
    Example:
    {
        "srcInstanceId": "cen-9kqshqum*******"
    }
    destInstanceId (required): The ID of the CEN instance for the destination instance.
    Example:
    {
        "destInstanceId": "cen-9kqshqum*******"
    }
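The CEN scenario can be sketched the same way. This hypothetical helper (not part of any SDK) assumes that only the side connected over CEN needs its instance ID, per the scenario description above; the placeholder ID is taken from the examples:

```python
import json

def cen_reserve(src_cen_id: str = None, dest_cen_id: str = None) -> str:
    """Build the Reserve JSON string for instances connected over CEN.
    Include the CEN instance ID only for the side(s) that use CEN."""
    reserve = {}
    if src_cen_id:
        reserve["srcInstanceId"] = src_cen_id
    if dest_cen_id:
        reserve["destInstanceId"] = dest_cen_id
    return json.dumps(reserve)

# Source instance connected over CEN; destination reachable directly.
print(cen_reserve(src_cen_id="cen-9kqshqum*******"))
```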