
Data Transmission Service:Synchronize data from an RDS for PostgreSQL instance to a SelectDB instance

Last Updated: Jan 17, 2026

Data Transmission Service (DTS) supports data synchronization from a PostgreSQL database, such as a self-managed PostgreSQL database or an RDS for PostgreSQL instance, to a SelectDB instance for large-scale data analytics. This topic describes the procedure using a source RDS for PostgreSQL instance as an example.

Prerequisites

  • You have created a destination SelectDB instance. The storage space of the destination instance must be larger than the storage space used by the source RDS for PostgreSQL instance. For more information, see Create an instance.

  • The wal_level parameter of the source RDS for PostgreSQL instance is set to logical. For more information, see Set instance parameters.
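After you set the parameter, you can verify it from any SQL client connected to the source instance. Note that a change to wal_level takes effect only after the instance restarts.

```sql
-- Verify that logical decoding is enabled; the result must be 'logical'.
SHOW wal_level;
```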

Precautions

Source database limits

  • Synchronization object requirements:

    • All tables to be synchronized have a primary key or a UNIQUE constraint:

      Make sure that the primary key or unique key values are unique. Otherwise, duplicate data may appear in the destination database.

      Note

      If the destination table was not created by DTS (that is, you did not select Schema Synchronization for Synchronization Types), you must ensure that the destination table has the same primary key or non-null UNIQUE constraint as the corresponding table in the source database. Otherwise, duplicate data may occur in the destination database.

    • The synchronization objects include tables that have neither a primary key nor a UNIQUE constraint:

      When you configure the instance, select Schema Synchronization for Synchronization Types. In the Configure Database and Table Fields step, set Engine to duplicate for these tables. Otherwise, the instance may fail or data may be lost.

      Note

      During initial schema synchronization, DTS adds columns to the destination table. For more information, see Additional column information.

    • The name of the database to be synchronized cannot contain hyphens (-), such as dts-testdata.

  • If you synchronize data at the table level, need to edit objects (for example, to map column names), and the number of tables in a single synchronization task exceeds 5,000, a request error may be reported after you submit the task. In this case, split the tables into multiple tasks or configure a task that synchronizes the entire database.

  • Write-ahead log (WAL):

    • WAL must be enabled. Set the wal_level parameter to logical.

    • For an incremental synchronization task, DTS requires that the WAL logs of the source database be retained for more than 24 hours. For a task that performs both full and incremental synchronization, DTS requires that the WAL logs be retained for at least 7 days. You can change the log retention period to more than 24 hours after the initial full data synchronization is complete. If the log retention period is shorter than required and, as a result, the task fails because DTS cannot obtain the required WAL logs or, in extreme cases, data inconsistency or data loss occurs, the issue is not covered by the DTS Service-Level Agreement (SLA).
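To gauge how much WAL the source database currently retains, the standard PostgreSQL system views and parameters can be queried (wal_keep_size requires PostgreSQL 13 or later):

```sql
-- How much WAL is kept for standbys regardless of replication slots.
SHOW wal_keep_size;

-- How much WAL each replication slot is holding back from removal.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```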

  • Source database operation limits:

    • If the source is a self-managed PostgreSQL database and a failover occurs, the data synchronization fails.

    • To ensure that the sync task runs properly and to prevent logical replication interruptions during a failover, you must enable Logical Replication Slot Failover for ApsaraDB RDS for PostgreSQL. For more information, see Logical Replication Slot Failover.

    • Due to the limits of logical subscription in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

    • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

      Note

      During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

  • If the source database has long-running transactions and the instance includes an incremental synchronization task, the write-ahead logs (WALs) generated before the long-running transactions are committed cannot be cleared and may accumulate. This can cause the disk space of the source database to become insufficient.

  • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

Other limits

  • Currently, you can only synchronize data to tables that use the Unique or Duplicate engine in the SelectDB instance.

    The destination table uses the Unique engine

    If the destination table uses the Unique engine, make sure that all unique keys in the destination table also exist in the source table and are included in the synchronization objects. Otherwise, data inconsistency may occur.

    The destination table uses the Duplicate engine

    If the destination table uses the Duplicate engine, duplicate data may appear in the destination database in the following cases. You can remove duplicates based on the additional columns (_is_deleted, _version, and _record_id):

    • The synchronization instance has been retried.

    • The synchronization instance has been restarted.

    • After the synchronization instance starts, two or more DML operations are performed on the same data record.

      Note

      When the destination table uses the Duplicate engine, DTS converts UPDATE or DELETE statements into INSERT statements.
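As a sketch of such deduplication, the latest state of each row can be selected with a window function. The table name (orders) and key column (id) below are illustrative:

```sql
-- Keep only the most recent, non-deleted version of each row in a
-- Duplicate-engine table, using the DTS additional columns.
SELECT *
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (
               PARTITION BY id
               ORDER BY _version DESC, _record_id DESC
           ) AS rn
    FROM orders t
) ranked
WHERE rn = 1
  AND _is_deleted = 0;
```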

  • When you configure parameters in the Selected Objects box, you can only set the bucket_count parameter.

    Note

    The value of bucket_count must be a positive integer. The default value is auto.

  • SelectDB instances only support database and table names that start with a letter. If the name of a database or table to be synchronized does not start with a letter, you must use the mapping feature to change the name.

  • If the name of a synchronization object (database, table, or column) contains Chinese characters, you must use the mapping feature to change the name, for example, to an English name. Otherwise, the task may fail.

  • DDL operations that modify multiple columns at once and consecutive DDL operations on the same table are not supported.

  • A single synchronization instance can synchronize only one database. To synchronize multiple databases, you must configure a synchronization instance for each database.

  • DTS does not synchronize TimescaleDB extension tables, tables with cross-schema inheritance, or tables that contain expression-based unique indexes.

  • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized before you write data to them. This ensures data consistency. Do not perform table locking operations during the execution of this command. Otherwise, the tables may be locked. If you skip the related check items in the precheck, DTS automatically runs this command during the initialization of the instance.

    • When the instance runs for the first time.

    • When the synchronization granularity is schema, and a new table is created in the schema to be synchronized or a table to be synchronized is rebuilt using the RENAME command.

    • When you use the Modify Objects feature.

    Note
    • In the command, replace schema and table with the names of the schema and table to which the data to be synchronized belongs.

    • Perform this operation during off-peak hours.
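The command from the note above, with an illustrative schema and table name (public.orders) substituted, plus a standard catalog query to confirm the setting:

```sql
-- Make the WAL record full old-row images for UPDATE and DELETE operations.
ALTER TABLE public.orders REPLICA IDENTITY FULL;

-- Confirm the setting: relreplident = 'f' means FULL.
SELECT relname, relreplident
FROM pg_class
WHERE oid = 'public.orders'::regclass;
```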

  • During initial full data synchronization, DTS consumes some read and write resources on the source and destination databases, which may increase the database load. Therefore, evaluate the performance of the source and destination databases before you synchronize data, and perform the synchronization during off-peak hours (for example, when the CPU load of the source and destination databases is below 30%).

  • During data synchronization, do not add backend (BE) nodes to the SelectDB database. Otherwise, the task fails. You can try to restart the synchronization instance to resume the failed task.

  • In a multi-table merge scenario, where data from multiple source tables is synchronized to a single destination table, make sure that the source tables have the same schema. Otherwise, data inconsistency or task failure may occur.

  • During data synchronization, do not create a new cluster in the destination SelectDB instance. Otherwise, the task fails. You can try to restart the synchronization instance to resume the failed task.

  • DTS validates data content but does not validate metadata such as sequences. You must validate the metadata yourself.

  • DTS creates the following temporary tables in the source database to obtain the DDL statements of incremental data, the structure of incremental tables, and heartbeat information. During synchronization, do not delete these temporary tables. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • During data synchronization, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. DTS uses this replication slot to obtain incremental logs from the source database within 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear this replication slot.

    Note
    • If you change the password of the database account used by the task or delete the DTS IP address whitelist from the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database to prevent it from accumulating and occupying disk space, which can make the source database unavailable.

    • If a failover occurs on the source database, you must log on to the secondary database to manually clear the replication slot.
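To clear a leftover slot manually, you can use standard PostgreSQL catalog functions. The slot name below is illustrative; list the actual slots first:

```sql
-- List DTS replication slots and whether they are still active.
SELECT slot_name, active
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a slot that is no longer needed (the slot must be inactive).
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```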


  • When you synchronize partitioned tables, you must include both the parent table and its child tables as synchronization objects. Otherwise, data inconsistency may occur in the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child tables. The synchronization task must include the parent table and all its child tables. Otherwise, data in the child tables may not be synchronized, leading to data inconsistency between the source and destination.

  • During incremental synchronization, DTS uses a batch synchronization policy to reduce the load on the destination instance. By default, for a single synchronization object, DTS writes data at most once every 5 seconds. Therefore, the DTS task may have a normal synchronization latency, usually within 10 seconds. To reduce this latency, modify the selectdb.reservoir.timeout.milliseconds parameter of the DTS instance in the console to adjust the batching time. The allowed range is [1000, 10000] milliseconds.

    Note

    When you adjust the batching time, a lower value increases the write frequency of DTS. This may increase the load and write response time (RT) of the destination instance, which in turn increases the DTS synchronization latency. Therefore, adjust the batching time based on the load of the destination instance.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

Special cases

    • When the source instance is an ApsaraDB RDS for PostgreSQL instance

      During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

    • When the source instance is a self-managed PostgreSQL database

      Make sure that the values of the max_wal_senders and max_replication_slots parameters are greater than the sum of the number of replication slots in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.
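A quick way to compare these limits against current usage on the self-managed database, using standard PostgreSQL views and parameters:

```sql
-- Configured limits.
SHOW max_wal_senders;
SHOW max_replication_slots;

-- Replication slots currently in use; the limits above must exceed
-- this count plus the number of DTS instances you plan to create.
SELECT count(*) AS slots_in_use FROM pg_replication_slots;
```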

    • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, the Database Account for the source database must have the `cloudsqlsuperuser` permission. When you select synchronization objects, you must select objects that this account is authorized to manage, or grant the Owner permission for the objects to be synchronized to this account (for example, by running the GRANT <owner_of_the_object_to_be_synchronized> TO <source_database_account_used_by_the_task> command to allow this account to perform related operations as the object owner).

      Note

      An account with the cloudsqlsuperuser permission cannot manage data whose owner is another account with the cloudsqlsuperuser permission.

Billing

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Supported SQL operations for synchronization

  • DML: INSERT, UPDATE, DELETE

  • DDL: ADD COLUMN, DROP COLUMN

Permissions required for database accounts

  • Source RDS for PostgreSQL instance: a privileged account that is the owner of the database to be synchronized (authorized account). For the creation and authorization method, see Create an account and Create a database.

  • Destination SelectDB instance: the cluster access permission (Usage_priv) and read and write permissions on the database (Select_priv, Load_priv, Alter_priv, Create_priv, Drop_priv). For the creation and authorization method, see Cluster Permission Management and Basic Permission Management.
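On the SelectDB side, the grants can look like the following sketch. The user name (dts_user), database (dest_db), and cluster name (new_cluster) are illustrative; check the linked permission topics for the exact syntax of your version:

```sql
-- Read and write permissions on the destination database.
GRANT SELECT_PRIV, LOAD_PRIV, ALTER_PRIV, CREATE_PRIV, DROP_PRIV
    ON dest_db.* TO 'dts_user'@'%';

-- Cluster access permission.
GRANT USAGE_PRIV ON CLUSTER 'new_cluster' TO 'dts_user'@'%';
```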

Procedure

  1. Navigate to the list of synchronization tasks in the destination region. You can use one of the following methods:

    Go to the page from the DTS console

    1. Log on to the Data Transmission Service (DTS) console.

    2. In the left navigation pane, click Data Synchronization.

    3. In the upper-left corner of the page, select the region where the synchronization instance resides.

    Go to the page from the DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS UI.

    1. Log on to Data Management (DMS).

    2. On the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Synchronization.

    3. To the right of Data Synchronization Tasks, select the region where the synchronization instance is located.

  2. Click Create Task to go to the task configuration page.

  3. Configure the source and destination databases.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not have to be unique.

    Source Database

    Select Existing Connection

    • If you want to use a database instance that is added to the system (newly created or saved), select the database instance from the drop-down list. The database information is automatically configured.

      Note

       In the DMS console, this configuration item is named Select a DMS database instance.

    • If you have not added the database instance to the system, or you do not need to use an instance that is already added, you must manually configure the following database information.

    Database Type

    Select PostgreSQL.

    Access Method

    Select Alibaba Cloud Instance.

    Instance Region

    Select the region where the source RDS for PostgreSQL instance resides.

    Replicate Data Across Alibaba Cloud Accounts

    In this example, a database instance that belongs to the current Alibaba Cloud account is used. Select No.

    Instance ID

    Select the ID of the source RDS for PostgreSQL instance.

    Database Name

    Enter the name of the database that contains the objects to be synchronized in the source RDS for PostgreSQL instance.

    Database Account

    Enter the database account of the source RDS for PostgreSQL instance. For information about the required permissions, see Permissions required for database accounts.

    Database Password

    Enter the password that corresponds to the database account.

    Destination Database

    Select Existing Connection

    • If you want to use a database instance that is added to the system (newly created or saved), select the database instance from the drop-down list. The database information is automatically configured.

      Note

       In the DMS console, this configuration item is named Select a DMS database instance.

    • If you have not added the database instance to the system, or you do not need to use an instance that is already added, you must manually configure the following database information.

    Database Type

    Select SelectDB.

    Access Method

    Select Alibaba Cloud Instance.

    Instance Region

    Select the region where the destination SelectDB instance resides.

    Replicate Data Across Alibaba Cloud Accounts

    In this example, a database instance that belongs to the current Alibaba Cloud account is used. Select No.

    Instance ID

    Select the ID of the destination SelectDB instance.

    Database Account

    Enter the database account of the destination SelectDB instance. For information about the required permissions, see Permissions required for database accounts.

    Database Password

    Enter the password that corresponds to the database account.

  4. After you have completed the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that the IP address blocks of DTS servers are added to the security settings of the source and destination databases to allow access from DTS servers. This can be done automatically or manually. For more information, see Add the IP address blocks of DTS servers to a whitelist.

    • If the source or destination database is a self-managed database (where the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

  5. Configure task objects.

    1. On the Configure Objects page, you can configure the objects to be synchronized.


      Synchronization Types

      Incremental Data Synchronization is selected by default. You must also select Schema Synchronization and Full Data Synchronization. After the precheck is complete, DTS initializes the full data of the objects to be synchronized from the source instance to the destination cluster, which serves as the baseline data for subsequent incremental synchronization.

      Important

      When data is synchronized from a PostgreSQL database to a SelectDB instance, data types are converted. If you do not select the Schema Synchronization check box, you must create Unique or Duplicate model tables with the appropriate schemas in the destination SelectDB instance in advance. For more information, see Data type mapping and Data models.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks whether a table with the same name exists in the destination database. If a table with the same name does not exist, the check passes. If a table with the same name exists, the precheck fails and the data synchronization task does not start.

        Note

        If you cannot delete or rename the table with the same name in the destination database, you can map it to a different table name in the destination database. For more information, see Map schemas, tables, and columns.

      • Ignore Errors and Proceed: Skips the check for tables with the same name in the destination database.

        Warning

        If you select Ignore Errors and Proceed, data inconsistency may occur, which can pose risks to your business. For example:

        • If the table schemas are the same and a record in the destination database has the same primary key or unique key value as a record in the source database, the record from the source database overwrites the record in the destination database.

        • If the table schemas are different, the data may fail to be initialized, only some columns of data can be synchronized, or the synchronization may fail. Proceed with caution.

      Capitalization of Object Names in Destination Instance

      You can configure the case sensitivity policy for database, table, and column object names that are synchronized to the destination instance. By default, the DTS default policy is selected. You can also choose to use the default policies of the source and destination databases. For more information, see Case sensitivity policy for destination object names.

      Source Objects

      In the Source Objects box, click an object to synchronize, and then click the right arrow icon to move it to the Selected Objects box.

      Note

      You can select objects at the schema or table level.

      Selected Objects

      • To set the name of a synchronization object in the destination instance or specify which object receives the data, right-click the synchronization object in the Selected Objects box and modify it. For more information, see Map table and column names.

      • To remove a synchronization object, click it in the Selected Objects box and then click the left arrow icon to move it back to the Source Objects box.

      • If you select the Schema Synchronization check box for Synchronization Types and select objects at the table level, you can set the number of buckets (the bucket_count parameter). To do this, right-click the table in the Selected Objects box. In the Parameter Settings section, set Enable Parameter Settings to Yes, specify the Value, and then click OK.

      Note
      • If you use the object name mapping feature, the synchronization of other objects that depend on the mapped object may fail.

      • To set a WHERE clause to filter data, right-click the table to synchronize in the Selected Objects box. In the dialog box that appears, set the filter condition. For more information, see Set filter conditions.

      • To select SQL operations for incremental synchronization, right-click the synchronization object in the Selected Objects box and select the desired operations from the dialog box that appears.

    2. Click Next: Advanced Settings.


      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on a shared cluster, and you do not need to select a cluster. For more stable performance, you can purchase a dedicated cluster to run DTS synchronization tasks. For more information, see What is a DTS dedicated cluster?.

      Retry Time for Failed Connections

      After a synchronization task starts, if the connection to the source or destination database fails, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can also specify a custom retry duration from 10 to 1,440 minutes. We recommend that you set the duration to 30 minutes or more. If DTS successfully reconnects to the database within the specified duration, the synchronization task automatically resumes. Otherwise, the task fails.

      Note
      • If you have multiple DTS instances (for example, Instance A and Instance B) that share the same source or destination, and you set the network retry time to 30 minutes for Instance A and 60 minutes for Instance B, the shorter duration of 30 minutes is used for both.

      • Because DTS charges for task runtime during the connection retry period, we recommend that you customize the retry duration based on your business needs or release the DTS instance as soon as possible after the source and destination database instances are released.

      Retry Time for Other Issues

      After the synchronization task starts, if other non-connectivity issues occur with the source or destination database (such as DDL or DML execution exceptions), DTS reports an error and immediately starts continuous retry operations. The default retry duration is 10 minutes. You can also customize the retry duration within the range of 1 to 1,440 minutes. We recommend that you set it to 10 minutes or more. If the relevant operations are successful within the set retry duration, the synchronization task automatically resumes. Otherwise, the task fails.

      Important

      The value for Retry Time for Other Issues must be less than that for Retry Time for Failed Connections.

      Enable Throttling for Full Data Synchronization

      During the full synchronization phase, DTS uses read and write resources from the source and destination databases, which can increase the database load. To reduce the load on the destination database, you can set a rate limit for the full synchronization task by configuring the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters.

      Note
      • This configuration item is available only when Synchronization Types is set to Full Data Synchronization.

      • You can also adjust the full synchronization rate after the synchronization instance is running.

      Enable Throttling for Incremental Data Synchronization

      You can also set a rate limit for the incremental synchronization task. To relieve pressure on the destination database, set the RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

      Environment Tag

      You can select an environment tag to identify the instance. This setting is optional.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Specify whether to configure alerts. If the synchronization fails or the latency exceeds the specified threshold, a notification is sent to an alert contact.

    3. Optional: After you complete the preceding configurations, click Next: Configure Database and Table Fields to set the Primary Key Column, Distribution Key, and Engine for the destination tables.

      Note
      • This step is available only if you select the Schema Synchronization check box for Synchronization Types. You can set Definition Status to All to modify the settings.

      • You can select multiple columns to form a composite primary key for Primary Key Column. You must select one or more columns from the Primary Key Column as the Distribution Key.

      • For a table that has no primary key or UNIQUE constraint, you must set Engine to duplicate. Otherwise, the synchronization task may fail or data may be lost.
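If you create the destination tables yourself instead, the choices in this step correspond to clauses of the SelectDB (Doris-compatible) CREATE TABLE statement. The following is a sketch with illustrative database, table, and column names:

```sql
-- Unique engine: primary key columns go in UNIQUE KEY;
-- the distribution key goes in DISTRIBUTED BY HASH.
CREATE TABLE dest_db.orders (
    id         BIGINT,
    order_time DATETIME,
    amount     DECIMAL(16, 2)
)
UNIQUE KEY (id)
DISTRIBUTED BY HASH (id) BUCKETS 10;

-- Duplicate engine: for tables without a primary key or UNIQUE constraint.
CREATE TABLE dest_db.events (
    event_time DATETIME,
    payload    STRING
)
DUPLICATE KEY (event_time)
DISTRIBUTED BY HASH (event_time) BUCKETS 10;
```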

  6. Save the task and run a precheck.

    • To view the API parameters for configuring this instance, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble.

    • If you have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before the synchronization job starts, DTS runs a precheck. The job can start only after all precheck items are passed.

    • If the precheck fails, click View Details for the failed item. Fix the issue as prompted, and then run the precheck again.

    • If the precheck generates a warning:

      • If a check item fails and cannot be ignored, click View Details next to the item. Follow the instructions to fix the issue, and then run the precheck again.

      • For check items that can be ignored, you can click Confirm Alert Details, Ignore, OK, and Precheck Again in sequence to skip the warning and rerun the precheck. If you ignore a warning item, data inconsistency may occur, which can pose risks to your business.

  7. Purchase the instance.

    1. When the Success Rate is 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the billing method and link specification for the data synchronization instance. The following table describes these options in detail.


      New Instance Class

      Billing Method

      • Subscription: You pay when you create the instance. This is suitable for long-term needs and is more cost-effective than pay-as-you-go. The longer the subscription duration, the higher the discount.

      • Pay-as-you-go: You are charged on an hourly basis. This is suitable for short-term needs. You can release the instance immediately after use to save costs.

      Resource Group Configuration

      The resource group to which the instance belongs. The default value is the default resource group. For more information, see What is Resource Management?.

      Link Specification

      DTS provides synchronization specifications with different performance levels. The synchronization link specification affects the synchronization rate. You can choose a specification based on your business scenario. For more information, see Data synchronization link specifications.

      Subscription Duration

      In subscription mode, select the duration and quantity for the subscription instance. You can choose a monthly subscription from 1 to 9 months, or a yearly subscription of 1, 2, 3, or 5 years.

      Note

      This option appears only when the billing method is Subscription.

    3. After you complete the configuration, read and select the check box for the Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start. In the OK dialog box, click OK.

      You can view the task progress on the Data Synchronization page.

Data type mapping

The following mappings are grouped by PostgreSQL type category (PostgreSQL data type → SelectDB data type).

NUMERIC

  • SMALLINT → SMALLINT

  • INTEGER → INT

  • BIGINT → BIGINT

  • DECIMAL → DECIMAL

  • NUMERIC → DECIMAL

  • REAL → DOUBLE

  • DOUBLE → DOUBLE

  • SMALLSERIAL → SMALLINT

  • SERIAL → INT

  • BIGSERIAL → BIGINT

MONETARY

  • MONEY → STRING

CHARACTER

  • CHAR(n), VARCHAR(n) → VARCHAR

    Important

    To prevent data loss, data of the CHAR(n) and VARCHAR(n) types is converted to VARCHAR(4*n) when synchronized to a SelectDB instance.

    • If the data length is not specified, the SelectDB default value VARCHAR(65533) is used.

    • If the data length exceeds 65533, the data is converted to STRING after being synchronized to SelectDB.

  • TEXT → STRING

BINARY

  • BYTEA → STRING

DATE AND TIME

  • TIMESTAMP [(P)] [WITHOUT TIME ZONE] → DATETIMEV2

  • TIMESTAMP [(P)] WITH TIME ZONE → DATETIMEV2

  • DATE → DATEV2

  • TIME [(P)] [WITHOUT TIME ZONE] → VARCHAR(50)

  • TIME [(P)] WITH TIME ZONE → VARCHAR(50)

  • INTERVAL [FIELDS] [(P)] → STRING

BOOLEAN

  • BOOLEAN → BOOLEAN

GEOMETRIC

  • POINT, LINE, LSEG, BOX, PATH, POLYGON, CIRCLE → STRING

NETWORK ADDRESS

  • CIDR, INET, MACADDR, MACADDR8 → STRING

TEXT SEARCH

  • TSVECTOR → STRING

XML

  • XML → STRING

JSON

  • JSON → JSON
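As an illustration of the CHARACTER rules above, the comments in this source-table sketch show the destination column type that results from each definition (names are illustrative):

```sql
-- PostgreSQL source table and the resulting SelectDB column types.
CREATE TABLE public.customers (
    code CHAR(8),       -- becomes VARCHAR(32)   (4 * 8)
    name VARCHAR(100),  -- becomes VARCHAR(400)  (4 * 100)
    note VARCHAR,       -- no length specified: becomes VARCHAR(65533)
    bio  TEXT           -- becomes STRING
);
```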

Additional column information

Note

The following describes the additional columns that DTS automatically adds, or that you must manually add, to a destination table that uses the Duplicate model.

  • _is_deleted (Int, default value 0): indicates whether the data record is deleted.

    • Insert: the value is 0.

    • Update: the value is 0.

    • Delete: the value is 1.

  • _version (Bigint, default value 0):

    • For full data synchronization, the value is 0.

    • For incremental data synchronization, the value is the corresponding timestamp (in seconds) in the write-ahead log (WAL) of the source database.

  • _record_id (Bigint, default value 0):

    • For full data synchronization, the value is 0.

    • For incremental data synchronization, the value is the record ID of the incremental log entry, which uniquely identifies it.

      Note

      The ID value is unique and monotonically increasing.
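If you create a Duplicate-engine destination table yourself (without Schema Synchronization), the three columns must be added manually. A sketch with illustrative names follows; note that SelectDB/Doris DDL quotes default values:

```sql
CREATE TABLE dest_db.orders (
    id          BIGINT,
    amount      DECIMAL(16, 2),
    -- DTS additional columns used for deduplication:
    _is_deleted INT    DEFAULT "0",
    _version    BIGINT DEFAULT "0",
    _record_id  BIGINT DEFAULT "0"
)
DUPLICATE KEY (id)
DISTRIBUTED BY HASH (id) BUCKETS 10;
```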