
Data Transmission Service:Synchronize data from PolarDB for PostgreSQL to AnalyticDB for PostgreSQL

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) synchronizes data from a PolarDB for PostgreSQL cluster to an AnalyticDB for PostgreSQL instance in real time, giving your analytics workloads continuous access to up-to-date OLTP data.

To set up the synchronization, complete these steps:

  1. Meet the prerequisites and grant the required permissions.

  2. Create the synchronization task and configure the source and destination databases.

  3. Select the objects to synchronize and configure advanced settings.

  4. Run the precheck and purchase the instance.

Prerequisites

Before you begin, ensure that you have:

  • A destination AnalyticDB for PostgreSQL instance. If you do not have one, see Create an instance.

  • The wal_level parameter of the source PolarDB for PostgreSQL cluster set to logical. See Set cluster parameters.

  • Enough disk space on the destination database — it must exceed the used disk space of the source database.
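If you have direct SQL access to the source cluster, the prerequisites above can be spot-checked with standard PostgreSQL commands. This is a sketch: on PolarDB for PostgreSQL, wal_level itself is changed through the console parameter settings rather than SQL, and the size query only reports the current database.

```sql
-- Confirm that logical replication is enabled on the source cluster.
SHOW wal_level;  -- expected: logical

-- Check the used size of the current source database, to help size
-- the destination instance's disk.
SELECT pg_size_pretty(pg_database_size(current_database()));
```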

Permissions

Grant the following permissions to the database accounts used by DTS.

Database | Required permissions | How to create and authorize
PolarDB for PostgreSQL | Read and write permissions on the objects to synchronize. A privileged account is required to synchronize DDL operations. | Create a database account
AnalyticDB for PostgreSQL | Read and write permissions on the destination database. An account with the RDS_SUPERUSER permission is also accepted. See User permission management. | Create and manage users

Billing

Synchronization type | Pricing
Schema synchronization and full data synchronization | Free
Incremental data synchronization | Charged. See Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way many-to-one synchronization

For details on topology limits, see Data synchronization topologies.

Supported SQL operations

DML: INSERT, UPDATE, DELETE

DTS automatically converts UPDATE statements to REPLACE INTO statements when writing to the destination AnalyticDB for PostgreSQL instance. If an UPDATE targets primary key columns, DTS converts it to DELETE followed by INSERT.
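The primary-key case can be illustrated as follows. The orders table and its values are hypothetical, and the DELETE/INSERT pair shows the conceptual effect of the conversion, not the literal statements DTS emits:

```sql
-- Hypothetical table: orders(id bigint PRIMARY KEY, amount numeric).
-- Source statement that changes a primary key column:
UPDATE orders SET id = 42 WHERE id = 7;

-- Conceptually equivalent pair applied to the destination
-- (the non-key values are carried over from the source row):
DELETE FROM orders WHERE id = 7;
INSERT INTO orders (id, amount) VALUES (42, 18.50);
```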

DDL: Only tasks created after October 1, 2020 support DDL synchronization. DDL synchronization requires a privileged source database account.

Important

The following table lists supported DDL operations. TRUNCATE TABLE requires PostgreSQL 11 or later.

DDL operation | Supported
CREATE TABLE | Yes
DROP TABLE | Yes
TRUNCATE TABLE | Yes (PostgreSQL 11 or later)
ALTER TABLE ... RENAME TABLE | Yes
ALTER TABLE ... ADD COLUMN | Yes
ALTER TABLE ... ADD COLUMN DEFAULT | Yes
ALTER TABLE ... ALTER COLUMN TYPE | Yes
ALTER TABLE ... DROP COLUMN | Yes
ALTER TABLE ... ADD CONSTRAINT | Yes
ALTER TABLE ... ADD CONSTRAINT CHECK | Yes
ALTER TABLE ... ALTER COLUMN DROP DEFAULT | Yes
CREATE INDEX ON TABLE | Yes

The following DDL statements are not synchronized:

  • Statements containing CASCADE or RESTRICT

  • Statements run in sessions using SET session_replication_role = replica

  • Statements executed by calling functions

  • Commits containing both DML and DDL statements

  • DDL on objects outside the synchronization scope

Limitations

Source database limits

  • Tables to synchronize must have a primary key or a non-null unique index.

  • Long-running transactions combined with an active incremental synchronization task cause write-ahead log (WAL) accumulation, which can exhaust disk space on the source database.

  • Enable Logical Replication Slot Failover on the PolarDB for PostgreSQL cluster to prevent synchronization interruption during a primary/secondary switchover. If the cluster uses PostgreSQL 14 as its database engine and does not support Logical Replication Slot Failover, a high-availability (HA) switchover can cause the synchronization instance to fail unrecoverably.

  • If a single incremental change exceeds 256 MB after processing, the synchronization instance may fail and cannot be recovered. Reconfigure the synchronization instance to resume.

  • Do not run DDL operations during schema synchronization or full data synchronization. DTS acquires metadata locks on the source database during the full synchronization phase, which can also block DDL operations on the source database.
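WAL accumulation from long-running transactions can be watched with standard catalog queries such as the following sketch, run on the source cluster:

```sql
-- How much WAL each replication slot is retaining.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
         AS retained_wal
FROM pg_catalog.pg_replication_slots;

-- Long-running transactions that keep WAL from being recycled.
SELECT pid, state, now() - xact_start AS duration, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY duration DESC;
```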

Primary key and distribution key constraints

These constraints affect how DTS maps source table structures to the destination AnalyticDB for PostgreSQL instance:

  • If a table has a primary key, the destination table's primary key columns must match those of the source table.

  • If a table has no primary key, the destination table's primary key columns must match the distribution key.

  • The unique key (including the primary key) of the destination table must contain all distribution key columns.
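For example, a destination table satisfying all three constraints might look like the following sketch. The orders table is hypothetical; DISTRIBUTED BY is the AnalyticDB for PostgreSQL clause that sets the distribution key, and here the primary key (id) both matches the source table's primary key and contains the distribution key column:

```sql
CREATE TABLE public.orders (
    id     bigint NOT NULL,
    amount numeric(10, 2),
    PRIMARY KEY (id)
) DISTRIBUTED BY (id);
```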

Other limits

  • Each synchronization task covers one database. Configure a separate task for each additional database.

  • DTS does not support: TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing plugins cannot be synchronized and do not appear in the console during task configuration.

  • DTS does not synchronize the following object types: DATATYPE, SEQUENCE, INDEX, PROCEDURE, FUNCTION, VIEW, OPERATOR, DEFAULT_CONSTRAINT, UK, PK, RULE, DOMAIN, AGGREGATE, EXTENSION, FK, or TRIGGER.

  • Run ALTER TABLE schema.table REPLICA IDENTITY FULL; on the tables to synchronize before writing data to them in these three scenarios. Replace schema and table with the actual schema name and table name. Do not lock the tables while running this command, or deadlocks may occur. Run this command during off-peak hours. If you skip the related precheck items, DTS runs this command automatically during instance initialization.

    • The first time the instance runs.

    • When Schema is selected as the object granularity and a new table is created, or an existing table is rebuilt using the RENAME command.

    • When you use the modify synchronization objects feature.

  • Run the initial full data synchronization during off-peak hours. The process runs concurrent INSERT operations, which increases load on both databases and causes table fragmentation on the destination — resulting in the destination table space being larger than the source.

  • Do not write to the destination database from any source other than DTS during synchronization. Concurrent writes cause data inconsistency.

  • DTS creates a replication slot with the dts_sync_ prefix in the source database. This slot retains incremental logs for the last 15 minutes. When the synchronization fails or the instance is released, DTS attempts to clear the slot automatically.

    • If the source database account password is changed or the DTS IP address is removed from the source database whitelist, the slot cannot be cleared automatically. Clear it manually to prevent continuous disk accumulation.

    • If a failover occurs, log on to the secondary database to clear the slot manually.

  • DTS validates data content but not metadata (such as sequences). Validate metadata separately.

  • Do not delete the following temporary tables that DTS creates in the source database. They are deleted automatically when the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • After switching your workloads to the destination instance, sequences do not increment from the maximum value of the source sequence. Before the switchover, query the maximum sequence values in the source database and set them as the initial values in the destination. Use the following command to query all sequence values:

    -- Print a setval statement for every sequence in the source database.
    do language plpgsql $$
    declare
      nsp name;
      rel name;
      val int8;
    begin
      for nsp, rel in
        select nspname, relname
        from pg_class t2, pg_namespace t3
        where t2.relnamespace = t3.oid and t2.relkind = 'S'
      loop
        execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
        raise notice '%',
          format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val + 1);
      end loop;
    end;
    $$;

    The output contains setval statements for all sequences in the source database. Run only the statements that apply to your destination database.

  • If you use column mapping for a non-full table synchronization, or if the source and destination table schemas are inconsistent, data in columns that exist in the source but not in the destination is lost.

  • When synchronizing partitioned tables, include both the parent table and all its child partitions as synchronization objects. In PostgreSQL, the parent table does not store data directly — all data is in the child partitions. Missing a child partition causes data inconsistency.

  • If a task fails, DTS technical support attempts recovery within 8 hours. During recovery, DTS may restart the task or adjust task parameters. Database parameters are not changed. For adjustable parameters, see Modify instance parameters.

  • During schema synchronization, DTS synchronizes foreign keys from the source database to the destination. During full data synchronization and incremental data synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Data inconsistency may occur if cascade update or delete operations are performed on the source while the task is running.
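When the dts_sync_ replication slot must be cleared manually (for example, after a password change or a failover, as described above), it can be located and dropped with standard replication functions. This is a sketch; the slot name on your cluster will differ from the placeholder below:

```sql
-- Find leftover DTS replication slots on the source cluster.
SELECT slot_name, active, restart_lsn
FROM pg_catalog.pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop one manually after the task is released; replace the
-- placeholder with the slot_name returned above.
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```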

Create a synchronization task

Step 1: Access the synchronization task list

Access the data synchronization task list in the destination region using either the DTS console or the DMS console.

DTS console

  1. Log on to the DTS console.

  2. In the left navigation pane, click Data Synchronization.

  3. In the upper-left corner, select the region where the synchronization instance resides.

DMS console

Note

Steps may vary depending on your DMS console mode and layout. See Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.

  3. To the right of Data Synchronization Tasks, select the region of the synchronization instance.

Step 2: Create a task and configure databases

  1. Click Create Task.

  2. Configure the source and destination databases using the following settings.

    Category | Parameter | Description
    (None) | Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. Names do not need to be unique.
    Source Database | Select existing connection | Select a registered database instance from the drop-down list to auto-fill the connection details. In the DMS console, this is Select a DMS database instance. If not using a registered instance, fill in the fields manually.
     | Database Type | Select PolarDB for PostgreSQL.
     | Connection Type | Select Cloud Instance.
     | Instance Region | Select the region of the source PolarDB for PostgreSQL cluster.
     | Cross-account | Select Non-cross-account for synchronization within the same Alibaba Cloud account.
     | Instance ID | Select the ID of the source PolarDB for PostgreSQL cluster.
     | Database name | Enter the name of the source database.
     | Database Account | Enter the source database account. See Permissions.
     | Database Password | Enter the password for the database account.
    Destination Database | Select existing connection | Select a registered database instance from the drop-down list to auto-fill the connection details. In the DMS console, this is Select a DMS database instance. If not using a registered instance, fill in the fields manually.
     | Database Type | Select AnalyticDB for PostgreSQL.
     | Connection Type | Select Cloud Instance.
     | Instance Region | Select the region of the destination AnalyticDB for PostgreSQL instance.
     | Instance ID | Select the ID of the destination AnalyticDB for PostgreSQL instance.
     | Database name | Enter the name of the destination database that contains the synchronization objects.
     | Database Account | Enter the destination database account. See Permissions.
     | Database Password | Enter the password for the database account.
  3. Click Test Connectivity and Proceed at the bottom of the page.

    DTS needs access to the source and destination databases. Add the DTS IP address blocks to the security settings of both databases. See Add the IP address whitelist of DTS servers.

Step 3: Configure synchronization objects

On the Configure Objects page, set the following parameters.

Parameter | Description
Synchronization types | By default, Incremental Data Synchronization is selected. Also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS synchronizes historical data from the source to the destination as the baseline for incremental synchronization.
Processing mode of conflicting tables | Precheck and Report Errors (default): DTS checks for tables with matching names in the destination. If any are found, the precheck fails and the task does not start. If you cannot delete or rename the conflicting table, use object name mapping to map it to a different name. Ignore Errors and Proceed: DTS skips the duplicate name check. During full synchronization, DTS retains destination records and skips conflicting source records. During incremental synchronization, DTS overwrites destination records with source records. If table schemas are inconsistent, initialization may fail. Use with caution.
Select DDL and DML to Sync at the Instance Level | Select the DDL and DML operations to synchronize. See Supported SQL operations. To configure SQL operations per database or table, right-click the object in Selected Objects and select the operations.
Storage engine type | Select the storage engine for destination tables. The default is Beam. This option is available only if the destination AnalyticDB for PostgreSQL kernel version is v7.0.6.6 or later and Schema Synchronization is enabled.
Case Policy for Destination Object Names | Configure how DTS handles the case of database, table, and column names in the destination. The default is DTS default policy. See Case policy for destination object names.
Source objects | Click the objects in the Source Objects box, then click the right arrow icon to move them to the Selected Objects box. Tables are selectable as synchronization objects.
Selected objects | To rename a single object in the destination, right-click it in the Selected Objects box. See Map a single object name. To rename multiple objects in bulk, click Batch Edit in the upper-right corner. See Map multiple object names in bulk. To filter data with a WHERE clause, right-click the table and set the filter condition. See Set a filter condition.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following.

Parameter | Description
Dedicated cluster for task scheduling | DTS uses a shared cluster by default. For greater task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster?
Retry time for failed connections | If the connection to the source or destination database fails, DTS retries immediately. The default retry duration is 720 minutes (range: 10–1,440 minutes; 30 minutes or more is recommended). If the connection is restored within this period, the task resumes automatically. If multiple DTS instances share a source or destination, DTS applies the shortest configured retry duration across all instances. DTS charges for task runtime during retries.
Retry time for other issues | If a non-connection error occurs (such as a DDL or DML execution error), DTS retries immediately. The default is 10 minutes (range: 1–1,440 minutes; 10 minutes or more is recommended). This value must be less than Retry time for failed connections.
Enable throttling for full data synchronization | Limit the synchronization rate during full data synchronization to reduce load on the destination database. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. You can also adjust the rate while the task is running.
Enable throttling for incremental data synchronization | Limit the incremental synchronization rate by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).
Environment tag | (Optional) Attach an environment tag to identify the instance.
Configure ETL | Enable the extract, transform, and load (ETL) feature to transform data during synchronization. See What is ETL? and Configure ETL in a data synchronization task.
Monitoring and alerting | Set up alerts so DTS notifies alert contacts when the task fails or latency exceeds the threshold. See Configure monitoring and alerting.

Step 5: Configure data verification (optional)

Click Data Verification to configure a data verification task. See Configure data verification.

Step 6: Configure table and field settings

Click Next: Configure Table And Field to set the primary key and distribution columns for each destination table in AnalyticDB for PostgreSQL.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To view the OpenAPI parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.

DTS runs a precheck before the task starts. The task only starts if the precheck passes.

  • If the precheck fails, click View Details next to the failed item, fix the issue, and rerun the precheck.

  • If the precheck generates warnings:

    • For non-ignorable warnings, fix the issue and rerun.

    • For ignorable warnings, click Confirm Alert Details > Ignore > OK, then click Precheck Again.

    Important

    Ignoring precheck warnings may cause data inconsistency. Proceed with caution.

Step 8: Purchase the instance

  1. When the Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the billing method and instance class.

    Parameter | Description
    Billing method | Subscription: Pay upfront for a fixed term (1–9 months, or 1, 2, 3, or 5 years). Cost-effective for long-running tasks. Pay-as-you-go: Billed hourly. Release the instance at any time. Suitable for short-term or test tasks.
    Resource group settings | The resource group for the instance. Defaults to the default resource group. See What is Resource Management?
    Instance class | Select a specification based on your performance requirements. See Data synchronization link specifications.
  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK.

Monitor the task progress on the data synchronization page.

What's next

  • Monitor synchronization latency and task health from the data synchronization page.

  • Before switching your workloads to the destination instance, query and reset sequence values. See Limitations for the command.

  • To change the objects being synchronized after the task starts, use the modify synchronization objects feature. Run ALTER TABLE schema.table REPLICA IDENTITY FULL; on any newly added tables before writing data.