Data Transmission Service (DTS) supports continuous data synchronization from a PolarDB for PostgreSQL (Compatible with Oracle) cluster to AnalyticDB for MySQL 3.0. DTS handles schema synchronization, full data synchronization, and incremental data synchronization.
Supported DML operations: INSERT, UPDATE, and DELETE. Unsupported object types: TimescaleDB extension tables, tables with cross-schema inheritance, tables with unique indexes based on expressions, and schemas created by plugin installation.
Billing
| Synchronization type | Pricing |
|---|---|
| Schema synchronization and full data synchronization | Free of charge |
| Incremental data synchronization | Charged. For more information, see Billing overview. |
Prerequisites
Before you begin, make sure that you have:
Created a target AnalyticDB for MySQL 3.0 cluster. For more information, see Create a cluster.
Set the `wal_level` parameter to `logical` in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. This adds the information required for logical decoding to the write-ahead log (WAL). For more information, see Set cluster parameters.
Confirmed that the destination AnalyticDB for MySQL 3.0 instance has more available disk space than the source PolarDB for PostgreSQL (Compatible with Oracle) instance currently uses.
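To confirm the WAL prerequisite, you can check the current setting on the source cluster with a standard PostgreSQL query (a minimal sketch; run it with an account that can read `pg_settings`):

```sql
-- Check that logical decoding is enabled on the source cluster.
SHOW wal_level;   -- expected value: logical

-- Equivalent catalog query:
SELECT name, setting FROM pg_settings WHERE name = 'wal_level';
```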
Permissions required
| Database | Required permissions | How to create an account |
|---|---|---|
| Source PolarDB for PostgreSQL (Compatible with Oracle) instance | Privileged account | Create a database account |
| Destination AnalyticDB for MySQL 3.0 instance | Read and write permissions on the destination database for the synchronization objects | Create a database account |
Limitations
Source database limits
Bandwidth: The server where the source database resides must have sufficient outbound bandwidth. Insufficient bandwidth reduces synchronization speed.
Primary key or UNIQUE constraint: Tables to be synchronized must have a primary key or a UNIQUE constraint with unique fields. Without this, duplicate data may appear in the destination database.
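Before you configure the task, you can query the PostgreSQL system catalogs to find tables that would violate this requirement (a sketch; adjust the schema filter to the schemas you plan to synchronize):

```sql
-- List regular tables in the public schema that have neither a
-- PRIMARY KEY nor a UNIQUE constraint.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint con
    WHERE con.conrelid = c.oid AND con.contype IN ('p', 'u')
  );
```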
Table count per task: If you need to map table or column names and the number of tables exceeds 1,000 in a single task, split them into multiple tasks or configure the task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.
WAL log retention: Enable WAL and retain WAL logs for the required duration. If DTS cannot obtain WAL logs because the retention period is too short, the task fails. In extreme cases, data inconsistency or loss may occur. Issues caused by a WAL log retention period shorter than the DTS requirement are not covered by the Service-Level Agreement (SLA).
Incremental synchronization only: retain WAL logs for more than 24 hours.
Full + incremental synchronization: retain WAL logs for at least 7 days. After full synchronization completes, you can reduce the retention period to more than 24 hours.
Long-running transactions: Long-running transactions in the source database can cause WAL accumulation during incremental synchronization, which may exhaust source database disk space.
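To spot long-running transactions that may cause WAL to accumulate, you can query `pg_stat_activity` on the source database (a sketch; the 1-hour threshold is an arbitrary example, tune it to your workload):

```sql
-- Show open transactions ordered by age; long ones block WAL cleanup.
SELECT pid, usename, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_age DESC;
```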
DDL restrictions during synchronization:
During schema synchronization and full data synchronization: do not perform DDL operations that change the database or table structure. The task fails if you do.
During full data synchronization only: do not write new data to the source. To maintain real-time data consistency, select schema synchronization, full data synchronization, and incremental data synchronization together.
Logical Replication Slot Failover: The source cluster must support and enable Logical Replication Slot Failover to prevent synchronization interruption from a primary/secondary switchover.
Important: If the source cluster's Database Engine is Oracle syntax compatible 2.0, Logical Replication Slot Failover is not supported. A high-availability (HA) switchover in the source database may cause the synchronization instance to fail unrecoverably.
Maximum incremental data size: If a single piece of data exceeds 256 MB after an incremental change, the synchronization instance may fail unrecoverably. Reconfigure the synchronization instance to resume.
Other limits
A single synchronization task can synchronize only one database. Configure separate tasks for each additional database.
The destination database must have a custom primary key, or you must configure the Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, data synchronization may fail.
DTS does not synchronize foreign keys. During full and incremental synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source while the task is running, data inconsistency may occur.
UPDATE statements written to AnalyticDB for MySQL are automatically converted to REPLACE INTO statements. If the primary key is updated, they are converted to DELETE+INSERT statements.
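For example, an upstream UPDATE that does not touch the primary key is applied to the destination as an idempotent upsert (an illustrative sketch with a hypothetical `customer` table, not the exact statements DTS generates):

```sql
-- Source (PolarDB): UPDATE customer SET city = 'Hangzhou' WHERE id = 1;
-- Applied to AnalyticDB for MySQL as:
REPLACE INTO customer (id, name, city) VALUES (1, 'Alice', 'Hangzhou');

-- If the UPDATE changes the primary key (id 1 -> 2), DTS instead applies:
DELETE FROM customer WHERE id = 1;
INSERT INTO customer (id, name, city) VALUES (2, 'Alice', 'Hangzhou');
```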
Partitioned tables: Include both the parent table and all child partitions as synchronization objects. In PostgreSQL partitioned tables, the parent table does not store data directly — all data resides in child partitions. Omitting any child partition causes data inconsistency.
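You can enumerate every child partition of a parent table through the `pg_inherits` catalog to make sure none is omitted from the selected objects (a sketch, assuming a parent table named `public.orders`):

```sql
-- List all child partitions of public.orders.
SELECT inhrelid::regclass AS child_partition
FROM pg_inherits
WHERE inhparent = 'public.orders'::regclass;
```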
REPLICA IDENTITY FULL requirement: Run `ALTER TABLE schema.table REPLICA IDENTITY FULL;` on the tables in the source database before writing data in the following scenarios. Run this command during off-peak hours and do not lock the tables, to prevent deadlocks. If you skip the related precheck items, DTS runs this command automatically during initialization.
When the instance runs for the first time.
When you select Schema as the granularity for object selection, and a new table is created in the schema or a table is rebuilt using the RENAME command.
When you use the modify synchronization objects feature.
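The command and a verification query might look as follows (a sketch, assuming a table named `testschema.customer`; in `pg_class`, `relreplident = 'f'` means FULL):

```sql
-- Log the full old row image in the WAL for UPDATE and DELETE operations.
ALTER TABLE testschema.customer REPLICA IDENTITY FULL;

-- Verify: 'f' = FULL, 'd' = default (primary key), 'n' = nothing, 'i' = index.
SELECT relname, relreplident
FROM pg_class
WHERE oid = 'testschema.customer'::regclass;
```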
Temporary tables: DTS creates the following temporary tables in the source database to support DDL capture, incremental table structure, and heartbeat. Do not delete them during synchronization — they are removed automatically after the DTS instance is released:
`public.dts_pg_class`, `public.dts_pg_attribute`, `public.dts_pg_type`, `public.dts_pg_enum`, `public.dts_postgres_heartbeat`, `public.dts_ddl_command`, `public.dts_args_session`, and `public.aliyun_dts_instance`.
Heartbeat table: DTS adds a heartbeat table named `dts_postgres_heartbeat` to the source database to track incremental data synchronization latency.
Replication slot: DTS creates a replication slot with the `dts_sync_` prefix in the source database. This slot retains incremental logs for the past 15 minutes. When synchronization fails or the instance is released, DTS attempts to clear the slot automatically.
Important: If you change the source database account password or remove the DTS IP address from the source database whitelist during synchronization, DTS cannot clear the replication slot automatically. Manually clear the slot to prevent continuous disk space accumulation. If a failover occurs in the source database, log on to the secondary database to manually clear the slot.
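If manual cleanup becomes necessary, you can inspect and drop the DTS slot with standard PostgreSQL functions (a sketch; confirm the slot is no longer needed before dropping it, because the retained WAL history is discarded irreversibly):

```sql
-- Find DTS replication slots and how much WAL each one retains.
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an inactive leftover slot (replace the name with the one found above).
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```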

Backup conflicts: If the destination AnalyticDB for MySQL 3.0 cluster is being backed up while the DTS task is running, the task fails.
Performance during full synchronization: Full data synchronization consumes read and write resources on both the source and destination databases. Synchronize during off-peak hours (when the CPU load of both databases is below 30%). Full synchronization runs concurrent INSERT operations, which causes table fragmentation in the destination — the destination table space will be larger than the source after full synchronization completes.
FLOAT/DOUBLE precision: DTS reads FLOAT and DOUBLE values using `ROUND(COLUMN, PRECISION)`. The default precision is 38 for FLOAT and 308 for DOUBLE. Confirm that this precision meets your business requirements.
Task auto-recovery: DTS automatically attempts to recover failed tasks for up to 7 days. Before switching your business to the destination instance, end or release the task, or use the `REVOKE` command to revoke the write permissions of the DTS account on the destination instance. This prevents recovered tasks from overwriting destination data.
DTS technical support recovery: If the task fails, DTS technical support attempts recovery within 8 hours. Recovery may involve restarting the task or adjusting DTS task parameters (database parameters are not modified). For the parameters that may be adjusted, see Modify instance parameters.
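A sketch of revoking the DTS account's write privileges on the destination with MySQL-style syntax (`dtstestdata` and `dts_account` are placeholder names, substitute your own):

```sql
-- Stop a recovered DTS task from writing to the destination database.
REVOKE INSERT, UPDATE, DELETE ON dtstestdata.* FROM 'dts_account';
```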
Create a synchronization task
Step 1: Open the data synchronization task list
Go to the data synchronization task list page in the destination region using one of the following methods:
DTS console
Log on to the DTS console.
In the left navigation pane, click Data Synchronization.
In the upper-left corner, select the region where the synchronization instance is located.
DMS console
The actual steps may vary depending on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.
To the right of Data Synchronization Tasks, select the region of the synchronization instance.
Step 2: Configure source and destination databases
Click Create Task.
Configure the source and destination databases using the following parameters:
| Category | Parameter | Description |
|---|---|---|
| None | Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
| Source Database | Select Existing Connection | Select a registered database instance from the drop-down list to auto-populate the fields below. If no registered instance is available, configure the fields manually. Note: In the DMS console, this field is Select a DMS database instance. |
| | Database Type | Select PolarDB (Compatible with Oracle). |
| | Connection Type | Select Cloud Instance. |
| | Instance Region | Select the region where the source instance resides. |
| | Instance ID | Select the ID of the source PolarDB for PostgreSQL (Compatible with Oracle) instance. |
| | Database Name | Enter the name of the database that contains the objects to be synchronized. |
| | Database Account | Enter the database account. For required permissions, see Permissions required. |
| | Database Password | Enter the password for the database account. |
| Destination Database | Select Existing Connection | Select a registered database instance from the drop-down list to auto-populate the fields below. If no registered instance is available, configure the fields manually. Note: In the DMS console, this field is Select a DMS database instance. |
| | Database Type | Select AnalyticDB MySQL 3.0. |
| | Connection Type | Select Cloud Instance. |
| | Instance Region | Select the region where the destination instance resides. |
| | Instance ID | Select the ID of the destination AnalyticDB for MySQL 3.0 instance. |
| | Database Account | Enter the database account. For required permissions, see Permissions required. |
| | Database Password | Enter the password for the database account. |
Click Test Connectivity and Proceed.
Add the CIDR blocks of DTS servers to the security settings of both the source and destination databases before testing. For more information, see Add the IP address whitelist of DTS servers. If the source or destination is a self-managed database, also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 3: Configure task objects
On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Synchronization Types | By default, Incremental Data Synchronization is selected. Also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS synchronizes historical data from the source to the destination as the baseline for incremental synchronization. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): checks for same-name tables in the destination before the task starts. If any are found, the precheck fails and the task does not start. Note: If you cannot delete or rename the conflicting table, map it to a different name. For more information, see Database Table Column Name Mapping. Ignore Errors and Proceed: skips the check for same-name tables. Warning: This mode may cause data inconsistency. During full synchronization, DTS skips source records that conflict with destination records on primary or unique keys. During incremental synchronization, DTS overwrites the destination record. If schemas are inconsistent, initialization may fail, resulting in partial or complete synchronization failure. |
| DDL and DML Operations to Be Synchronized | Select the SQL operations for incremental synchronization at the instance level. For supported operations, see Limitations. To configure at the database or table level, right-click a synchronization object in the Selected Objects list and select the required operations. |
| Merge Tables | Yes: synchronizes multiple tables with the same schema (sharded tables) from the source into a single table in the destination. This is useful in online analytical processing (OLAP) scenarios. Use the object name mapping feature to rename the source tables to the same destination table name. DTS adds a `__dts_data_source` column of the TEXT type to the destination table, which stores values in the format `DTS instance ID:database name:schema name:table name` (for example, `dts********:dtstestdata:testschema:customer1`). Table merging applies at the task level. To merge only some tables, create a separate task for them. Warning: Do not perform DDL operations that change the schema during table merging. Otherwise, data inconsistency or task failure may occur. For details, see Enable table merging. No (default). |
| Capitalization of Object Names in Destination Instance | Sets the case-sensitivity policy for database, table, and column names in the destination. The default is DTS Default Policy. For more information, see Case-sensitivity of object names in the destination database. |
| Source Objects | Click objects in the Source Objects box and move them to the Selected Objects box. Select objects at the granularity of database, table, or column. Selecting tables or columns excludes other objects such as views, triggers, and stored procedures. If you select an entire database: tables with a primary key use that column as the distribution key; tables without a primary key get an auto-generated primary key column, which may cause data inconsistency. |
| Selected Objects | Right-click an object to rename it in the destination. For more information, see Database Table Column Name Mapping. Click an object and move it back to the Source Objects box to deselect it. To set WHERE conditions to filter data, right-click the table and configure the filter. For more information, see Set filter conditions. Note: Object name mapping may cause dependent objects to fail synchronization. |
Click Next: Advanced Settings and configure the following:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. For greater task stability, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster?. |
| Retry Time for Failed Connections | If the connection to the source or destination fails after the task starts, DTS retries immediately. Default: 720 minutes. Range: 10–1,440 minutes. Set this to 30 minutes or more for reliable recovery. If the connection is restored within this period, the task resumes automatically. Note: If multiple DTS instances share a source or destination, DTS uses the shortest retry duration configured across all instances. DTS charges for task runtime during retries. |
| Retry Time for Other Issues | For non-connection issues (such as DDL or DML execution errors), DTS retries immediately. Default: 10 minutes. Range: 1–1,440 minutes. Set this to 10 minutes or more. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Synchronization | Limits the full synchronization rate to reduce load on the source and destination databases. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Synchronization Types includes Full Data Synchronization. You can also adjust the rate while the instance is running. |
| Enable Throttling for Incremental Data Synchronization | Limits the incremental synchronization rate by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
| Environment Tag | Select a tag to identify the instance based on your needs. |
| Configure ETL | Choose whether to enable extract, transform, and load (ETL). For more information, see What is ETL?. Yes: enables ETL. Enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task. No: disables ETL. |
| Monitoring and Alerting | Configure alerts for synchronization failures or latency that exceeds a threshold. No (default): no alerts. Yes: set the alert threshold and notification contacts. For more information, see Configure monitoring and alerting during task configuration. |
Click Data Verification to configure a data verification task. For more information, see Configure data verification.
(Optional) Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, and partition key information (Partition Key, Partitioning Rules, and Partition Lifecycle) for destination tables.
This step is available only if Schema Synchronization is selected in Synchronization Types. Set Definition Status to All to make modifications. Use Primary Key Column to specify a composite primary key, then select one or more columns as the Distribution Key and Partition Key. For more information, see CREATE TABLE.
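For reference, the settings in this step map to destination DDL along the following lines in AnalyticDB for MySQL 3.0 (a sketch with a hypothetical `orders` table; see CREATE TABLE in the AnalyticDB for MySQL documentation for the authoritative syntax):

```sql
CREATE TABLE orders (
  order_id    BIGINT NOT NULL,
  customer_id BIGINT NOT NULL,
  order_date  DATETIME NOT NULL,
  amount      DECIMAL(18, 2),
  PRIMARY KEY (order_id, order_date)                      -- composite primary key
)
DISTRIBUTED BY HASH (order_id)                            -- distribution key
PARTITION BY VALUE (DATE_FORMAT(order_date, '%Y%m%d'))    -- partition key and rule
LIFECYCLE 30;                                             -- partition lifecycle
```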
Step 4: Save the task and run the precheck
To preview the API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before starting. The task only starts if the precheck passes. If the precheck fails, click View Details next to the failed item, fix the issue, and rerun the precheck. For non-ignorable warnings, fix the issue and rerun. For ignorable warnings, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring precheck warnings may cause data inconsistencies.
Step 5: Purchase the instance
When the Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, select the billing method and instance class:
| Parameter | Description |
|---|---|
| Billing Method | Subscription: pay upfront for a fixed duration. Cost-effective for long-term, continuous tasks. Pay-as-you-go: billed hourly for actual usage. Suitable for short-term or test tasks. |
| Resource Group Settings | The resource group to which the instance belongs. Default: default resource group. For more information, see What is Resource Management?. |
| Instance Class | DTS offers synchronization specifications at different performance levels. Select a class based on your requirements. For more information, see Data synchronization link specifications. |
| Subscription Duration | Available only for the Subscription billing method. Monthly: 1–9 months. Yearly: 1, 2, 3, or 5 years. |
Read and select the checkbox for the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK.
Monitor the task progress on the data synchronization page.