Use Data Transmission Service (DTS) to migrate data from a PolarDB for PostgreSQL (Compatible with Oracle) cluster to an AnalyticDB for MySQL 3.0 cluster. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate data with minimal or no downtime.
Prerequisites
Before you begin, make sure that you have:
A destination AnalyticDB for MySQL V3.0 cluster. For more information, see Create a cluster.
The wal_level parameter set to logical in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. This enables logical replication by adding the required information to the write-ahead log (WAL). For more information, see Set cluster parameters.
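As a quick sanity check, you can confirm the current setting from a SQL client connected to the source cluster. This is a sketch using standard PostgreSQL commands; on PolarDB the parameter itself is changed from the console (see Set cluster parameters), and a change takes effect only after a restart.

```sql
-- Returns the current WAL level; it must be 'logical' for
-- DTS incremental migration to work.
SHOW wal_level;

-- Equivalent query against the catalog view:
SELECT setting FROM pg_settings WHERE name = 'wal_level';
```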
Migration types
| Migration type | Description |
|---|---|
| Schema migration | Migrates schema definitions of migration objects to the destination database. Currently supports tables only. |
| Full data migration | Migrates all historical data from the source database to the destination database. Do not run DDL operations on migration objects before schema migration and full data migration are complete. |
| Incremental data migration | After full data migration is complete, DTS reads incremental data from the write-ahead logs (WAL) of the source database and migrates it to the destination database. This lets you migrate data without stopping your applications. |
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged when the Access Method of the destination database is set to Public IP Address. For more information, see Billing overview. |
| Incremental data migration | Charged. For more information, see Billing overview. | Charged when the Access Method of the destination database is set to Public IP Address. For more information, see Billing overview. |
Required permissions
To run the migration, create accounts with the following permissions on the source and destination databases.
| Database | Required permissions | How to create accounts |
|---|---|---|
| PolarDB for PostgreSQL (Compatible with Oracle) source cluster | A privileged account | Create a database account |
| AnalyticDB for MySQL V3.0 destination cluster | Read and write permissions on the destination database containing the migration objects | Create a database account |
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE. Note When writing data to the destination AnalyticDB for MySQL cluster, DTS automatically converts UPDATE statements to REPLACE INTO statements. If the UPDATE targets the primary key, DTS converts it to DELETE and INSERT statements. |
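To illustrate the conversion described in the note above, assume a hypothetical table t with primary key id. An update to a non-key column and an update to the key column would be written to the destination roughly as follows (a sketch of the documented behavior, not literal DTS output):

```sql
-- Source statement: UPDATE on a non-key column
UPDATE t SET val = 'new' WHERE id = 1;
-- Written to AnalyticDB for MySQL as:
REPLACE INTO t (id, val) VALUES (1, 'new');

-- Source statement: UPDATE that changes the primary key
UPDATE t SET id = 2 WHERE id = 1;
-- Written to AnalyticDB for MySQL as a delete plus an insert:
DELETE FROM t WHERE id = 1;
INSERT INTO t (id, val) VALUES (2, 'new');
```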
Limitations
Review these limitations before starting the migration. Some require action on the source database before the migration task begins.
Source database limits
| Limit | Detail | Impact if not met |
|---|---|---|
| Sufficient outbound bandwidth required | The server that hosts the source database must have sufficient outbound bandwidth. | The data migration speed is affected. |
| WAL must be enabled for incremental migration | For incremental migration, the write-ahead log (WAL) must be enabled. | Incremental migration cannot proceed. |
| Primary key or unique constraint required | Tables to be migrated must have a primary key or a UNIQUE constraint with unique field values. | Duplicate data may appear in the destination database. |
| Table-level migration object cap | A single migration task supports a maximum of 1,000 tables when migrating at the table level with object name mapping. | An error is reported after task submission. Split the tables into multiple migration tasks, or migrate the entire database instead. |
| WAL log retention for incremental migration | For incremental-only tasks: retain WAL logs for more than 24 hours. For tasks that include both full and incremental migration: retain WAL logs for at least 7 days. You can reduce this to 24 hours after full migration completes. | Insufficient retention is not covered by the DTS Service-Level Agreement (SLA) and may cause task failure, data inconsistency, or data loss. |
| No DDL operations during schema and full migration | Do not run DDL operations on source tables while schema migration or full data migration is in progress. | The migration task fails. |
| Logical Replication Slot Failover required | Enable Logical Replication Slot Failover on your PolarDB for PostgreSQL (Compatible with Oracle) instance to prevent logical subscription interruptions during a primary/secondary switchover. For more information, see Enable Logical Replication Slot Failover. Note If the source cluster does not support this feature (for example, when the Database Engine is Oracle Syntax Compatible 2.0), the migration instance may fail and cannot be recovered after an HA failover. | The migration instance fails and cannot be recovered. |
| 256 MB per-record limit for incremental migration | If a single incremental data record exceeds 256 MB during incremental migration, the DTS instance fails and cannot be recovered. | The instance fails; you must reconfigure the DTS instance to resume migration. |
| Long-running transactions accumulate WAL logs | If the source database has long-running transactions during an incremental migration task, WAL logs generated before those transactions are committed cannot be purged. | Logs accumulate and may lead to insufficient disk space in the source database. |
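Two of the limits above can be checked ahead of time with catalog queries on the source cluster. These use standard PostgreSQL system catalogs; the public schema and the one-hour threshold below are assumptions to adapt to your environment.

```sql
-- Tables in schema 'public' with neither a primary key nor a
-- unique constraint; these risk duplicate rows in the destination.
SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint con
    WHERE con.conrelid = c.oid AND con.contype IN ('p', 'u')
  );

-- Transactions open for more than one hour; long-running
-- transactions block WAL cleanup during incremental migration.
SELECT pid, usename, xact_start, state
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour';
```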
Schema migration limits
During schema migration, DTS does not migrate foreign keys from the source database to the destination database.
During full data migration and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.
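Because foreign keys are not migrated, it can help to take an inventory of the constraints you will need to recreate or account for in the destination. A standard PostgreSQL catalog query such as the following lists them:

```sql
-- Foreign-key constraints on the source, with the referencing
-- and referenced tables; these are skipped by schema migration.
SELECT conname              AS constraint_name,
       conrelid::regclass   AS referencing_table,
       confrelid::regclass  AS referenced_table
FROM pg_constraint
WHERE contype = 'f';
```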
Other limits
| Limit | Detail | Impact if not met |
|---|---|---|
| One database per migration task | A single migration task can migrate only one database. | Configure a separate migration task for each additional database. |
| Destination table must have a primary key | The destination table must have a primary key, or you must configure the Primary Key Column in the Configurations for Databases, Tables, and Columns step. | The migration may fail. |
| TimescaleDB and cross-schema inheritance not supported | DTS does not support migrating TimescaleDB extension tables or tables with cross-schema inheritance. | Those tables cannot be migrated using DTS. |
| REPLICA IDENTITY FULL required for incremental migration | Before writing data to tables included in an incremental data migration task, run ALTER TABLE schema.table REPLICA IDENTITY FULL; on each source table. This applies when the instance runs for the first time, or when the migration object granularity is Schema and a new table is created or rebuilt using RENAME. Run this command during off-peak hours to avoid table locks. If you skip the precheck item, DTS runs this command automatically during instance initialization. | Without this setting, data consistency cannot be guaranteed for incremental migration. |
| Do not delete DTS temporary tables | DTS creates the following temporary tables in the source database to obtain DDL statements, incremental table schemas, and heartbeat information. Do not delete them during migration — DTS automatically deletes them when the instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance. | The migration task fails. |
| Replication slot accumulation risk | During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database. DTS retains up to 15 minutes of incremental logs via this slot. When migration fails or the instance is released, DTS attempts to clear the slot automatically. If you change the source database account password or remove the DTS IP address from the whitelist during migration, the slot cannot be cleared automatically — clear it manually to prevent disk space exhaustion. If a failover occurs in the source database, log on to the secondary database to clear the slot manually. | Replication slot accumulation can exhaust disk space and make the source database unavailable. |
| AnalyticDB for MySQL disk usage limit | If the disk usage of any node in the destination AnalyticDB for MySQL 3.0 cluster exceeds 80%, the DTS task becomes abnormal and latency occurs. Estimate the required storage space before starting migration. | The DTS task becomes abnormal and latency occurs. |
| AnalyticDB for MySQL backup conflict | If the destination AnalyticDB for MySQL 3.0 cluster is being backed up while the DTS task is running, the task fails. | Schedule migration to avoid overlap with backup windows. |
| FLOAT and DOUBLE migration precision | DTS migrates FLOAT and DOUBLE columns using ROUND(COLUMN,PRECISION). The default precision is 38 for FLOAT and 308 for DOUBLE. Confirm whether this meets your business requirements before starting. | Data precision may not meet business requirements. |
| DTS auto-resume window | DTS attempts to resume failed migration tasks within seven days. Before switching your business to the destination instance, end or release the task, or revoke the write permissions of the DTS account on the destination instance. | Source data may overwrite destination data after an automatic resume. |
| DDL failures are logged, not fatal | If a DDL statement fails to write to the destination database, the DTS task continues running. View failed DDL statements in the task logs. For more information, see Query task logs. | DDL changes are not applied to the destination; check logs to identify missed DDL operations. |
| Task recovery by DTS support | If the task fails, DTS technical support attempts to recover it within 8 hours. Recovery may involve restarting the task or adjusting DTS task parameters. Database parameters are not modified. For adjustable parameters, see Modify instance parameters. | Recovery time may extend up to 8 hours. |
| Partitioned table migration | Include both the parent table and all child partitions as migration objects. In PolarDB for PostgreSQL (Compatible with Oracle), the parent table does not store data directly — all data is in the child partitions. | Missing child partitions results in data inconsistency. |
| Evaluate performance and schedule off-peak | Before migration, evaluate the performance of source and destination databases. Schedule migration during off-peak hours. Full data migration consumes read and write resources on both databases and may increase the database load. | Increased database load during peak hours may impact production workloads. |
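If you need to clear a leftover DTS replication slot manually (for example, after a password change or failover as described above), you can locate and drop it with standard PostgreSQL functions. The dts_sync_ prefix comes from the limit above; the exact slot name varies per instance, so the name below is a placeholder.

```sql
-- List DTS replication slots and whether they are still active.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an inactive leftover slot (substitute the real slot name).
-- Only do this after the DTS instance has failed or been released.
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```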
Create a migration task
Step 1: Go to the migration task list
Go to the migration task list for the destination region using one of the following methods.
From the DTS console
Log on to the Data Transmission Service (DTS) console.
In the left navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance is located.
From the DMS console
The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.
Log on to the Data Management (DMS) console.
In the top navigation bar, choose .
To the right of Data Migration Tasks, select the region where the migration instance is located.
Step 2: Configure source and destination databases
Click Create Task.
Configure the source and destination databases using the following parameters.
Warning: After you select the source and destination instances, read the limits displayed at the top of the page carefully. Ignoring them may cause task failure or data inconsistency.
Source database
| Parameter | Description |
|---|---|
| Task Name | DTS generates a task name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Connection Type | Select Cloud Instance. |
| Instance Region | Select the region where the source PolarDB for PostgreSQL (Compatible with Oracle) cluster resides. |
| Instance ID | Select the instance ID of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. |
| Database Name | Enter the name of the database that contains the migration objects in the source cluster. |
| Database Account | Enter the privileged account for the source cluster. For required permissions, see Required permissions. |
| Database Password | Enter the password for the database account. |

Destination database
| Parameter | Description |
|---|---|
| Database Type | Select AnalyticDB MySQL 3.0. |
| Connection Type | Select Cloud Instance. |
| Instance Region | Select the region where the destination AnalyticDB for MySQL 3.0 cluster is located. |
| Instance ID | Select the ID of the destination AnalyticDB for MySQL 3.0 cluster. |
| Database Account | Enter the account for the destination AnalyticDB for MySQL 3.0 database. For required permissions, see Required permissions. |
| Database Password | Enter the password for the database account. |

Click Test Connectivity and Proceed at the bottom of the page.
- Ensure that the IP address segment of the DTS service is automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add DTS server IP addresses to a whitelist.
- If the source or destination database is a self-managed database (the Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 3: Configure task objects
On the Configure Objects page, set the following parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types based on your requirements: <br>- Full migration only: select Schema Migration and Full Data Migration. <br>- Migration with minimal downtime: select Schema Migration, Full Data Migration, and Incremental Data Migration. <br>Note: If you skip Schema Migration, make sure the destination database and tables already exist. If you skip Incremental Data Migration, do not write new data to the source during migration, because doing so may cause data inconsistency. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): checks for tables with duplicate names in the destination database. The precheck fails if duplicates exist. <br>Note: If a duplicate table cannot be renamed, use object name mapping to rename it in the destination. <br>Ignore Errors and Proceed: skips the duplicate name check. <br>Warning: This option may cause data inconsistency. During full migration, existing records in the destination are kept, and source records with the same primary key are not migrated. During incremental migration, source records overwrite destination records with the same primary key. If table schemas are inconsistent, only some columns may be migrated, or the migration may fail entirely. |
| DDL and DML operations to be synchronized | Select the DDL or DML operations to migrate at the instance level. For supported operations, see SQL operations supported for incremental migration. <br>Note: To set SQL operations at the table level, right-click the migration object in the Selected Objects section. |
| Merge Tables | Yes: DTS adds the __dts_data_source column to each table to record data sources. For more information, see Enable multi-table merge. <br>No (default): tables are not merged. <br>Note: Table merging is a task-level setting. To merge some tables but not others, create two separate migration tasks. <br>Warning: Do not run DDL operations that change the schema of source tables while table merging is active, because doing so may cause data inconsistency or task failure. |
| Capitalization of object names in destination instance | Configures the case sensitivity policy for migrated object names (databases, tables, columns) in the destination. The default is DTS default policy. You can also match the default policy of the source or destination database. For more information, see Case sensitivity of object names in the destination database. |
| Source Objects | In the Source Objects box, select the objects to migrate and click the right-arrow icon to move them to the Selected Objects box. <br>Important: <br>- If Incremental Data Migration is selected, you can select only one data table. <br>- If Incremental Data Migration is not selected, you can select databases, tables, and columns. <br>- When migrating an entire database: tables with a primary key use the primary key columns as distribution keys; tables without a primary key get an auto-increment primary key column added automatically, which may cause data inconsistency. |
| Selected Objects | To rename a single object, right-click it in the Selected Objects section. For more information, see Individual table column mapping. To rename multiple objects at once, click Batch Edit in the upper-right corner. For more information, see Map multiple object names at a time. <br>Note: Object name mapping may cause migration failures for objects that depend on the renamed objects. <br>To set a WHERE clause to filter data, right-click a table in Selected Objects and set the filter condition. For more information, see Set filter conditions. |
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks on a shared cluster. To run tasks on a dedicated cluster for more stable performance, purchase one separately. |
| Retry Time for Failed Connections | The duration for which DTS retries after a connection to the source or destination database fails. Default: 720 minutes. Range: 10 to 1,440 minutes. Set this to at least 30 minutes. If DTS reconnects within the retry window, the task resumes automatically. <br>Note: For multiple DTS instances that share the same source or destination, the retry time is determined by the most recently created task. Because instance charges continue during the retry period, set an appropriate retry duration or release the instance promptly after the source and destination databases are released. |
| Retry Time for Other Issues | The duration for which DTS retries after non-connectivity issues (such as DDL or DML exceptions) in the source or destination. Default: 10 minutes. Range: 1 to 1,440 minutes. Set this to at least 10 minutes. <br>Important: This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Sets speed limits for the full migration phase to reduce database load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. You can also adjust the full migration speed after the instance starts. |
| Enable Throttling for Incremental Data Migration | Sets speed limits for the incremental migration phase. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. You can also adjust the incremental migration speed after the instance starts. |
| Environment Tag | (Optional) Select an environment tag to identify the instance. |
| Configure ETL | Choose whether to enable the extract, transform, and load (ETL) feature. <br>Yes: enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task. <br>No (default): ETL is disabled. For more information, see What is ETL? |
| Monitoring and Alerting | No (default): no alerts are configured. <br>Yes: configure the alert threshold and notification recipients. The system sends an alert if the migration fails or latency exceeds the threshold. |

Click Next: Data Validation to configure a data validation task. For more information, see Configure data validation.
(Optional) If Schema Migration is selected, click Next: Configure Database and Table Fields to configure the Type, Primary Key Column, Distribution Key, and partition key parameters (Partition Key, Partitioning Rules, Partition Lifecycle) for destination tables.
- Select All for Definition Status to enable modifications.
- In the Primary Key Column field, you can select multiple columns to form a composite primary key. For a composite primary key, also select one or more Primary Key Column values to serve as the Distribution Key and Partition Key. For more information, see CREATE TABLE.
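For reference, a destination table with a composite primary key, a distribution key, and a time-based partition key can be defined in AnalyticDB for MySQL roughly as follows. This is an illustrative sketch with hypothetical table and column names; see the CREATE TABLE documentation for the exact syntax supported by your cluster version.

```sql
CREATE TABLE orders (
  order_id   BIGINT NOT NULL,
  seller_id  BIGINT NOT NULL,
  order_time DATETIME NOT NULL,
  amount     DECIMAL(18, 2),
  -- Composite primary key: must include the distribution key
  -- and partition key columns.
  PRIMARY KEY (order_id, order_time)
)
-- Distribution key: one of the primary key columns.
DISTRIBUTED BY HASH(order_id)
-- Partition key and lifecycle: keep 30 daily partitions.
PARTITION BY VALUE(DATE_FORMAT(order_time, '%Y%m%d')) LIFECYCLE 30;
```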
Step 4: Save settings and run precheck
To preview OpenAPI parameters before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck to save the task and start the precheck.
The migration task starts only after it passes the precheck.
If the precheck fails, click View Details next to the failed item, fix the issue, and run the precheck again.
If the precheck reports warnings: for items that cannot be ignored, click View Details, fix the issue, and run the precheck again. For items that can be ignored, click Confirm Alert Details, then click Ignore, OK, and Precheck Again in sequence to skip the warning. Note that ignored warnings may cause data inconsistency or other risks.
Step 5: Purchase the instance
When the Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, select the instance class.
| Parameter | Description |
|---|---|
| Resource Group Settings | Select the resource group for the instance. Default: the default resource group. For more information, see What is Resource Management? |
| Instance Class | DTS provides multiple instance classes with different performance levels. The instance class affects migration speed. Select one based on your workload. For more information, see Data migration link specifications. |

Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK in the confirmation dialog.
Monitor migration progress
View the migration progress on the Data Migration Tasks list page.
If the migration instance does not include an incremental migration task, it stops automatically when complete. The Status changes to Completed.
If the migration instance includes an incremental migration task, it continues running. The Status remains Running while the incremental migration is in progress.