Data Transmission Service (DTS) migrates data from a PolarDB-X 1.0 instance to an AnalyticDB for PostgreSQL instance, enabling centralized analytics on your operational data.
Prerequisites
Before you begin, make sure that you have:
A PolarDB-X 1.0 instance backed by ApsaraDB RDS for MySQL as its storage type. PolarDB for MySQL is not supported.
An AnalyticDB for PostgreSQL instance with storage capacity larger than the occupied storage of the source PolarDB-X 1.0 instance. For details, see Create an instance.
A database created in the destination AnalyticDB for PostgreSQL instance to receive the migrated data. For details, see the "CREATE DATABASE" section of the SQL syntax topic.
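The destination database can be created with a standard SQL statement. A minimal sketch, assuming you connect to the AnalyticDB for PostgreSQL instance with the initial account; the database name `dtsdb` is a placeholder:

```sql
-- Hypothetical example: create a database in the destination
-- AnalyticDB for PostgreSQL instance to receive the migrated data.
-- "dtsdb" is an illustrative name; choose your own.
CREATE DATABASE dtsdb;
```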
Billing
| Migration type | Task configuration fee | Data transfer cost |
|---|---|---|
| Schema migration and full data migration | Free | Free |
| Incremental data migration | Charged | — |
For pricing details, see Billing overview.
Supported SQL operations for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
Permissions required
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source PolarDB-X 1.0 instance | SELECT | SELECT | Read and write on objects to be migrated |
| Destination AnalyticDB for PostgreSQL instance | Read and write on the destination database. You can also use the initial account or an account with the RDS_SUPERUSER permission. | Read and write on the destination database. You can also use the initial account or an account with the RDS_SUPERUSER permission. | Read and write on the destination database. You can also use the initial account or an account with the RDS_SUPERUSER permission. |
To create database accounts and grant permissions, see:
PolarDB-X 1.0: Manage accounts
AnalyticDB for PostgreSQL: Create a database account and Manage users and permissions
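For the destination side, granting read and write access follows PostgreSQL-style syntax. A minimal sketch, assuming a hypothetical account `dts_migrator` and database `dtsdb` (both placeholders; the exact privileges you need are listed in the Permissions required table above):

```sql
-- Hypothetical example for the destination AnalyticDB for PostgreSQL instance:
-- create a migration account and grant it access to the target database.
CREATE USER dts_migrator WITH PASSWORD '********';
GRANT ALL PRIVILEGES ON DATABASE dtsdb TO dts_migrator;
```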
Limitations
Before you start
Review these constraints before configuring the migration task:
Source storage type: The PolarDB-X 1.0 instance must use ApsaraDB RDS for MySQL as its storage type. PolarDB for MySQL is not supported.
Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.
Source table constraints: Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, with all fields unique. Without these constraints, the destination database may contain duplicate records.
Table count limit: If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. Tasks exceeding this limit return a request error. To migrate more than 1,000 tables, split the migration into multiple tasks or migrate the entire database instead.
Incremental migration binlog configuration: For incremental data migration, set the binlog_row_image parameter of the attached ApsaraDB RDS for MySQL instance to full. If this parameter is not set correctly, the precheck fails and the task cannot start.
Unsupported destination table type: The destination table cannot be an append-optimized (AO) table.
Unsupported data types: GEOMETRY, CURVE, SURFACE, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, and GEOMETRYCOLLECTION types cannot be migrated.
Read-only instances: Read-only instances at the PolarDB-X 1.0 compute layer are not supported.
Splitting method: PolarDB-X 1.0 storage resources support horizontal splitting only (for both databases and tables). Vertical splitting is not supported.
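Two of the constraints above can be verified ahead of time with queries against an attached ApsaraDB RDS for MySQL instance. A hedged sketch, assuming a hypothetical schema name `mydb`; the first query lists tables that have neither a PRIMARY KEY nor a UNIQUE constraint, and the second checks the binlog_row_image setting required for incremental migration:

```sql
-- Hypothetical pre-flight checks; 'mydb' is a placeholder schema name.

-- 1. Find tables without a PRIMARY KEY or UNIQUE constraint.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name   = t.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;

-- 2. Verify that row-based binlog images are complete.
SHOW VARIABLES LIKE 'binlog_row_image';  -- should report FULL
```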
During migration
Binary log retention:
Incremental migration only: binary logs must be retained for more than 24 hours.
Full + incremental migration: binary logs must be retained for at least seven days.
After full data migration completes, you can reduce the retention period to more than 24 hours.
Important: If binary logs are not retained for the required duration, DTS may fail to read them, causing task failure, data inconsistency, or data loss. The DTS service level agreement (SLA) does not cover failures caused by insufficient binary log retention.
Network type changes: If you change the network type of the PolarDB-X 1.0 instance during migration, update the network connection settings in the migration task accordingly.
Prohibited operations on the source: Do not scale the capacity of the source instance (including the attached ApsaraDB RDS for MySQL instance), change the distribution of physical databases and tables, migrate frequently-updated tables, change shard keys, or perform DDL operations. These actions cause task failure or data inconsistency.
Write isolation: Write data to the destination database only through DTS. Writing from other sources during migration causes data inconsistency.
Column mapping: If column mapping is used for partial table migration, or if the source and destination schemas differ, data in source columns that have no counterpart in the destination is lost.
Task topology: DTS migrates PolarDB-X 1.0 data across all attached ApsaraDB RDS for MySQL instances, running a subtask for each instance. Track subtask status in Task Topology.
Performance impact: Full data migration uses read and write resources of both the source and destination databases, which can increase database load. Run migration tasks during off-peak hours. After full data migration, the destination tablespace is larger than the source due to fragmentation from concurrent INSERT operations.
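The binlog retention period described above is typically managed through the ApsaraDB RDS console, but the effective values can be inspected from a SQL session. A hedged sketch; which variable applies depends on the MySQL version of the attached instance:

```sql
-- Hypothetical retention checks on an attached ApsaraDB RDS for MySQL instance.
-- The authoritative retention policy is set in the RDS console; these variables
-- only report what the server itself enforces.
SHOW VARIABLES LIKE 'expire_logs_days';            -- MySQL 5.7 and earlier: days
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- MySQL 8.0: seconds
```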
Create a migration task
Step 1: Configure the source and destination databases
Log on to the Data Management (DMS) console.
In the top navigation bar, click DTS.
In the left-side navigation pane, choose DTS (DTS) > Data Migration.
You can also go directly to the Data Migration Tasks page of the new DTS console. The steps may vary slightly depending on the DMS console mode and layout. For details, see Simple mode and Customize the layout and style of the DMS console.
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Click Create Task. On the Create Task wizard page, configure the source and destination databases.
General settings
| Parameter | Description |
|---|---|
| Task Name | The task name. DTS assigns a name automatically. Specify a descriptive name to make the task easy to identify. The name does not need to be unique. |

Source database

| Parameter | Description |
|---|---|
| Select an existing DMS database instance | (Optional) Select a registered DMS database instance to auto-fill the connection parameters. If you do not select an existing instance, configure the parameters manually. |
| Database Type | Select PolarDB-X 1.0. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB-X 1.0 instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| Instance ID | The ID of the source PolarDB-X 1.0 instance. |
| Database Account | The account for the source instance. For required permissions, see Permissions required. |
| Database Password | The password for the database account. |

Destination database

| Parameter | Description |
|---|---|
| Select an existing DMS database instance | (Optional) Select a registered DMS database instance to auto-fill the connection parameters. If you do not select an existing instance, configure the parameters manually. |
| Database Type | Select AnalyticDB for PostgreSQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination AnalyticDB for PostgreSQL instance resides. |
| Instance ID | The ID of the destination AnalyticDB for PostgreSQL instance. |
| Database Name | The name of the database in the destination instance that receives the migrated objects. |
| Database Account | The account for the destination instance. For required permissions, see Permissions required. |
| Database Password | The password for the database account. |

Click Test Connectivity and Proceed. DTS automatically adds its CIDR blocks to the IP address whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL) or to the security group rules of Elastic Compute Service (ECS) instances hosting self-managed databases. For self-managed databases spread across multiple ECS instances, manually add the DTS CIDR blocks to the security group rules of each ECS instance. For self-managed databases in data centers or hosted by third-party providers, manually add the DTS CIDR blocks to the database IP address whitelist. For the full list of DTS CIDR blocks, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
Warning: Adding DTS CIDR blocks to IP address whitelists or security group rules introduces security risks. Before using DTS, take preventive measures including: strengthening account and password security, restricting exposed ports, authenticating API calls, regularly auditing whitelists and security group rules, and connecting the database to DTS through Express Connect, VPN Gateway, or Smart Access Gateway.
Step 2: Select migration types and objects
Configure the following parameters and click Next: Advanced Settings.
Choose a migration type
DTS supports the following migration type combinations. Select one based on your requirements:
Schema Migration + Full Data Migration: Migrates the table schema and all existing data. Use this option when you can pause writes to the source database during migration.
Schema Migration + Full Data Migration + Incremental Data Migration: Migrates existing data and continuously replicates subsequent changes (INSERT, UPDATE, DELETE). Use this option to minimize downtime and maintain service continuity.
If you do not select Incremental Data Migration, avoid writing to the source database during migration to maintain data consistency between source and destination.
Objects and settings
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types as described above. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): checks for tables with identical names in the source and destination before migration starts. If conflicts exist, the precheck fails and the task cannot start. Use Map object names to rename conflicting tables. Ignore Errors and Proceed: skips the precheck for identical table names. If schemas match, records with duplicate primary keys are not migrated. If schemas differ, only specific columns are migrated or the task fails. Use with caution. |
| Capitalization of Object Names in Destination Instance | Controls how database, table, and column names are capitalized in the destination. Default: DTS default policy. For details, see Specify the capitalization of object names in the destination instance. |
| Source Objects | Select tables from the Source Objects list and click the > icon to move them to the Selected Objects list. |
| Selected Objects | To rename a single object, right-click it and use Map the name of a single object. To rename multiple objects, click Batch Edit in the upper-right corner and use Map multiple object names at a time. To filter data with SQL conditions, right-click an object and specify conditions. For details, see Use SQL conditions to filter data. Note Renaming an object may cause dependent objects to fail migration. |
Step 3: Configure advanced settings
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | By default, DTS schedules the task on the shared cluster. To use a dedicated cluster, purchase one and specify it here. For details, see What is a DTS dedicated cluster. |
| Set Alerts | No: disables alerts. Yes: sends notifications to alert contacts when the task fails or migration latency exceeds the threshold. Specify the alert threshold and contacts. For details, see Configure monitoring and alerting. |
| Retry Time for Failed Connections | The time window for DTS to retry failed connections after the task starts. Range: 10–1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, the task fails. Note When multiple tasks share the same source or destination database, the most recently set retry window takes precedence. DTS charges continue during retries. |
| The wait time before a retry when other issues occur in the source and destination databases | The time window for DTS to retry failed DDL or DML operations. Range: 1–1,440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be smaller than the Retry Time for Failed Connections value. |
| Enable Throttling for Full Data Migration | Limits the read and write load during full data migration. When enabled, configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). This parameter is available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits the load during incremental data migration. When enabled, configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). This parameter is available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance by environment. |
| Configure ETL | Yes: enables the extract, transform, and load (ETL) feature. Enter data processing statements in the code editor. For details, see Configure ETL in a data migration or data synchronization task and What is ETL? No: disables ETL. |
Step 4: (Optional) Configure database and table fields
Click Next: Configure Database and Table Fields to specify the Type, Primary Key Column, and Distribution Key for tables being migrated to the AnalyticDB for PostgreSQL instance.
This step is available only when Schema Migration is selected. Set Definition Status to All to view and modify all tables. For composite primary keys, one or more primary key columns must also be specified as distribution key columns. For details, see Manage tables and Define table distribution.
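The composite-primary-key rule above can be illustrated with a table definition. A hedged sketch in AnalyticDB for PostgreSQL DDL; the table and column names are hypothetical, and the point is that the distribution key column (`order_id`) is drawn from the primary key columns:

```sql
-- Hypothetical destination table with a composite primary key.
-- At least one primary key column must also serve as a distribution key.
CREATE TABLE orders (
    order_id  bigint NOT NULL,
    region_id int    NOT NULL,
    amount    numeric(12, 2),
    PRIMARY KEY (order_id, region_id)
) DISTRIBUTED BY (order_id);
```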
Step 5: Run the precheck
Click Next: Save Task Settings and Precheck.
To view the API parameters for configuring this task programmatically, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before the migration starts. Wait for the precheck to complete:
If the precheck passes, proceed to the next step.
If the precheck fails, click View Details next to the failed item, fix the issue based on the error message, and click Precheck Again.
If a precheck alert appears:
If the alert cannot be ignored, click View Details, fix the issue, and run the precheck again.
If the alert can be ignored, click Confirm Alert Details, then click Ignore in the View Details dialog box, click OK, and click Precheck Again.
Warning: Ignoring precheck alerts may result in data inconsistency or other risks to your workloads.
Step 6: Purchase and start the migration instance
Wait until the success rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the instance class.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. For details, see What is Resource Management? |
| Instance Class | The instance class determines the migration speed. Select a class based on your workload. For details, see Specifications of data migration instances. |

Read the Data Transmission Service (Pay-as-you-go) Service Terms and select the check box to agree.
Click Buy and Start. The migration task starts and appears in the task list.
What's next
After the migration task starts, monitor its progress in the task list. For distributed PolarDB-X 1.0 sources, track the status of each subtask in Task Topology.
If you selected full data migration only, the task completes after all data is migrated. If you selected incremental data migration, the task continues to replicate changes from the source until you stop it manually.