
Data Transmission Service: Migrate data from a PolarDB-X 1.0 instance to a DataHub project

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate schemas and stream incremental data from a PolarDB-X 1.0 instance to a DataHub project without interrupting your applications.

Full data migration is not supported for this source-destination pair. Only schema migration and incremental data migration are available.

Prerequisites

Before you begin, make sure the database account of the source PolarDB-X 1.0 instance has the following permissions:

  • Schema migration: the SELECT permission.

  • Incremental data migration: read and write permissions on the objects to be migrated.
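
Because PolarDB-X 1.0 is compatible with the MySQL protocol, you can confirm these permissions from any MySQL client before you configure the task. The following Python sketch uses pymysql to connect as the migration account and print its grants; the endpoint, account name, and password are placeholders.

```python
# Verify the migration account's privileges before configuring the DTS task.
# Minimal sketch: host, port, and credentials are placeholders; PolarDB-X 1.0
# is MySQL-protocol compatible, so any MySQL client library works here.
import pymysql

conn = pymysql.connect(
    host="pxc-xxxxxxxx.drds.aliyuncs.com",  # placeholder instance endpoint
    port=3306,
    user="dts_migrator",                    # placeholder migration account
    password="your-password",
)
try:
    with conn.cursor() as cursor:
        # SHOW GRANTS lists the privileges of the current account; check that
        # SELECT (schema migration) and read/write permissions on the objects
        # to be migrated (incremental migration) are present.
        cursor.execute("SHOW GRANTS")
        for (grant,) in cursor.fetchall():
            print(grant)
finally:
    conn.close()
```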

Limitations

Source database

  • Table limit: If you select tables as migration objects and rename destination tables or columns, a single task supports up to 1,000 tables. Exceeding this limit causes a request error. Split the migration into multiple tasks, or migrate the entire database as a single object.

  • Read-only instances: You cannot migrate data from a read-only PolarDB-X 1.0 instance.

  • Binary log retention: Retain binary logs for more than 24 hours. If DTS cannot read the binary logs, the task fails, and data loss or inconsistency may occur. If the retention period does not meet this requirement, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance. A quick retention check is sketched after this list.

  • Prohibited operations during migration: Do not upgrade or downgrade the source instance, migrate frequently updated tables, change shard keys, or run DDL operations on the objects to be migrated. These operations cause the migration task to fail.

  • Foreign key constraints: During combined schema migration and incremental data migration, DTS disables constraint checks and cascade operations on foreign keys at the session level. If you run cascade delete operations on the source database during this period, data inconsistency may occur.

  • Network type changes: If you change the network type of the PolarDB-X 1.0 instance during migration, update the network connection settings of the migration task accordingly.

  • Write operations during schema-only migration: If you run schema migration without incremental data migration, stop writing to the source database during migration to prevent data inconsistency.
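
The binary log retention mentioned above can be spot-checked from a MySQL client. The following Python sketch queries the standard MySQL retention variables; it is a sketch only, because on PolarDB-X 1.0 the binary logs are produced by the underlying RDS instances and may need to be checked there, and the connection parameters are placeholders.

```python
# Quick check of binary log retention (see the binary log retention note above).
# Sketch only: the retention variables below are standard MySQL variables and
# may have to be checked on the underlying RDS instances of PolarDB-X 1.0; the
# connection parameters are placeholders.
import pymysql

RETENTION_VARS = ("expire_logs_days", "binlog_expire_logs_seconds")

conn = pymysql.connect(host="pxc-xxxxxxxx.drds.aliyuncs.com", port=3306,
                       user="dts_migrator", password="your-password")
try:
    with conn.cursor() as cursor:
        for name in RETENTION_VARS:
            cursor.execute("SHOW VARIABLES LIKE %s", (name,))
            row = cursor.fetchone()
            if row:
                print(f"{row[0]} = {row[1]}")  # expect at least 24 hours of retention
finally:
    conn.close()
```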

Other limitations

  • Schedule migration during off-peak hours to reduce impact on source and destination database performance.

  • DTS automatically retries failed tasks for up to seven days. Before switching workloads to the destination database, stop or release any failed tasks, or run REVOKE to remove DTS write permissions on the destination database. Otherwise, the failed task may resume and overwrite destination data with source data.

Usage notes

DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log position.
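
If you want to observe this heartbeat, the following Python sketch reads the table. The connection parameters are placeholders, and the table's column layout is not documented here, so the query simply selects all columns.

```python
# Observe the DTS heartbeat table mentioned above. Sketch only: connection
# parameters are placeholders and the table's columns are not documented here,
# so the query selects everything.
import pymysql

conn = pymysql.connect(host="pxc-xxxxxxxx.drds.aliyuncs.com", port=3306,
                       user="dts_migrator", password="your-password")
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM dts_health_check.ha_health_check")
        for row in cursor.fetchall():
            print(row)
finally:
    conn.close()
```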

Billing

  • Schema migration: the instance configuration fee and the Internet traffic fee are free.

  • Incremental data migration: charged. See Billing overview.

Supported migration types

Schema migration

DTS migrates the schemas of the selected source objects to the destination DataHub project. Foreign keys in the source database are also migrated when schema migration is selected.

Incremental data migration

DTS continuously captures and migrates incremental changes from the source database to DataHub after the initial schema migration completes. This lets your applications continue running without interruption during migration.

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: ADD COLUMN

Create a migration task

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

Console operations may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.

Step 2: Select a region

From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

In the new DTS console, select the region in the upper-left corner.

Step 3: Configure source and destination databases

Click Create Task, then configure the source and destination databases.

Warning

After selecting the source and destination instances, read the Limits information displayed at the top of the page before proceeding.

Source database settings

  • Task Name: The task name. DTS assigns a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique.

  • Select an existing DMS database instance: (Optional) Select an existing instance to auto-populate the connection parameters, or leave this blank to configure them manually.

  • Database Type: Select PolarDB-X 1.0.

  • Connection Type: Select Alibaba Cloud Instance.

  • Instance Region: The region where the source PolarDB-X 1.0 instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migration.

  • Instance ID: The ID of the source PolarDB-X 1.0 instance.

  • Database Account: The database account of the source instance. See Prerequisites for the required permissions.

  • Database Password: The password of the database account.

Destination database settings

  • Select an existing DMS database instance: (Optional) Select an existing instance to auto-populate the connection parameters, or leave this blank to configure them manually.

  • Database Type: Select DataHub.

  • Connection Type: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination DataHub project resides.

  • Project: The ID of the destination DataHub project.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the IP address whitelist of the Alibaba Cloud database instance or to the Elastic Compute Service (ECS) security group rules.

Warning

Adding DTS server CIDR blocks to your IP address whitelist or security group rules may expose your database to security risks. Before proceeding, take protective measures: use strong account credentials, limit exposed ports, authenticate API calls, and regularly audit whitelist and security group rules. For self-managed databases in data centers or from third-party providers, manually add the DTS CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Step 5: Configure migration objects

  • Migration Types: Select Schema Migration, Incremental Data Migration, or both.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the precheck if destination tables have the same names as source tables; use object name mapping to rename conflicting tables before migration. Ignore Errors and Proceed skips the name conflict check; if schemas match, DTS skips records with duplicate primary keys, and if schemas differ, only specific columns are migrated or the task fails. Use this mode with caution.

  • Naming Rules of Additional Columns: DTS adds extra columns to destination tables after migration. If those column names conflict with existing column names in the destination, the migration fails. Select New Rule or Previous Rule to control the naming, and check for conflicts before you select a rule. See Naming rules for additional columns.

  • Capitalization of Object Names in Destination Instance: Controls the case of database, table, and column names in the destination. The default uses the DTS policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: Select one or more objects, then click the rightwards arrow icon to move them to Selected Objects. You can select tables or entire databases. If you select tables, DTS does not migrate views, triggers, or stored procedures.

  • Selected Objects: Right-click an object to rename it, add SQL filter conditions, or select the SQL operations to migrate. To rename multiple objects at once, click Batch Edit. See Map object names and Use SQL conditions to filter data.

If a database is selected as a migration object and a source table has a primary key (single-column or composite), that primary key is used as the distribution key in the destination. If a source table has no primary key, DTS generates an auto-increment primary key column in the destination, which may cause data inconsistency.

Renaming an object with object name mapping may cause dependent objects to fail migration.

Step 6: Configure advanced settings

Click Next: Advanced Settings, then configure the following parameters.

  • Set Alerts: No disables alerting. Yes sends notifications when the task fails or migration latency exceeds the threshold; specify the alert threshold and contacts. See Configure monitoring and alerting.

  • Retry Time for Failed Connections: How long DTS retries after a connection failure. Range: 10 to 1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, it fails. If multiple tasks share a source or destination, the shortest retry time applies to all of them.

  • The wait time before a retry when other issues occur in the source and destination databases: How long DTS retries after DDL or DML operations fail. Range: 1 to 1,440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be less than the Retry Time for Failed Connections value.

  • Configure ETL: Yes enables extract, transform, and load (ETL); enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No skips ETL configuration.

DTS continues to charge for the instance during retries. Release the DTS instance promptly after the source and destination instances are released.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To view the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
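
If you would rather create the same task through the OpenAPI, the following Python sketch calls the DTS ConfigureDtsJob operation with the alibabacloud_dts20200101 SDK. The field names and values below (for example, the DRDS and DATAHUB endpoint types and the field that carries the DataHub project) are assumptions drawn from the console fields in this topic; compare them against the output of Preview OpenAPI parameters and the ConfigureDtsJob API reference before using them.

```python
# Sketch: create the migration task via the ConfigureDtsJob API instead of the
# console. Endpoint types, field choices, and the DbList format are assumptions
# to verify with Preview OpenAPI parameters and the API reference.
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_dts20200101.client import Client as DtsClient
from alibabacloud_dts20200101 import models as dts_models

client = DtsClient(open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="dts.cn-hangzhou.aliyuncs.com",   # placeholder regional endpoint
))

request = dts_models.ConfigureDtsJobRequest(
    region_id="cn-hangzhou",
    dts_job_name="polardbx-to-datahub",
    job_type="MIGRATION",
    structure_initialization=True,    # schema migration
    data_initialization=False,        # full data migration is not supported here
    data_synchronization=True,        # incremental data migration
    source_endpoint_instance_type="DRDS",          # assumed value for PolarDB-X 1.0
    source_endpoint_instance_id="drds-xxxxxxxx",   # placeholder instance ID
    source_endpoint_user_name="dts_migrator",      # placeholder account
    source_endpoint_password="your-password",
    destination_endpoint_instance_type="DATAHUB",            # assumed value
    destination_endpoint_region="cn-hangzhou",
    destination_endpoint_instance_id="your_datahub_project",  # assumed field for the project
    # DbList selects the migration objects; here database "your_db" as a whole.
    db_list='{"your_db": {"name": "your_db", "all": true}}',
)

response = client.configure_dts_job(request)
print(response.body)
```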

DTS validates the task configuration before starting migration. The precheck covers connectivity, account permissions, binary log settings, and object compatibility.

  • If a precheck item fails, click View Details to see the error, fix the issue, then click Precheck Again.

  • If a precheck item triggers an alert that can be safely ignored, click Confirm Alert Details, click Ignore in the View Details dialog, click OK, and then click Precheck Again.

Warning

Ignoring a precheck alert may result in data inconsistency or other risks.

Step 8: Purchase the migration instance

Wait until the precheck success rate reaches 100%, then click Next: Purchase Instance.

On the Purchase Instance page, configure the instance class.

  • Resource Group: The resource group to which the migration instance belongs. Default: the default resource group. See What is Resource Management?.

  • Instance Class: The migration speed varies by instance class. Select a class based on your throughput requirements. See Specifications of data migration instances.

Step 9: Start migration

Select the Data Transmission Service (Pay-as-you-go) Service Terms check box, then click Buy and Start.

The migration task starts and appears in the task list. Monitor the task progress from there.
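
You can also poll the task from code instead of the console. The following Python sketch is an assumption-level example that calls the DTS DescribeDtsJobDetail operation; the job ID, region, and response fields should be checked against the DTS API reference.

```python
# Sketch: poll the migration task status and latency via DescribeDtsJobDetail.
# The job ID and region are placeholders; response field names are not assumed
# here, so the whole response body is printed for inspection.
import time

from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_dts20200101.client import Client as DtsClient
from alibabacloud_dts20200101 import models as dts_models

client = DtsClient(open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="dts.cn-hangzhou.aliyuncs.com",   # placeholder regional endpoint
))

request = dts_models.DescribeDtsJobDetailRequest(
    dts_job_id="dtsxxxxxxxx",   # placeholder job ID from the task list
    region_id="cn-hangzhou",
)

for _ in range(10):
    response = client.describe_dts_job_detail(request)
    print(response.body)   # inspect the status and migration latency here
    time.sleep(60)
```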

Data type mappings

See Data type mappings for schema synchronization.

What's next

After incremental data migration is running and the migration latency drops to zero, verify data consistency between the source and destination databases, then switch your applications to use DataHub as the data destination.
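
One way to spot-check that incremental changes are arriving is to read a few records from the destination topic with the DataHub Python SDK (pydatahub). The project, topic, shard ID, and endpoint below are placeholders, and the method signatures are best confirmed against the pydatahub documentation.

```python
# Spot-check incremental data arriving in the destination DataHub topic.
# Sketch with pydatahub: project, topic, shard ID, and endpoint are
# placeholders; confirm method signatures in the SDK documentation.
from datahub import DataHub
from datahub.models import CursorType

dh = DataHub("<your-access-key-id>", "<your-access-key-secret>",
             "https://dh-cn-hangzhou.aliyuncs.com")

project, topic, shard_id = "your_datahub_project", "your_topic", "0"

# Fetch the topic schema, then read a handful of records from one shard.
topic_meta = dh.get_topic(project, topic)
cursor = dh.get_cursor(project, topic, shard_id, CursorType.OLDEST)
result = dh.get_tuple_records(project, topic, shard_id,
                              topic_meta.record_schema, cursor.cursor, 10)
for record in result.records:
    print(record.values)
```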