You can use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to a Tablestore instance with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration, so the source database can remain online while the migration runs.
Prerequisites
Before you begin, ensure that you have:

- A Tablestore instance. For setup instructions, see Use Tablestore.
- The AccessKey ID and AccessKey secret of the Alibaba Cloud account (not a Resource Access Management (RAM) user) that owns the Tablestore instance. See Create an AccessKey pair.

If you use the AccessKey pair of a RAM user, the migration task may fail because the RAM user is not granted the required permissions. Use the credentials of the Alibaba Cloud account instead.
Limitations
Source database
| Constraint | Detail |
|---|---|
| Primary key or unique constraint | Tables to migrate must have a PRIMARY KEY or UNIQUE constraint, and the fields covered by the constraint must be unique. Otherwise, the destination may contain duplicate records. |
| Table name casing | Tables with uppercase letters in their names support schema migration only — full and incremental migration are not supported for these tables. |
| Max tables per task | Up to 5,000 tables when migrating individual tables with renaming. Exceeding this limit causes a request error. To migrate more tables, either split the work into multiple tasks or migrate the entire database. |
| Binary logging | The binlog_row_image parameter must be set to full. If it is not, the precheck fails and the task cannot start. See Parameter settings. |
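The source-side constraints above can be checked before you configure the task. The sketch below assumes you have already run `SHOW GLOBAL VARIABLES LIKE 'binlog_row_image'` and `SHOW TABLES` against the source and collected the results; the helper names and the result formats are illustrative, not part of DTS.

```python
def check_binlog_row_image(variables: dict) -> list[str]:
    """Given the result of SHOW GLOBAL VARIABLES LIKE 'binlog_row_image'
    as a name -> value dict, report problems that would fail the precheck."""
    problems = []
    value = variables.get("binlog_row_image", "").lower()
    if value != "full":
        shown = value if value else "unset"
        problems.append(f"binlog_row_image is {shown!r}, must be 'full'")
    return problems

def check_table_names(tables: list[str]) -> list[str]:
    """Return tables whose names contain uppercase letters: these support
    schema migration only, not full or incremental migration."""
    return [t for t in tables if t != t.lower()]
```

Running both checks before creating the task avoids a failed precheck or a partially migrated table list later.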
Destination (Tablestore)
| Constraint | Detail |
|---|---|
| Max tables | Up to 64 tables per Tablestore instance. Exceeding this limit causes a migration error. Contact Tablestore technical support to raise the limit. |
| Table and column naming | Names must contain only letters, digits, and underscores (_), and must start with a letter or underscore. Length: 1–255 characters. |
| Object scope | Only one database (or multiple tables within the same database) can be migrated per task. Database-level name mapping is not supported. |
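The naming rule above can be expressed as a single regular expression, which is useful for validating table and column names before the migration starts. This is a minimal sketch; the function name is illustrative, and Tablestore enforces the rule server-side regardless.

```python
import re

# Letters, digits, and underscores only; must start with a letter or
# underscore; total length 1-255 characters.
_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,254}$")

def is_valid_tablestore_name(name: str) -> bool:
    return bool(_NAME_RE.fullmatch(name))
```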
General
- Run migrations during off-peak hours. Full data migration uses significant read and write resources on both the source and destination, which increases database server load.
- Full data migration causes fragmentation in destination tables. After migration, storage usage on the destination exceeds storage usage on the source.
- During migration, write data to the destination only through DTS. Writes from other sources cause data inconsistency between the source and destination.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Free |
| Incremental data migration | Charged. See Billing overview. | — |
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
Required database account permissions
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source PolarDB-X 2.0 | SELECT | SELECT | REPLICATION SLAVE, REPLICATION CLIENT, and SELECT on objects to migrate |
For instructions on creating accounts and granting permissions, see Manage database accounts and Permissions required for an account to synchronize data.
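For incremental migration, the permissions in the table above translate into two GRANT statements on the source: SELECT on the objects to migrate, and the global REPLICATION SLAVE and REPLICATION CLIENT privileges. The helper below only composes those statements; the account name, host, and database name are placeholders, and you would run the output as a privileged user on the source instance.

```python
def grants_for_incremental(user: str, host: str, db: str) -> list[str]:
    """Compose the GRANT statements an incremental-migration account needs.
    REPLICATION SLAVE and REPLICATION CLIENT are global privileges in MySQL,
    so they are granted ON *.*; SELECT is scoped to the source database."""
    return [
        f"GRANT SELECT ON {db}.* TO '{user}'@'{host}';",
        f"GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO '{user}'@'{host}';",
    ]
```

For schema migration or full data migration only, the first statement (SELECT) is sufficient.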
Create a migration task
Step 1: Open the Data Migration Tasks page
1. Log on to the Data Management (DMS) console.
2. In the top navigation bar, click DTS.
3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

The steps above may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration Tasks page of the new DTS console.
Step 2: Select a region
From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.
In the new DTS console, select the region in the upper-left corner of the page.
Step 3: Configure source and destination databases
Click Create Task. In the Create Task wizard, configure the following parameters.
Source database
| Parameter | Description |
|---|---|
| Select an existing DMS database instance | (Optional) Select an existing DMS instance to auto-populate connection settings, or leave blank to configure manually. |
| Database Type | Select PolarDB-X 2.0. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB-X 2.0 instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No if source and destination are in the same account. |
| Instance ID | The ID of the source PolarDB-X 2.0 instance. |
| Database Account | The account with the permissions listed in the Required database account permissions section. |
| Database Password | The password for the database account. |
Destination database
| Parameter | Description |
|---|---|
| Select an existing DMS database instance | (Optional) Select an existing DMS instance to auto-populate connection settings, or leave blank to configure manually. |
| Database Type | Select Tablestore. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination Tablestore instance resides. |
| Instance ID | The ID of the destination Tablestore instance. |
| AccessKey ID of Alibaba Cloud Account | The AccessKey ID of the Alibaba Cloud account that owns the Tablestore instance. Do not use a RAM user AccessKey ID. |
| AccessKey Secret of Alibaba Cloud Account | The AccessKey secret of the Alibaba Cloud account that owns the Tablestore instance. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP whitelist of Alibaba Cloud database instances. For self-managed databases on Elastic Compute Service (ECS) instances, DTS adds CIDR blocks to the ECS security group rules. If the self-managed database spans multiple ECS instances, manually add DTS CIDR blocks to each instance's security group. For on-premises or third-party cloud databases, manually add the CIDR blocks to the database IP whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
Adding DTS server CIDR blocks to IP whitelists or security group rules introduces security risks. Before proceeding, take preventive measures including: strengthening account and password security, restricting exposed ports, authenticating API calls, regularly auditing IP whitelists and security group rules, and using Express Connect, VPN Gateway, or Smart Access Gateway to connect databases to DTS.
Step 5: Configure objects and migration settings
Configure the following parameters.
Migration type
| Option | When to use |
|---|---|
| Schema Migration + Full Data Migration | One-time migration of existing schema and data. |
| Schema Migration + Full Data Migration + Incremental Data Migration | Minimal-downtime migration. DTS continues capturing changes from the source during full migration, then applies them to the destination. |
If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during migration. This ensures data consistency between the source and destination databases.
Conflict handling
| Parameter | Option | Behavior |
|---|---|---|
| Processing Mode of Conflicting Tables | Precheck and Report Errors | Fails the precheck if destination tables share names with source tables. Use object name mapping to rename tables before starting. |
| | Ignore Errors and Proceed | Skips the precheck for conflicting tables. Rows whose primary key values match existing rows in the destination are not migrated; schema differences may cause partial migration or task failure. Use with caution. |
Tablestore-specific settings
| Parameter | Options and guidance |
|---|---|
| Processing Policy of Dirty Data | Skip ignores write errors and continues. Block stops the task on write errors. |
| Data Write Mode | Overwrite Row uses the PutRowChange operation, which writes each row in full and overwrites any existing row in the Tablestore instance. Update Row uses the UpdateRowChange operation, which updates the columns of existing rows. |
| Batch Write Mode | BulkImportRequest writes data offline, which offers higher read and write efficiency and lower Tablestore usage costs; use it when possible. BatchWriteRowRequest writes data in batches. |
| More (advanced) | Queue Size: length of the write queue. Thread Quantity: number of write callback threads. Concurrency: maximum concurrent write threads. Buckets: maximum concurrent buckets for incremental data (must be less than or equal to Concurrency). |
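The only hard rule stated for the advanced write settings above is that Buckets must not exceed Concurrency. The sketch below models the four settings as a small dataclass with that check; the class name and field names are illustrative assumptions, not a DTS API.

```python
from dataclasses import dataclass

@dataclass
class TablestoreWriteSettings:
    queue_size: int       # length of the write queue
    thread_quantity: int  # number of write callback threads
    concurrency: int      # maximum concurrent write threads
    buckets: int          # max concurrent buckets for incremental data

    def validate(self) -> None:
        if min(self.queue_size, self.thread_quantity,
               self.concurrency, self.buckets) < 1:
            raise ValueError("all settings must be positive integers")
        if self.buckets > self.concurrency:
            raise ValueError("Buckets must be less than or equal to Concurrency")
```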
Object selection
| Parameter | Description |
|---|---|
| Operation Types | The DML operation types to migrate. All types are selected by default. |
| Capitalization of Object Names in Destination Instance | Controls how database, table, and column names are capitalized in the destination. Default is DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select tables or databases to migrate and click the arrow icon to add them to Selected Objects. Tablestore supports migrating only one database, or multiple tables within the same database, per task. |
| Selected Objects | Right-click an object to rename it. Click Batch Edit to rename multiple objects. Note: database name mapping is not supported — only table and column names can be mapped. To filter rows, right-click a table and specify SQL conditions. See Use SQL conditions to filter data. |
Step 6: Configure advanced settings
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | By default, DTS schedules the task on a shared cluster. Select a dedicated cluster for dedicated resources. See What is a DTS dedicated cluster. |
| Set Alerts | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Specify alert contacts and thresholds. See Configure monitoring and alerting. |
| Retry Time for Failed Connections | How long DTS retries a failed connection before marking the task as failed. Range: 10–1440 minutes. Default: 720 minutes. Set to at least 30 minutes to allow time for transient failures to recover. |
| The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries failed DDL or DML operations. Range: 1–1440 minutes. Default: 10 minutes. Set to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits the read/write load on the source and destination during full migration. Configure QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Displayed only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits the load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Displayed only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance. |
| Configure ETL | Extract, transform, and load (ETL) is not supported for this migration path. Select No. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Select Yes to prevent DTS from writing SQL operations to heartbeat tables in the source database. In this case, the console may display latency for the DTS instance even when the task runs normally. Select No to allow heartbeat writes. In this case, features of the source database such as physical backup and cloning may be affected. |
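The retry settings in the table above carry two numeric constraints: the connection retry time must fall in 10 to 1440 minutes (at least 30 recommended), and the wait time before a retry must fall in 1 to 1440 minutes and be strictly less than the connection retry time. A minimal checker, with illustrative function and message names:

```python
def validate_retry_settings(retry_minutes: int = 720,
                            wait_minutes: int = 10) -> list[str]:
    """Return a list of problems with the advanced retry settings.
    Defaults mirror the documented defaults (720 and 10 minutes)."""
    issues = []
    if not 10 <= retry_minutes <= 1440:
        issues.append("Retry Time for Failed Connections must be 10-1440 minutes")
    elif retry_minutes < 30:
        issues.append("recommended: set the connection retry time to at least 30 minutes")
    if not 1 <= wait_minutes <= 1440:
        issues.append("wait time before a retry must be 1-1440 minutes")
    elif wait_minutes >= retry_minutes:
        issues.append("wait time must be less than the connection retry time")
    return issues
```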
Step 7: Configure primary key columns
Click Next: Configure Database and Table Fields. In the Note dialog box, click OK.
DTS automatically maps each table's primary key column to the Tablestore Primary Key Column. To change the mapping, set Definition Status to All and select one or more columns from the drop-down list to create a composite primary key.
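Conceptually, this step maps the source table's primary-key columns, in order, onto a (possibly composite) Tablestore primary key, whose columns are typed STRING, INTEGER, or BINARY. The sketch below illustrates that mapping for a few common MySQL column types; the type table is an assumption for the example, not the exact mapping DTS applies.

```python
# Illustrative mapping of common MySQL primary-key column types to
# Tablestore primary-key column types (STRING, INTEGER, or BINARY).
_TYPE_MAP = {
    "int": "INTEGER", "bigint": "INTEGER", "smallint": "INTEGER",
    "varchar": "STRING", "char": "STRING", "text": "STRING",
    "varbinary": "BINARY", "blob": "BINARY",
}

def tablestore_primary_key(columns: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Map source (name, mysql_type) primary-key columns, in order, to a
    composite Tablestore primary-key schema."""
    return [(name, _TYPE_MAP[mysql_type.lower()]) for name, mysql_type in columns]
```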
Step 8: Run the precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this configuration before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
The task runs a precheck before starting. If precheck items fail:
- Click View Details next to the failed item, resolve the issue, and then click Precheck Again.
- If an alert item can be safely ignored, click Confirm Alert Details, click Ignore, click OK to confirm, and then click Precheck Again.
Ignoring precheck alerts may result in data inconsistency. Proceed only if you understand the implications.
Step 9: Wait for the precheck to pass
Wait until the success rate reaches 100%, then click Next: Purchase Instance.
Step 10: Configure the instance class
On the Purchase Instance page, configure the following parameters under New Instance Class.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the DTS instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Controls migration throughput. Select a class based on data volume and time constraints. See Specifications of data migration instances. |
Step 11: Accept service terms
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
Step 12: Start the migration
Click Buy and Start. The task appears in the task list, where you can monitor its progress.