
Data Transmission Service: Migrate data from a Db2 for LUW database to a PolarDB for MySQL cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a self-managed IBM Db2 for Linux, UNIX, and Windows (LUW) database to a PolarDB for MySQL cluster. DTS supports schema migration, full data migration, and incremental data migration, so you can choose the approach that best fits your downtime tolerance.

Prerequisites

Before you begin, make sure that archive logging is enabled on the source Db2 for LUW database by setting the logarchmeth configuration parameter to a valid value.

Important

After changing the logarchmeth configuration, back up the source database to apply the change. Without a backup, a precheck error occurs when you start the migration task.

For supported Db2 for LUW versions, see Overview of data migration scenarios.
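To verify the archive logging prerequisite before you create the task, you can query the database configuration from a script. The following is a minimal sketch, assuming the Python ibm_db driver and placeholder connection details; it reads the logarchmeth settings from the SYSIBMADM.DBCFG administrative view.

```python
import ibm_db  # pip install ibm_db

# Placeholder connection details; replace with your own host, database, and account.
conn = ibm_db.connect(
    "DATABASE=srcdb;HOSTNAME=db2.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")

# Archive logging is enabled when logarchmeth1 (or logarchmeth2) is not OFF.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT NAME, VALUE FROM SYSIBMADM.DBCFG "
    "WHERE NAME IN ('logarchmeth1', 'logarchmeth2')")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["NAME"], "=", row["VALUE"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```

If you change the setting, take the backup described above before you start the migration task.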

Billing

  • Schema migration and full data migration: the instance configuration fee and the Internet traffic fee are free.

  • Incremental data migration: the instance configuration fee is charged. See Billing overview.

Permissions required

Grant the following permissions to the database accounts used for the migration task.

  • Db2 for LUW: CONNECT and SELECT permissions for schema migration and full data migration; database administrator permissions for incremental data migration.

  • PolarDB for MySQL: read and write permissions on the destination database.

Important

Use a privileged account for the PolarDB for MySQL destination database.

For instructions on creating accounts and granting permissions, see the account management documentation for the source and destination databases.
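As a minimal sketch of the grants above, the following Python snippet issues example GRANT statements from accounts that already have administrative privileges. The account, schema, table, and endpoint names are placeholders, and the Db2 statements cover only schema migration and full data migration; incremental data migration requires database administrator permissions instead.

```python
import ibm_db   # pip install ibm_db
import pymysql  # pip install PyMySQL

# Db2 for LUW source: CONNECT plus SELECT on each table to migrate.
# Placeholder connection string, account (dts_user), and table name.
db2 = ibm_db.connect(
    "DATABASE=srcdb;HOSTNAME=db2.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")
ibm_db.exec_immediate(db2, "GRANT CONNECT ON DATABASE TO USER dts_user")
ibm_db.exec_immediate(db2, "GRANT SELECT ON TABLE myschema.orders TO USER dts_user")
ibm_db.close(db2)

# PolarDB for MySQL destination: read and write on the destination database.
# Assumes the dts_user account already exists on the cluster.
mysql = pymysql.connect(
    host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",
    user="admin_account", password="********")
with mysql.cursor() as cur:
    cur.execute("GRANT ALL PRIVILEGES ON destdb.* TO 'dts_user'@'%'")
mysql.close()
```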

Limitations

Schema migration

  • DTS migrates foreign keys from the source database to the destination database.

Full data migration and incremental data migration

  • DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. If you perform cascade update or delete operations on the source database during migration, data inconsistency may occur.

Source database requirements

  • The source database server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Tables without these constraints may produce duplicate records in the destination database. A query sketch after this list shows how to find tables that lack these constraints.

  • If you select tables as the migration objects and need to rename tables or columns in the destination, a single migration task supports up to 1,000 tables. For more than 1,000 tables, create multiple tasks or migrate the entire database instead.
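To find tables that lack a PRIMARY KEY or UNIQUE constraint before you configure the task, you can query the Db2 catalog views. This is a sketch that assumes the Python ibm_db driver; the schema name and connection details are placeholders.

```python
import ibm_db  # pip install ibm_db

conn = ibm_db.connect(
    "DATABASE=srcdb;HOSTNAME=db2.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")

# Tables in MYSCHEMA with neither a primary key ('P') nor a unique constraint ('U').
sql = """
    SELECT t.TABNAME
    FROM SYSCAT.TABLES t
    WHERE t.TABSCHEMA = 'MYSCHEMA'
      AND t.TYPE = 'T'
      AND NOT EXISTS (
          SELECT 1
          FROM SYSCAT.TABCONST c
          WHERE c.TABSCHEMA = t.TABSCHEMA
            AND c.TABNAME = t.TABNAME
            AND c.TYPE IN ('P', 'U'))
"""
stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_tuple(stmt)
while row:
    print("No PRIMARY KEY or UNIQUE constraint:", row[0])
    row = ibm_db.fetch_tuple(stmt)
ibm_db.close(conn)
```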

Incremental data migration requirements

  • Data logging must be enabled on the source database. If it is not, the precheck fails and the migration task cannot start.

  • Log retention requirements: If DTS cannot retrieve data logs due to insufficient retention, the task fails and data loss or inconsistency may occur. The service level agreement (SLA) does not guarantee reliability or performance if log retention does not meet these requirements.

    • Incremental migration only: logs must be retained for more than 24 hours

    • Full data migration combined with incremental data migration: logs must be retained for at least seven days

    • After full data migration completes, set the retention period to more than 24 hours

  • DTS migrates incremental data using the Change Data Capture (CDC) replication technology of Db2 for LUW. For CDC-specific restrictions, see General data restrictions for SQL Replication.

Operations on the source database during migration

  • During full data migration: do not perform DDL operations that change database or table schemas. Such operations cause the migration task to fail.

  • During full data migration (without incremental): do not write data to the source database. To ensure data consistency, select both full data migration and incremental data migration.

Other limitations

  • Schedule migration tasks during off-peak hours. Full data migration uses read and write resources on both source and destination databases, which increases server load.

  • After full data migration, the used tablespace in the destination database is larger than in the source database. Concurrent INSERT operations during migration cause table fragmentation.

  • Do not write data from outside the source database to the destination database during migration. Data from other sources causes inconsistency. For post-migration DDL operations, use Data Management (DMS).

  • If a DDL statement fails to execute in the destination database, DTS continues running the task. View failed DDL statements in the task logs. See View task logs.

Special cases for self-managed Db2 for LUW databases

  • If a primary/secondary switchover occurs on the source database while the migration task is running, the task fails.

  • DTS calculates migration latency by comparing the timestamp of the latest migrated data in the destination database against the current timestamp in the source database. If no DML operations are performed on the source database for an extended period, the latency reading may be inaccurate. To refresh the latency, perform a DML operation on the source database.

    Note

    If you select the entire database as the migration object, create a heartbeat table. DTS updates the heartbeat table every second, keeping the latency reading accurate.
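The documentation does not prescribe a schema for the heartbeat table, so the following is only an illustrative sketch, assuming the Python ibm_db driver and a hypothetical table named dts_heartbeat. It creates the table, writes an initial row, and shows the kind of UPDATE that generates DML activity on an otherwise idle source.

```python
import ibm_db  # pip install ibm_db

# Placeholder connection details and a hypothetical heartbeat table name.
conn = ibm_db.connect(
    "DATABASE=srcdb;HOSTNAME=db2.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=********;", "", "")

ibm_db.exec_immediate(conn, """
    CREATE TABLE dts_heartbeat (
        id INTEGER NOT NULL PRIMARY KEY,
        updated_at TIMESTAMP NOT NULL
    )""")
ibm_db.exec_immediate(conn,
    "INSERT INTO dts_heartbeat (id, updated_at) VALUES (1, CURRENT TIMESTAMP)")

# A later UPDATE refreshes the latency reading when the source is otherwise idle.
ibm_db.exec_immediate(conn,
    "UPDATE dts_heartbeat SET updated_at = CURRENT TIMESTAMP WHERE id = 1")
ibm_db.close(conn)
```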

SQL operations supported for incremental data migration

  • DML: INSERT, UPDATE, and DELETE

Migrate data from Db2 for LUW to PolarDB for MySQL

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

Note

You can also go directly to the Data Migration Tasks page of the new DTS console. Console layout and available options may vary based on your DMS mode. See Simple mode and Customize the layout and style of the DMS console.

Step 2: Select the region

From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

Note

In the new DTS console, select the region in the upper-left corner.

Step 3: Configure the source and destination databases

Click Create Task. On the Create Task page, configure the following parameters.

Source database (Db2 for LUW)

  • Task Name: A descriptive name for the task. DTS assigns a name automatically. Change it to something that makes the task easy to identify. The name does not need to be unique.

  • Select an existing DMS database instance: (Optional) Select an existing registered instance to auto-populate the parameters below. If you skip this, configure the parameters manually.

  • Database Type: Select DB2 for LUW.

  • Access Method: Select Express Connect, VPN Gateway, or Smart Access Gateway. For the preparation steps required to access a self-managed database, see Preparation overview.

  • Instance Region: The region where the source Db2 for LUW database resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No.

  • Connected VPC: The ID of the virtual private cloud (VPC) to which the source database belongs.

  • IP Address or Domain Name: The server IP address of the source Db2 for LUW database.

  • Port Number: The service port of the source database. Default: 50000.

  • Database Name: The name of the source Db2 for LUW database that contains the objects to migrate.

  • Database Account: The database account for the source database. See the Permissions required section for the required permissions.

  • Database Password: The password for the database account.
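Before you submit these values, you can check that the source database is reachable from a host in the connected VPC. This sketch assumes the Python ibm_db driver; the host, port, database name, and account are placeholders that should match the parameters listed above.

```python
import ibm_db  # pip install ibm_db

# Placeholders matching the source database parameters above.
dsn = ("DATABASE=srcdb;HOSTNAME=192.0.2.10;PORT=50000;"
       "PROTOCOL=TCPIP;UID=dts_user;PWD=********;")
try:
    conn = ibm_db.connect(dsn, "", "")
    print("Source Db2 for LUW database is reachable.")
    ibm_db.close(conn)
except Exception as exc:
    print("Connection failed:", exc)
```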

Destination database (PolarDB for MySQL)

  • Select an existing DMS database instance: (Optional) Select an existing registered instance to auto-populate the parameters below.

  • Database Type: Select PolarDB for MySQL.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination PolarDB for MySQL cluster resides.

  • PolarDB Cluster ID: The ID of the destination PolarDB for MySQL cluster.

  • Database Account: The database account for the destination cluster. See the Permissions required section for the required permissions.

  • Database Password: The password for the database account.

  • Encryption: Whether to encrypt the connection by using SSL. Configure based on your security requirements. See Configure SSL encryption.
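Similarly, you can confirm that the destination cluster accepts the account you plan to use. This sketch assumes the PyMySQL driver; the cluster endpoint, account, and database name are placeholders.

```python
import pymysql  # pip install PyMySQL

# Placeholders matching the destination database parameters above.
conn = pymysql.connect(
    host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",
    port=3306,
    user="dts_user",
    password="********",
    database="destdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print("Destination PolarDB for MySQL is reachable, version:", cur.fetchone()[0])
conn.close()
```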

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the IP address whitelist or security group rules of Alibaba Cloud database instances and ECS-hosted databases. For self-managed databases on multiple ECS instances, manually add the DTS CIDR blocks to each instance's security group rules. For on-premises databases or databases hosted by third-party providers, manually add the DTS CIDR blocks to the database whitelist.

For the complete list of DTS server CIDR blocks, see the "CIDR blocks of DTS servers" section in Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Warning

Adding DTS CIDR blocks to a database whitelist or ECS security group introduces security exposure. Before proceeding, take preventive measures: strengthen account credentials, limit exposed ports, authenticate API calls, regularly audit the whitelist or security group rules to remove unauthorized CIDR blocks, and connect to DTS through Express Connect, VPN Gateway, or Smart Access Gateway where possible.
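If you manage the PolarDB whitelist from scripts rather than the console, a sketch along the following lines can add the required CIDR blocks. It assumes the Alibaba Cloud Python SDK core package (aliyun-python-sdk-core) and the PolarDB OpenAPI action ModifyDBClusterAccessWhitelist; the credentials, region, cluster ID, and CIDR blocks are placeholders, and the actual DTS CIDR blocks must come from the reference linked above.

```python
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

# Placeholder credentials, region, and cluster ID.
client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("polardb.aliyuncs.com")
request.set_version("2017-08-01")
request.set_action_name("ModifyDBClusterAccessWhitelist")
request.add_query_param("DBClusterId", "pc-xxxxxxxx")
# SecurityIps replaces the whitelist group, so include every CIDR block that
# must stay allowed; the values below are examples only, not the DTS ranges.
request.add_query_param("SecurityIps", "198.51.100.0/24,203.0.113.10")
print(client.do_action_with_exception(request))
```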

Step 5: Configure migration objects

Configure the following parameters.

  • Migration Types: Select the migration types based on your downtime requirements:

    • Schema Migration + Full Data Migration: a one-time migration with a maintenance window.

    • Schema Migration + Full Data Migration + Incremental Data Migration: continuous migration with minimal downtime.

    If you do not select Incremental Data Migration, stop writes to the source database during migration to prevent data inconsistency.

  • Processing Mode of Conflicting Tables:

    • Precheck and Report Errors: DTS checks whether the destination database already contains tables with the same names as tables in the source database. If conflicts exist, the precheck fails and the task does not start. Use object name mapping if you need to rename objects to resolve conflicts.

    • Ignore Errors and Proceed: DTS skips the conflict check. If the source and destination tables have the same schema, DTS skips records that share primary key values with existing destination records. If the schemas differ, only specific columns are migrated or the task fails. Use this option with caution.

  • Capitalization of Object Names in Destination Instance: Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: Select objects from Source Objects and click the arrow icon to add them to Selected Objects. You can select columns, tables, or databases. If you select tables or columns, DTS does not migrate views, triggers, or stored procedures.

  • Selected Objects: The objects that you selected for migration. To rename an object in the destination, use object name mapping.

    Note

    Using object name mapping may cause dependent objects to fail migration. If you do not migrate all tables, or if the source and destination schemas differ, data in columns that the destination does not contain is lost.

Step 6: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

Data verification

To configure data verification, see Enable data verification.

Advanced settings

  • Select the dedicated cluster used to schedule the task: By default, DTS runs the migration task on the shared cluster. To use a dedicated cluster, purchase and specify one. See What is a DTS dedicated cluster.

  • Set Alerts: Whether to configure alerting. Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting.

  • Select the engine type of the destination database: The storage engine for the destination database: InnoDB (default) or X-Engine (an online transaction processing (OLTP) storage engine).

  • Retry Time for Failed Connections: How long DTS retries failed connections after the task starts. Valid values: 10 to 1440. Unit: minutes. Default: 720. Set this to more than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, the task fails.

    Note

    When multiple tasks share a source or destination database, the most recently set retry time applies to all of them. DTS charges for the instance during retry periods. We recommend that you set the retry time based on your business requirements. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.

  • The wait time before a retry when other issues occur in the source and destination databases: How long DTS retries failed DDL or DML operations. Valid values: 1 to 1440. Unit: minutes. Default: 10. Set this to more than 10 minutes. This value must be less than the value of Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Whether to limit the migration rate during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the load on the destination database. This parameter appears only when Full Data Migration is selected.

  • Enable Throttling for Incremental Data Migration: Whether to limit the migration rate during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). This parameter appears only when Incremental Data Migration is selected.

  • Environment Tag: The environment tag that identifies the DTS instance. Select a tag based on your business requirements.

  • Configure ETL: Whether to use extract, transform, and load (ETL) processing. Select Yes to enter data processing statements. See Configure ETL in a data migration or synchronization task.

Step 7: Save settings and run the precheck

Click Next: Save Task Settings and Precheck.

Note

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.

DTS runs a precheck before starting the migration task. The task can only start after the precheck passes.

  • If a check item fails, click View Details next to the failed item, fix the issue based on the error message, and click Precheck Again.

  • If a check item triggers an alert:

    • If the alert cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, click Precheck Again.

    • If the alert can be safely ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore, then OK, and then Precheck Again. Ignoring an alert may cause data inconsistency.

Step 8: Wait for the precheck to complete

Wait until the success rate reaches 100%, then click Next: Purchase Instance.

Step 9: Configure the instance class

On the Purchase Instance page, configure the following parameters.

  • Resource Group: The resource group to which the migration instance belongs. Default: default resource group. See What is Resource Management?

  • Instance Class: The instance class determines the migration speed. Select a class based on your data volume and time requirements. See Specifications of data migration instances.

Step 10: Accept service terms

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

Step 11: Start the migration task

Click Buy and Start. The migration task starts. Track its progress in the task list.
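If you prefer to track progress from a script instead of the task list, you can poll the DTS OpenAPI. The following sketch assumes the aliyun-python-sdk-core package and the DTS action DescribeDtsJobDetail (API version 2020-01-01); the credentials, region, endpoint, and job ID are placeholders.

```python
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

# Placeholder credentials, region, and DTS job ID.
client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("dts.cn-hangzhou.aliyuncs.com")
request.set_version("2020-01-01")
request.set_action_name("DescribeDtsJobDetail")
request.add_query_param("DtsJobId", "dts-job-id")
request.add_query_param("RegionId", "cn-hangzhou")
print(client.do_action_with_exception(request))
```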