Use Data Transmission Service (DTS) to migrate data from a self-managed Db2 for LUW database to a PolarDB-X 2.0 instance with minimal downtime.
Migration capabilities
| Capability | Supported | Notes |
|---|---|---|
| Full data migration | Yes | Migrates historical data from the source database |
| Incremental data migration | Yes | Migrates ongoing changes after full migration completes; requires archive logging on the source |
| Schema migration | Yes | Migrates foreign keys; DTS temporarily disables foreign key constraint checks during migration |
| DML operations (INSERT, UPDATE, DELETE) | Yes | Supported during incremental data migration |
| DDL operations | No | Performing DDL on the source during full data migration causes the task to fail |
Prerequisites
Before you begin, make sure you have:
A Db2 for LUW database and a PolarDB-X 2.0 instance. For supported versions, see Overview of data migration scenarios.
A PolarDB-X 2.0 instance compatible with MySQL 5.7.
Available storage on the destination instance that exceeds the total size of the source database.
Databases and tables created in the destination instance before starting the migration task.
Required permissions granted to the database accounts. See Required permissions.
(For incremental migration) Archive logging enabled on the source Db2 for LUW database. See Enable archive logging.
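The storage prerequisite above can be sanity-checked before you create the task. The following is a minimal sketch (the function name and the headroom factor are illustrative, not part of DTS); a headroom factor above 1.0 leaves margin for the table fragmentation that full migration introduces in the destination:

```python
def has_enough_storage(source_bytes: int, dest_free_bytes: int,
                       headroom: float = 1.0) -> bool:
    """Return True if destination free storage exceeds the source
    database size, scaled by an optional safety headroom factor."""
    return dest_free_bytes > source_bytes * headroom

# Example: a 100 GB source against 150 GB of free destination storage.
GB = 1024 ** 3
print(has_enough_storage(100 * GB, 150 * GB, headroom=1.2))
```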
Required permissions
| Database | Full data migration | Incremental data migration |
|---|---|---|
| Db2 for LUW | CONNECT and SELECT | DBADM authority |
| PolarDB-X 2.0 | Read and write on the destination database | Read and write on the destination database |
For instructions on granting permissions, see Creating group and user IDs for a Db2 database installation, Authorities overview, and Manage accounts.
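The permissions in the table above can be granted with standard Db2 and MySQL-compatible GRANT statements. The sketch below assembles them as strings; the account name `dts_migrator`, the host, and the table list are hypothetical placeholders, and the exact syntax should be confirmed against your Db2 version:

```python
def db2_grants(user: str, tables: list[str], incremental: bool = False) -> list[str]:
    """Db2 for LUW: CONNECT and SELECT for full migration,
    plus DBADM authority for incremental migration."""
    stmts = [f"GRANT CONNECT ON DATABASE TO USER {user}"]
    if incremental:
        stmts.append(f"GRANT DBADM ON DATABASE TO USER {user}")
    stmts += [f"GRANT SELECT ON TABLE {t} TO USER {user}" for t in tables]
    return stmts

def polardbx_grant(user: str, host: str, database: str) -> str:
    """PolarDB-X 2.0 is MySQL-compatible, so standard MySQL GRANT
    syntax applies for read and write on the destination database."""
    return (f"GRANT SELECT, INSERT, UPDATE, DELETE "
            f"ON {database}.* TO '{user}'@'{host}'")

for stmt in db2_grants("dts_migrator", ["APP.ORDERS"], incremental=True):
    print(stmt)
print(polardbx_grant("dts_migrator", "%", "appdb"))
```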
Limitations
Source database limitations
The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
Tables to migrate must have PRIMARY KEY or UNIQUE constraints, and all constrained fields must be unique. Without these constraints, the destination database may contain duplicate records.
If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, split the migration into multiple tasks, or migrate the entire database instead.
For incremental migration, the source database must meet these requirements:
Archive logging must be enabled. If not enabled, the precheck fails and the task cannot start.
For incremental-only migration, retain data logs for at least 24 hours.
For full and incremental migration, retain data logs for at least 7 days. After full migration completes, you can reduce the retention period, provided it remains longer than 24 hours. If DTS cannot obtain the data logs, the task fails, and data inconsistency or loss may occur.
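The 1,000-table cap described above (for tasks that rename tables or columns) can be planned for with a simple partitioning helper. This is an illustrative sketch, not a DTS feature; table names are placeholders:

```python
def split_into_tasks(tables: list[str], max_per_task: int = 1000) -> list[list[str]]:
    """Partition a list of tables into migration tasks of at most
    max_per_task tables each, the per-task limit when object
    renaming is required."""
    return [tables[i:i + max_per_task]
            for i in range(0, len(tables), max_per_task)]

# Example: 2,500 tables fall into three tasks of 1000, 1000, and 500.
tasks = split_into_tasks([f"schema.table_{i}" for i in range(2500)])
print([len(t) for t in tasks])
```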
Operational limitations
During full data migration, do not perform DDL operations on the source database. DDL changes to schemas or tables cause the migration task to fail.
If you run full data migration only, do not write to the source database during migration. To avoid data inconsistency, select both full data migration and incremental data migration.
Data type limitations
DTS uses the ROUND(COLUMN, PRECISION) function to read FLOAT and DOUBLE columns. If no precision is specified, DTS defaults to 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements before starting the migration.
During full data migration, concurrent INSERT operations cause table fragmentation in the destination. As a result, tablespace usage in the destination is larger than in the source after full migration completes.
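To see why the default precision values are usually harmless, consider Python's `round()` as a loose analogy for reading a column through ROUND(COLUMN, PRECISION). This is not Db2 itself, only an illustration of the principle:

```python
# Loose analogy: round(value, precision) stands in for reading a
# column via ROUND(COLUMN, PRECISION). With the DTS defaults
# (38 digits for FLOAT, 308 for DOUBLE), the precision far exceeds
# what a floating-point value can store, so values pass through
# unchanged; a small explicit precision discards digits instead.
value = 3.141592653589793

assert round(value, 38) == value   # default-scale precision: lossless
assert round(value, 4) == 3.1416   # low precision: trailing digits lost
print("defaults preserve the value; low precision truncates")
```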
CDC behavior
DTS migrates incremental updates using the Change Data Capture (CDC) replication technology of Db2 for LUW. This technology has its own limitations — see General data restrictions for SQL Replication for details.
Additional CDC behaviors to be aware of:
If a primary/secondary switchover occurs on the source database while a migration task is running, the task fails.
DTS calculates migration latency based on the timestamp of the latest migrated data in the destination and the current timestamp in the source. If no DML operations are performed on the source for an extended period, the reported latency may be inaccurate. To update the latency, perform a DML operation on the source. If you migrate an entire database, create a heartbeat table that receives or updates data every second.
Task resumption
DTS automatically retries failed migration tasks for up to 7 days. Before switching workloads to the destination instance, stop or release the migration task. Alternatively, run the revoke command to revoke write permissions from the DTS accounts. If you do not take either action and DTS resumes the task, data from the source overwrites data in the destination instance.
Enable archive logging
To migrate incremental data from Db2 for LUW, enable archive logging on the source database. See Primary log archive method and Secondary log archive method.
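As a rough guide, switching a Db2 for LUW database to archive logging with a DISK archive method typically involves updating LOGARCHMETH1 and then taking an offline backup, since the database enters backup-pending state after the change. The sketch below assembles that Db2 CLI sequence as strings; `db_name` and `archive_dir` are placeholders, and you should verify the exact steps against your Db2 version's documentation:

```python
def archive_logging_commands(db_name: str, archive_dir: str) -> list[str]:
    """Assemble the typical Db2 CLI sequence for enabling archive
    logging via a DISK archive path (a sketch, not an exhaustive
    procedure). The offline backup clears backup-pending state."""
    return [
        f"db2 update db cfg for {db_name} using LOGARCHMETH1 DISK:{archive_dir}",
        "db2 force application all",
        f"db2 deactivate db {db_name}",
        f"db2 backup database {db_name}",
        f"db2 activate db {db_name}",
    ]

for cmd in archive_logging_commands("SAMPLE", "/db2/archive"):
    print(cmd)
```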
Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. Perform data migration during off-peak hours. Full data migration uses read and write resources on both the source and destination databases, which increases load on the database servers.
Create a migration task
Go to the Data Migration Tasks page.
Log on to the Data Management (DMS) console.
In the top navigation bar, click DTS.
In the left-side navigation pane, choose DTS (DTS) > Data Migration.
Steps may vary based on the DMS console mode and layout. See Simple mode and Configure the DMS console based on your business requirements. You can also go directly to the Data Migration Tasks page of the new DTS console.
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Click Create Task and configure the source and destination databases.
Warning: After selecting the source and destination instances, read the limits displayed at the top of the page to prevent task failures or data inconsistency.
Source Database
| Parameter | Description |
|---|---|
| Task Name | The task name. DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |
| Database Type | Select DB2 for LUW. |
| Access Method | Select Public IP Address. |
| Instance Region | The region where the source Db2 for LUW database resides. |
| Domain Name or IP Address | The public IP address used to connect to the source database. |
| Port Number | The service port of the source database. The port must be accessible over the Internet. Default: 50000. |
| Database Name | The name of the source Db2 for LUW database. |
| Database Account | The account for the source database. See Required permissions for the minimum required permissions. |
| Database Password | The password of the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted based on your requirements. If you select SSL-encrypted, enable SSL encryption for the ApsaraDB RDS for MySQL instance before configuring the task. See Configure SSL encryption for an ApsaraDB RDS for MySQL instance. |
Destination Database
| Parameter | Description |
|---|---|
| Database Type | Select PolarDB-X 2.0. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination PolarDB-X 2.0 instance resides. |
| Instance ID | The ID of the destination PolarDB-X 2.0 instance. |
| Database Account | The account for the destination database. See Required permissions for the minimum required permissions. |
| Database Password | The password of the database account. |
Click Test Connectivity and Proceed. DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB) and to the security group rules of Elastic Compute Service (ECS)-hosted databases. For self-managed databases in data centers or hosted by third-party cloud providers, manually add the DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
WarningAdding DTS server CIDR blocks to a database whitelist or security group introduces security risks. Take preventive measures, including strengthening username and password security, limiting exposed ports, authenticating API calls, and regularly reviewing whitelist and security group rules. Alternatively, connect the database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway.
Configure objects to migrate and advanced settings.
Basic settings
| Parameter | Description |
|---|---|
| Migration Type | Select Full Data Migration to migrate historical data only. Select Full Data Migration and Incremental Data Migration to keep services running during migration. Note: If you select full data migration only, do not write to the source instance during migration to avoid data inconsistency. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for tables with identical names in the source and destination. The precheck passes only if no identical table names exist; otherwise, the task cannot start. To handle name conflicts, use the object name mapping feature to rename migrated tables. Ignore Errors and Proceed: skips the precheck for identical table names. Warning: Selecting this option may cause data inconsistency. If schemas match, DTS skips records with the same primary key. If schemas differ, only some columns may be migrated or the task may fail. |
| Source Objects | Select objects from the Source Objects section, then click the right arrow icon to add them to Selected Objects. Note: Selecting tables or columns excludes other objects such as views, triggers, and stored procedures. |
| Selected Objects | To rename a single object, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. Note: Renaming an object may cause dependent objects to fail migration. To filter data by row, right-click an object and specify WHERE conditions. See Use SQL conditions to filter data. To select specific DML or DDL operations for a table, right-click the object and select the operations to migrate. |
Advanced settings
| Parameter | Description |
|---|---|
| Set Alerts | Select Yes to receive notifications when the task fails or migration latency exceeds the threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task. Select No to disable alerts. |
| Retry Time for Failed Connections | The time range during which DTS retries failed connections after the task starts. Valid values: 10–1440 minutes. Default: 720 minutes. Set this to more than 30 minutes. If DTS reconnects within the specified range, the task resumes; otherwise, the task fails. Note: If multiple tasks share the same source or destination database and have different retry time settings, the most recently configured value applies. DTS charges for retry operations, so release the DTS instance promptly after the source and destination instances are released. |
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before the migration task can start. If any item fails, click View Details to see the cause and troubleshoot the issue, then click Precheck Again. For alert items that can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alert items may cause data inconsistency.
Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, select an instance class for the migration instance.
| Parameter | Description |
|---|---|
| Instance Class | The migration speed varies by instance class. Select a class based on your workload. See Specifications of data migration instances. |
Read and select the check box to agree to the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start. Monitor the task progress in the task list.
What's next
After migration is complete, stop the migration task before switching workloads to the destination instance. DTS automatically retries failed migration tasks for up to 7 days; if you do not stop or release the task in time, data from the source may overwrite the destination after a task resumption. If you plan to keep the task running for failback purposes, use the revoke command to revoke write permissions from the DTS accounts so that source data cannot overwrite the destination.