Use Data Transmission Service (DTS) to migrate data from a self-managed Db2 for LUW database to an ApsaraDB RDS for MySQL instance. DTS supports schema migration, full data migration, and incremental data migration, so you can choose the combination that fits your cutover strategy.
Prerequisites
Before you begin, ensure that you have:
- An ApsaraDB RDS for MySQL instance with available storage larger than the total size of the source Db2 for LUW data. For more information, see Create an ApsaraDB RDS for MySQL instance.
- Reviewed the supported Db2 for LUW versions in Overview of data migration scenarios.
- For incremental data migration only: archive logging enabled on the source Db2 for LUW database by using the logarchmeth1 and logarchmeth2 configuration parameters. See the IBM documentation for logarchmeth1 and logarchmeth2. After you modify the logarchmeth configuration, back up the source database for the changes to take effect. Otherwise, an error may be returned during the precheck.
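As a sketch, archive logging can be enabled from the Db2 command line processor. The database name (`SAMPLE`) and the archive and backup paths (`/db2/archlogs`, `/db2/backups`) below are placeholders for your environment:

```shell
# Enable archive logging on the source database (placeholders: SAMPLE, /db2/archlogs).
db2 connect to SAMPLE
db2 update db cfg for SAMPLE using LOGARCHMETH1 "DISK:/db2/archlogs"

# Deactivate the database, then take the backup that the configuration
# change requires before it takes effect.
db2 terminate
db2 deactivate db SAMPLE
db2 backup database SAMPLE to /db2/backups
```

Switching LOGARCHMETH1 away from OFF puts the database in a backup-pending state, which is why the backup step is mandatory before the precheck can pass.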
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Free |
| Incremental data migration | Charged. See Billing overview. | — |
Permissions required
Grant the following permissions to the database accounts used by DTS.
| Database | Full data migration | Schema migration | Incremental data migration |
|---|---|---|---|
| Db2 for LUW | CONNECT and SELECT | CONNECT and SELECT | Database administrator |
| ApsaraDB RDS for MySQL | Read and write on the destination database | — | — |
For instructions on creating accounts and granting permissions, see:
- Db2 for LUW: Creating group and user IDs for a Db2 database installation and Authorities overview
- ApsaraDB RDS for MySQL: Create an account and Modify account permissions
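A minimal sketch of the grants described in the table above. The account names (`dts_user`, `dtsmigrate`), schema, table, and database names are placeholders:

```shell
# Db2 for LUW: CONNECT plus SELECT on the tables to migrate
# (placeholders: SAMPLE, dts_user, myschema.orders).
db2 connect to SAMPLE
db2 "GRANT CONNECT ON DATABASE TO USER dts_user"
db2 "GRANT SELECT ON TABLE myschema.orders TO USER dts_user"
# Incremental data migration requires database administrator authority:
db2 "GRANT DBADM ON DATABASE TO USER dts_user"

# ApsaraDB RDS for MySQL: read and write permissions on the destination
# database (placeholders: rds-endpoint, dtsmigrate, destdb).
mysql -h <rds-endpoint> -u admin -p \
  -e "GRANT ALL PRIVILEGES ON destdb.* TO 'dtsmigrate'@'%';"
```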
Limitations
Schema migration
- DTS migrates foreign keys from the source database to the destination database.
- During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source database during migration, data inconsistency may occur.
Source database
- The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
- Tables to migrate must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.
- When you migrate individual tables, a single task supports a maximum of 1,000 tables. If you exceed this limit, a request error is returned. To migrate more than 1,000 tables, split the migration into multiple tasks or migrate the entire database instead.
- For incremental data migration, the data logging feature must be enabled on the source database. Otherwise, DTS returns an error during the precheck.
- Log retention requirements:
  - Incremental data migration only: retain logs for more than 24 hours.
  - Full and incremental data migration combined: retain logs for at least 7 days. After full data migration is complete, you can reduce the retention period to more than 24 hours.
  Insufficient log retention may cause the task to fail or result in data inconsistency or data loss. If these requirements are not met, the DTS service level agreement (SLA) does not guarantee reliability or performance.
- During full data migration, do not perform DDL operations that change database or table schemas. Such changes cause the migration task to fail.
- If you run only full data migration, do not write data to the source database during migration. To ensure data consistency, run full data migration and incremental data migration together.
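To find tables that would violate the primary key requirement above before you start, you can query the Db2 catalog. This is a sketch; the schema name `MYSCHEMA` is a placeholder:

```shell
# List tables in MYSCHEMA that have neither a primary key nor a
# unique constraint (constraint types 'P' and 'U' in SYSCAT.TABCONST).
db2 "SELECT t.tabname
     FROM syscat.tables t
     WHERE t.tabschema = 'MYSCHEMA' AND t.type = 'T'
       AND NOT EXISTS (SELECT 1 FROM syscat.tabconst c
                       WHERE c.tabschema = t.tabschema
                         AND c.tabname  = t.tabname
                         AND c.type IN ('P', 'U'))"
```

Any table this query returns should be given a primary key or unique constraint, or excluded from the task, before migration.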
Incremental data migration
DTS migrates incremental data using the Change Data Capture (CDC) replication technology of Db2 for LUW. This technology has its own limitations. See General data restrictions for SQL Replication.
SQL operations supported during incremental data migration:
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, and DELETE |
Other limitations
- Migrate data during off-peak hours. Full data migration uses read and write resources on both the source and destination databases, which may increase server load.
- Concurrent INSERT operations during full data migration cause fragmentation in the destination tables. After full data migration, the tablespace of the destination database is larger than that of the source database.
- If you write data to the destination database by using tools other than DTS during migration, data inconsistency may occur. Use Data Management (DMS) to perform DDL operations after migration. See Perform lock-free DDL operations.
- If a DDL statement fails in the destination database, DTS continues running the task. You can view failed DDL statements in the task logs. See View task logs.
Self-managed Db2 for LUW
- If a primary/secondary switchover is performed on the source database while a migration task is running, the task fails.
- DTS calculates migration latency by comparing the timestamp of the latest migrated data in the destination database against the current timestamp in the source database. If no DML operations are performed on the source database for an extended period, the latency figure may be inaccurate. To refresh the latency, run a DML operation on the source database. If you migrate an entire database, you can create a heartbeat table that is updated every second.
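One way to keep the latency figure fresh, as suggested above, is a one-row heartbeat table on the source that a scheduled job updates every second. The table and schema names here are placeholders:

```shell
# Create a one-row heartbeat table on the source (placeholder: myschema.dts_heartbeat).
db2 "CREATE TABLE myschema.dts_heartbeat (id INT NOT NULL PRIMARY KEY, ts TIMESTAMP)"
db2 "INSERT INTO myschema.dts_heartbeat VALUES (1, CURRENT TIMESTAMP)"

# Update it once per second from a shell loop (or an equivalent scheduled job).
while true; do
  db2 "UPDATE myschema.dts_heartbeat SET ts = CURRENT TIMESTAMP WHERE id = 1"
  sleep 1
done
```

Include the heartbeat table in the migration objects so that each update flows through DTS and refreshes the latency metric.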
Migrate data from Db2 for LUW to ApsaraDB RDS for MySQL
Step 1: Open Data Migration Tasks
1. Log on to the Data Management (DMS) console.
2. In the top navigation bar, click DTS.
3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.
Console layout and operations may vary. See Simple mode and Customize the layout and style of the DMS console for details. You can also go directly to the Data Migration Tasks page in the new DTS console.
Step 2: Select the region
From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Step 3: Configure source and destination databases
Click Create Task. On the Create Task wizard page, configure the following parameters.
Source database
| Parameter | Description |
|---|---|
| Task Name | The name of the migration task. DTS assigns a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique. |
| Select an existing DMS database instance | Optional. Select an existing DMS database instance to have DTS populate database parameters automatically. If you skip this, configure the parameters manually. |
| Database Type | Select DB2 for LUW. |
| Access Method | The access method for the source database. This example uses Self-managed Database on ECS. For other access methods, see Preparation overview. |
| Instance Region | The region where the source Db2 for LUW database resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for this example. |
| ECS Instance ID | The ID of the Elastic Compute Service (ECS) instance that hosts the source Db2 for LUW database. |
| Port Number | The service port of the source Db2 for LUW database. |
| Database Name | The name of the source Db2 for LUW database containing the objects to migrate. |
| Database Account | The database account for the source database. See the Permissions required section for details. |
| Database Password | The password for the database account. |
Destination database
| Parameter | Description |
|---|---|
| Select an existing DMS database instance | Optional. Select an existing DMS database instance to have DTS populate database parameters automatically. If you skip this, configure the parameters manually. |
| Database Type | Select MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination ApsaraDB RDS for MySQL instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for this example. |
| RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance. |
| Database Account | The database account for the destination database. See the Permissions required section for details. |
| Database Password | The password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL encryption on the ApsaraDB RDS for MySQL instance first. See Configure the SSL encryption feature. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its CIDR blocks to the IP address whitelist of Alibaba Cloud database instances, or to the security group rules of ECS instances. For self-managed databases in a data center or hosted by a third-party cloud provider, manually add the DTS CIDR blocks to the database IP address whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
If the source database is deployed on multiple ECS instances, manually add the DTS CIDR blocks to the security group rules of each ECS instance.
Adding DTS public CIDR blocks to a database IP address whitelist or ECS security group rules introduces security risks. Before using DTS, take preventive measures, including but not limited to: strengthening account and password security, limiting exposed ports, authenticating API calls, auditing IP address whitelist and security group rules regularly, and connecting the database to DTS through Express Connect, VPN Gateway, or Smart Access Gateway.
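If you manage the RDS whitelist yourself, the step above can be scripted with the Alibaba Cloud CLI. This is a sketch: the instance ID and CIDR blocks are placeholders, and the actual DTS CIDR blocks for your region come from the documentation linked above:

```shell
# Append DTS CIDR blocks (placeholder values) to the IP address whitelist
# of the destination RDS instance (placeholder ID).
aliyun rds ModifySecurityIps \
  --DBInstanceId rm-xxxxxxxxxxxx \
  --SecurityIps "100.104.0.0/16,10.151.0.0/16" \
  --ModifyMode Append
```

Using `--ModifyMode Append` adds to the existing whitelist instead of overwriting it.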
Step 5: Configure migration objects
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types based on your cutover strategy: <br>- Schema migration and full data migration only: select Schema Migration and Full Data Migration.<br>- Zero-downtime migration with service continuity: select Schema Migration, Full Data Migration, and Incremental Data Migration.<br><br>Note: If you do not select Incremental Data Migration, avoid writing to the source database during migration to maintain data consistency. |
| Processing Mode of Conflicting Tables | - Precheck and Report Errors: checks for tables with identical names in the source and destination. If tables with identical names exist, the precheck fails and the task cannot start. Use the object name mapping feature to rename tables if needed. See Map object names.<br>- Ignore Errors and Proceed: skips the identical name check. Warning: This may cause data inconsistency. If the schemas differ between the source and destination, only specific columns are migrated or the task fails. |
| Capitalization of Object Names in Destination Instance | Controls capitalization of database names, table names, and column names in the destination. The default is DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select objects from the Source Objects section and click the rightwards arrow icon to move them to the Selected Objects section. |
| Selected Objects | Right-click an object to rename it or set WHERE filter conditions. Click Batch Edit in the upper-right corner to rename multiple objects at once. See Map object names and Use SQL conditions to filter data.<br><br>Note: Object name mapping may cause data inconsistency if dependent objects are not also migrated. To migrate only specific incremental SQL operations on an object, right-click the object in the Selected Objects section and select the SQL operations to include. |
Step 6: Configure advanced settings
Click Next: Advanced Settings.
Data verification
To configure data verification, see Enable data verification.
Advanced settings
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | By default, DTS schedules the task to the shared cluster. Purchase and specify a dedicated cluster to run the task on dedicated resources. See What is a DTS dedicated cluster. |
| Set Alerts | Configure alerts for task failures or latency exceeding a threshold. Select Yes to specify an alert threshold and contacts. See Configure monitoring and alerting. |
| Retry Time for Failed Connections | The time range during which DTS retries a failed connection after the task starts. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set this value to more than 30 minutes. If DTS reconnects within this period, the task resumes. Otherwise, the task fails.<br><br>Note: When multiple tasks share the same source or destination database, the most recently configured retry time takes precedence. DTS charges for the instance during retries. |
| The wait time before a retry when other issues occur in the source and destination databases | The time range during which DTS retries failed DDL or DML operations after the task starts. Valid values: 1–1,440 minutes. Default: 10 minutes. Set this value to more than 10 minutes. This value must be smaller than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit the read and write rate during full data migration to reduce load on database servers. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). This parameter is available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit the write rate during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). This parameter is available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance by environment. |
| Configure ETL | Configure the extract, transform, and load (ETL) feature to process data during migration. Select Yes and enter data processing statements in the code editor. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
Step 7: Save settings and run a precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for configuring the DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.
DTS runs a precheck before starting the migration task.
- If the precheck fails, click View Details next to the failed item, resolve the issue, and run the precheck again.
- If an alert appears for an item:
  - If the alert cannot be ignored, click View Details, resolve the issue, and run the precheck again.
  - If the alert can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may cause data inconsistency.
Step 8: Purchase the migration instance
Wait until the success rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following parameters.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select a class based on your requirements. See Specifications of data migration instances. |
Step 9: Start the migration
Select the Data Transmission Service (Pay-as-you-go) Service Terms check box, then click Buy and Start.
The migration task starts. Track progress in the task list.
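In addition to the built-in data verification feature, a quick consistency spot check after cutover is to compare row counts for a migrated table on both sides. A sketch, with placeholder database, table, endpoint, and account names:

```shell
# Row count on the source Db2 for LUW database (placeholders: SAMPLE, myschema.orders).
db2 connect to SAMPLE
db2 "SELECT COUNT(*) FROM myschema.orders"

# Row count for the same table on the destination ApsaraDB RDS for MySQL
# instance (placeholders: rds-endpoint, dtsmigrate, destdb.orders).
mysql -h <rds-endpoint> -u dtsmigrate -p \
  -e "SELECT COUNT(*) FROM destdb.orders;"
```

Run the comparison while no writes are in flight (or after incremental migration has caught up), or the counts will naturally differ.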