
Data Transmission Service: Synchronize data from a Db2 for LUW database to a PolarDB for MySQL cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to continuously replicate schema, historical data, and ongoing changes from a self-managed IBM Db2 for LUW database to a PolarDB for MySQL cluster. DTS uses Db2's change data capture (CDC) replication technology to track incremental changes after the initial full load.

Prerequisites

Before you begin, ensure that you have:

Billing

Synchronization type | Cost
Schema synchronization and full data synchronization | Free
Incremental data synchronization | Charged. See Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way cascade synchronization

  • One-way many-to-one synchronization

For details, see Synchronization topologies.

SQL operations that can be synchronized

DTS synchronizes DML operations only: INSERT, UPDATE, and DELETE.
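Because only DML is synchronized, any other operations (DDL such as ALTER or TRUNCATE) must be applied to the destination manually. A minimal sketch of filtering a change stream by this rule; the event format here is hypothetical, for illustration only:

```python
# Operations DTS synchronizes for this source/destination pair.
SYNCED_OPS = {"INSERT", "UPDATE", "DELETE"}

def needs_manual_handling(events):
    """Return the operations DTS will not synchronize automatically."""
    return [e for e in events if e["op"].upper() not in SYNCED_OPS]

events = [
    {"op": "INSERT", "table": "orders"},
    {"op": "TRUNCATE", "table": "orders"},
    {"op": "ALTER", "table": "customers"},
]
print([e["op"] for e in needs_manual_handling(events)])  # ['TRUNCATE', 'ALTER']
```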

Limitations

Source database requirements

  • The server hosting the source database must have enough outbound bandwidth. Insufficient bandwidth reduces synchronization speed.

  • Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints with all fields unique. Without these constraints, the destination database may contain duplicate records.

  • When synchronizing individual tables with column or table renaming on the destination side, a single task supports up to 5,000 tables. If your selection exceeds 5,000 tables, split the work across multiple tasks, or synchronize at the database level instead.

  • Log archiving must be enabled on the source database. If it is not enabled, the precheck fails and the task cannot start.

Log retention requirements by synchronization type:

Synchronization type | Minimum log retention
Incremental data synchronization only | More than 24 hours
Full data synchronization + incremental data synchronization | At least 7 days. After full synchronization completes, you can reduce this to more than 24 hours.

If logs are retained for a shorter period than required, the DTS service level agreement (SLA) does not apply.
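The 5,000-table limit described above can be handled by partitioning the table list into batches, one task per batch. A minimal sketch; the table names are generated placeholders:

```python
# Per-task limit when tables are synchronized individually with renaming.
MAX_TABLES_PER_TASK = 5000

def split_into_tasks(tables, limit=MAX_TABLES_PER_TASK):
    """Partition the table list into chunks, one DTS task per chunk."""
    return [tables[i:i + limit] for i in range(0, len(tables), limit)]

tables = [f"schema.table_{n}" for n in range(12000)]
batches = split_into_tasks(tables)
print(len(batches), [len(b) for b in batches])  # 3 [5000, 5000, 2000]
```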

CDC-specific limitations

These limitations apply only when incremental data synchronization is enabled.

  • DTS uses Db2 for LUW's CDC replication technology. For constraints inherent to that technology, see General data restrictions for SQL Replication.

  • MySQL column names are case-insensitive. If the source has multiple columns whose names differ only in capitalization, they are written to the same column in the destination, causing unexpected results.

  • The source Db2 for LUW database is self-managed. If a primary/secondary switchover occurs on the source database while a task is running, the task fails.

  • DTS calculates synchronization latency by comparing the timestamp of the latest record in the destination with the current timestamp in the source. If there is no DML activity on the source for an extended period, the reported latency may be inaccurate. Perform a DML operation on the source to refresh the latency value. If you synchronize an entire database, you can create a heartbeat table that receives a write every second to keep the latency value accurate.
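The heartbeat approach above can be sketched as the following generated Db2 statements. The table and column names are illustrative choices, not names DTS requires; run the statements with your own Db2 client, issuing the update roughly once per second:

```python
# Hypothetical heartbeat table that is written once per second so that
# DTS latency stays accurate when the source is otherwise idle.
HEARTBEAT_DDL = (
    "CREATE TABLE dts_heartbeat ("
    "id INTEGER NOT NULL PRIMARY KEY, "
    "ts TIMESTAMP NOT NULL)"
)
HEARTBEAT_SEED = "INSERT INTO dts_heartbeat (id, ts) VALUES (1, CURRENT TIMESTAMP)"
HEARTBEAT_TICK = "UPDATE dts_heartbeat SET ts = CURRENT TIMESTAMP WHERE id = 1"

# Run the DDL and seed once, then issue HEARTBEAT_TICK on a 1-second timer.
for stmt in (HEARTBEAT_DDL, HEARTBEAT_SEED, HEARTBEAT_TICK):
    print(stmt)
```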

General synchronization behavior

  • Run the synchronization task during off-peak hours. Initial full data synchronization consumes read and write resources on both the source and destination databases.

  • After initial full data synchronization, the destination tablespace may be larger than the source due to table fragmentation from concurrent INSERT operations.

  • During synchronization, write data to the destination exclusively through DTS to prevent data inconsistency. After synchronization completes, use Data Management (DMS) to run DDL statements online. See Perform lock-free DDL operations.

  • If a DDL statement fails on the destination, the task continues running. View failed DDL statements in the task logs. See View task logs.

  • After the task Status changes to Completed, run analyze table <table_name> to confirm data was written to disk in the destination table. In a high-availability (HA) switchover scenario, data may exist only in memory and can be lost.
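The post-completion check above can be scripted by generating one ANALYZE TABLE statement per synchronized table. A minimal sketch; the table names are placeholders, and the statements are executed with your own MySQL client:

```python
def analyze_statements(tables):
    """Build the ANALYZE TABLE statements to run on the destination
    PolarDB for MySQL cluster after the task status is Completed."""
    return [f"ANALYZE TABLE `{t}`" for t in tables]

stmts = analyze_statements(["orders", "customers"])
print(stmts)  # ['ANALYZE TABLE `orders`', 'ANALYZE TABLE `customers`']
```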

Schema synchronization behavior

  • DTS synchronizes foreign keys from the source to the destination during schema synchronization.

  • During full and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade update or delete operations on the source during this period may cause data inconsistency.

Set up the synchronization task

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

Operations may vary based on the DMS console mode. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Synchronization Tasks page in the new DTS console.

Step 2: Select the region

On the right side of Data Synchronization Tasks, select the region where the data synchronization instance resides.

In the new DTS console, select the region in the top navigation bar.

Step 3: Configure the source and destination databases

Click Create Task. In the Create Data Synchronization Task wizard, configure the following parameters.

Source Database

Parameter | Description
Task Name | A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to make the task easy to identify. Uniqueness is not required.
Database Type | Select DB2 for LUW.
Connection Type | Select Express Connect, VPN Gateway, or Smart Access Gateway.
Instance Region | The region where the source Db2 for LUW database resides.
Replicate Data Across Alibaba Cloud Accounts | Select No for same-account synchronization.
Connected VPC | The ID of the virtual private cloud (VPC) connected to the Db2 for LUW database.
IP Address | The server IP address of the Db2 for LUW database.
Port Number | The service port of the source database. Default value: 50000.
Database Name | The name of the source Db2 for LUW database containing the objects to synchronize.
Database Account | The account for the source database. See Prerequisites for required permissions.
Database Password | The password for the database account.

Destination Database

Parameter | Description
Database Type | Select PolarDB for MySQL.
Connection Type | Select Alibaba Cloud Instance.
Instance Region | The region where the destination PolarDB for MySQL cluster resides.
PolarDB Cluster ID | The ID of the destination PolarDB for MySQL cluster.
Database Account | The database account for the destination cluster. See Prerequisites for required permissions.
Database Password | The password for the database account.

Step 4: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.
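Before running the console check, you can sanity-check basic TCP reachability from a host in the connected VPC to the source database. A minimal sketch; the host below is a placeholder, and 50000 is the default Db2 service port:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your Db2 server's address; result depends on your network.
print(can_reach("127.0.0.1", 50000, timeout=0.5))
```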

DTS handles whitelist configuration automatically based on the database type:

  • Alibaba Cloud database instances (such as ApsaraDB RDS or ApsaraDB for MongoDB): DTS adds its CIDR blocks to the instance whitelist automatically.

  • Self-managed databases on Elastic Compute Service (ECS): DTS adds its CIDR blocks to the ECS security group rules automatically. Make sure the ECS instance can reach the database.

  • Self-managed databases in data centers or third-party clouds: Manually add the DTS CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.

Warning

Adding DTS CIDR blocks to whitelists or security groups introduces network access from external servers. Before proceeding, understand and accept these risks and take appropriate measures, such as strengthening credentials, limiting exposed ports, auditing API calls, regularly reviewing whitelist rules, and preferring Express Connect, VPN Gateway, or Smart Access Gateway for connectivity.
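When reviewing whitelist rules, it can help to check whether a given client IP actually falls inside the CIDR blocks you added. A minimal sketch using the standard ipaddress module; the CIDR values below are illustrative placeholders, not actual DTS server ranges:

```python
from ipaddress import ip_address, ip_network

# Placeholder CIDR blocks; substitute the real DTS ranges for your region.
ALLOWED_CIDRS = [ip_network(c) for c in ("198.51.100.0/24", "203.0.113.0/26")]

def is_whitelisted(ip):
    """Return True if the IP belongs to any allowed CIDR block."""
    return any(ip_address(ip) in net for net in ALLOWED_CIDRS)

print(is_whitelisted("198.51.100.7"))  # True
print(is_whitelisted("192.0.2.1"))     # False
```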

Step 5: Configure synchronization objects and advanced settings

Basic settings

Parameter | Description
Synchronization Types | Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization to replicate historical data before tracking ongoing changes.
Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if the source and destination have tables with identical names. Use object name mapping to rename conflicting destination tables if needed. Ignore Errors and Proceed: skips the precheck. During full synchronization, existing destination records with the same primary or unique key are kept. During incremental synchronization, they are overwritten. Different schemas may cause initialization failures or partial synchronization. Use with caution.
Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names.
Source Objects | Select columns, tables, or databases from Source Objects and click the arrow icon to move them to Selected Objects. If you select tables or columns, views, triggers, and stored procedures are not synchronized.
Selected Objects | To rename a single object, right-click it in Selected Objects. To rename multiple objects, click Batch Edit. See Map object names.

Advanced settings

Parameter | Description
Monitoring and Alerting | No: disables alerting. Yes: enables alerting. Configure the alert threshold and notification settings. See Configure monitoring and alerting.
Select the engine type of the destination database | InnoDB (default): the standard storage engine. X-Engine: an online transaction processing (OLTP) storage engine.
Retry Time for Failed Connections | How long DTS retries failed connections after the task starts. Range: 10 to 1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If you specify different retry time ranges for multiple data synchronization tasks that have the same source or destination database, the shortest retry time takes precedence. DTS charges for the instance during retry attempts.
Configure ETL | Yes: enables extract, transform, and load (ETL) processing. Enter data transformation statements in the code editor. See Configure ETL. No: disables ETL.
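The retry-time precedence rule can be sketched as follows: each task's setting is clamped to the valid 10 to 1,440 minute range, and the shortest value among tasks sharing a source or destination wins. Task names and values below are illustrative:

```python
# Valid range for Retry Time for Failed Connections, in minutes.
RETRY_MIN, RETRY_MAX = 10, 1440

def effective_retry_minutes(configured):
    """Clamp each task's setting to the valid range, then take the
    minimum across tasks that share a source or destination."""
    clamped = [min(max(v, RETRY_MIN), RETRY_MAX) for v in configured.values()]
    return min(clamped)

tasks = {"task-a": 720, "task-b": 120, "task-c": 2000}
print(effective_retry_minutes(tasks))  # 120
```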

Step 6: Save settings and run the precheck

To preview the OpenAPI parameters for this task before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

Click Next: Save Task Settings and Precheck.

The task cannot start until it passes the precheck. If the precheck fails, click View Details next to each failed item, fix the issues, and click Precheck Again. For alert items that can be safely ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring alerts may cause data inconsistency.

Step 7: Purchase a data synchronization instance

Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.

On the purchase page, configure the following parameters.

Parameter | Description
Billing Method | Subscription: pay upfront for a fixed duration; more cost-effective for long-term use. Pay-as-you-go: billed hourly; suitable for short-term use. Release the instance when it is no longer needed to stop charges.
Resource Group Settings | The resource group for the instance. Default: default resource group. See What is Resource Management?
Instance Class | Determines the synchronization speed. Select a class based on your throughput requirements. See Instance classes of data synchronization instances.
Subscription Duration | Available only for the Subscription billing method. Options: 1 to 9 months, or 1, 2, 3, or 5 years.

Step 8: Start the task

Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

Track synchronization progress in the task list.