Data Transmission Service (DTS) lets you replicate data from a Db2 for LUW database to an AnalyticDB for PostgreSQL instance. It supports schema synchronization, full data synchronization, and incremental synchronization. Incremental sync uses Db2 for LUW's native Change Data Capture (CDC) replication technology to capture ongoing changes in near-real time.
What's supported
| Capability | Details |
|---|---|
| Synchronization types | Schema synchronization, full data synchronization, incremental data synchronization |
| DML operations | INSERT, UPDATE, DELETE |
| Synchronization topologies | One-to-one, one-to-many, cascade, many-to-one (all one-way) |
| Objects | Tables only — append-optimized (AO) tables are not supported |
Billing
| Synchronization type | Fee |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. See Billing overview |
Prerequisites
Before you begin, ensure that you have:
- A destination AnalyticDB for PostgreSQL instance. See Create an instance.
- For full data synchronization: enough available storage space in the destination instance to hold all source data.
- Database accounts with the required permissions (see Preparations).
Limitations
Source database requirements
- Tables must have a PRIMARY KEY or UNIQUE constraint, with all fields unique. Tables without these constraints may produce duplicate records in the destination.
- When synchronizing tables with column or object renaming, a single task supports up to 5,000 tables. For more than 5,000 tables, create multiple tasks or synchronize at the database level instead.
- Data logging must be enabled on the source database, or the precheck fails.
  - Incremental-only sync: retain data logs for more than 24 hours.
  - Full + incremental sync: retain data logs for at least 7 days. After full sync completes, you can reduce the retention period to more than 24 hours.

  Warning: If data logs are not retained for the required duration, DTS may fail to read them, causing task failure or, in exceptional cases, data inconsistency or loss. With insufficient retention, the service reliability and performance stated in the DTS SLA cannot be guaranteed.
- The server hosting the source database must have enough outbound bandwidth. Limited bandwidth reduces synchronization speed.
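Before configuring the task, you can check the primary-key requirement above against the Db2 system catalog. The following is an illustrative sketch run from the Db2 command-line processor; the database name `SRCDB` and the `SYS%` schema filter are placeholders to adjust for your environment.

```shell
# Sketch: list user tables that have neither a PRIMARY KEY nor a UNIQUE
# constraint. SRCDB and the schema filter are assumptions.
db2 connect to SRCDB
db2 "SELECT TABSCHEMA, TABNAME
     FROM SYSCAT.TABLES T
     WHERE T.TYPE = 'T'
       AND T.TABSCHEMA NOT LIKE 'SYS%'
       AND NOT EXISTS (
         SELECT 1 FROM SYSCAT.TABCONST C
         WHERE C.TABSCHEMA = T.TABSCHEMA
           AND C.TABNAME   = T.TABNAME
           AND C.TYPE IN ('P', 'U'))"
```

Any table returned by this query should either be given a primary key or unique constraint, or be excluded from the sync objects.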
Other limitations
- CDC replication is subject to IBM's own restrictions. See General data restrictions for SQL Replication for details.
- Concurrent INSERT operations during full sync cause table fragmentation in the destination, so the destination tablespace will be larger than the source after full sync completes.
- Write to the destination database only through DTS during synchronization. Writing through other tools may cause data inconsistency. After sync completes, you can use Data Management (DMS) to run DDL operations online. See Perform lock-free DDL operations.
- Only tables can be selected as sync objects. AO tables are not supported.
- If you use column mapping to synchronize only some columns of a table, or if the source and destination table schemas differ, columns that exist in the source but not in the destination are dropped from the synchronized data.
Foreign key behavior by sync phase
| Sync phase | Foreign key behavior |
|---|---|
| Schema synchronization | Foreign keys are synchronized from source to destination |
| Full data synchronization | DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade update and delete operations on the source during sync may cause data inconsistency |
| Incremental data synchronization | Foreign keys are not synchronized. Cascade update and delete operations on the source during sync may cause data inconsistency |
Special cases (self-managed Db2 for LUW)
- If a primary/secondary switchover occurs on the source database while the task is running, the task fails.
- Synchronization latency is calculated from the timestamp of the latest record synced to the destination and the current timestamp on the source. If no DML operations occur on the source for an extended period, the displayed latency may be inaccurate; perform a DML operation on the source to refresh it.

  When synchronizing an entire database, create a heartbeat table that is updated every second to keep the reported latency accurate.
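A heartbeat table like the one described above can be set up with a few Db2 CLP commands. This is a minimal sketch; the database name `SRCDB` and table name `DTS_HEARTBEAT` are placeholders, and in production you would run the update loop as a scheduled job rather than an ad-hoc shell loop.

```shell
# Sketch: create a one-row heartbeat table on the source Db2 database
# and touch its timestamp once per second. Names are placeholders.
db2 connect to SRCDB
db2 "CREATE TABLE DTS_HEARTBEAT (ID INT NOT NULL PRIMARY KEY, TS TIMESTAMP)"
db2 "INSERT INTO DTS_HEARTBEAT VALUES (1, CURRENT TIMESTAMP)"
while true; do
  db2 "UPDATE DTS_HEARTBEAT SET TS = CURRENT TIMESTAMP WHERE ID = 1" >/dev/null
  sleep 1
done
```

Because the heartbeat table lives in the synchronized database, each update flows through the task and keeps the latency metric meaningful.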
Preparations
Step 1: Create database accounts and grant permissions
| Database | Required permissions | Reference |
|---|---|---|
| Db2 for LUW | Database administrator permissions | Creating group and user IDs for a Db2 database installation |
| AnalyticDB for PostgreSQL | Read and write permissions on the destination database, or RDS_SUPERUSER permissions | Create and manage a database account · Manage users and permissions |
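On the destination side, the required account can be created with standard PostgreSQL-style DDL. The following is a hedged sketch: the endpoint, account names, database name, and password are all placeholders, and your security policy may call for narrower grants than shown here.

```shell
# Sketch: create a dedicated DTS account on the destination
# AnalyticDB for PostgreSQL instance. All names are placeholders;
# run as a privileged account.
psql -h <instance-endpoint> -p 5432 -U admin_user -d dest_db <<'SQL'
CREATE USER dts_sync WITH LOGIN PASSWORD 'your-strong-password';
GRANT CONNECT ON DATABASE dest_db TO dts_sync;
GRANT ALL ON SCHEMA public TO dts_sync;
SQL
```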
Step 2: Enable log archiving on the source database
DTS reads incremental changes from the Db2 for LUW archive logs. Enable log archiving using the logarchmeth1 and logarchmeth2 configuration parameters.
See the IBM documentation for setup instructions.
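As a rough sketch of what enabling archive logging involves, the commands below set `LOGARCHMETH1` to a local disk path. The database name and paths are placeholders; note that switching a database from circular to archive logging puts it into BACKUP PENDING state, so a backup is required before it can be used again.

```shell
# Sketch: enable archive logging on the source Db2 for LUW database.
# SRCDB and the paths are placeholders for your environment.
db2 update db cfg for SRCDB using LOGARCHMETH1 "DISK:/db2/archlogs/"
db2 backup db SRCDB to /db2/backups     # required after enabling archiving
db2 get db cfg for SRCDB | grep -i logarchmeth   # verify the setting
```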
Create a data synchronization task
Run the task during off-peak hours to minimize performance impact on your source and destination databases. DTS reads from the source and writes to the destination concurrently during full sync, which increases load on both database servers.
Step 1: Open the Data Synchronization Tasks page
1. Log on to the Data Management (DMS) console.
2. In the top navigation bar, click Data + AI.
3. In the left-side navigation pane, choose Data Transmission Service (DTS) > Data Synchronization.
The navigation path may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go to the Data Synchronization Tasks page in the new DTS console.
Step 2: Select a region
On the Data Synchronization Tasks page, select the region where the data synchronization instance will reside.
In the new DTS console, select the region in the top navigation bar.
Step 3: Configure source and destination databases
Click Create Task. In the wizard, configure the following parameters.
Source database
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS generates one automatically. Specify a descriptive name for easy identification — it doesn't need to be unique. |
| Select a DMS database instance | Select an existing DMS database instance to auto-populate the parameters below, or leave blank to configure manually. |
| Database Type | Select DB2 for LUW. |
| Access Method | The deployment location of the source database. This example uses Self-managed Database on ECS. If your source is a self-managed database, set up the required network environment first. See Preparation overview. |
| Instance Region | The region of the source Db2 for LUW database. |
| Replicate Data Across Alibaba Cloud Accounts | Whether to sync across Alibaba Cloud accounts. This example uses No. |
| ECS Instance ID | The ID of the Elastic Compute Service (ECS) instance hosting the Db2 for LUW database. |
| Port Number | The service port of the source database. Default: 50000. |
| Database Name | The name of the source Db2 for LUW database. |
| Database Account | The database account with administrator permissions. |
| Database Password | The password for the database account. |
Destination database
| Parameter | Description |
|---|---|
| Select a DMS database instance | Select an existing DMS database instance to auto-populate the parameters below, or leave blank to configure manually. |
| Database Type | Select AnalyticDB for PostgreSQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region of the destination AnalyticDB for PostgreSQL instance. |
| Instance ID | The ID of the destination AnalyticDB for PostgreSQL instance. |
| Database Name | The name of the destination database. |
| Database Account | The database account with read and write permissions. |
| Database Password | The password for the database account. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its CIDR blocks to the security settings of the source and destination databases:
- Alibaba Cloud database instance (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB): CIDR blocks are added to the instance whitelist.
- Self-managed database on ECS: CIDR blocks are added to the ECS instance security group rules. Make sure the ECS instance can reach the database.
- On-premises or third-party cloud database: manually add the DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.
Adding DTS CIDR blocks to whitelists or security groups introduces security risks. Before proceeding, take preventive measures such as strengthening credentials, restricting exposed ports, validating API calls, regularly auditing whitelists and security group rules, and removing unauthorized CIDR blocks. Alternatively, connect the database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway.
Step 5: Configure objects and advanced settings
Basic settings
| Parameter | Description |
|---|---|
| Synchronization Types | Select Schema Synchronization, Full Data Synchronization, and Incremental Data Synchronization. After the precheck, DTS first copies all historical data, then continuously replicates changes. |
| Processing Mode of Conflicting Tables | How DTS handles destination tables with the same name as source tables. Precheck and Report Errors — fails the precheck if identical table names are found. Use object name mapping to rename tables before syncing. Clear Destination Table — deletes all data from matching destination tables before sync starts. Use with caution. Ignore Errors and Proceed — skips the name conflict check. Records with duplicate primary keys are retained in the destination during full sync and overwritten during incremental sync. If schemas differ, sync may fail or only partial columns are written. |
| DDL and DML operations to be synchronized | The SQL operations to replicate. Only DML (INSERT, UPDATE, DELETE) is supported. To select operations per table, right-click a table in Selected Objects. |
| Source Objects | Select one or more objects from the Source Objects list and click the arrow icon to add them to Selected Objects. |
| Selected Objects | To rename a single object in the destination, right-click it in Selected Objects. To rename multiple objects at once, click Batch Edit. See Map object names. To filter rows by condition, right-click an object and specify a WHERE clause. See Specify filter conditions. |
Advanced settings
| Parameter | Description |
|---|---|
| Monitoring and Alerting | Configure alerts for task failures or latency exceeding a threshold. Select Yes to set alert thresholds and notification contacts. See Configure monitoring and alerting. |
| Retry Time for Failed Connections | How long DTS retries before marking the task as failed if the source or destination becomes unreachable. Range: 10–1,440 minutes. Default: 720 minutes. Set to at least 30 minutes. When multiple tasks share the same source or destination, the shortest retry time among them applies. DTS instance charges continue during retry. |
| Configure ETL | Whether to enable extract, transform, and load (ETL) processing. Select Yes to enter transformation statements. See Configure ETL and What is ETL?. |
Step 6: Configure table fields for AnalyticDB for PostgreSQL
Click Next: Configure Database and Table Fields. Set the primary key column and distribution key column for each table being synchronized to the destination AnalyticDB for PostgreSQL instance.
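If you prefer to pre-create a destination table yourself rather than rely on schema synchronization, the primary key and distribution key can be declared directly in the DDL. This is a sketch with placeholder names; in AnalyticDB for PostgreSQL (Greenplum-based), the distribution key should match or be a subset of the primary key so that upserts from incremental sync resolve on a single segment.

```shell
# Sketch: pre-create a destination table with an explicit distribution key.
# Endpoint, account, database, table, and column names are placeholders.
psql -h <instance-endpoint> -U dts_sync -d dest_db <<'SQL'
CREATE TABLE orders (
  order_id    BIGINT NOT NULL,
  customer_id BIGINT,
  amount      NUMERIC(12,2),
  PRIMARY KEY (order_id)
) DISTRIBUTED BY (order_id);
SQL
```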
Step 7: Save settings and run the precheck
Click Next: Save Task Settings and Precheck.
To view the OpenAPI parameters for this task configuration before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. If the precheck fails:
- Click View Details next to each failed item, resolve the issue, and click Precheck Again.
- If an alert item can be safely ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Note that ignoring alert items may lead to data inconsistency.
Step 8: Purchase the synchronization instance
Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.
On the purchase page, configure the following parameters.
| Parameter | Description |
|---|---|
| Billing Method | Subscription — pay upfront for a fixed term; more cost-effective for long-term use. Pay-as-you-go — billed hourly; suited for short-term or trial use. Release the instance when no longer needed to stop charges. |
| Resource Group Settings | The resource group for the synchronization instance. Default: default resource group. See What is Resource Management?. |
| Instance Class | The instance class determines synchronization throughput. Select based on your data volume and latency requirements. See Instance classes of data synchronization instances. |
| Subscription Duration | Available when Subscription is selected. Options: 1–9 months, 1 year, 2 years, 3 years, or 5 years. |
Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
The task appears in the task list. Monitor its progress from there.
What's next
- Synchronization topologies — learn about supported synchronization topology types
- Map object names — rename objects during synchronization
- Perform lock-free DDL operations — run DDL statements online via DMS after sync completes