AnalyticDB:Synchronize data from a Db2 for LUW database to an AnalyticDB for PostgreSQL instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) supports synchronizing data from a self-managed IBM Db2 for LUW database to an AnalyticDB for PostgreSQL instance, including schema synchronization, full data synchronization, and incremental data synchronization via Change Data Capture (CDC).

Prerequisites

Before you begin, make sure you have:

  • An AnalyticDB for PostgreSQL instance. See Create an instance.

  • Sufficient storage space in the destination instance: for full data synchronization, the available space must exceed the total size of the source data.

  • Log archiving enabled on the Db2 for LUW database. See logarchmeth1 and logarchmeth2.

  • Database accounts with the required permissions. See Required permissions.
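To verify the log-archiving prerequisite, you can inspect the output of `db2 get db cfg for <database>` for the LOGARCHMETH1 and LOGARCHMETH2 parameters. The sketch below parses that output and flags the `OFF` setting; the sample text and function name are illustrative, and the exact output layout may vary by Db2 version.

```python
def log_archiving_enabled(db_cfg_output: str) -> bool:
    """Return True if LOGARCHMETH1 or LOGARCHMETH2 is set to anything
    other than OFF in `db2 get db cfg` output (layout may vary by version)."""
    for line in db_cfg_output.splitlines():
        if "LOGARCHMETH1" in line or "LOGARCHMETH2" in line:
            value = line.split("=")[-1].strip()
            if value and value != "OFF":
                return True
    return False

# Abbreviated sample of `db2 get db cfg for SAMPLE` output:
sample = """\
 First log archive method                 (LOGARCHMETH1) = DISK:/db2/archive/
 Second log archive method                (LOGARCHMETH2) = OFF
"""
print(log_archiving_enabled(sample))  # True
```

If the function returns False for your database, set one of the parameters (for example, to a DISK: or TSM: target) before configuring the task.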

Required permissions

Set up accounts for both the source and destination databases before configuring the synchronization task.

  • Db2 for LUW: database administrator permissions. See Creating group and user IDs for a Db2 database installation.

  • AnalyticDB for PostgreSQL: read and write permissions on the destination database. See Create a database account. Accounts with the RDS_SUPERUSER permission are also supported; see Manage users and permissions.

Billing

  • Schema synchronization and full data synchronization: free.

  • Incremental data synchronization: charged. See Billing overview.

Supported synchronization topologies

DTS supports the following one-way synchronization topologies for this source-destination combination:

  • One-to-one

  • One-to-many

  • Cascade

  • Many-to-one

For details, see Synchronization topologies.

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, and DELETE. When data is written to AnalyticDB for PostgreSQL, UPDATE is automatically converted to REPLACE INTO. If an UPDATE targets primary key columns, it is converted to DELETE followed by INSERT.
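The conversion rule above is pure logic: an UPDATE becomes a REPLACE-style upsert, unless it touches primary key columns, in which case it becomes a DELETE followed by an INSERT. The sketch below illustrates that decision; the function name and return values are illustrative, not DTS internals.

```python
def convert_update(updated_columns, primary_key_columns):
    """Decide how an incoming UPDATE is written to AnalyticDB for PostgreSQL,
    following the conversion rule described above (illustrative only)."""
    if set(updated_columns) & set(primary_key_columns):
        # Primary key changed: the old row must be removed, then reinserted.
        return ["DELETE", "INSERT"]
    # Non-key update: the row is upserted.
    return ["REPLACE INTO"]

print(convert_update(["name"], ["id"]))        # ['REPLACE INTO']
print(convert_update(["id", "name"], ["id"]))  # ['DELETE', 'INSERT']
```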

Limitations

General limitations

  • The tables to synchronize must have PRIMARY KEY or UNIQUE constraints, and the constrained fields must be unique. Otherwise, duplicate records may appear in the destination database.

  • Only tables can be selected as synchronization objects — views, triggers, and stored procedures are not synchronized. Tables cannot be append-optimized (AO) tables.

  • A single task can synchronize at most 5,000 tables if you modify the destination tables (for example, by renaming tables or columns). To synchronize more than 5,000 tables, split them across multiple tasks or configure a task that synchronizes the entire database.

  • If you use column mapping to synchronize only some columns of a table, or if the schemas of the source and destination tables do not match, data in the source columns that are not mapped is lost.

  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in destination tables, so the destination tablespace will be larger than the source.

  • Run synchronization tasks during off-peak hours to minimize performance impact on source and destination databases.

  • Write data to the destination database only through DTS during synchronization to prevent data inconsistency. After synchronization completes, use Data Management (DMS) to run DDL statements online. See Perform lock-free DDL operations.

  • The source Db2 for LUW server must have sufficient outbound bandwidth; insufficient bandwidth reduces synchronization speed.
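The 5,000-table guideline in the list above amounts to batching: when you modify destination tables, keep each task at 5,000 tables or fewer. A minimal sketch of that split, with a hypothetical helper name:

```python
def split_into_tasks(tables, max_per_task=5000):
    """Split a table list into batches small enough for one DTS task each,
    per the 5,000-table guideline above (helper name is illustrative)."""
    return [tables[i:i + max_per_task] for i in range(0, len(tables), max_per_task)]

tables = [f"myschema.t{i}" for i in range(12000)]
batches = split_into_tasks(tables)
print([len(b) for b in batches])  # [5000, 5000, 2000]
```

Alternatively, a single whole-database task avoids the per-table limit entirely.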

Incremental data synchronization (CDC) limitations

  • DTS uses the CDC replication technology of Db2 for LUW for incremental synchronization. For CDC-specific restrictions, see General data restrictions for SQL Replication.

  • Data log retention requirements: if logs are not retained long enough, DTS may fail to read them, causing task failure or data loss. Set the data log retention period based on the following requirements; otherwise, the service reliability or performance stated in the Service Level Agreement (SLA) of DTS cannot be guaranteed.

    • Incremental synchronization only: retain logs for more than 24 hours.

    • Full + incremental synchronization: retain logs for at least 7 days. After full synchronization completes, you can set the retention period to more than 24 hours.

  • During full data synchronization and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source database during data synchronization, data inconsistency may occur.

    During schema synchronization and incremental data synchronization, the foreign keys of the source database are not synchronized to the destination database.
  • If a primary/secondary switchover occurs on the source database while a task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized record in the destination versus the current source timestamp. If no DML operations run on the source for an extended period, the reported latency may be inaccurate. Run a DML operation on the source to refresh the latency reading. If you selected an entire database as the synchronization object, create a heartbeat table that receives data every second to keep latency accurate.
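The latency note above can be made concrete: DTS reports roughly the gap between the current time and the timestamp of the newest synchronized record, so an idle source makes the reported number grow even though nothing is behind. Writing to a heartbeat table every second keeps that timestamp fresh. The heartbeat table name and SQL below are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

def reported_latency(last_synced_at: datetime, now: datetime) -> timedelta:
    """Latency as DTS-style monitoring would report it: time elapsed since
    the newest record was synchronized to the destination (illustrative)."""
    return now - last_synced_at

# Hypothetical per-second heartbeat write that keeps the timestamp moving:
HEARTBEAT_SQL = "INSERT INTO dts_heartbeat (id, ts) VALUES (1, CURRENT TIMESTAMP)"

now = datetime(2026, 3, 28, 12, 0, 0, tzinfo=timezone.utc)
idle_source = datetime(2026, 3, 28, 11, 0, 0, tzinfo=timezone.utc)  # no DML for an hour
print(reported_latency(idle_source, now))  # 1:00:00 of apparent lag
```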

Set up the synchronization task

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS > Data Synchronization.

The navigation steps may differ based on your DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Synchronization Tasks page of the new DTS console.

Step 2: Select the region

On the right side of Data Synchronization Tasks, select the region where the synchronization instance resides.

In the new DTS console, select the region in the top navigation bar.

Step 3: Create a task

Click Create Task to open the task configuration page.

If the page shows a New Configuration Page button in the upper-right corner, click it to switch to the new version. Skip this if Back to Previous Version is shown instead.

Step 4: Configure source and destination databases

Configure the following parameters:

  • Task Name: A name for the DTS task. DTS auto-generates one; specify a descriptive name for easy identification. Names do not need to be unique.

  • Source Database:

    • Select a DMS database instance: Select an existing DMS database instance to auto-populate the connection parameters, or configure the connection manually. To register a new database, click Add DMS Database Instance in the DMS console (see Register an Alibaba Cloud database instance and Register a database hosted on a third-party cloud service or a self-managed database). In the DTS console, register the database on the Database Connections page. See Manage database connections.

    • Database Type: Select DB2 for LUW.

    • Access Method: Select Self-managed Database on ECS. If the source is a self-managed database, set up the network environment first. See Preparation overview.

    • Instance Region: The region where the source Db2 for LUW database is deployed.

    • Replicate Data Across Alibaba Cloud Accounts: Select No if the database belongs to the current Alibaba Cloud account.

    • ECS Instance ID: The ID of the Elastic Compute Service (ECS) instance that runs the Db2 for LUW database.

    • Port Number: The Db2 for LUW service port. Default: 50000.

    • Database Name: The name of the Db2 for LUW database that contains the objects to synchronize.

    • Database Account: The Db2 for LUW database account. See Required permissions.

    • Database Password: The password of the database account.

  • Destination Database:

    • Select a DMS database instance: Select an existing DMS database instance to auto-populate the connection parameters, or configure the connection manually. The registration options are the same as for the source database.

    • Database Type: Select AnalyticDB for PostgreSQL.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region where the destination AnalyticDB for PostgreSQL instance resides.

    • Instance ID: The ID of the destination AnalyticDB for PostgreSQL instance.

    • Database Name: The name of the target database in the AnalyticDB for PostgreSQL instance.

    • Database Account: The AnalyticDB for PostgreSQL database account. See Required permissions.

    • Database Password: The password of the database account.

Step 5: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.

DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances or the security group rules of ECS-hosted databases. For self-managed databases in data centers or hosted by third-party providers, manually add DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.

Warning

Adding DTS server CIDR blocks to whitelists or security groups introduces security exposure. Before proceeding, take preventive measures including: strengthening account credentials, limiting exposed ports, authenticating API calls, regularly auditing whitelist and security group rules, and considering private connectivity options such as Express Connect, VPN Gateway, or Smart Access Gateway.
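When auditing whitelist rules as recommended above, it can help to check whether a given client address actually falls inside the CIDR blocks you granted. The sketch below uses Python's standard `ipaddress` module; the blocks shown are placeholders, not actual DTS server ranges (see Add the CIDR blocks of DTS servers for the real list).

```python
import ipaddress

# Placeholder CIDR blocks; substitute the published DTS server ranges.
ALLOWED_BLOCKS = [ipaddress.ip_network(c) for c in ("10.151.12.0/24", "100.104.0.0/16")]

def is_whitelisted(client_ip: str) -> bool:
    """True if the client address falls inside any granted CIDR block."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in block for block in ALLOWED_BLOCKS)

print(is_whitelisted("100.104.23.5"))  # True
print(is_whitelisted("203.0.113.9"))   # False
```

Addresses that are no longer needed after migration should be removed from the whitelist.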

Step 6: Configure objects to synchronize

Configure synchronization types and objects:

  • Synchronization Types: Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization to synchronize historical data as the baseline for incremental synchronization.

  • DDL and DML Operations to Be Synchronized: The DML operations to synchronize. See SQL operations that can be synchronized. To configure SQL operations for an individual table or database, right-click the object in Selected Objects and select the operations to synchronize.

  • Processing Mode of Conflicting Tables:

    • Precheck and Report Errors (default): checks for identical table names in the source and destination. If matches exist, the precheck fails and the task cannot start. Use object name mapping to rename the destination tables if needed.

    • Ignore Errors and Proceed: skips the check. Warning: this may cause data inconsistency. During full synchronization, conflicting records in the destination are retained; during incremental synchronization, they are overwritten. If the schemas differ, initialization may fail.

  • Storage Engine Type: Automatically set to Beam. Available only for AnalyticDB for PostgreSQL V7.0.6.6 or later when Schema Synchronization is selected.

  • Capitalization of Object Names in Destination Instance: The capitalization policy for database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: Select objects in the Source Objects section and click the right-arrow icon to move them to Selected Objects. Only tables are supported; views, triggers, and stored procedures are excluded.

  • Selected Objects: Right-click an object to rename it in the destination instance (see Map the name of a single object). For bulk renaming, click Batch Edit (see Map multiple object names at a time). To filter data with WHERE conditions, right-click an object and specify the conditions (see Specify filter conditions).

Configure advanced settings:

Click Next: Advanced Settings to configure the following parameters.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to a shared cluster. For improved stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: The window during which DTS retries failed database connections after the task starts. Valid values: 10 to 1440 minutes. Default: 720 minutes. We recommend a value greater than 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, it fails. If multiple tasks share a source or destination, the shortest retry window applies. Note that the retry time counts toward billing.

  • Retry Time for Other Issues: The window during which DTS retries failed DDL or DML operations. Valid values: 1 to 1440 minutes. Default: 10 minutes. We recommend a value greater than 10 minutes. This value must be smaller than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits resource usage during full data synchronization by capping the queries per second (QPS) to the source database, the RPS of full data migration, and the full migration speed (MB/s). Displayed only when Full Data Synchronization is selected.

  • Enable Throttling for Incremental Data Synchronization: Limits resource usage during incremental synchronization by configuring the RPS of incremental data synchronization and the incremental synchronization speed (MB/s).

  • Environment Tag: An optional tag that identifies the synchronization instance.

  • Configure ETL: Enables extract, transform, and load (ETL) processing. Select Yes to enter data processing statements in the code editor (see Configure ETL in a data migration or data synchronization task), or No to skip. See What is ETL?

  • Monitoring and Alerting: Configures alerts for task failures or high synchronization latency. Select Yes to set an alert threshold and notification contacts. See Configure monitoring and alerting when you create a DTS task.
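The two retry parameters interact: each has a valid range, and Retry Time for Other Issues must stay smaller than Retry Time for Failed Connections. A small validation sketch of those stated rules, with an illustrative helper name:

```python
def validate_retry_settings(conn_retry_min: int, other_retry_min: int) -> list:
    """Check the documented constraints on the two retry windows (minutes)."""
    errors = []
    if not 10 <= conn_retry_min <= 1440:
        errors.append("Retry Time for Failed Connections must be 10-1440 minutes")
    if not 1 <= other_retry_min <= 1440:
        errors.append("Retry Time for Other Issues must be 1-1440 minutes")
    if other_retry_min >= conn_retry_min:
        errors.append("Other-issues retry must be smaller than connection retry")
    return errors

print(validate_retry_settings(720, 10))  # [] (the defaults are valid)
print(validate_retry_settings(30, 60))   # one violation: 60 >= 30
```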

Configure data verification (optional):

Click Next Step: Data Verification. See Configure a data verification task.

Configure database and table fields (optional):

Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, and Distribution Key for destination tables. This step is available only when Schema Synchronization is selected. Set Definition Status to All to view and modify all tables.

The Primary Key Column supports composite primary keys (multiple columns). At least one Primary Key Column must also be specified as a Distribution Key. See Manage tables and Define table distribution.
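The constraint above (at least one Primary Key Column must also be a Distribution Key) reduces to a simple set-intersection check; the column names below are illustrative:

```python
def distribution_key_valid(primary_key_columns, distribution_keys) -> bool:
    """True if at least one primary key column is also a distribution key,
    per the requirement stated above."""
    return bool(set(primary_key_columns) & set(distribution_keys))

print(distribution_key_valid(["order_id", "region"], ["order_id"]))  # True
print(distribution_key_valid(["order_id"], ["created_at"]))          # False
```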

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over the button and click Preview OpenAPI parameters before proceeding.

DTS runs a precheck before starting the task. The task starts only after the precheck passes. If the precheck fails, click View Details next to each failed item, resolve the issue, and click Precheck Again. For alert items that can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may cause data inconsistency.

Step 8: Purchase the synchronization instance

After Success Rate reaches 100%, click Next: Purchase Instance.

On the buy page, configure the instance:

In the New Instance Class section, configure the following parameters:

  • Billing Method: Subscription (pay upfront for a fixed term; more cost-effective for long-term use) or Pay-as-you-go (billed hourly; suitable for short-term use). Release a pay-as-you-go instance when it is no longer needed to stop charges.

  • Resource Group Settings: The resource group of the instance. Default: the default resource group. See What is Resource Management?

  • Instance Class: Determines the synchronization speed. See Instance classes of data synchronization instances.

  • Subscription Duration: Available only for the Subscription billing method. Options: 1 to 9 months, 1 year, 2 years, 3 years, or 5 years.

Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

The task appears in the task list. Track its progress there.

What's next

  • Monitor synchronization latency and task status in the DTS console.

  • After synchronization is complete and data is verified, switch your application to use the AnalyticDB for PostgreSQL instance.

  • Use DMS to run DDL statements on the destination database online. See Perform lock-free DDL operations.