Data Transmission Service: Migrate data from a Db2 for LUW database to an AnalyticDB for PostgreSQL instance

Last Updated: Mar 28, 2026

This topic describes how to use Data Transmission Service (DTS) to migrate data from a self-managed Db2 for LUW database to an AnalyticDB for PostgreSQL instance. DTS supports schema migration, full data migration, and incremental data migration for this migration path.

Prerequisites

Before you begin, ensure that you have:

  • An AnalyticDB for PostgreSQL instance with available storage space larger than the total data size in the source Db2 for LUW database. To create an instance, see Create an instance

  • A database created in the destination AnalyticDB for PostgreSQL instance to receive the migrated data. For details, see Import vector data

  • A database account with the required permissions (see Required permissions)

For incremental data migration only:

  • Archive logging enabled on the source Db2 for LUW database, configured via logarchmeth1 or logarchmeth2. For configuration details, see logarchmeth1 and logarchmeth2

Important

After modifying logarchmeth configurations, back up the source database to make the change take effect. Otherwise, DTS returns an error during the precheck.
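The archive logging prerequisite above can be configured from the Db2 command line processor. This is a hedged sketch, not a prescribed configuration: the database name SAMPLE and the archive and backup paths are placeholders; substitute your own values.

```shell
# Enable archive logging on the source database (name and paths are placeholders)
db2 update db cfg for SAMPLE using logarchmeth1 DISK:/db2/archive_logs

# A full backup is required before the change takes effect; until the backup
# completes, the database remains in BACKUP PENDING state and DTS fails the precheck
db2 backup db SAMPLE to /db2/backups
```

The same `update db cfg` command applies if you use logarchmeth2 instead of logarchmeth1.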

For supported database versions, see Overview of data migration scenarios.

Billing

  • Schema migration and full data migration: free of charge. No instance configuration fee or Internet traffic fee applies.

  • Incremental data migration: the instance configuration fee is charged. See Billing overview.

Required permissions

  • Db2 for LUW: CONNECT and SELECT permissions for schema migration and full data migration; DBADM authority for incremental data migration.

  • AnalyticDB for PostgreSQL: read and write permissions on the destination database.

For instructions on creating accounts and granting permissions, see the documentation for the source and destination databases.
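The privileges listed in the Required permissions table can be granted from the Db2 command line processor and psql. This is a hedged sketch only: the account name dtsuser, the schema and table names, and the connection parameters are placeholders, not values defined by this topic.

```shell
# Source (Db2 for LUW): minimum privileges for schema and full data migration
db2 "GRANT CONNECT ON DATABASE TO USER dtsuser"
db2 "GRANT SELECT ON TABLE myschema.orders TO USER dtsuser"   # repeat per migrated table

# Incremental data migration additionally requires DBADM authority
db2 "GRANT DBADM ON DATABASE TO USER dtsuser"

# Destination (AnalyticDB for PostgreSQL): read/write on the destination database
psql -h adb-host -U admin -d destdb \
  -c "GRANT ALL PRIVILEGES ON DATABASE destdb TO dtsuser"
```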

Limitations

During schema migration, DTS migrates foreign keys from the source database to the destination database.
During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Performing cascade update or delete operations on the source database during migration may cause data inconsistency.

Source database limitations

  • The source server must have sufficient outbound bandwidth; otherwise, migration speed decreases.

  • Tables must have a PRIMARY KEY or UNIQUE constraint with all fields unique. Without this, the destination database may contain duplicate records.

  • If you use object name mapping to rename objects and want to migrate more than 1,000 tables, split the migration into multiple tasks or migrate the entire database instead. Tasks that target more than 1,000 individually selected tables with renaming fail with a request error.

  • For incremental data migration:

    • Data logging must be enabled on the source database. DTS fails the precheck if logging is disabled.

    • Incremental migration only: data logs must be retained for more than 24 hours.

    • Full data migration + incremental data migration: data logs must be retained for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.

    • If the retention period is insufficient, DTS may fail to read data logs, causing task failure or, in rare cases, data inconsistency or loss. This scenario falls outside the scope of the DTS Service Level Agreement (SLA).

  • During full data migration, do not perform DDL operations on the source database. Schema changes during this phase cause the task to fail.

  • If you run full data migration only (without incremental migration), avoid writing to the source database during migration. To guarantee data consistency, run schema migration, full data migration, and incremental data migration together.
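The primary key and unique constraint requirement above can be verified against the Db2 system catalog before you configure the task. A hedged sketch, assuming a placeholder schema name MYSCHEMA:

```shell
# List regular tables in a schema that have neither a primary key ('P')
# nor a unique constraint ('U') and would risk duplicate records after migration
db2 "SELECT TABNAME FROM SYSCAT.TABLES T
     WHERE T.TABSCHEMA = 'MYSCHEMA' AND T.TYPE = 'T'
       AND NOT EXISTS (SELECT 1 FROM SYSCAT.TABCONST C
                       WHERE C.TABSCHEMA = T.TABSCHEMA
                         AND C.TABNAME  = T.TABNAME
                         AND C.TYPE IN ('P', 'U'))"
```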

Other limitations

  • Incremental data migration uses the change data capture (CDC) replication technology of Db2 for LUW. CDC has its own data type restrictions. See General data restrictions for SQL Replication.

  • Only tables can be selected as migration objects. Append-optimized (AO) tables are not supported as a destination table type.

  • During schema migration and incremental data migration, foreign keys are not migrated to the destination database.

  • Concurrent INSERT operations during full data migration cause tablespace fragmentation. The destination tablespace will be larger than the source after full migration completes.

  • If tools other than DTS write to the destination database during migration, data inconsistency may occur. After migration completes, use Data Management (DMS) for online DDL operations. See Perform lock-free DDL operations.

  • If column mapping is used for non-full table migration, or if the source and destination schemas differ, data in source columns that do not exist in the destination is lost.

  • Run migrations during off-peak hours when possible. Full data migration uses read and write resources on both databases, which increases server load.

Limitations specific to self-managed Db2 for LUW

  • If a primary/secondary failover occurs on the source database while the migration task is running, the task fails.

  • DTS calculates migration latency as the difference between the timestamp of the last migrated record in the destination database and the current timestamp in the source database. If no DML operations are performed on the source for an extended period, the displayed latency may be inaccurate. To refresh the latency reading, perform a DML operation on the source. If you select an entire database as the migration object, create a heartbeat table that is updated every second.
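A heartbeat table of the kind described above can be as simple as a single-row table whose timestamp is refreshed in a loop. This is an illustrative sketch: the table name DTS_HEARTBEAT and its layout are placeholders, not a structure defined by DTS.

```shell
# Create a minimal heartbeat table on the source database (names are placeholders)
db2 "CREATE TABLE DTS_HEARTBEAT (ID INT NOT NULL PRIMARY KEY, TS TIMESTAMP)"
db2 "INSERT INTO DTS_HEARTBEAT VALUES (1, CURRENT TIMESTAMP)"

# Refresh the timestamp once per second so the latency reading stays accurate
while true; do
  db2 "UPDATE DTS_HEARTBEAT SET TS = CURRENT TIMESTAMP WHERE ID = 1"
  sleep 1
done
```

Run the loop from a session or scheduler that stays up for the duration of the migration, and drop the table after cutover.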

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, and DELETE

Create a migration task

The following steps guide you through configuring and starting a DTS migration task from a self-managed Db2 for LUW database to an AnalyticDB for PostgreSQL instance.

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

The steps above may vary based on the DMS console mode and layout. For details, see Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page.

Step 2: Configure source and destination databases

  1. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

    In the new DTS console, select the region in the upper-left corner instead.
  2. Click Create Task. In the Create Task wizard, configure the following parameters.

Source database

  • Select an existing DMS database instance (optional): Select an existing registered instance to auto-populate its parameters, or skip this and configure the parameters manually.
  • Database Type: Select DB2 for LUW.
  • Access Method: Select the access method based on where the source database is deployed. This example uses Self-managed Database on ECS. For other access methods, see Preparation overview.
  • Instance Region: The region where the source Db2 for LUW database resides.
  • Replicate Data Across Alibaba Cloud Accounts: Select No if the source and destination are under the same Alibaba Cloud account (as in this example).
  • ECS Instance ID: The ID of the Elastic Compute Service (ECS) instance that hosts the source Db2 for LUW database.
  • Port Number: The service port of the source Db2 for LUW database.
  • Database Name: The name of the source database that contains the objects to migrate.
  • Database Account: The account for the source database. For required permissions, see Required permissions.
  • Database Password: The password for the database account.

Destination database

  • Select an existing DMS database instance (optional): Select an existing registered instance to auto-populate its parameters, or skip this and configure the parameters manually.
  • Database Type: Select AnalyticDB for PostgreSQL.
  • Access Method: Select Alibaba Cloud Instance.
  • Instance Region: The region where the destination AnalyticDB for PostgreSQL instance resides.
  • Instance ID: The ID of the destination AnalyticDB for PostgreSQL instance.
  • Database Name: The name of the database in the destination instance that will receive the migrated data.
  • Database Account: The account for the destination database. For required permissions, see Required permissions.
  • Database Password: The password for the database account.

Step 3: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its CIDR blocks to the security configuration of Alibaba Cloud database instances and ECS-hosted databases. For self-managed databases in a data center or on a third-party cloud, add the DTS CIDR blocks to the database IP address whitelist manually. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Warning

Adding DTS CIDR blocks to your IP address whitelist or ECS security group rules carries security risks. Before proceeding, take preventive measures such as: using strong account credentials, restricting exposed ports, authenticating API calls, auditing whitelist entries regularly, and connecting to DTS over Express Connect, VPN Gateway, or Smart Access Gateway.

Step 4: Configure migration objects and advanced settings

Configure the following parameters.

Migration types

Choose the migration type based on how much downtime you can accept:

  • Schema Migration + Full Data Migration: Use when you can stop writes to the source database before migrating. The task completes after the initial data load finishes.
  • Schema Migration + Full Data Migration + Incremental Data Migration (recommended): Use when the source database must stay in production during migration. Incremental migration continuously captures and applies changes until you cut over.

If you do not select Incremental Data Migration, avoid writing to the source database during migration to prevent data inconsistency.

Other settings

  • DDL and DML Operations to Be Synchronized: The SQL operations to include in incremental migration. For the supported operations, see SQL operations supported for incremental migration. To configure per-object SQL operation filtering, right-click an object in the Selected Objects section and select the operations in the dialog box.
  • Processing Mode of Conflicting Tables: Precheck and Report Errors: DTS fails the precheck if any destination tables share names with source tables. Use object name mapping to rename conflicting tables before starting the task. See Map object names. Ignore Errors and Proceed: DTS skips the name conflict check. If source and destination tables share the same schema, records with matching primary key values are not migrated. If schemas differ, only matching columns are migrated, or the task may fail. Use with caution.
  • Capitalization of Object Names in Destination Instance: Controls the case of database, table, and column names in the destination. The default is DTS default policy. Match this to the default case convention of the destination database (for example, lowercase) to avoid errors during the task. See Specify the capitalization of object names in the destination instance.
  • Source Objects: Select one or more objects and click the right arrow icon to add them to Selected Objects. Only tables can be selected.
  • Selected Objects: To rename an object, right-click it and use the object name mapping feature. To rename multiple objects at once, click Batch Edit. See Map object names. To filter rows by condition, right-click an object and specify a WHERE clause. See Use SQL conditions to filter data.

Object name mapping and schema differences between source and destination can result in data loss for columns that exist in the source but not in the destination.

Step 5: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

Data verification

See Enable data verification.

Advanced settings

  • Select the dedicated cluster used to schedule the task: By default, DTS schedules the task on the shared cluster. To use a dedicated cluster, purchase one and specify it here. See What is a DTS dedicated cluster.
  • Set Alerts: Select Yes to receive notifications when the task fails or migration latency exceeds the threshold. Specify the threshold and alert contacts. See Configure monitoring and alerting.
  • Retry Time for Failed Connections: How long DTS retries if the source or destination database connection fails after the task starts. Range: 10–1440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this period, it resumes the task; otherwise, the task fails. Note: If multiple tasks share the same source or destination, the retry time set most recently takes effect. During retry, you are charged for the DTS instance.
  • The wait time before a retry when other issues occur in the source and destination databases: How long DTS retries failed DDL or DML operations. Range: 1–1440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be smaller than Retry Time for Failed Connections.
  • Enable Throttling for Full Data Migration: Throttles full data migration to reduce load on the destination. Configure QPS (queries per second) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Displayed only when Full Data Migration is selected.
  • Enable Throttling for Incremental Data Migration: Throttles incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Displayed only when Incremental Data Migration is selected.
  • Environment Tag: An optional tag to identify the DTS instance by environment (for example, production or staging).
  • Configure ETL: Select Yes to use the extract, transform, and load (ETL) feature and enter data processing statements. See Configure ETL in a data migration or data synchronization task.

Step 6: Configure database and table fields (optional)

Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, and Distribution Key for tables that will be migrated to AnalyticDB for PostgreSQL.

  • This step applies only when Schema Migration is selected. Set Definition Status to All to view and modify all tables.
  • In the Primary Key Column field, you can specify multiple columns to form a composite primary key. At least one primary key column must also be designated as the distribution key.
  • For details on primary key and distribution key configuration, see Manage tables and Define table distribution.
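The relationship between a composite primary key and the distribution key can also be expressed directly in SQL on the destination instance. A hedged sketch, assuming a hypothetical orders table; the table, column, and connection names are placeholders:

```shell
# Composite primary key (order_id, region_id) with order_id as the distribution
# key; the distribution key must be part of the primary key (all names are placeholders)
psql -h adb-host -U admin -d destdb -c "
  CREATE TABLE orders (
    order_id  BIGINT  NOT NULL,
    region_id INT     NOT NULL,
    amount    NUMERIC(10,2),
    PRIMARY KEY (order_id, region_id)
  ) DISTRIBUTED BY (order_id);"
```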

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

  • To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before saving.
  • DTS runs a precheck before starting the task. The task starts only after it passes the precheck.
  • If the precheck fails, click View Details next to each failed item, resolve the issues, and then click Precheck Again.
  • If an alert item can be safely ignored, click Confirm Alert Details, then Ignore, then OK, and then click Precheck Again. Ignoring alerts may result in data inconsistency.

Step 8: Purchase and start

  1. Wait for the precheck success rate to reach 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following:

    In the New Instance Class section:
      • Resource Group: The resource group for the migration instance. Default: default resource group. See What is Resource Management?
      • Instance Class: The instance class determines migration speed. Select based on your data volume and time requirements. See Specifications of data migration instances.
  3. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start. The task appears in the task list where you can monitor its progress.

What's next