
Data Transmission Service: Migrate data from a Db2 for LUW database to an ApsaraMQ for Kafka instance

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from an IBM Db2 for Linux, UNIX, and Windows (LUW) database to an ApsaraMQ for Kafka instance. DTS supports schema migration, full data migration, and incremental data migration using the Change Data Capture (CDC) technology built into Db2 for LUW.

Billing

  • Schema migration and full data migration: the instance configuration fee and the Internet traffic fee are free.

  • Incremental data migration: the instance configuration fee is charged. For more information, see Billing overview.

Supported SQL operations

Incremental data migration supports the following DML operations: INSERT, UPDATE, and DELETE.

DTS does not migrate foreign keys from the source database. As a result, cascade update and cascade delete operations defined on the source are not replicated to the destination.

Prerequisites

Before you begin, make sure the following conditions are met.

Source Db2 for LUW database:

  • The server hosting the source database has sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Tables without these constraints may produce duplicate records in the destination. A pre-flight check sketch follows this list.

  • If you want to migrate incremental data, archive logging must be enabled on the source database. DTS reads the Db2 transaction log to capture incremental changes, which requires archive logging to be active. Set the logarchmeth1 or logarchmeth2 configuration parameter to enable archive logging. For details, see logarchmeth1 and logarchmeth2.

    Important

    After modifying the logarchmeth parameters, back up the source database for the changes to take effect. Without a backup, the precheck may fail.

  • For incremental migration, data logging must be enabled on the source database. The precheck fails and the task cannot start if logging is disabled.

  • For incremental data migration only: data logs on the source database are retained for more than 24 hours.

  • For combined full and incremental data migration: data logs are retained for at least 7 days.

    If log retention is insufficient, DTS may fail to read the logs, which can cause the task to fail or result in data loss. After full migration completes, you can reduce the retention period, but keep it longer than 24 hours. The DTS service level agreement (SLA) does not guarantee reliability or performance if these retention requirements are not met.
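You can optionally verify the primary key and archive logging prerequisites from a client before you configure the task. The following is a minimal pre-flight sketch, assuming the ibm_db Python driver; the connection string, account, and schema filter are placeholders, not values from this guide.

    import ibm_db

    # Placeholder connection string; substitute your host, port, database,
    # and credentials.
    conn = ibm_db.connect(
        "DATABASE=testdb;HOSTNAME=192.168.0.10;PORT=50000;"
        "PROTOCOL=TCPIP;UID=dts_user;PWD=****;", "", "")

    # Archive logging: at least one of logarchmeth1/logarchmeth2 must not be OFF.
    stmt = ibm_db.exec_immediate(
        conn,
        "SELECT NAME, VALUE FROM SYSIBMADM.DBCFG "
        "WHERE NAME IN ('logarchmeth1', 'logarchmeth2')")
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row["NAME"], "=", row["VALUE"])
        row = ibm_db.fetch_assoc(stmt)

    # Tables without a PRIMARY KEY ('P') or UNIQUE ('U') constraint may
    # produce duplicate records in the destination.
    stmt = ibm_db.exec_immediate(
        conn,
        "SELECT T.TABSCHEMA, T.TABNAME FROM SYSCAT.TABLES T "
        "WHERE T.TYPE = 'T' AND T.TABSCHEMA NOT LIKE 'SYS%' "
        "AND NOT EXISTS (SELECT 1 FROM SYSCAT.TABCONST C "
        "WHERE C.TABSCHEMA = T.TABSCHEMA AND C.TABNAME = T.TABNAME "
        "AND C.TYPE IN ('P', 'U'))")
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print("No PK/UNIQUE constraint:",
              row["TABSCHEMA"].strip() + "." + row["TABNAME"])
        row = ibm_db.fetch_assoc(stmt)

    ibm_db.close(conn)

If logging is off, set logarchmeth1 or logarchmeth2 and back up the database as described above before rerunning the check.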

Destination ApsaraMQ for Kafka instance:

  • The available storage space is larger than the total data size of the source Db2 for LUW database. For sizing guidance, see Overview.

  • A topic is created to receive migrated data. See the "Step 1: Create a topic" section in Step 3: Create resources.
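The console procedure linked above is the documented way to create the topic. For illustration only, the following sketch creates a topic programmatically with the kafka-python admin client; the broker address, topic name, partition count, and replication factor are placeholder assumptions, and your ApsaraMQ for Kafka instance must permit topic creation over the Kafka protocol.

    from kafka.admin import KafkaAdminClient, NewTopic

    # Placeholder endpoint: an IP address from the instance's default endpoint.
    admin = KafkaAdminClient(bootstrap_servers="192.168.0.20:9092")

    # Example sizing only; choose partitions and replication for your workload.
    admin.create_topics([
        NewTopic(name="dts_db2_migration", num_partitions=12,
                 replication_factor=3)
    ])
    admin.close()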

For supported Db2 for LUW versions, see Overview of data migration scenarios.

Required permissions

Grant the Db2 for LUW database account the permissions listed below based on your migration type.

  • Full data migration: CONNECT and SELECT permissions.

  • Schema migration: CONNECT and SELECT permissions.

  • Incremental data migration: database administrator permissions. DTS requires this authorization to read Db2 transaction log records.

For instructions on creating accounts and granting permissions, see Creating group and user IDs for a Db2 database installation and Authorities overview.
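As a minimal sketch of the grants for schema and full data migration, assuming the ibm_db Python driver and a placeholder account named dts_user (the administrator permissions needed for incremental migration are granted by a DBA and are not shown):

    import ibm_db

    # Connect as a user authorized to grant privileges (placeholder values).
    conn = ibm_db.connect(
        "DATABASE=testdb;HOSTNAME=192.168.0.10;PORT=50000;"
        "PROTOCOL=TCPIP;UID=db2inst1;PWD=****;", "", "")

    # CONNECT on the database, plus SELECT on each table to be migrated.
    ibm_db.exec_immediate(conn, "GRANT CONNECT ON DATABASE TO USER dts_user")
    for table in ["MYSCHEMA.ORDERS", "MYSCHEMA.CUSTOMERS"]:  # placeholder tables
        ibm_db.exec_immediate(
            conn, f"GRANT SELECT ON TABLE {table} TO USER dts_user")

    ibm_db.close(conn)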

Limitations

  • Source database: The server must have sufficient outbound bandwidth; otherwise, migration speed decreases.

  • Source database: Tables must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique; otherwise, duplicate records may appear in the destination.

  • Source database: When you migrate individual tables (rather than an entire database) and use object name mapping, a single task supports up to 1,000 tables. To migrate more than 1,000 tables, configure multiple tasks or migrate the entire database instead.

  • Incremental migration: CDC replication is subject to the limitations of SQL Replication in Db2 for LUW. For details, see General data restrictions for SQL Replication.

  • During migration: Do not perform DDL operations (schema changes) on the source during full data migration. The task fails if schema changes are detected.

  • During migration: If you run full data migration only, do not write to the source database during the task; otherwise, data inconsistency may result. To ensure consistency while the source serves live traffic, select both full data migration and incremental data migration.

  • Performance: During full data migration, DTS consumes read resources of the source and write resources of the destination, which increases load on both servers. Schedule migration during off-peak hours.

  • Storage: Concurrent INSERT operations during full data migration cause tablespace fragmentation, so the destination tablespace may be larger than the source after migration completes.

  • Data isolation: Do not write data unrelated to the migration to the destination while the task is running. Use Data Management (DMS) for DDL operations after the migration task completes.

  • Failover: If a primary/secondary switchover occurs on the source while the task is running, the task fails.

  • Migration latency: Latency is the difference between the timestamp of the latest migrated data and the current timestamp on the source. If no DML operations occur on the source for an extended period, the latency reading may be inaccurate; perform a DML operation on the source to refresh it. If you migrate an entire database, you can create a heartbeat table that is updated every second (a sketch follows this list).

  • ApsaraMQ for Kafka: If the destination ApsaraMQ for Kafka instance is upgraded or downgraded during migration, restart the instance to resume the task.
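For the migration latency limitation above, a heartbeat table keeps the latency display accurate when you migrate an entire database. The following is a minimal sketch, assuming the ibm_db Python driver and placeholder names; run it against the source database for the duration of the migration.

    import time
    import ibm_db

    conn = ibm_db.connect(
        "DATABASE=testdb;HOSTNAME=192.168.0.10;PORT=50000;"
        "PROTOCOL=TCPIP;UID=dts_user;PWD=****;", "", "")

    # One-time setup: a single-row table with a primary key.
    ibm_db.exec_immediate(conn,
        "CREATE TABLE DTS_HEARTBEAT (ID INT NOT NULL PRIMARY KEY, BEAT TIMESTAMP)")
    ibm_db.exec_immediate(conn,
        "INSERT INTO DTS_HEARTBEAT (ID, BEAT) VALUES (1, CURRENT TIMESTAMP)")

    # One UPDATE per second gives DTS a steady stream of DML to measure against.
    while True:
        ibm_db.exec_immediate(conn,
            "UPDATE DTS_HEARTBEAT SET BEAT = CURRENT TIMESTAMP WHERE ID = 1")
        time.sleep(1)

Create the table in the database being migrated so that its updates flow through the task.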

Create a migration task

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

The DMS console layout may differ based on your configuration. For details, see Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.

Step 2: Select a region

From the drop-down list next to Data Migration Tasks, select the region where the migration instance will reside.

In the new DTS console, select the region from the upper-left corner instead.

Step 3: Configure the source and destination databases

Click Create Task. On the Create Task page, configure the following parameters.

Task settings:

  • Task Name: The name of the migration task. DTS automatically assigns a default name. Specify a descriptive name for easy identification. The name does not need to be unique.

Source database (Db2 for LUW):

  • Select an existing DMS database instance: (Optional) Select a previously registered DMS database instance. If you select one, DTS auto-fills the parameters that follow. If you leave it blank, configure the parameters manually.

  • Database Type: Select DB2 for LUW.

  • Access Method: Select the method based on where the source database is deployed. This example uses Self-managed Database on ECS. For other access methods, see Preparation overview.

  • Instance Region: The region where the source Db2 for LUW database resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for this example.

  • ECS Instance ID: The ID of the ECS instance that hosts the source database.

  • Port Number: The service port of the source Db2 for LUW database.

  • Database Name: The name of the source database that contains the objects to migrate.

  • Database Account: The database account. See Required permissions for the minimum permissions needed.

  • Database Password: The password of the database account.

Destination database (ApsaraMQ for Kafka):

  • Select an existing DMS database instance: (Optional) Select a previously registered DMS database instance. If you select one, DTS auto-fills the parameters that follow.

  • Database Type: Select Kafka.

  • Access Method: Select Express Connect, VPN Gateway, or Smart Access Gateway. DTS connects to the ApsaraMQ for Kafka instance as a self-managed Kafka cluster over this access method.

  • Instance Region: The region where the ApsaraMQ for Kafka instance resides.

  • Connected VPC: The ID of the virtual private cloud (VPC) of the destination ApsaraMQ for Kafka instance. To find the VPC ID, log on to the ApsaraMQ for Kafka console, go to the Instance Details page, and check the VPC ID in the Configuration Information section of the Instance Information tab.

  • IP Address or Domain Name: An IP address from the default endpoint of the ApsaraMQ for Kafka instance. To find an IP address, go to the Endpoint Information section of the Instance Information tab on the Instance Details page, find the endpoint whose Type column is Default Endpoint, hover over the value in the Domain Name column, and copy one of the listed IP addresses.

  • Port Number: The service port of the ApsaraMQ for Kafka instance. Default: 9092.

  • Database Account: The SASL username of the ApsaraMQ for Kafka instance. Required only if the access control list (ACL) feature is enabled. To find the username, go to the Manage SASL Users tab on the Instance Details page. For instructions on enabling ACL, see Grant permissions to SASL users.

  • Database Password: The SASL password. On the Manage SASL Users tab, find the account and click Copy Password in the Password column.

  • Kafka Version: The version of the ApsaraMQ for Kafka instance. Select 0.10 if the instance version is 0.10.2. Select Later Than 1.0 if the version is 2.6.2 or 2.2.0.

  • Encryption: Select Non-encrypted or SCRAM-SHA-256 based on your security requirements.

  • Topic: The topic that receives migrated data. Select it from the drop-down list.

  • Topic That Stores DDL Information: The topic that stores DDL information. If you do not configure this parameter, DDL information is stored in the topic specified by Topic.

  • Use Kafka Schema Registry: Specifies whether to use Kafka Schema Registry. Select No to skip it. Select Yes and enter the URL or IP address of the Schema Registry if you store Avro schemas there.
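To see how the endpoint, port, and SASL parameters above fit together, here is a consumption sanity check sketched with the kafka-python client. The topic, endpoint, and credentials are placeholders; the sasl_* settings apply only if the ACL feature is enabled, otherwise set security_protocol to "PLAINTEXT" and omit them.

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "dts_db2_migration",                    # topic that receives migrated data
        bootstrap_servers="192.168.0.20:9092",  # default-endpoint IP and port
        security_protocol="SASL_PLAINTEXT",     # only when ACL is enabled
        sasl_mechanism="SCRAM-SHA-256",
        sasl_plain_username="sasl_user",
        sasl_plain_password="****",
        auto_offset_reset="earliest",
        consumer_timeout_ms=10000,              # stop iterating after 10s idle
    )

    for msg in consumer:
        # Message values are DTS Avro-encoded bytes; see the decoding sketch
        # in Step 5.
        print(msg.topic, msg.partition, msg.offset, len(msg.value))

    consumer.close()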

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the security settings of Alibaba Cloud database instances and ECS-hosted databases. For self-managed databases in data centers or third-party environments, manually add the DTS server CIDR blocks to the database whitelist. For the list of CIDR blocks and instructions, see the "CIDR blocks of DTS servers" section in Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Warning

Adding DTS server CIDR blocks to a database whitelist or ECS security group introduces potential security risks. Before proceeding, take preventive measures: strengthen account and password security, restrict exposed ports, authenticate API calls, and review the whitelist and security group rules regularly to remove unauthorized entries. Where possible, connect the database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway instead of public endpoints.

Step 5: Select objects and configure migration settings

  • Migration Types: Select the migration types to run: Schema Migration, Full Data Migration, and Incremental Data Migration. To maintain service continuity during migration, select all three. For a one-time migration without ongoing replication, select only Schema Migration and Full Data Migration, and avoid writing to the source database during the task to prevent data inconsistency.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors checks for tables with identical names in the source and destination; the task does not start if conflicts are found. Use the object name mapping feature to rename conflicting tables. See Map object names. Ignore Errors and Proceed skips the conflict check: if the schemas match, rows with duplicate primary keys are skipped; if the schemas differ, only matching columns are migrated, or the task fails. Use this mode with caution.

  • Data Format in Kafka: Only DTS Avro is supported. Data is parsed using the DTS Avro schema. For schema details, see the avro directory on GitHub. A decoding sketch follows this list.

  • Policy for Shipping Data to Kafka Partitions: The policy for distributing migrated data across Kafka partitions. Select a policy based on your business requirements. For available policies and their behavior, see Specify the policy for migrating data to Kafka partitions.

  • Capitalization of Object Names in Destination Instance: The capitalization policy for database, table, and column names in the destination. Default: DTS default policy. To match the source or destination capitalization, select the corresponding option. For details, see Specify the capitalization of object names in the destination instance.

  • Source Objects: Select the objects to migrate from the Source Objects list and click the icon to add them to Selected Objects. You can select columns, tables, or databases. If you select tables or columns, DTS does not migrate other object types such as views, triggers, and stored procedures.

  • Selected Objects: To rename an object in the destination, right-click it in the Selected Objects section. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows using WHERE conditions, right-click the object and specify the conditions. See Set filter conditions. To select which SQL operations to replicate for a specific object, right-click the object and select the operations.

    Note: If you use object name mapping to rename an object, other objects that depend on it may fail to migrate.
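Because the data format is DTS Avro, consumers must deserialize message values against the DTS Avro schema. The following decoding sketch assumes fastavro is installed and that dts_record.avsc is a local copy of the record schema taken from the avro directory on GitHub referenced above; the file name is a placeholder.

    import io
    import json
    import fastavro

    # Load a local copy of the DTS record schema (placeholder file name).
    with open("dts_record.avsc") as f:
        schema = fastavro.parse_schema(json.load(f))

    def decode_dts_record(value: bytes) -> dict:
        """Deserialize one Kafka message value written by DTS."""
        return fastavro.schemaless_reader(io.BytesIO(value), schema)

    # Example: decode values from the consumer sketch in Step 3.
    # record = decode_dts_record(msg.value)
    # print(record)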

Step 6: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Select the dedicated cluster used to schedule the task: By default, DTS schedules the task to a shared cluster. To use a dedicated cluster, select one here. For details, see What is a DTS dedicated cluster. Default: shared cluster.

  • Set Alerts: Specifies whether to send alerts if the task fails or migration latency exceeds the threshold. Select Yes and configure the threshold and alert contacts. See Configure monitoring and alerting. Default: No.

  • Retry Time for Failed Connections: How long DTS retries a connection if the source or destination becomes unreachable. If the connection is restored within this window, DTS resumes the task; otherwise, the task fails. Set a value greater than 30 minutes. Valid range: 10 to 1,440 minutes. If multiple tasks share the same source or destination, the most recently set value applies. Default: 720 minutes.

  • The wait time before a retry when other issues occur in the source and destination databases: How long DTS retries failed DDL or DML operations. The value must be smaller than the Retry Time for Failed Connections value. Set a value greater than 10 minutes. Valid range: 1 to 1,440 minutes. Default: 10 minutes.

  • Enable Throttling for Full Data Migration: Specifies whether to limit the read and write rate during full data migration to reduce load on the source and destination. You can configure QPS (queries per second) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Displayed only when Full Data Migration is selected. Default: disabled.

  • Enable Throttling for Incremental Data Migration: Specifies whether to limit the rate during incremental data migration. You can configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Displayed only when Incremental Data Migration is selected. Default: disabled.

  • Environment Tag: A tag that identifies the DTS instance. Select one based on your environment (for example, production or test). Default: none.

  • Configure ETL: Specifies whether to apply extract, transform, and load (ETL) processing to the migrated data. Select Yes and enter data processing statements in the editor. See Configure ETL in a data migration or synchronization task. Default: No.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To view the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.

DTS runs a precheck before the migration task starts. If the precheck fails:

  • For failed items: click View Details next to the item, fix the reported issue, then click Precheck Again.

  • For alert items that can be ignored: click Confirm Alert Details, then click Ignore in the View Details dialog. Click OK in the confirmation message, then click Precheck Again. Ignoring an alert may cause data inconsistency.

Wait until the success rate reaches 100%, then click Next: Purchase Instance.

Step 8: Purchase the instance and start the task

On the Purchase Instance page, configure the following parameters.

  • Resource Group: The resource group to which the migration instance belongs. Default: the default resource group. For details, see What is Resource Management?

  • Instance Class: The instance class determines migration speed. Select one based on your data volume and time requirements. For specifications, see Specifications of data migration instances.

Select the Data Transmission Service (Pay-as-you-go) Service Terms checkbox, then click Buy and Start.

The task appears in the task list. Monitor its progress from there.

What's next

  • To perform DDL operations on the destination after migration, use Data Management (DMS) to avoid locking issues.