Data Transmission Service (DTS) lets you migrate data from a self-managed Oracle database into an ApsaraMQ for Kafka instance. DTS supports schema migration, full data migration, and incremental data migration — use all three together for live systems where you need continuous replication with no data gaps.
Prerequisites
Before you begin, make sure you have completed all of the following:
- Created the source and destination instances. The self-managed Oracle database and the ApsaraMQ for Kafka instance must both be running. See Overview of data migration scenarios for supported versions.
- Enabled ARCHIVELOG mode on Oracle. The Oracle database must run in ARCHIVELOG mode, with archived log files accessible and an appropriate retention period set. See Managing Archived Redo Log Files.
- Enabled supplemental logging on Oracle. Supplemental logging must be enabled, with `SUPPLEMENTAL_LOG_DATA_PK` and `SUPPLEMENTAL_LOG_DATA_UI` set to `YES`. See Supplemental Logging.
- Verified Kafka storage capacity. The available storage of the ApsaraMQ for Kafka instance must exceed the storage occupied by the Oracle database.
- Created a Kafka topic. Create a topic in the ApsaraMQ for Kafka instance to receive the migrated data. See Step 1: Create a topic.
- Created an Oracle database account with the required permissions. See Set up the Oracle account below.
- Reviewed DTS capabilities and limits for Oracle. Advanced Database and Application Migration (ADAM) is available for database evaluation before migration. See Prepare an Oracle database and Overview.
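The ARCHIVELOG and supplemental-logging prerequisites can be verified with a quick query against `V$DATABASE` before you configure the task. The sketch below assumes you have already fetched the relevant columns with your Oracle driver of choice (for example python-oracledb); the check itself is plain Python, and the sample row is illustrative.

```python
# Verify the Oracle-side prerequisites for DTS incremental migration.
# The `row` dict mimics the result of:
#   SELECT LOG_MODE, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
#   FROM V$DATABASE;
# (these are real V$DATABASE columns; fetching them is left to your driver).

def check_prerequisites(row: dict) -> list[str]:
    """Return a list of problems; an empty list means the instance is ready."""
    problems = []
    if row.get("LOG_MODE") != "ARCHIVELOG":
        problems.append("database is not in ARCHIVELOG mode")
    for col in ("SUPPLEMENTAL_LOG_DATA_PK", "SUPPLEMENTAL_LOG_DATA_UI"):
        if row.get(col) != "YES":
            problems.append(f"{col} is not set to YES")
    return problems

# Illustrative row from a correctly configured instance:
sample = {"LOG_MODE": "ARCHIVELOG",
          "SUPPLEMENTAL_LOG_DATA_PK": "YES",
          "SUPPLEMENTAL_LOG_DATA_UI": "YES"}
print(check_prerequisites(sample))  # [] means ready
```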
Set up the Oracle account
Create a database account with the permissions shown below. Skip this step if you already have a qualifying account.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
For detailed instructions on how to create a database account and grant the required permissions, see Prepare a database account, CREATE USER, and GRANT.
For incremental data migration, you must also enable archive logging and supplemental logging. See Configure an Oracle database.
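As a rough illustration of the account setup, the helper below renders `CREATE USER` and `GRANT` statements for a migration account. The account name `dtsmigrate` and this minimal grant set are assumptions for the sketch only; the authoritative privilege list, especially the fine-grained permissions needed for incremental migration, is in Prepare a database account.

```python
# Render illustrative CREATE USER / GRANT statements for a DTS migration
# account. The grants shown (session plus per-table SELECT) are a sketch,
# not the complete privilege set DTS requires.

def account_ddl(user: str, password: str, tables: list[str]) -> list[str]:
    stmts = [
        f"CREATE USER {user} IDENTIFIED BY {password}",
        f"GRANT CREATE SESSION TO {user}",
    ]
    # Read access to each table being migrated (illustrative; see
    # "Prepare a database account" for the authoritative grants).
    stmts += [f"GRANT SELECT ON {t} TO {user}" for t in tables]
    return stmts

for stmt in account_ddl("dtsmigrate", "********", ["SALES.ORDERS", "SALES.ITEMS"]):
    print(stmt + ";")
```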
Limitations
DTS does not migrate foreign keys. CASCADE and DELETE behaviors defined on the source are not applied in the destination.
Network and connectivity
- The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration throughput.
- For Oracle Real Application Clusters (RAC) connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN): when you configure the source database, specify a single virtual IP address (VIP) instead of a Single Client Access Name (SCAN) IP. After you specify the VIP, node failover is no longer supported for that Oracle RAC database.
Data types
- If a source Oracle field contains an empty `VARCHAR2` string (treated as `NULL` in Oracle) and the corresponding destination field has a `NOT NULL` constraint, the migration task fails.
- Do not update `LONGTEXT` fields during migration. The task fails if a `LONGTEXT` update is detected.
Object requirements
- Tables must have a `PRIMARY KEY` or `UNIQUE` constraint, with all fields unique. Without one, duplicate records may appear in the destination.
- For Oracle 12c and later, table names cannot exceed 30 bytes.
- If you select individual tables and need to rename tables or columns, a single task supports at most 1,000 tables. For more than 1,000 tables, split the work across multiple tasks or migrate the entire database in one task.
Log retention for incremental migration
Redo logging and archive logging must be enabled.
| Migration mode | Minimum log retention |
|---|---|
| Incremental migration only | More than 24 hours |
| Full migration + incremental migration | At least 7 days (can reduce to 24 hours after full migration completes) |
If DTS cannot access the required logs, the task fails. In extreme cases, data inconsistency or loss may occur. DTS service-level agreements do not cover failures caused by insufficient log retention.
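The retention rule in the table reduces to simple arithmetic, sketched here for use in an operations checklist (the function and its name are illustrative, not part of any DTS API):

```python
# Minimum archived-log retention in hours, per the table above:
# 24 hours for incremental-only migration, 7 days when a full migration
# runs first, dropping back to 24 hours once the full phase completes.

def min_retention_hours(includes_full: bool, full_done: bool = False) -> int:
    if includes_full and not full_done:
        return 7 * 24
    return 24

print(min_retention_hours(includes_full=False))                 # incremental only
print(min_retention_hours(includes_full=True))                  # during full migration
print(min_retention_hours(includes_full=True, full_done=True))  # after full completes
```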
Operational restrictions during migration
| Phase | Restriction |
|---|---|
| Schema migration and full data migration | Do not perform DDL operations that change database or table schemas. |
| Full migration only | Do not write to the source database. To guarantee consistency, run schema migration, full data migration, and incremental data migration together. |
Other considerations
- Schedule migrations during off-peak hours. Full data migration increases the read and write load on both the source and destination servers.
- After full data migration, the destination tablespace is larger than the source because concurrent `INSERT` operations cause fragmentation.
- DTS retries failed migration tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks, or revoke DTS write permissions by using `REVOKE`. Otherwise, a resumed failed task can overwrite destination data with source data.
- If the destination Kafka cluster is upgraded or downgraded during migration, restart the cluster.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration + Full data migration | Free | Charged only when data leaves Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Choose a migration type
Select migration types based on your goal. For live systems, use all three types together.
| Migration type | What it does | When to use it |
|---|---|---|
| Schema migration | Migrates object schemas from Oracle to Kafka. Triggers are not supported — delete source triggers before migrating to prevent data inconsistency. See Configure a data synchronization task for a source database that contains a trigger. | Always include as the first step. |
| Full data migration | Migrates all historical data from Oracle to Kafka. No DDL on migrated objects during this phase. | Use alone only for one-time historical data loads with no live traffic. |
| Incremental data migration | Reads Oracle redo log files and continuously streams changes to Kafka. Keeps the destination in sync during and after migration. | Combine with full migration for live systems. |
Recommended: Select Schema Migration, Full Data Migration, and Incremental Data Migration together. This is the standard approach for ensuring service continuity when you migrate data from a self-managed Oracle database.
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE; CREATE VIEW, ALTER VIEW, DROP VIEW; CREATE PROCEDURE, ALTER PROCEDURE, DROP PROCEDURE; CREATE FUNCTION, DROP FUNCTION, CREATE TRIGGER, DROP TRIGGER; CREATE INDEX, DROP INDEX |
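A pre-flight script or downstream consumer can check captured statements against this list. A minimal sketch follows; the classification is deliberately naive (it keys on the leading keywords only), so treat it as an illustration rather than a full SQL parser.

```python
# Supported operations for incremental migration, taken from the table
# above. is_supported() matches on the first one or two keywords only.

SUPPORTED = {
    "INSERT", "UPDATE", "DELETE",
    "CREATE TABLE", "ALTER TABLE", "DROP TABLE", "RENAME TABLE",
    "TRUNCATE TABLE",
    "CREATE VIEW", "ALTER VIEW", "DROP VIEW",
    "CREATE PROCEDURE", "ALTER PROCEDURE", "DROP PROCEDURE",
    "CREATE FUNCTION", "DROP FUNCTION",
    "CREATE TRIGGER", "DROP TRIGGER",
    "CREATE INDEX", "DROP INDEX",
}

def is_supported(sql: str) -> bool:
    words = sql.strip().upper().split()
    if not words:
        return False
    return " ".join(words[:2]) in SUPPORTED or words[0] in SUPPORTED

print(is_supported("insert into t values (1)"))   # True
print(is_supported("MERGE INTO t USING s ON x"))  # False: MERGE is not listed
```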
Configure and start the migration task
Step 1: Open the Data Migration Tasks page
- Log in to the Data Management (DMS) console.
- In the top navigation bar, click DTS.
- In the left-side navigation pane, choose DTS (DTS) > Data Migration.
  The exact navigation path depends on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page.
- From the drop-down list next to Data Migration Tasks, select the region where the migration instance will reside.
  In the new DTS console, select the region in the upper-left corner instead.
Step 2: Configure source and destination databases
Click Create Task. On the Create Task page, configure the following parameters.
After configuring source and destination databases, read the limits displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or produce inconsistent data.
Source database
| Parameter | Description |
|---|---|
| Task Name | A name for the task. DTS assigns a name automatically — replace it with a descriptive name for easy identification. The name does not need to be unique. |
| Select a DMS database instance | Select an existing instance (DTS auto-fills the parameters) or leave blank and configure manually. |
| Database Type | Select Oracle. |
| Connection Type | Select the method that matches where the source database is deployed. This example uses Self-managed Database on ECS. If using a self-managed database, set up the required network environment first. See Preparation overview. |
| Instance Region | The region where the source Oracle database resides. |
| ECS Instance ID | The ID of the Elastic Compute Service (ECS) instance hosting the Oracle database. |
| Port Number | The Oracle service port. Default: 1521. |
| Oracle Type | Select Non-RAC Instance (requires the SID parameter) or RAC or PDB Instance (requires the Service Name parameter). This example uses Non-RAC Instance. |
| Database Account | The Oracle account with the permissions described in Set up the Oracle account. |
| Database Password | The password for the database account. |
Destination database
| Parameter | Description |
|---|---|
| Select a DMS database instance | Select an existing instance (DTS auto-fills the parameters) or leave blank and configure manually. |
| Database Type | Select Kafka. |
| Connection Type | Select Express Connect, VPN Gateway, or Smart Access Gateway. ApsaraMQ for Kafka is not available as a direct access method — configure it as a self-managed Kafka database. |
| Instance Region | The region of the ApsaraMQ for Kafka instance. |
| Connected VPC | The virtual private cloud (VPC) ID of the ApsaraMQ for Kafka instance. To find it: in the ApsaraMQ for Kafka console, go to the instance's Instance Details page > Instance Information tab > Configuration Information section. |
| IP Address or Domain Name | An IP address from the Default Endpoint of the ApsaraMQ for Kafka instance. To find it: in the ApsaraMQ for Kafka console, go to Instance Details > Instance Information tab > Endpoint Information section > Default Endpoint. |
| Port Number | The Kafka service port. Default: 9092. |
| Database Account | The ApsaraMQ for Kafka account. Not required if the instance type is VPC Type. |
| Database Password | The password for the database account. Not required if the instance type is VPC Type. |
| Kafka Version | The version of the destination ApsaraMQ for Kafka instance. |
| Connection Method | Select Non-encrypted or SCRAM-SHA-256 based on your security requirements. |
| Topic | The topic that receives the migrated data. Select from the drop-down list. |
| Topic That Stores DDL Information | The topic for DDL events. If left blank, DDL events are stored in the topic specified by Topic. |
| Use Kafka Schema Registry | Whether to use Kafka Schema Registry for Avro schema management. Select Yes and enter the Registry URL if needed, or select No. |
Step 3: Test connectivity and add DTS CIDR blocks
If an IP allowlist is configured on the source database, add the DTS server CIDR blocks to it.
Adding DTS CIDR blocks to your database allowlist or ECS security group introduces potential security exposure. Before proceeding, take preventive measures: strengthen credentials, restrict exposed ports, authenticate API calls, review allowlist rules regularly, and block unauthorized CIDR ranges. Alternatively, connect DTS to the database via Express Connect, VPN Gateway, or Smart Access Gateway instead of opening public access.
Click Test Connectivity and Proceed.
Step 4: Select objects and configure migration settings
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Select all three types — Schema Migration, Full Data Migration, and Incremental Data Migration — to keep the destination in sync and maintain service continuity. If you do not select Incremental Data Migration, avoid writing to the source database during migration to prevent data inconsistency. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if the destination has tables with the same names as the source. If conflicting tables cannot be removed, use object name mapping to rename them. See Map object names. Ignore Errors and Proceed: skips the name conflict check. Records with matching primary keys are not migrated; schema mismatches may cause partial migration or task failure. Use with caution. |
| Data Format in Kafka | DTS Avro: data is parsed using the DTS Avro schema. For the schema definition, see GitHub. SharePlex JSON: data is stored in SharePlex JSON format. See the Shareplex Json section of "Data formats of a Kafka cluster." |
| Policy for shipping data to Kafka partitions | The partitioning policy for migrated data. See Specify the policy for migrating data to Kafka partitions. |
| Source Objects / Selected Objects | In Source Objects, select the tables or databases to migrate, then click the arrow icon to move them to Selected Objects. To rename a single object, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows by condition, right-click a table and specify a WHERE clause. See Set filter conditions. To limit which SQL operations are incrementally migrated for a specific object, right-click it and select the operations. |
If you rename an object using object name mapping, other objects that depend on it may fail to migrate.
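DTS's actual partition policies are described in the linked topic. As a rough illustration of the hash-by-key idea behind such policies (this is not DTS's implementation), a record key can be mapped to a partition so that all changes to the same row land in the same Kafka partition and stay ordered:

```python
import hashlib

# Illustrative hash-by-key partitioning, NOT DTS's actual policy:
# map a table name plus primary-key value to one of N partitions.
# Hashing the same key always yields the same partition, which
# preserves per-row ordering within a partition.

def partition_for(table: str, pk_value: str, num_partitions: int) -> int:
    digest = hashlib.md5(f"{table}:{pk_value}".encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Hypothetical table and key, for illustration:
p = partition_for("SALES.ORDERS", "order-1001", 12)
print(p)  # a stable value in the range 0-11
```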
Step 5: Configure advanced settings
Click Next: Advanced Settings.
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | DTS schedules tasks to a shared cluster by default. To run this task on a dedicated cluster, purchase one and select it here. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1440 minutes. Default: 720. Set to more than 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, the task fails. If multiple tasks share a source or destination database, the most recently set retry window applies to all. DTS charges for the instance during retry periods. |
| The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries after DDL or DML operation failures. Valid values: 1–1440 minutes. Default: 10. Set to more than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write throughput during full migration to reduce database server load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits throughput during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to categorize the DTS instance by environment, for example, production or development. |
| Actual Write Code | The character encoding for data written to the destination. Select based on your data requirements. |
| Configure ETL | Whether to transform data before writing it to the destination. Select Yes to enter processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. See What is ETL? |
| Monitoring and Alerting | Whether to configure alerts for task failures or high migration latency. Select Yes and specify thresholds and alert contacts. See Configure monitoring and alerting for a new DTS task. |
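The two retry parameters above describe a deadline-bounded retry loop: keep retrying until the operation succeeds or the configured window elapses, then fail. A simplified sketch of that idea (timings shortened for illustration; `connect` is a stand-in for the real connection attempt, not a DTS API):

```python
import time

# Deadline-bounded retry, mimicking "Retry Time for Failed Connections":
# retry the operation until it succeeds or the window elapses, then
# re-raise (DTS would mark the task as failed at that point).

def retry_within(op, window_s: float, pause_s: float = 0.01):
    deadline = time.monotonic() + window_s
    while True:
        try:
            return op()
        except ConnectionError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(pause_s)

# Stand-in "connection" that fails twice before succeeding.
attempts = {"n": 0}
def connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "connected"

print(retry_within(connect, window_s=1.0))
```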
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
To review the API parameters used for this configuration, hover over Next: Save Task Settings and Precheck and click Preview API Call.
DTS runs a precheck before starting the task. If any item fails:
- Click View Details next to the failed item, fix the issue, then click Precheck Again.
- If an alert item is non-critical and can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may result in data inconsistency.
Wait until the success rate reaches 100%, then click Next: Purchase Instance.
Step 7: Purchase the instance and start the task
On the Purchase Instance page, configure the instance class.
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| | Instance Class | The instance class determines migration throughput. See Specifications of data migration instances. |
Select the Data Transmission Service (Pay-as-you-go) Service Terms check box, then click Buy and Start.
Monitor progress on the Task Management page.
What's next
- Overview of data migration scenarios: check the full list of supported source and destination versions.
- Map object names: rename objects as they land in the destination.
- Set filter conditions: migrate a subset of rows using WHERE conditions.
- Specify the policy for migrating data to Kafka partitions: control how data is distributed across Kafka partitions.