Data Transmission Service (DTS) migrates data from a self-managed Oracle database to an AnalyticDB for MySQL V3.0 cluster across three phases: schema migration, full data migration, and incremental data migration. Selecting all three phases lets you migrate without interrupting service.
How it works
DTS pulls data from your Oracle source and writes it to AnalyticDB for MySQL in three sequential phases:
Schema migration — DTS converts and creates the table structures in the destination cluster. Foreign keys are not migrated.
Full data migration — DTS copies all existing data from the source to the destination. Concurrent INSERT operations may cause table fragmentation in the destination cluster.
Incremental data migration — DTS reads Oracle redo logs and archive logs to capture ongoing changes and applies them to the destination in near real time. Only DML operations (INSERT, UPDATE, DELETE) are captured.
When DTS writes to AnalyticDB for MySQL V3.0, UPDATE statements are automatically converted to REPLACE INTO. UPDATE operations on a primary key are converted to DELETE + INSERT.
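The effect of this conversion can be illustrated with a small, self-contained sketch. It uses SQLite, which also supports REPLACE INTO, as a stand-in for AnalyticDB for MySQL; the orders table and its columns are hypothetical.

```python
import sqlite3

# REPLACE INTO deletes any row with the same primary key, then inserts the new
# row, so the replacement statement must carry every column value of the row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, amount REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'pending', 9.5)")

# Source-side change: UPDATE orders SET status = 'shipped' WHERE id = 1;
# DTS writes the equivalent full-row replacement to the destination:
conn.execute("REPLACE INTO orders VALUES (1, 'shipped', 9.5)")
print(conn.execute("SELECT * FROM orders").fetchall())  # [(1, 'shipped', 9.5)]

# An UPDATE that changes the primary key (id 1 -> 2) becomes DELETE + INSERT:
conn.execute("DELETE FROM orders WHERE id = 1")
conn.execute("INSERT INTO orders VALUES (2, 'shipped', 9.5)")
print(conn.execute("SELECT * FROM orders").fetchall())  # [(2, 'shipped', 9.5)]
```

This is also why tables without a primary key or unique constraint are problematic for incremental migration: without a key, the destination has no way to identify which row a replacement targets.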
Prerequisites
Before you begin, make sure you have:
An AnalyticDB for MySQL V3.0 cluster with available storage larger than the total size of the source Oracle database. See Create a cluster.
The source Oracle database running in ARCHIVELOG mode with archive log files accessible and an appropriate retention period configured. See Managing archived redo log files.
Supplemental logging enabled on the source Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. Run the following SQL commands on the source Oracle database to enable archive logging and supplemental logging:

```sql
-- Enable archive logging (requires ARCHIVELOG mode)
-- Verify current mode:
SELECT LOG_MODE FROM V$DATABASE;

-- Enable supplemental logging at the database level
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Enable primary key supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Enable unique index supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;

-- Verify supplemental logging status
SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
FROM V$DATABASE;
-- Expected: SUPPLEMENTAL_LOG_DATA_PK = YES, SUPPLEMENTAL_LOG_DATA_UI = YES
```

For more information, see Supplemental logging and Configure an Oracle database.
Familiarity with the capabilities and limits of DTS for Oracle database migration. You can use Advanced Database & Application Migration (ADAM) to evaluate the source database and help ensure a smooth migration to the cloud. See Prepare an Oracle database and Overview.
A database account on the source Oracle database with the required permissions. See Required permissions.
A database account on the destination AnalyticDB for MySQL V3.0 cluster with read and write permissions.
For supported Oracle database versions, see Overview of data migration scenarios.
Required permissions
Create a database account for each database and grant the following permissions before configuring the migration task. Skip this step if the accounts already exist with these permissions.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| AnalyticDB for MySQL V3.0 cluster | Read and write permissions on the destination database | Read and write permissions on the destination database | Read and write permissions on the destination database |
To create an account and grant permissions:
Self-managed Oracle database: See Prepare a database account, CREATE USER, and GRANT.
AnalyticDB for MySQL V3.0 cluster: See Create a database account.
Limitations
Review the following limitations before configuring the task.
During schema migration, DTS does not migrate foreign keys from the source to the destination.
During full data migration and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade update and delete operations on the source during migration may cause data inconsistency.
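A minimal sketch of what session-level disabling of constraint checks means in practice, using SQLite's session-scoped foreign_keys pragma as a stand-in (the parent and child tables are hypothetical): with enforcement off, a cascade delete replayed from the source does not cascade in the destination session, so orphaned child rows can appear.

```python
import sqlite3

# Hypothetical tables with an ON DELETE CASCADE foreign key. With constraint
# checks disabled for the session, deleting the parent does NOT cascade to the
# child, which is how cascade operations on the source during migration can
# leave the destination inconsistent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id)
                                   ON DELETE CASCADE);
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (10, 1);
""")
conn.execute("PRAGMA foreign_keys = OFF")        # checks off for this session
conn.execute("DELETE FROM parent WHERE id = 1")  # cascade does NOT run
print(conn.execute("SELECT COUNT(*) FROM child").fetchone()[0])  # 1: orphan row
```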
Source database limitations
| Category | Limitation |
|---|---|
| Bandwidth | The source server must have sufficient outbound bandwidth; otherwise migration speed decreases. |
| Oracle RAC over Express Connect | Specify a virtual IP address (VIP) for the database. |
| Oracle RAC over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN) | Use a single VIP rather than a Single Client Access Name (SCAN) IP address. After specifying the VIP, node failover is not supported for the Oracle RAC database. |
| VARCHAR2 empty string | Oracle evaluates an empty VARCHAR2 string as null. If the destination field has a NOT NULL constraint, the migration task fails. |
| Table constraints | Tables must have PRIMARY KEY or UNIQUE constraints with all fields unique. Without these constraints, the destination may contain duplicate records. |
| Oracle 12c and later | Table names cannot exceed 30 bytes. |
| Table rename limit | If you select tables as migration objects and need to rename tables or columns: a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database. |
| Incremental migration: log retention | Redo logs and archive logs must be retained for at least 7 days. Shorter retention periods may cause the task to fail, and in exceptional cases, data inconsistency or loss may occur. If the retention period is insufficient, the DTS service level agreement (SLA) does not guarantee reliability or performance. |
| Incremental migration: source operations | Do not use Oracle Data Pump to write data to the source during incremental migration. Data loss may occur. |
| DDL during migration | Do not perform DDL operations that change database or table schemas during schema migration or full data migration. The task will fail. |
| Full data migration: source writes | Do not write data to the source during full data migration only (without incremental migration). Data inconsistency between source and destination may occur. |
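The table-constraint limitation above can be demonstrated with a short sketch (SQLite as a stand-in, hypothetical tables): without a primary key or unique constraint, a change delivered twice, for example after a task retry, produces a duplicate row, while a keyed table deduplicates it.

```python
import sqlite3

# Replayed writes are only idempotent when a key lets the destination
# deduplicate them; an unkeyed table accumulates duplicates instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE no_key   (id INTEGER, val TEXT)")              # no constraint
conn.execute("CREATE TABLE with_key (id INTEGER PRIMARY KEY, val TEXT)")  # keyed

for _ in range(2):  # the same change delivered twice
    conn.execute("INSERT INTO no_key VALUES (1, 'a')")
    conn.execute("REPLACE INTO with_key VALUES (1, 'a')")

print(conn.execute("SELECT COUNT(*) FROM no_key").fetchone()[0])    # 2 (duplicate)
print(conn.execute("SELECT COUNT(*) FROM with_key").fetchone()[0])  # 1
```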
Destination cluster limitations
| Category | Limitation |
|---|---|
| Disk usage | If disk usage on any node in the AnalyticDB for MySQL V3.0 cluster exceeds 80%, an exception occurs in the DTS task and the DTS task is delayed. Estimate required space before starting and ensure the destination cluster has sufficient storage. |
| Primary key | Specify a custom primary key in the destination database or configure Primary Key Column during table field configuration. Missing primary keys may cause migration failures. |
| Backup conflict | If the destination cluster runs a backup while the DTS task is active, the DTS task fails. |
| External tables | External tables cannot be migrated. |
| Character set | The character sets of the source and destination databases must be compatible. Incompatible character sets may cause data inconsistency or task failure. |
| Time zone | The time zones of source and destination databases must be the same. |
| Failed task recovery | DTS attempts to resume failed tasks within the last 7 days. Before switching workloads to the destination cluster, stop or release failed tasks, or run the REVOKE statement to revoke write permissions from DTS accounts. Otherwise, resumed tasks may overwrite destination data. |
| DDL execution failures | If a DDL statement fails in the destination, the DTS task continues running. View failed statements in the task logs. See View task logs. |
| Schema migration | Use the schema migration feature of DTS to avoid task failures caused by incompatible data types. |
| Performance impact | Full data migration increases load on both source and destination servers. Run migration during off-peak hours. |
| Table fragmentation | Concurrent INSERT operations during full data migration cause table fragmentation. After full data migration, the used tablespace in the destination is larger than in the source. |
| DTS technical support | If a DTS task fails, technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified. See Modify instance parameters for parameters that may be modified. |
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged when Access Method for the destination is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Supported SQL operations for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
UPDATE statements are converted to REPLACE INTO when written to AnalyticDB for MySQL V3.0. UPDATE operations on primary keys are converted to DELETE + INSERT.
Data type mappings
For Oracle-to-MySQL data type mappings, see Data type mappings between heterogeneous databases.
Create a migration task
This procedure has five steps:
Go to the Data Migration page.
Configure source and destination databases.
Configure migration objects and advanced settings.
Run the precheck and purchase the instance.
Monitor the migration task.
Step 1: Go to the Data Migration page
Use one of the following methods to open the Data Migration page.
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance resides.
DMS console
The actual operation may vary based on the mode and layout of the DMS console. See Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.
Step 2: Configure source and destination databases
Click Create Task.
Configure the task name and source database.
Warning: After you configure the source and destination databases, read the Limits displayed at the top of the page before you proceed.
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |
| Select Existing Connection | Select a registered database instance from the list, or configure the connection manually. |
| Database Type | Select Oracle. |
| Access Method | Select the access method for the source database. This example uses Self-managed Database on ECS. For other access methods, see Preparation overview. |
| Instance Region | The region where the source Oracle database resides. |
| ECS Instance ID | The ID of the Elastic Compute Service (ECS) instance that hosts the source Oracle database. |
| Port Number | The service port of the source Oracle database. Default: 1521. |
| Oracle Type | Select the database architecture: Non-RAC Instance (requires an SID) or RAC or PDB Instance (requires a Service Name). This example uses RAC or PDB Instance. |
| Database Account | The source Oracle database account with the required permissions. |
| Database Password | The password for the source database account. |

Configure the destination database.
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a registered database instance, or configure the connection manually. |
| Database Type | Select AnalyticDB for MySQL 3.0. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination AnalyticDB for MySQL V3.0 cluster resides. |
| Instance ID | The ID of the destination cluster. |
| Database Account | The database account for the destination cluster with read and write permissions. |
| Database Password | The password for the destination database account. |

Click Test Connectivity and Proceed, then click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Make sure the CIDR blocks of DTS servers are added to the security settings of both source and destination databases. See Add the CIDR blocks of DTS servers.
Step 3: Configure migration objects and advanced settings
On the Configure Objects page, set the migration parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types based on your requirements:<br>- Schema Migration + Full Data Migration: migrates existing data without incremental changes.<br>- Schema Migration + Full Data Migration + Incremental Data Migration: keeps the destination in sync during migration, enabling a smooth cutover with no downtime.<br><br>Note: If you skip Schema Migration, create the target tables manually and enable object name mapping. If you skip Incremental Data Migration, do not write data to the source during migration. |
| DDL and DML Operations to Be Synchronized | The DDL or DML operations to migrate. To select operations for a specific table, right-click the table in Selected Objects. |
| Merge Tables | No (default): migrates tables without merging. Yes: adds a __dts_data_source column to each table and merges all selected source tables into one destination table. See Enable the multi-table merging feature.<br><br>Warning: Do not perform DDL schema changes on source tables when Merge Tables is enabled. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if destination tables share names with source tables. Ignore Errors and Proceed: skips the name conflict check.<br><br>Warning: Selecting Ignore Errors and Proceed may result in data inconsistency. During full data migration, conflicting records are retained in the destination and are not updated. During incremental migration, conflicting records are overwritten. |
| Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select the objects to migrate, then click the move icon to move them to Selected Objects. Columns, tables, and databases are all supported. |
| Selected Objects | Right-click a single object to rename it. Click Edit in the upper-right corner to rename multiple objects at once. See Database, table, and column name mapping.<br><br>Note: Renaming an object may cause dependent objects to fail migration. To filter table data by condition, right-click the table and specify WHERE conditions. See Specify filter conditions. |
Click Next: Advanced Settings and configure the advanced parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. To use a dedicated cluster, purchase one separately. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | How long DTS retries a failed connection after the task starts. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set a value of more than 30 minutes. If DTS reconnects within this period, the task resumes. Otherwise, it fails.<br><br>Note: When multiple tasks share the same source or destination, the most recently configured retry time applies to all of them. DTS charges for the instance during retries. |
| Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set a value of more than 10 minutes. The value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read and write resource usage during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits resource usage during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag that identifies the DTS instance. |
| Actual Write Code | The encoding format in which data is written to the destination. |
| Configure ETL | Enables the extract, transform, and load (ETL) feature. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Configures alerts for task failures or latency that exceeds a threshold. When enabled, specify the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |

Click Next Step: Data Verification to configure the data verification task. See Configure a data verification task.
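The retry-time settings described above follow a simple budget pattern: retry the operation until it succeeds or the configured window is exhausted. The sketch below illustrates that behavior only; it is not DTS internals, and retry_within and flaky_connect are made-up names.

```python
import time

# Illustration of a retry-time budget such as "Retry Time for Failed
# Connections": the operation is retried until it succeeds or the total retry
# window runs out, at which point the failure propagates and the task fails.
def retry_within(fn, retry_minutes, interval_s=1.0, clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + retry_minutes * 60
    while True:
        try:
            return fn()
        except ConnectionError:
            if clock() >= deadline:
                raise  # retry budget exhausted: the task fails
            sleep(interval_s)

# Demo: a connection that fails twice, then succeeds, resumes within the budget.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("source unreachable")
    return "connected"

print(retry_within(flaky_connect, retry_minutes=720, sleep=lambda s: None))  # connected
```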
(Optional) Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for each table in the destination cluster.
This step is available only when Schema Migration is selected. Set Definition Status to All to view and modify all fields. The Primary Key Column supports composite primary keys (multiple columns). Specify at least one primary key column as both Distribution Key and Partition Key. See CREATE TABLE.
Step 4: Run the precheck and purchase the instance
Click Next: Save Task Settings and Precheck. DTS runs a precheck before starting the migration. Address any failures before proceeding:
For failed items, click View Details to see the cause, fix the issue, then click Precheck Again.
For alert items that cannot be ignored, click View Details, fix the issue, then rerun the precheck.
For alert items that can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may result in data inconsistency.
To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the instance class.
| Parameter | Description |
|---|---|
| Resource Group | The resource group to which the migration instance belongs. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines the migration speed. See Instance classes of data migration instances. |

Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
Step 5: Monitor the migration task
View task progress on the Data Migration page.
Full migration only (no incremental): the task stops automatically when complete. Status shows Completed.
With incremental migration: the task runs continuously and does not stop automatically. Status shows Running. Stop the task manually after verifying data consistency and before switching workloads to the destination cluster.
Verify migration results
After the task reaches Completed or before you cut over to the destination cluster, verify that data was migrated correctly.
Connect to the destination AnalyticDB for MySQL V3.0 cluster.
Compare row counts for key tables between the source and destination:
```sql
-- Run on the source Oracle database
SELECT COUNT(*) FROM <schema_name>.<table_name>;

-- Run on the destination AnalyticDB for MySQL V3.0 cluster
SELECT COUNT(*) FROM <table_name>;
```

Spot-check a sample of records to confirm data accuracy.
Review the DTS task logs for any warnings or errors. See View task logs.
For heterogeneous migrations involving data type conversion, always verify critical tables before cutting over.
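To complement the manual checks above, the row-count comparison can be scripted. The sketch below assumes you have already collected per-table counts from each side (with cx_Oracle, pymysql, or a similar driver); the table names and counts are placeholders, and diff_row_counts is a made-up helper name.

```python
# Given per-table row counts from the source and destination, report every
# table whose counts disagree, including tables present on only one side
# (the missing side is reported as None).
def diff_row_counts(source: dict, dest: dict) -> dict:
    mismatches = {}
    for table in sorted(set(source) | set(dest)):
        s, d = source.get(table), dest.get(table)
        if s != d:
            mismatches[table] = (s, d)
    return mismatches

# Placeholder counts; in practice, populate these from SELECT COUNT(*) queries.
print(diff_row_counts({"orders": 100, "users": 42},
                      {"orders": 100, "users": 41}))
# {'users': (42, 41)}
```

An empty result means every compared table has matching counts; it does not prove the row contents match, so spot checks on critical tables are still worthwhile.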