Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to an ApsaraDB RDS for MySQL instance. DTS supports schema migration, full data migration, and incremental data migration. To maintain service continuity during migration, select all three migration types.
What DTS migrates
| Category | Supported | Notes |
|---|---|---|
| Schema migration | Yes | Converts clustered tables and index-organized tables (IOTs) to common tables. Does not support nested tables, function-based indexes, domain indexes, bitmap indexes, or reverse indexes. Does not support views, synonyms, stored procedures, functions, packages, or user-defined types. |
| Full data migration | Yes | Migrates all existing data from the source Oracle database. |
| Incremental data migration | Yes | Reads redo log files from the source Oracle database after full data migration completes. Supports INSERT, UPDATE, and DELETE (DML) and a subset of DDL operations. |
| External tables | No | — |
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when Access Method is set to Public IP Address. For details, see Billing overview. |
| Incremental data migration | Charged. For details, see Billing overview. | — |
SQL operations supported for incremental migration
| Type | Operations |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (excluding tables with nested functions), ALTER TABLE, ADD COLUMN, DROP COLUMN, RENAME COLUMN, ADD INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, CREATE INDEX (within the current database account only) |
Prerequisites
Before you begin, make sure you have:
A self-managed Oracle database running in ARCHIVELOG mode, with archived log files accessible and a suitable retention period configured. See Managing Archived Redo Log Files.
Supplemental logging enabled on the source Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. See Supplemental Logging.
An ApsaraDB RDS for MySQL instance created. See Create an ApsaraDB RDS for MySQL instance.
For supported source and destination database versions, see Overview of data migration scenarios.
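You can verify these prerequisites on the source Oracle database with standard dictionary queries. The following is a sketch (run as a privileged user such as SYS):

```sql
-- Verify the database runs in ARCHIVELOG mode (expected result: ARCHIVELOG)
SELECT log_mode FROM v$database;

-- Verify supplemental logging for primary key and unique columns (expected: YES, YES)
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

-- If supplemental logging is not yet enabled, turn it on:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
```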
Limitations
Source database requirements
The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), specify a virtual IP address (VIP) rather than a Single Client Access Name (SCAN) IP address. After specifying the VIP, node failover is not supported for the Oracle RAC database.
If a VARCHAR2 field in the source Oracle database contains an empty string (evaluated as null in Oracle) and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.
Objects to be migrated must meet the following requirements:
Tables must have PRIMARY KEY or UNIQUE constraints with all fields unique. Otherwise, the destination database may contain duplicate records.
For Oracle 12c and later: table names cannot exceed 30 bytes.
If you select individual tables and need to rename them in the destination database, a single migration task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database instead.
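To find tables that lack a PRIMARY KEY or UNIQUE constraint before you start the migration, you can run a dictionary query such as the following sketch (YOUR_SCHEMA is a placeholder for the schema to be migrated):

```sql
SELECT t.table_name
FROM all_tables t
WHERE t.owner = 'YOUR_SCHEMA'
  AND NOT EXISTS (
    SELECT 1
    FROM all_constraints c
    WHERE c.owner = t.owner
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('P', 'U')  -- P = primary key, U = unique
  );
```

Tables returned by this query are candidates for duplicate records in the destination and may need a constraint added before migration.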
For incremental data migration:
Redo logging and archiving must be enabled.
If running incremental migration only: retain redo logs and archive logs for more than 24 hours.
If running both full and incremental migration: retain redo logs and archive logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.
Warning: If the retention period is too short, DTS may fail to obtain the required logs, causing task failure or, in extreme cases, data inconsistency or loss. DTS service-level agreements (SLAs) do not guarantee reliability if log retention requirements are not met.
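To confirm how far back archived logs are still available, you can query the oldest archived redo log on disk. This is a sketch; it assumes the logs have not been removed outside of Oracle:

```sql
-- Oldest archived redo log still available on disk
SELECT MIN(first_time) AS oldest_available_log
FROM v$archived_log
WHERE deleted = 'NO';
```

If RMAN manages the archived logs, the retention window can be extended with an RMAN command such as CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;.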
During schema migration and full data migration: do not perform DDL operations on the source database.
During full data migration only: do not write data to the source database. To maintain data consistency, select schema migration, full data migration, and incremental data migration together.
During data migration: do not update LONGTEXT fields.
Other limitations
During incremental data migration: do not use Oracle Data Pump to write data to the source database, as this may cause data loss.
Schedule migration during off-peak hours. Full data migration uses read and write resources on both source and destination databases, increasing server load.
After full data migration completes, the used tablespace in the destination database is larger than in the source database, due to table fragmentation from concurrent INSERT operations.
DTS automatically retries a failed migration task for up to 7 days. Before switching workloads to the destination database, stop or release any failed tasks. Alternatively, run REVOKE to remove DTS write permissions from the destination database. Otherwise, the resumed task may overwrite destination data with source data.
If DDL statements fail in the destination database, the DTS task continues running. Review failed DDL statements in the task logs. See View task logs.
The character sets of the source and destination databases must be compatible.
Use the DTS schema migration feature to avoid task failures caused by incompatible data types.
The source and destination databases must use the same time zone.
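A quick way to compare these settings on both sides is a pair of dictionary queries. This is a sketch; run the first two statements on the source Oracle database and the last on the destination MySQL database:

```sql
-- On the source Oracle database: character set and time zone
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
SELECT dbtimezone FROM dual;

-- On the destination MySQL database: character set and time zone
SELECT @@character_set_server, @@time_zone;
```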
If column names in the destination MySQL table differ only in capitalization, migration results may not match expectations because MySQL column names are case-insensitive.
After migration completes (task Status changes to Completed), run ANALYZE TABLE <table_name> on the destination to verify that data was written correctly. This is especially important after a high availability (HA) switchover, where data may reside only in memory.
If a DTS task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified.
Special cases for ApsaraDB RDS for MySQL destinations
Table name case sensitivity: ApsaraDB RDS for MySQL table names are case-insensitive. Oracle uppercase letters are converted to lowercase when a table is created. If the source Oracle database contains table names that differ only in capitalization, they are treated as duplicates, and a "The object already exists" error may appear during schema migration. To prevent conflicts, use the object name mapping feature to standardize table name casing. See Object name mapping.
Database creation: DTS automatically creates the destination database in the RDS instance. If the source database name is invalid, create the database manually before configuring the migration task. See Manage databases.
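If you create the destination database manually, a minimal MySQL statement looks like the following sketch (dbname is a placeholder; choose a character set compatible with the source):

```sql
CREATE DATABASE dbname CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```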
Data type mappings
For Oracle-to-MySQL data type mappings, see Data type mappings between heterogeneous databases.
The source and destination databases are heterogeneous. Schema migration may result in schema differences between them. Evaluate the impact of data type conversion on your workloads before migration.
Preparations
Create a database account on the source Oracle database and grant it the required permissions. If an account with the required permissions already exists, skip this step.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| ApsaraDB RDS for MySQL instance | Write permissions on the destination database | Write permissions on the destination database | Write permissions on the destination database |
To create the database account and grant permissions:
Self-managed Oracle database: See Prepare a database account, CREATE USER, and GRANT.
ApsaraDB RDS for MySQL instance: See Create an account and Modify account permissions.
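As an illustration only (the exact fine-grained grants required for incremental migration are listed in Prepare a database account), creating a migration account on the source Oracle database follows this pattern; dts_user and the password are placeholder values:

```sql
CREATE USER dts_user IDENTIFIED BY "YourStrongPassword";
GRANT CREATE SESSION TO dts_user;

-- Schema owner permissions suffice for schema and full data migration.
-- Incremental migration additionally requires access to redo log views;
-- see Prepare a database account for the full fine-grained list.
GRANT SELECT ANY TABLE TO dts_user;
GRANT SELECT ANY TRANSACTION TO dts_user;
```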
For incremental data migration, also enable archiving and supplemental logging on the source Oracle database. See Configure an Oracle database.
Configure the migration task
Step 1: Go to the Data Migration page
Use one of the following consoles:
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance resides.
DMS console
The actual navigation may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, go to Data + AI > DTS (DTS) > Data Migration.
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
Step 2: Create a task
Click Create Task to open the task configuration page.
Step 3: Configure source and destination databases
After you configure the source and destination databases, read the Limits displayed at the top of the page. Skipping this step may cause task failures or data inconsistency.
Source database
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS generates a default name. Specify a descriptive name for easy identification. The name does not need to be unique. |
| Database Type | Select Oracle. |
| Access Method | The access method for the source database. This example uses Public IP Address. For other access methods, set up the required environment first. See Preparation overview. |
| Instance Region | The region where the source Oracle database resides. |
| Hostname or IP address | The endpoint for connecting to the source Oracle database. |
| Port Number | The service port of the source Oracle database. Default: 1521. The port must be accessible over the Internet when using Public IP Address. |
| Oracle Type | The architecture of the source Oracle database. Select Non-RAC Instance (requires SID) or RAC or PDB Instance (requires Service Name). This example uses Non-RAC Instance. |
| Database Account | The account for the source Oracle database. See Preparations for required permissions. |
| Database Password | The password for the database account. |
Destination database
| Parameter | Description |
|---|---|
| Database Type | Select MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination ApsaraDB RDS for MySQL instance resides. |
| RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance. |
| Database Account | The account for the destination RDS instance. See Preparations for required permissions. |
| Database Password | The password for the database account. |
| Connection Method | Select Non-encrypted or SSL-encrypted based on your requirements. For SSL encryption, enable it on the RDS instance before configuring the DTS task. See Use a cloud certificate to enable SSL encryption. |
Step 4: Test connectivity
At the bottom of the page, click Test Connectivity and Proceed. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.
Step 5: Configure objects to migrate
On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types: Schema Migration and Full Data Migration for a one-time migration, or add Incremental Data Migration for continuous synchronization during migration. Note If you do not select Schema Migration, create the target database and tables manually and enable object name mapping. If you do not select Incremental Data Migration, do not write data to the source database during migration. |
| Processing Mode for Existing Destination Tables | Precheck and Report Errors: checks for tables with the same name in the source and destination. The task cannot start if duplicates exist. To resolve conflicts without deleting or renaming destination tables, use object name mapping. See Map object names. Ignore Errors and Proceed: skips the duplicate-name precheck. During full migration, existing records in the destination are retained (not overwritten). During incremental migration, existing records are overwritten. If the schemas differ, only partial columns may be migrated or the task may fail. Use this option with caution. |
| Source Objects | Select objects from Source Objects and click the arrow icon to add them to Selected Objects. |
| Selected Objects | To rename a single object, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects, click Batch Edit in the upper-right corner. See Map multiple object names at a time. Note Renaming an object may cause other objects that depend on it to fail migration. To filter rows, right-click a table and specify WHERE conditions. See Specify filter conditions. To select specific SQL operations for incremental migration on a table, right-click it in Selected Objects and choose the operations. |
Click Next: Advanced Settings.
Step 6: Configure advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For higher task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid range: 10–1,440 minutes. Default: 720. Set to at least 30. DTS resumes the task if reconnected within this window; otherwise, the task fails. Note If multiple tasks share the same source or destination database, the latest-set retry time applies to all. DTS charges for the instance during retry periods. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid range: 1–1,440 minutes. Default: 10. Set to at least 10. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write load during full migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | A tag to identify the DTS instance. Optional. |
| Actual Write Code | The encoding format for writing data to the destination database. |
| Configure ETL | Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements, or No to skip. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Whether to configure alerting. Select Yes to set alert thresholds and notification contacts. See Configure monitoring and alerting. |
Step 7: Configure data verification (optional)
Click Next Step: Data Verification to set up a data verification task. See Configure a data verification task.
Step 8: Save settings and run the precheck
To preview API parameters before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before starting the migration task. The task starts only after passing the precheck.
If the precheck fails, click View Details next to each failed item, fix the issues, and run the precheck again.
If the precheck triggers alerts: for alerts that cannot be ignored, click View Details and fix the issues. For ignorable alerts, click Confirm Alert Details, then Ignore, then OK, and click Precheck Again.
Step 9: Purchase the instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following:
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| New Instance Class | Instance Class | The instance class determines migration speed. See Instance classes of data migration instances. |
Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
Click Buy and Start, then click OK in the confirmation dialog box.
Monitor the migration task
After the task starts, go to the Data Migration page to monitor progress.
Schema migration and full data migration: the task stops automatically when complete. Status shows Completed.
Incremental data migration: the task runs continuously and does not stop automatically. Status shows Running.
Verify migration results
After Status shows Completed, run the following command on the destination database to verify that data was written correctly:
```sql
ANALYZE TABLE <table_name>;
```
This step is especially important after a high availability (HA) switchover on the destination RDS instance, where data may temporarily reside only in memory.
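Beyond ANALYZE TABLE, a simple consistency spot check is to compare row counts for a migrated table on both sides. This is a sketch; orders is a placeholder table name:

```sql
-- On the source Oracle database
SELECT COUNT(*) FROM orders;

-- On the destination MySQL database
SELECT COUNT(*) FROM orders;

-- The two counts should match once incremental migration has caught up.
```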