
ApsaraDB RDS: Migrate data from a self-managed Oracle database to an ApsaraDB RDS for MySQL instance

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to an ApsaraDB RDS for MySQL instance. DTS supports schema migration, full data migration, and incremental data migration. To maintain service continuity during migration, select all three migration types.

What DTS migrates

  • Schema migration: Supported. Converts clustered tables and index-organized tables (IOTs) to common tables. Does not support nested tables, function-based indexes, domain indexes, bitmap indexes, or reverse indexes. Does not support views, synonyms, stored procedures, functions, packages, or user-defined types.

  • Full data migration: Supported. Migrates all existing data from the source Oracle database.

  • Incremental data migration: Supported. Reads redo log files from the source Oracle database after full data migration completes. Supports INSERT, UPDATE, and DELETE (DML) operations and a subset of DDL operations.

  • External tables: Not supported.

Billing

  • Schema migration and full data migration: The instance configuration fee is free. Internet traffic is charged only when Access Method is set to Public IP Address. For details, see Billing overview.

  • Incremental data migration: The instance configuration fee is charged. Internet traffic is charged only when Access Method is set to Public IP Address. For details, see Billing overview.

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE (excluding tables with nested functions), ALTER TABLE, ADD COLUMN, DROP COLUMN, RENAME COLUMN, ADD INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, and CREATE INDEX (within the current database account only)

Prerequisites

Before you begin, make sure you have:

Limitations

Note During schema migration, DTS migrates foreign keys from the source database to the destination. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source database during migration, data inconsistency may occur.

Source database requirements

  • The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), specify a virtual IP address (VIP) rather than a Single Client Access Name (SCAN) IP address. After specifying the VIP, node failover is not supported for the Oracle RAC database.

  • If a VARCHAR2 field in the source Oracle database contains an empty string (evaluated as null in Oracle) and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Objects to be migrated must meet the following requirements:

    • Tables must have PRIMARY KEY or UNIQUE constraints with all fields unique. Otherwise, the destination database may contain duplicate records.

    • For Oracle 12c and later: table names cannot exceed 30 bytes.

    • If you select individual tables and need to rename them in the destination database, a single migration task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database instead.

  • For incremental data migration:

    • Redo logging and archiving must be enabled.

    • If running incremental migration only: retain redo logs and archive logs for more than 24 hours.

    • If running both full and incremental migration: retain redo logs and archive logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.

    Warning

    If the retention period is too short, DTS may fail to obtain the required logs, causing task failure or, in extreme cases, data inconsistency or loss. DTS service-level agreements (SLAs) do not guarantee reliability if log retention requirements are not met.

  • During schema migration and full data migration: do not perform DDL operations on the source database.

  • During full data migration only: do not write data to the source database. To maintain data consistency, select schema migration, full data migration, and incremental data migration together.

  • During data migration: do not update LONGTEXT fields.
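The VARCHAR2 empty-string pitfall above can be screened for before migration. The following sketch is illustrative only (the helper name is hypothetical, and rows are shown as plain dictionaries rather than an Oracle result set); it flags values that Oracle stores as NULL but that a NOT NULL destination column would reject:

```python
def find_null_conflicts(rows, not_null_columns):
    """Flag values that Oracle stores as NULL ('' or None) but that the
    destination MySQL table rejects because of a NOT NULL constraint."""
    conflicts = []
    for i, row in enumerate(rows):
        for col in not_null_columns:
            value = row.get(col)
            # Oracle evaluates the empty string as NULL, so both cases conflict.
            if value is None or value == "":
                conflicts.append((i, col))
    return conflicts

rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": ""},    # empty string: NULL in Oracle
    {"id": 3, "name": None},
]
print(find_null_conflicts(rows, {"name"}))  # [(1, 'name'), (2, 'name')]
```

Rows flagged this way can be fixed in the source database, or the NOT NULL constraint can be relaxed on the destination, before the task starts.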

Other limitations

  • During incremental data migration: do not use Oracle Data Pump to write data to the source database, as this may cause data loss.

  • Schedule migration during off-peak hours. Full data migration uses read and write resources on both source and destination databases, increasing server load.

  • After full data migration completes, the used tablespace in the destination database is larger than in the source database, due to table fragmentation from concurrent INSERT operations.

  • DTS automatically retries a failed migration task for up to 7 days. Before switching workloads to the destination database, stop or release any failed tasks. Alternatively, run REVOKE to remove DTS write permissions from the destination database — otherwise, the resumed task may overwrite destination data with source data.

  • If DDL statements fail in the destination database, the DTS task continues running. Review failed DDL statements in the task logs. See View task logs.

  • The character sets of the source and destination databases must be compatible.

  • Use the DTS schema migration feature to avoid task failures caused by incompatible data types.

  • The source and destination databases must use the same time zone.

  • If column names in the destination MySQL table differ only in capitalization, migration results may not match expectations because MySQL column names are case-insensitive.

  • After migration completes (task Status changes to Completed), run ANALYZE TABLE <table_name> on the destination to verify data was written correctly. This is especially important after a high availability (HA) switchover, where data may reside only in memory.

  • If a DTS task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified.

Special cases for ApsaraDB RDS for MySQL destinations

  • Table name case sensitivity: ApsaraDB RDS for MySQL table names are case-insensitive. Oracle uppercase letters are converted to lowercase when a table is created. If the source Oracle database contains table names that differ only in capitalization, they are treated as duplicates, and a "The object already exists" error may appear during schema migration. To prevent conflicts, use the object name mapping feature to standardize table name casing. See Object name mapping.

  • Database creation: DTS automatically creates the destination database in the RDS instance. If the source database name is invalid, create the database manually before configuring the migration task. See Manage databases.
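The table-name casing conflict described above can be detected in advance. The sketch below is an assumption for illustration (in practice the names would come from a catalog query against the source database); it groups source table names that collapse to the same lowercase name, which MySQL treats as duplicates:

```python
from collections import defaultdict

def find_case_collisions(table_names):
    """Group source Oracle table names that differ only in capitalization.
    MySQL folds table names to lowercase, so each group of two or more
    names collides during schema migration."""
    groups = defaultdict(list)
    for name in table_names:
        groups[name.lower()].append(name)
    return {key: names for key, names in groups.items() if len(names) > 1}

print(find_case_collisions(["ORDERS", "Orders", "CUSTOMERS"]))
# {'orders': ['ORDERS', 'Orders']}
```

Any group returned here should be resolved with the object name mapping feature before the task starts.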

Data type mappings

For Oracle-to-MySQL data type mappings, see Data type mappings between heterogeneous databases.

Important

The source and destination databases are heterogeneous. Schema migration may result in schema differences between them. Evaluate the impact of data type conversion on your workloads before migration.
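As a rough illustration of such conversions, the sketch below maps a few common Oracle types to approximate MySQL equivalents. The mapping dictionary here is an assumption for demonstration only; the linked Data type mappings topic is authoritative:

```python
# Illustrative subset only; consult "Data type mappings between
# heterogeneous databases" for the authoritative table.
ORACLE_TO_MYSQL = {
    "VARCHAR2": "VARCHAR",
    "CHAR": "CHAR",
    "NUMBER": "DECIMAL",
    "DATE": "DATETIME",
    "CLOB": "LONGTEXT",
    "BLOB": "LONGBLOB",
}

def map_type(oracle_type):
    """Return the approximate MySQL type, or None if unmapped
    (flag such columns for manual review)."""
    return ORACLE_TO_MYSQL.get(oracle_type.upper())

print(map_type("varchar2"))  # VARCHAR
print(map_type("XMLTYPE"))   # None -> needs manual review
```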

Preparations

Create a database account on the source Oracle database and grant it the required permissions. If an account with the required permissions already exists, skip this step.

  • Self-managed Oracle database: schema owner permissions for schema migration and full data migration; fine-grained permissions for incremental data migration.

  • ApsaraDB RDS for MySQL instance: write permissions on the destination database.

To create the database account and grant permissions:

Important

For incremental data migration, also enable archiving and supplemental logging on the source Oracle database. See Configure an Oracle database.

Configure the migration task

Step 1: Go to the Data Migration page

Use one of the following consoles:

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance resides.

DMS console

Note

The actual navigation may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, go to Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

Step 2: Create a task

Click Create Task to open the task configuration page.

Step 3: Configure source and destination databases

Warning

Read the Limits displayed at the top of the page after configuring the source and destination databases. Skipping this step may cause task failures or data inconsistency.

Source database

  • Task Name: The name of the DTS task. DTS generates a default name. Specify a descriptive name for easy identification. The name does not need to be unique.

  • Database Type: Select Oracle.

  • Access Method: The access method for the source database. This example uses Public IP Address. For other access methods, set up the required environment first. See Preparation overview.

  • Instance Region: The region where the source Oracle database resides.

  • Hostname or IP Address: The endpoint for connecting to the source Oracle database.

  • Port Number: The service port of the source Oracle database. Default: 1521. The port must be accessible over the Internet when you use Public IP Address.

  • Oracle Type: The architecture of the source Oracle database. Select Non-RAC Instance (requires an SID) or RAC or PDB Instance (requires a Service Name). This example uses Non-RAC Instance.

  • Database Account: The account of the source Oracle database. See Preparations for the required permissions.

  • Database Password: The password of the database account.

Destination database

  • Database Type: Select MySQL.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination ApsaraDB RDS for MySQL instance resides.

  • RDS Instance ID: The ID of the destination ApsaraDB RDS for MySQL instance.

  • Database Account: The account of the destination RDS instance. See Preparations for the required permissions.

  • Database Password: The password of the database account.

  • Connection Method: Select Non-encrypted or SSL-encrypted based on your requirements. For SSL encryption, enable it on the RDS instance before you configure the DTS task. See Use a cloud certificate to enable SSL encryption.

Step 4: Test connectivity

At the bottom of the page, click Test Connectivity and Proceed. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.

Note Make sure the DTS server CIDR blocks are added to the security settings of the source and destination databases. See Add the CIDR blocks of DTS servers.
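Independently of the console connectivity test, a quick TCP-level reachability check can help separate network problems from credential problems. The sketch below is not part of DTS and is only an assumption for troubleshooting; it does not verify credentials or whether the DTS CIDR blocks are whitelisted:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a plain TCP connection to (host, port) succeeds
    within the timeout; False on refusal, timeout, or DNS failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoint; substitute your database host and port
# (for example the Oracle listener on port 1521).
print(can_reach("127.0.0.1", 1521))
```

If this returns False for the source endpoint while the host is up, check firewall rules and the Internet accessibility of the listener port before retrying the console test.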

Step 5: Configure objects to migrate

On the Configure Objects page, set the following parameters:

  • Migration Types: Select the migration types. Select Schema Migration and Full Data Migration for a one-time migration, or also select Incremental Data Migration for continuous synchronization during migration.

    Note If you do not select Schema Migration, create the destination database and tables manually and enable object name mapping. If you do not select Incremental Data Migration, do not write data to the source database during migration.

  • Processing Mode for Existing Destination Tables:

    • Precheck and Report Errors: checks for tables with the same names in the source and destination databases. The task cannot start if duplicates exist. To resolve conflicts without deleting or renaming destination tables, use the object name mapping feature. See Map object names.

    • Ignore Errors and Proceed: skips the duplicate-name precheck. During full data migration, existing records in the destination database are retained (not overwritten). During incremental data migration, existing records are overwritten. If the schemas differ, only some columns may be migrated or the task may fail. Use this option with caution.

  • Source Objects: Select objects from the Source Objects section and click the arrow icon to add them to the Selected Objects section.

  • Selected Objects: To rename a single object, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects, click Batch Edit in the upper-right corner. See Map multiple object names at a time.

    Note

    • Renaming an object may cause other objects that depend on it to fail migration.

    • To filter rows, right-click a table and specify WHERE conditions. See Specify filter conditions.

    • To select the SQL operations to incrementally migrate for a table, right-click the table in Selected Objects and choose the operations.

Click Next: Advanced Settings.

Step 6: Configure advanced settings

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to the shared cluster. For higher task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: How long DTS retries after a connection failure. Valid range: 10 to 1,440 minutes. Default: 720. Set this parameter to at least 30 minutes. DTS resumes the task if the connection is restored within this window; otherwise, the task fails.

    Note If multiple tasks share the same source or destination database, the most recently set retry time applies to all of them. DTS charges for the instance during retry periods.

  • Retry Time for Other Issues: How long DTS retries after a DDL or DML operation fails. Valid range: 1 to 1,440 minutes. Default: 10. Set this parameter to at least 10 minutes. The value must be less than the value of Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits the read and write load during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.

  • Enable Throttling for Incremental Data Migration: Limits the load during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.

  • Environment Tag: An optional tag that identifies the DTS instance.

  • Actual Write Code: The encoding format in which data is written to the destination database.

  • Configure ETL: Specifies whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements, or No to skip this feature. See Configure ETL in a data migration or data synchronization task.

  • Monitoring and Alerting: Specifies whether to configure alerting. Select Yes to set alert thresholds and notification contacts. See Configure monitoring and alerting.
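The constraints on the two retry parameters can be expressed as a small validation helper. This is a hypothetical sketch that mirrors the documented ranges, not a DTS API:

```python
def validate_retry_settings(conn_retry_min, other_retry_min):
    """Check the two DTS retry windows against the documented rules:
    connection retry must be 10-1440 minutes, other-issue retry must be
    1-1440 minutes and strictly less than the connection retry window."""
    errors = []
    if not 10 <= conn_retry_min <= 1440:
        errors.append("Retry Time for Failed Connections must be 10-1,440 minutes")
    if not 1 <= other_retry_min <= 1440:
        errors.append("Retry Time for Other Issues must be 1-1,440 minutes")
    if other_retry_min >= conn_retry_min:
        errors.append("Retry Time for Other Issues must be less than "
                      "Retry Time for Failed Connections")
    return errors

print(validate_retry_settings(720, 10))  # [] -> defaults are valid
print(validate_retry_settings(20, 30))   # violates the "less than" rule
```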

Step 7: Configure data verification (optional)

Click Next Step: Data Verification to set up a data verification task. See Configure a data verification task.

Step 8: Save settings and run the precheck

To preview API parameters before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

Click Next: Save Task Settings and Precheck.

Note
  • DTS runs a precheck before starting the migration task. The task starts only after passing the precheck.

  • If the precheck fails, click View Details next to each failed item, fix the issues, and run the precheck again.

  • If the precheck triggers alerts: for alerts that cannot be ignored, click View Details and fix the issues. For ignorable alerts, click Confirm Alert Details, then Ignore, then OK, and click Precheck Again.

Step 9: Purchase the instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following:

    In the New Instance Class section, configure the following parameters:

    • Resource Group: The resource group to which the migration instance belongs. Default: the default resource group. See What is Resource Management?

    • Instance Class: The instance class determines the migration speed. See Instance classes of data migration instances.
  3. Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  4. Click Buy and Start, then click OK in the confirmation dialog box.

Monitor the migration task

After the task starts, go to the Data Migration page to monitor progress.

  • Schema migration and full data migration: the task stops automatically when complete. Status shows Completed.

  • Incremental data migration: the task runs continuously and does not stop automatically. Status shows Running.

Verify migration results

After Status shows Completed, run the following command on the destination database to verify that data was written correctly:

ANALYZE TABLE <table_name>;

This step is especially important after a high availability (HA) switchover on the destination RDS instance, where data may temporarily reside only in memory.

What's next