
Data Transmission Service: Migrate a self-managed Oracle database to AnalyticDB for MySQL V3.0

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) migrates data from a self-managed Oracle database to an AnalyticDB for MySQL V3.0 cluster in three phases: schema migration, full data migration, and incremental data migration. Selecting all three phases lets you migrate without interrupting service on the source database.

How it works

DTS pulls data from your Oracle source and writes it to AnalyticDB for MySQL in three sequential phases:

  1. Schema migration — DTS converts and creates the table structures in the destination cluster. Foreign keys are not migrated.

  2. Full data migration — DTS copies all existing data from the source to the destination. Concurrent INSERT operations may cause table fragmentation in the destination cluster.

  3. Incremental data migration — DTS reads Oracle redo logs and archive logs to capture ongoing changes and applies them to the destination in near real time. Only DML operations (INSERT, UPDATE, DELETE) are captured.

When DTS writes to AnalyticDB for MySQL V3.0, UPDATE statements are automatically converted to REPLACE INTO. UPDATE operations on a primary key are converted to DELETE + INSERT.
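As an illustration (the orders table and its columns are hypothetical), the conversion behaves roughly as follows:

```sql
-- Source (Oracle): UPDATE on a non-key column
UPDATE orders SET status = 'shipped' WHERE order_id = 42;
-- Applied to AnalyticDB for MySQL V3.0 as a full-row replace:
REPLACE INTO orders (order_id, status) VALUES (42, 'shipped');

-- Source (Oracle): UPDATE that changes the primary key value
UPDATE orders SET order_id = 43 WHERE order_id = 42;
-- Applied to the destination as delete-then-insert:
DELETE FROM orders WHERE order_id = 42;
INSERT INTO orders (order_id, status) VALUES (43, 'shipped');
```

This is also why tables without PRIMARY KEY or UNIQUE constraints can accumulate duplicate records in the destination: REPLACE INTO relies on a key to identify which row to overwrite.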

Prerequisites

Before you begin, make sure you have:

  • An AnalyticDB for MySQL V3.0 cluster with available storage larger than the total size of the source Oracle database. See Create a cluster.

  • The source Oracle database running in ARCHIVELOG mode with archive log files accessible and an appropriate retention period configured. See Managing archived redo log files.

  • Supplemental logging enabled on the source Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. Run the following SQL commands on the source Oracle database to enable archive logging and supplemental logging:

    -- Verify that the database runs in ARCHIVELOG mode
    SELECT LOG_MODE FROM V$DATABASE;
    -- Expected: ARCHIVELOG
    
    -- Enable supplemental logging at the database level
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    
    -- Enable primary key supplemental logging
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    
    -- Enable unique index supplemental logging
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
    
    -- Verify supplemental logging status
    SELECT SUPPLEMENTAL_LOG_DATA_MIN,
           SUPPLEMENTAL_LOG_DATA_PK,
           SUPPLEMENTAL_LOG_DATA_UI
    FROM V$DATABASE;
    -- Expected: SUPPLEMENTAL_LOG_DATA_PK = YES, SUPPLEMENTAL_LOG_DATA_UI = YES

    For more information, see Supplemental logging and Configure an Oracle database.
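    If LOG_MODE returns NOARCHIVELOG, switch the database to ARCHIVELOG mode before you configure the task. A minimal sketch, assuming SYSDBA privileges and a maintenance window (the instance restarts and is briefly unavailable):

    ```sql
    -- Run as SYSDBA
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    ```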

  • Familiarity with the capabilities and limitations of DTS for Oracle database migration. You can use Advanced Database & Application Migration (ADAM) to evaluate the source database and help ensure a smooth migration to the cloud. See Prepare an Oracle database and Overview.

  • A database account on the source Oracle database with the required permissions. See Required permissions.

  • A database account on the destination AnalyticDB for MySQL V3.0 cluster with read and write permissions.

For supported Oracle database versions, see Overview of data migration scenarios.

Required permissions

Create a database account for each database and grant the following permissions before configuring the migration task. Skip this step if the accounts already exist with these permissions.

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| AnalyticDB for MySQL V3.0 cluster | Read and write permissions on the destination database | Read and write permissions on the destination database | Read and write permissions on the destination database |

To create an account and grant permissions:
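A minimal sketch of account creation and grants. The account name, password, and the specific privileges shown are illustrative assumptions; the authoritative privilege list is in the linked Required permissions reference, and incremental migration requires additional fine-grained, log-access privileges beyond those shown here.

```sql
-- Source Oracle database: create the migration account (example name and password)
CREATE USER dts_migrator IDENTIFIED BY "YourStrongPassw0rd";
GRANT CREATE SESSION TO dts_migrator;
-- Schema owner permissions for schema migration and full data migration
GRANT SELECT ANY TABLE TO dts_migrator;
-- Examples of log-related views typically needed for incremental migration
GRANT SELECT ON V_$DATABASE TO dts_migrator;
GRANT SELECT ON V_$LOG TO dts_migrator;
GRANT SELECT ON V_$ARCHIVED_LOG TO dts_migrator;
```

On the destination AnalyticDB for MySQL V3.0 cluster, grant standard MySQL-style read and write privileges on the destination database to the account that DTS uses.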

Limitations

Review the following limitations before configuring the task.

During schema migration, DTS does not migrate foreign keys from the source to the destination.
During full data migration and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade update and delete operations on the source during migration may cause data inconsistency.

Source database limitations

| Category | Limitation |
| --- | --- |
| Bandwidth | The source server must have sufficient outbound bandwidth; otherwise, the migration speed decreases. |
| Oracle RAC over Express Connect | Specify a virtual IP address (VIP) for the database. |
| Oracle RAC over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN) | Use a single VIP rather than a Single Client Access Name (SCAN) IP address. After you specify the VIP, node failover is not supported for the Oracle RAC database. |
| VARCHAR2 empty string | Oracle evaluates an empty VARCHAR2 string as null. If the destination field has a NOT NULL constraint, the migration task fails. |
| Table constraints | Tables must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Without these constraints, the destination may contain duplicate records. |
| Oracle 12c and later | Table names cannot exceed 30 bytes. |
| Table rename limit | If you select tables as migration objects and need to rename tables or columns, a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database. |
| Incremental migration: log retention | Redo logs and archive logs must be retained for at least 7 days. Shorter retention periods may cause the task to fail and, in exceptional cases, data inconsistency or loss. If the retention period is insufficient, the DTS service level agreement (SLA) does not guarantee reliability or performance. |
| Incremental migration: source operations | Do not use Oracle Data Pump to write data to the source during incremental migration. Data loss may occur. |
| DDL during migration | Do not perform DDL operations that change database or table schemas during schema migration or full data migration. Otherwise, the task fails. |
| Full data migration: source writes | Do not write data to the source during full data migration only (without incremental migration). Otherwise, data inconsistency between the source and destination may occur. |
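One common way to meet the 7-day log retention requirement is an RMAN recovery-window retention policy. This is a sketch under the assumption that RMAN manages your backups; adapt it to your own backup strategy:

```sql
-- Run at the RMAN prompt: keep the backups and archived logs needed
-- to recover to any point within the last 7 days
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
```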

Destination cluster limitations

| Category | Limitation |
| --- | --- |
| Disk usage | If disk usage on any node in the AnalyticDB for MySQL V3.0 cluster exceeds 80%, the DTS task encounters an exception and is delayed. Estimate the required space before you start and make sure the destination cluster has sufficient storage. |
| Primary key | Specify a custom primary key in the destination database, or configure Primary Key Column during table field configuration. Missing primary keys may cause migration failures. |
| Backup conflict | If the destination cluster runs a backup while the DTS task is active, the DTS task fails. |
| External tables | External tables cannot be migrated. |
| Character set | The character sets of the source and destination databases must be compatible. Incompatible character sets may cause data inconsistency or task failure. |
| Time zone | The time zones of the source and destination databases must be the same. |
| Failed task recovery | DTS attempts to resume tasks that failed within the last 7 days. Before you switch workloads to the destination cluster, stop or release failed tasks, or run the REVOKE statement to revoke write permissions from the DTS accounts. Otherwise, resumed tasks may overwrite destination data. |
| DDL execution failures | If a DDL statement fails in the destination, the DTS task continues running. View failed statements in the task logs. See View task logs. |
| Schema migration | Use the schema migration feature of DTS to avoid task failures caused by incompatible data types. |
| Performance impact | Full data migration increases the load on both the source and destination servers. Run the migration during off-peak hours. |
| Table fragmentation | Concurrent INSERT operations during full data migration cause table fragmentation. After full data migration, the used tablespace in the destination is larger than in the source. |
| DTS technical support | If a DTS task fails, technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified. See Modify instance parameters for the parameters that may be modified. |
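The write-permission revocation mentioned under Failed task recovery might look like the following; the account and database names are examples:

```sql
-- Run on the destination AnalyticDB for MySQL V3.0 cluster
REVOKE INSERT, UPDATE, DELETE ON destdb.* FROM 'dts_migrator';
```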

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged when Access Method for the destination is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Access Method for the destination is set to Public IP Address. See Billing overview. |

Supported SQL operations for incremental migration

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |

Note: UPDATE statements are converted to REPLACE INTO when written to AnalyticDB for MySQL V3.0. UPDATE operations on primary keys are converted to DELETE + INSERT.

Data type mappings

For Oracle-to-MySQL data type mappings, see Data type mappings between heterogeneous databases.

Create a migration task

This procedure has five steps:

  1. Go to the Data Migration page.

  2. Configure source and destination databases.

  3. Configure migration objects and advanced settings.

  4. Run the precheck and purchase the instance.

  5. Monitor the migration task.

Step 1: Go to the Data Migration page

Use one of the following methods to open the Data Migration page.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance resides.

DMS console

The actual operation may vary based on the mode and layout of the DMS console. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Configure the task name and source database.

    Warning

    After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding.

    | Parameter | Description |
    | --- | --- |
    | Task Name | DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |
    | Select Existing Connection | Select a registered database instance from the list, or configure the connection manually. |
    | Database Type | Select Oracle. |
    | Access Method | Select the access method for the source database. This example uses Self-managed Database on ECS. For other access methods, see Preparation overview. |
    | Instance Region | The region where the source Oracle database resides. |
    | ECS Instance ID | The ID of the Elastic Compute Service (ECS) instance that hosts the source Oracle database. |
    | Port Number | The service port of the source Oracle database. Default: 1521. |
    | Oracle Type | Select the database architecture: Non-RAC Instance (requires an SID) or RAC or PDB Instance (requires a Service Name). This example uses RAC or PDB Instance. |
    | Database Account | The source Oracle database account with the required permissions. |
    | Database Password | The password of the source database account. |
  3. Configure the destination database.

    | Parameter | Description |
    | --- | --- |
    | Select Existing Connection | Select a registered database instance, or configure the connection manually. |
    | Database Type | Select AnalyticDB for MySQL 3.0. |
    | Access Method | Select Alibaba Cloud Instance. |
    | Instance Region | The region where the destination AnalyticDB for MySQL V3.0 cluster resides. |
    | Instance ID | The ID of the destination cluster. |
    | Database Account | The database account of the destination cluster with read and write permissions. |
    | Database Password | The password of the destination database account. |
  4. Click Test Connectivity and Proceed, then click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

    Make sure the CIDR blocks of DTS servers are added to the security settings of both source and destination databases. See Add the CIDR blocks of DTS servers.

Step 3: Configure migration objects and advanced settings

  1. On the Configure Objects page, set the migration parameters.

    | Parameter | Description |
    | --- | --- |
    | Migration Types | Select the migration types based on your requirements:<br>- Schema Migration + Full Data Migration: migrates existing data without incremental changes.<br>- Schema Migration + Full Data Migration + Incremental Data Migration: keeps the destination in sync during migration, enabling a smooth cutover with no downtime.<br>Note: If you skip Schema Migration, create the target tables manually and enable object name mapping. If you skip Incremental Data Migration, do not write data to the source during migration. |
    | DDL and DML Operations to Be Synchronized | The DDL or DML operations to migrate. To select operations for a specific table, right-click the table in Selected Objects. |
    | Merge Tables | No (default): migrates tables without merging. Yes: adds a __dts_data_source column to each table and merges all selected source tables into one destination table. See Enable the multi-table merging feature.<br>Warning: Do not perform DDL schema changes on source tables when Merge Tables is enabled. |
    | Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if destination tables share names with source tables. Ignore Errors and Proceed: skips the name conflict check.<br>Warning: Selecting Ignore Errors and Proceed may result in data inconsistency. During full data migration, conflicting records are retained in the destination without update. During incremental migration, conflicting records are overwritten. |
    | Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
    | Source Objects | Select the objects to migrate, then click the rightwards arrow icon to move them to Selected Objects. Columns, tables, and databases are all supported. |
    | Selected Objects | Right-click a single object to rename it. Click Edit in the upper-right corner to rename multiple objects at once. See Database, table, and column name mapping.<br>Note: Renaming an object may cause dependent objects to fail migration. To filter table data by condition, right-click the table and specify WHERE conditions. See Specify filter conditions. |

  2. Click Next: Advanced Settings and configure the advanced parameters.

    | Parameter | Description |
    | --- | --- |
    | Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. To use a dedicated cluster, purchase one separately. See What is a DTS dedicated cluster? |
    | Retry Time for Failed Connections | How long DTS retries a failed connection after the task starts. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set it to more than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, it fails.<br>Note: When multiple tasks share the same source or destination, the retry time configured last applies to all of them. DTS charges for the instance during retries. |
    | Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set it to more than 10 minutes. Must be less than Retry Time for Failed Connections. |
    | Enable Throttling for Full Data Migration | Limits read/write resource usage during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
    | Enable Throttling for Incremental Data Migration | Limits resource usage during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
    | Environment Tag | An optional tag to identify the DTS instance. |
    | Actual Write Code | The encoding format for data written to the destination. |
    | Configure ETL | Enables the extract, transform, and load (ETL) feature. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
    | Monitoring and Alerting | Configures alerts for task failures or latency that exceeds a threshold. When enabled, specify the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |
  3. Click Next Step: Data Verification to configure the data verification task. See Configure a data verification task.

  4. (Optional) Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for each table in the destination cluster.

    This step is available only when Schema Migration is selected. Set Definition Status to All to view and modify all fields. The Primary Key Column supports composite primary keys (multiple columns). Specify at least one primary key column as both Distribution Key and Partition Key. See CREATE TABLE.
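    As an illustration of how these properties fit together (the orders table is hypothetical; see CREATE TABLE for the authoritative syntax), an AnalyticDB for MySQL V3.0 table whose primary key covers both the distribution key and the partition key might look like:

    ```sql
    CREATE TABLE orders (
      order_id BIGINT NOT NULL,
      order_date DATETIME NOT NULL,
      status VARCHAR(32),
      PRIMARY KEY (order_id, order_date)   -- composite primary key
    )
    DISTRIBUTED BY HASH(order_id)          -- distribution key, part of the primary key
    PARTITION BY VALUE(DATE_FORMAT(order_date, '%Y%m'))  -- monthly partitions
    LIFECYCLE 12;                          -- retain the 12 most recent partitions
    ```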

Step 4: Run the precheck and purchase the instance

  1. Click Next: Save Task Settings and Precheck. DTS runs a precheck before starting the migration. Address any failures before proceeding:

    • For failed items, click View Details to see the cause, fix the issue, then click Precheck Again.

    • For alert items that cannot be ignored, click View Details, fix the issue, then rerun the precheck.

    • For alert items that can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may result in data inconsistency.

    To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  2. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  3. On the Purchase Instance page, configure the instance class.

    | Parameter | Description |
    | --- | --- |
    | Resource Group | The resource group to which the migration instance belongs. Default: the default resource group. See What is Resource Management? |
    | Instance Class | The instance class determines the migration speed. See Instance classes of data migration instances. |
  4. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.

Step 5: Monitor the migration task

View task progress on the Data Migration page.

  • Full migration only (no incremental): the task stops automatically when complete. Status shows Completed.

  • With incremental migration: the task runs continuously and does not stop automatically. Status shows Running. Stop the task manually after verifying data consistency and before switching workloads to the destination cluster.

Verify migration results

After the task reaches Completed or before you cut over to the destination cluster, verify that data was migrated correctly.

  1. Connect to the destination AnalyticDB for MySQL V3.0 cluster.

  2. Compare row counts for key tables between the source and destination:

    -- Run on source Oracle database
    SELECT COUNT(*) FROM <schema_name>.<table_name>;
    
    -- Run on destination AnalyticDB for MySQL V3.0 cluster
    SELECT COUNT(*) FROM <table_name>;
  3. Spot-check a sample of records to confirm data accuracy.

  4. Review the DTS task logs for any warnings or errors. See View task logs.

For heterogeneous migrations involving data type conversion, always verify critical tables before cutting over.
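Row counts alone can miss rows whose values diverged. A lightweight follow-up spot check (sketch: the orders table, the myschema schema, and the order_id column are hypothetical, and function behavior can differ slightly between the Oracle and MySQL dialects) is to compare simple aggregates over a key column:

```sql
-- Run on the source Oracle database
SELECT COUNT(*), MIN(order_id), MAX(order_id), SUM(order_id)
FROM myschema.orders;

-- Run on the destination AnalyticDB for MySQL V3.0 cluster;
-- the four values should match the source
SELECT COUNT(*), MIN(order_id), MAX(order_id), SUM(order_id)
FROM orders;
```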

What's next