
Data Transmission Service: Migrate data between self-managed Oracle databases

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) migrates data between two self-managed Oracle databases with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration. Select all three types to keep your application running throughout the migration.

Prerequisites

Before you begin, make sure you have:

  • A source Oracle database running version 9i, 10g, 11g, 12c, 18c, or 19c

  • ARCHIVELOG mode enabled on the source database, with archived log files accessible and a retention period configured. See Managing Archived Redo Log Files

  • Supplemental logging enabled on the source database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. See Supplemental Logging

  • Reviewed DTS capabilities and limits for Oracle migration. Use Advanced Database & Application Migration (ADAM) to evaluate your database before migrating. See Prepare an Oracle database and Overview
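The ARCHIVELOG and supplemental logging prerequisites above can be checked and applied with commands like the following. This is a minimal sketch run as SYSDBA; the 7-day RMAN retention window is an example value only (see the Limitations section for the actual retention requirements):

```sql
-- Run as SYSDBA on the source database.
-- 1. Check whether ARCHIVELOG mode is already enabled.
SELECT log_mode FROM v$database;

-- 2. Enable ARCHIVELOG mode if needed (requires a database restart).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- 3. Enable supplemental logging for primary and unique keys,
--    then verify that both columns report YES.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

-- 4. In RMAN (not SQL*Plus), configure an example 7-day retention window:
-- RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
```

Enabling ARCHIVELOG mode requires a restart, so schedule it in a maintenance window before configuring the migration task.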

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |

Migration types

| Migration type | Description |
| --- | --- |
| Schema migration | Migrates the schemas of selected objects to the destination Oracle database. Supported object types: table, view, synonym, trigger, stored procedure, stored function, package, and user-defined type.<br>**Note:** Trigger schema migration is not supported in this scenario. We recommend that you delete triggers from the source database to prevent data inconsistency caused by triggers. See Configure a data synchronization task for a source database that contains a trigger. |
| Full data migration | Migrates all existing data from the source Oracle database to the destination. We recommend that you do not perform DDL operations on migrated objects during schema migration or full data migration. |
| Incremental data migration | After full data migration, DTS polls redo log files from the source and applies incremental changes to the destination. This keeps the destination in sync without interrupting your application. |

Limitations

During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade and delete operations on the source database during migration may cause data inconsistency.
| Category | Limitation | Workaround |
| --- | --- | --- |
| Source database | The source database server must have sufficient outbound bandwidth; insufficient bandwidth reduces migration speed. | Increase outbound bandwidth on the source server before starting the migration. |
| Source database | If the source is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), specify a virtual IP address (VIP) for the source database rather than a Single Client Access Name (SCAN) IP address when configuring the connection. Node failover is not supported after you specify a VIP. | Use a VIP and plan for manual failover if needed. |
| Source database | If a VARCHAR2 field in the source contains an empty string (evaluated as null in Oracle) and the corresponding destination field has a NOT NULL constraint, the migration task fails. | Remove the NOT NULL constraint from the destination field, or ensure the source field is not empty before migrating. |
| Source database | Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique; otherwise, the destination may contain duplicate records. | Add PRIMARY KEY or UNIQUE constraints to tables before migrating. |
| Source database | For Oracle 12c and later: table names cannot exceed 30 bytes. | Rename tables that exceed 30 bytes before migrating. |
| Source database | If you select individual tables and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. | Split larger migrations into multiple tasks, or migrate the entire database without per-table renaming. |
| Incremental migration | Redo logging and archive logging must be enabled on the source database. | Enable both before starting the migration task. |
| Incremental migration | For incremental data migration only: retain redo logs and archive logs for more than 24 hours. For full data migration followed by incremental data migration: retain them for at least 7 days; after full migration completes, you can reduce the retention period to more than 24 hours. Insufficient log retention may cause task failure and, in extreme cases, data loss, and is not covered by the DTS Service Level Agreement (SLA). | Set the log retention period before starting migration. Extend retention during the full migration phase. |
| Operations on the source database | During schema migration and full data migration: do not perform DDL operations on migrated objects; the task fails if schemas change mid-migration. | Schedule DDL operations outside of migration windows. |
| Operations on the source database | During full data migration only: do not write to the source database during migration. | To guarantee consistency while allowing writes to the source, select all three migration types (schema, full, and incremental). |
| Operations on the source database | During data migration: do not update LONGTEXT fields; the task fails if LONGTEXT fields are modified. | Pause LONGTEXT updates on the source database for the duration of the migration. |
| Other | Full data migration uses read and write resources on both the source and destination databases, which may increase load on both. | Run migrations during off-peak hours. |
| Other | Full data migration causes table fragmentation in the destination. After migration, the destination tablespace is larger than the source. | Allocate extra tablespace in the destination before migration. |
| Other | DTS retries failed tasks for up to 7 days, and resumed tasks overwrite destination data with source data. | Before switching workloads to the destination, stop or release any failed tasks, or run REVOKE to remove DTS write permissions on the destination. |
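To check the PRIMARY KEY/UNIQUE constraint limitation before you start, you can run a query along these lines against the source schema. This is a sketch that assumes the account you run it under owns the tables being migrated:

```sql
-- List tables in the current schema that have neither a PRIMARY KEY
-- nor a UNIQUE constraint; these may produce duplicate records
-- at the destination during migration.
SELECT t.table_name
FROM   user_tables t
WHERE  NOT EXISTS (
         SELECT 1
         FROM   user_constraints c
         WHERE  c.table_name = t.table_name
         AND    c.constraint_type IN ('P', 'U')
       );
```

Any table this query returns should get a PRIMARY KEY or UNIQUE constraint before migration, or be excluded from the task.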

SQL operations supported for incremental migration

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (tables with partitions or functions are excluded)<br>ALTER TABLE (including ADD COLUMN, DROP COLUMN, RENAME COLUMN, and ADD INDEX)<br>DROP TABLE<br>RENAME TABLE, TRUNCATE TABLE, CREATE INDEX |
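As an illustration of the DDL coverage above, statements like the first two are replicated during incremental migration, while a partitioned CREATE TABLE is excluded. Table and column names here are hypothetical:

```sql
-- Replicated: plain CREATE TABLE and ALTER TABLE ... ADD COLUMN.
CREATE TABLE orders (order_id NUMBER PRIMARY KEY, amount NUMBER);
ALTER TABLE orders ADD (order_note VARCHAR2(200));

-- Not replicated: CREATE TABLE with partitions is excluded,
-- so this table would not appear at the destination.
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
);
```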

Set up the database account

Log on to each Oracle database and create an account with the permissions listed below. Skip this step if an account with these permissions already exists.

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| Source self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| Destination self-managed Oracle database | Schema owner permissions | Schema owner permissions | Schema owner permissions |

To create accounts and grant permissions, see Prepare a database account and the Oracle documentation for CREATE USER and GRANT.
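A minimal sketch of creating such an account follows. The account name and password are hypothetical placeholders, and the exact fine-grained privileges that DTS requires for incremental migration are listed in Prepare a database account, not here:

```sql
-- Run as a privileged user (for example, SYSTEM).
-- dts_migrator and its password are placeholders; substitute your own.
CREATE USER dts_migrator IDENTIFIED BY "ChangeMe_2024";

-- Baseline privileges only. Schema owner permissions on the migrated
-- schema are required for schema and full data migration; see
-- Prepare a database account for the full privilege list.
GRANT CREATE SESSION TO dts_migrator;
GRANT SELECT ANY TABLE TO dts_migrator;
```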

Important

For incremental data migration from an Oracle database, enable archive logging and supplemental logging on the source. See the Configure an Oracle database section of "Prepare an Oracle database."

Create a migration task

  1. Go to the Data Migration Tasks page.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, click DTS.

    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

    Console navigation may vary depending on your DMS mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.
  2. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

    In the new DTS console, select the region in the upper-left corner.
  3. Click Create Task. In the Create Data Migration Task wizard, configure the source and destination databases.

    Warning

    Read the limits displayed at the top of the page before proceeding. Skipping this step may cause task failure or data inconsistency.

    | Section | Parameter | Description |
    | --- | --- | --- |
    | N/A | Task Name | A name for the task. DTS assigns a default name. Specify a descriptive name to make the task easy to identify; the name does not need to be unique. |
    | Source Database | Database Type | Select Oracle. |
    | Source Database | Access Method | Select Public IP Address for this example. For other access methods, set up the required network environment first. See Preparation overview. |
    | Source Database | Instance Region | The region where the source Oracle database resides. |
    | Source Database | Hostname or IP address | The endpoint of the source Oracle database. |
    | Source Database | Port Number | The service port of the source Oracle database. Default: 1521. The port must be accessible over the Internet for this example. |
    | Source Database | Oracle Type | Select Non-RAC Instance to configure the SID parameter, or select RAC or PDB Instance to configure the Service Name parameter. This example uses Non-RAC Instance. |
    | Source Database | Database Account | The source database account. See Set up the database account for required permissions. |
    | Source Database | Database Password | The password for the database account. |
    | Destination Database | Database Type | Select Oracle. |
    | Destination Database | Access Method | Select Public IP Address for this example. For other access methods, set up the required network environment first. See Preparation overview. |
    | Destination Database | Instance Region | The region where the destination Oracle database resides. |
    | Destination Database | Hostname or IP address | The endpoint of the primary node in the destination Oracle database. |
    | Destination Database | Port Number | The service port of the destination Oracle database. Default: 1521. |
    | Destination Database | Oracle Type | Select Non-RAC Instance to configure the SID parameter, or select RAC or PDB Instance to configure the Service Name parameter. This example uses Non-RAC Instance. |
    | Destination Database | Database Account | The destination database account. See Set up the database account for required permissions. |
    | Destination Database | Database Password | The password for the database account. |
  4. If the source database has an IP address whitelist configured, add the DTS server CIDR blocks to the whitelist. Then click Test Connectivity and Proceed.

    Warning

    Adding DTS server CIDR blocks to a database whitelist or security group rules carries security risks. Take preventive measures such as: strengthening account credentials, limiting exposed ports, authenticating API calls, and regularly auditing whitelist rules. For a more secure connection, use Express Connect, VPN Gateway, or Smart Access Gateway instead of the public Internet.

  5. Configure the objects to migrate and the advanced settings.

    | Parameter | Description |
    | --- | --- |
    | Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Select all three (Schema Migration, Full Data Migration, and Incremental Data Migration) to maintain service continuity during migration. If you omit Incremental Data Migration, do not write to the source database while migration is running. |
    | Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if the source and destination have tables with identical names. Use object name mapping to rename destination tables if deletion is not possible.<br>Ignore Errors and Proceed: skips the precheck for identical table names.<br>**Warning:** Ignoring errors may cause data inconsistency. If schemas match, records with duplicate primary keys are not migrated. If schemas differ, only specific columns may be migrated or the task may fail. |
    | Source Objects | Select objects from Source Objects and click the right arrow icon to move them to Selected Objects. |
    | Selected Objects | To rename a single object in the destination, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time.<br>**Note:** Renaming an object may cause dependent objects to fail migration. To filter rows by condition, right-click a table and specify a WHERE clause. See Set filter conditions. To select specific DML or DDL operations per object, right-click the object in Selected Objects. |

  6. Click Next: Advanced Settings and configure the following parameters.

    | Parameter | Description |
    | --- | --- |
    | Select the dedicated cluster used to schedule the task | DTS schedules tasks to a shared cluster by default. To use a dedicated cluster, purchase one separately. See What is a DTS dedicated cluster? |
    | Retry Time for Failed Connections | How long DTS retries when a connection to the source or destination fails. Valid range: 10 to 1440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within the retry window, the task resumes; otherwise, the task fails.<br>**Note:** If multiple tasks share the same source or destination, the most recently set value applies. DTS charges for the instance during retries, so set this based on your business requirements and release the instance promptly when it is no longer needed. |
    | The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries when DDL or DML operations fail. Valid range: 1 to 1440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
    | Enable Throttling for Full Data Migration | Limits read/write load on the source and destination during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
    | Enable Throttling for Incremental Data Migration | Limits load during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
    | Environment Tag | An optional tag to identify the DTS instance. |
    | Actual Write Code | The encoding format used when writing data to the destination database. |
    | Configure ETL | Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. See What is ETL? |
    | Monitoring and Alerting | Whether to configure alerts for this task. Select Yes to set an alert threshold and specify contacts to notify when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting for a new DTS task. Select No to skip alerting. |
  7. Click Next: Save Task Settings and Precheck. DTS runs a precheck before the task starts. If any check fails, click View Details next to the failed item, fix the issue, and click Precheck Again. If an alert appears for an item you want to ignore, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alert items may cause data inconsistency.

    To review the API call parameters before saving, hover over Next: Save Task Settings and Precheck and click Preview API Call.
  8. Wait until the success rate reaches 100%, then click Next: Purchase Instance.

  9. On the Purchase Instance page, configure the instance class.

    | Section | Parameter | Description |
    | --- | --- | --- |
    | New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | New Instance Class | Instance Class | The instance class determines migration speed. See Specifications of data migration instances. |
  10. Read and select the checkbox to agree to the Data Transmission Service (Pay-as-you-go) Service Terms.

  11. Click Buy and Start. Monitor task progress on the Task Management page.
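When migration completes and you are ready to switch workloads to the destination, the write-permission revocation mentioned in Limitations can be sketched as follows. The account and object names are hypothetical placeholders:

```sql
-- Run on the destination database before switching workloads, so a
-- resumed DTS task cannot overwrite destination data.
-- dts_migrator and app_schema.orders are placeholder names.
REVOKE INSERT, UPDATE, DELETE ON app_schema.orders FROM dts_migrator;

-- Alternatively, lock the DTS account outright:
ALTER USER dts_migrator ACCOUNT LOCK;
```

Stopping or releasing the migration instance in the console achieves the same goal and also stops billing for the instance.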