Data Transmission Service (DTS) migrates data between two self-managed Oracle databases with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration. Select all three types to keep your application running throughout the migration.
Prerequisites
Before you begin, make sure you have:
A source Oracle database running version 9i, 10g, 11g, 12c, 18c, or 19c.
ARCHIVELOG mode enabled on the source database, with archived log files accessible and a retention period configured. See Managing Archived Redo Log Files.
Supplemental logging enabled on the source database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. See Supplemental Logging.
A review of DTS capabilities and limits for Oracle migration. Use Advanced Database & Application Migration (ADAM) to evaluate your database before migrating. See Prepare an Oracle database and Overview.
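Once the prerequisites are in place, you can confirm them with two quick queries against the V$DATABASE view (run as a privileged user such as SYS; the columns shown are standard Oracle data dictionary columns):

```sql
-- Confirm ARCHIVELOG mode: LOG_MODE should return ARCHIVELOG.
SELECT log_mode FROM v$database;

-- Confirm supplemental logging: both columns should return YES.
SELECT supplemental_log_data_pk, supplemental_log_data_ui
FROM   v$database;
```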
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
| Migration type | Description |
|---|---|
| Schema migration | Migrates the schemas of selected objects to the destination Oracle database. Supported object types: table, view, synonym, trigger, stored procedure, stored function, package, and user-defined type. Note: Trigger schema migration is not supported in this scenario. We recommend that you delete triggers from the source database to prevent data inconsistency caused by triggers. See Configure a data synchronization task for a source database that contains a trigger. |
| Full data migration | Migrates all existing data from the source Oracle database to the destination. We recommend that you do not perform DDL operations on migrated objects during schema migration or full data migration. |
| Incremental data migration | After full data migration, DTS polls redo log files from the source and applies incremental changes to the destination. This keeps the destination in sync without interrupting your application. |
Limitations
During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade and delete operations on the source database during migration may cause data inconsistency.
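Because cascade operations on the source can still fire while DTS has constraint checks disabled in its own sessions, it helps to inventory cascading foreign keys up front. The query below is a sketch against the standard DBA_CONSTRAINTS view; APP_SCHEMA is a placeholder, and constraint_type 'R' denotes referential (foreign key) constraints:

```sql
-- Foreign-key constraints whose delete rule cascades.
SELECT owner, table_name, constraint_name, delete_rule
FROM   dba_constraints
WHERE  constraint_type = 'R'
AND    delete_rule = 'CASCADE'
AND    owner = 'APP_SCHEMA';   -- placeholder: replace with your schema
```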
| Category | Limitation | Workaround |
|---|---|---|
| Source database | The source database server must have sufficient outbound bandwidth; insufficient bandwidth reduces migration speed. | Increase outbound bandwidth on the source server before starting migration. |
| Source database | If the source is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you must specify a virtual IP address (VIP) for the source database instead of a Single Client Access Name (SCAN) IP address when configuring the connection. Node failover is not supported after you specify a VIP. | Use a VIP and plan for manual failover if needed. |
| Source database | If a VARCHAR2 field in the source contains an empty string (evaluated as null in Oracle) and the corresponding destination field has a NOT NULL constraint, the migration task fails. | Remove the NOT NULL constraint from the destination field, or ensure the source field is not empty before migrating. |
| Source database | Tables to be migrated must have PRIMARY KEY or UNIQUE constraints with all fields unique; otherwise, the destination may contain duplicate records. | Add PRIMARY KEY or UNIQUE constraints to tables before migrating. |
| Source database | For Oracle 12c and later: table names cannot exceed 30 bytes. | Rename tables that exceed 30 bytes before migrating. |
| Source database | If you select individual tables and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For larger migrations, configure multiple tasks or migrate the entire database. | Split the migration into multiple tasks, or migrate the entire database without per-table renaming. |
| Incremental migration | Redo logging and archive logging must be enabled. | Enable both before starting the migration task. |
| Incremental migration | For incremental data migration only: retain redo logs and archive logs for more than 24 hours. For full data migration followed by incremental data migration: retain redo logs and archive logs for at least 7 days; after full migration completes, you can reduce the retention period to more than 24 hours. Insufficient log retention may cause task failure and, in extreme cases, data loss. Failures caused by insufficient log retention are not covered by the DTS Service Level Agreement (SLA). | Set the log retention period before starting migration. Extend retention during the full migration phase. |
| Operations on the source database | During schema migration and full data migration: do not perform DDL operations on migrated objects; the task fails if schemas change mid-migration. | Schedule DDL operations outside of migration windows. |
| Operations on the source database | During full data migration only: do not write to the source database during migration. To guarantee consistency, select all three migration types (schema, full, and incremental). | Use incremental data migration to allow writes to the source during migration. |
| Operations on the source database | During data migration: do not update LONGTEXT fields; the task fails if LONGTEXT fields are modified. | Pause LONGTEXT updates on the source database for the duration of the migration. |
| Other | Run migrations during off-peak hours to reduce impact on source and destination database performance. Full data migration uses read and write resources on both databases. | — |
| Other | Full data migration causes table fragmentation in the destination. After migration, the destination tablespace is larger than the source. | Allocate extra tablespace in the destination before migration. |
| Other | DTS retries failed tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks, or revoke DTS write permissions using REVOKE. Otherwise, resumed tasks overwrite destination data with source data. | Run REVOKE to remove DTS write permissions before switching workloads. |
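Several of the source-database limitations above can be checked before you start, using standard data dictionary views. The queries below are a sketch; APP_SCHEMA, the table and column names, and the DTS account name dts_migration are all placeholders:

```sql
-- 1. Tables with neither a PRIMARY KEY ('P') nor a UNIQUE ('U') constraint;
--    these may produce duplicate records in the destination.
SELECT t.owner, t.table_name
FROM   dba_tables t
WHERE  t.owner = 'APP_SCHEMA'
AND    NOT EXISTS (
         SELECT 1
         FROM   dba_constraints c
         WHERE  c.owner = t.owner
         AND    c.table_name = t.table_name
         AND    c.constraint_type IN ('P', 'U'));

-- 2. Table names longer than 30 bytes (possible on Oracle 12.2 and later,
--    where identifiers can be up to 128 bytes).
SELECT owner, table_name, LENGTHB(table_name) AS name_bytes
FROM   dba_tables
WHERE  owner = 'APP_SCHEMA'
AND    LENGTHB(table_name) > 30;

-- 3. Oracle stores the empty string as NULL, so rows like these fail if the
--    corresponding destination column has a NOT NULL constraint.
SELECT COUNT(*) FROM app_schema.orders WHERE customer_name IS NULL;

-- 4. Before switching workloads to the destination, revoke DTS write access.
REVOKE INSERT, UPDATE, DELETE ON app_schema.orders FROM dts_migration;
```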
SQL operations supported for incremental migration
| Operation type | SQL statement |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (tables with partitions or functions are excluded)<br>ALTER TABLE (ADD COLUMN, DROP COLUMN, RENAME COLUMN, ADD INDEX)<br>DROP TABLE<br>RENAME TABLE, TRUNCATE TABLE, CREATE INDEX |
Set up the database account
Log on to each Oracle database and create an account with the permissions listed below. Skip this step if an account with these permissions already exists.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| Destination self-managed Oracle database | Schema owner permissions | — | — |
To create accounts and grant permissions, see Prepare a database account and the Oracle documentation for CREATE USER and GRANT.
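For reference, creating a source account might look like the following sketch. The account name, password, and grants are placeholders; the authoritative privilege list for each migration type is in Prepare a database account:

```sql
-- Placeholder account for DTS; choose your own name and a strong password.
CREATE USER dts_migration IDENTIFIED BY "Str0ngPassw0rd#1";

-- Minimal connectivity plus read access to the schema being migrated.
GRANT CREATE SESSION TO dts_migration;
GRANT SELECT ANY TABLE TO dts_migration;   -- or grant per-object SELECT instead
```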
For incremental data migration from an Oracle database, enable archive logging and supplemental logging on the source. See the Configure an Oracle database section of "Prepare an Oracle database."
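The corresponding commands typically look like the sketch below, run from SQL*Plus as SYSDBA. Note that SHUTDOWN and STARTUP are SQL*Plus commands, and switching to ARCHIVELOG mode requires a restart into MOUNT state, so plan a maintenance window:

```sql
-- Enable ARCHIVELOG mode (database must be mounted, not open).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Enable supplemental logging, including primary-key and unique-key logging.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```

Separately, a 7-day recovery window, which keeps the archived logs needed during full migration, can be configured from the RMAN prompt with `CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;`.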
Create a migration task
Go to the Data Migration Tasks page.
Log on to the Data Management (DMS) console.
In the top navigation bar, click DTS.
In the left-side navigation pane, choose DTS > Data Migration.
Console navigation may vary depending on your DMS mode and layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Click Create Task. In the Create Data Migration Task wizard, configure the source and destination databases.
Warning: Read the limits displayed at the top of the page before proceeding. Skipping this step may cause task failure or data inconsistency.
| Section | Parameter | Description |
|---|---|---|
| N/A | Task Name | A name for the task. DTS assigns a default name; specify a descriptive name to make the task easy to identify. The name does not need to be unique. |
| Source Database | Database Type | Select Oracle. |
| Source Database | Access Method | Select Public IP Address for this example. For other access methods, set up the required network environment first. See Preparation overview. |
| Source Database | Instance Region | The region where the source Oracle database resides. |
| Source Database | Hostname or IP address | The endpoint of the source Oracle database. |
| Source Database | Port Number | The service port of the source Oracle database. Default: 1521. The port must be accessible over the Internet for this example. |
| Source Database | Oracle Type | Select Non-RAC Instance to configure the SID parameter, or select RAC or PDB Instance to configure the Service Name parameter. This example uses Non-RAC Instance. |
| Source Database | Database Account | The source database account. See Set up the database account for required permissions. |
| Source Database | Database Password | The password for the database account. |
| Destination Database | Database Type | Select Oracle. |
| Destination Database | Access Method | Select Public IP Address for this example. For other access methods, set up the required network environment first. See Preparation overview. |
| Destination Database | Instance Region | The region where the destination Oracle database resides. |
| Destination Database | Hostname or IP address | The endpoint of the primary node in the destination Oracle database. |
| Destination Database | Port Number | The service port of the destination Oracle database. Default: 1521. |
| Destination Database | Oracle Type | Select Non-RAC Instance to configure the SID parameter, or select RAC or PDB Instance to configure the Service Name parameter. This example uses Non-RAC Instance. |
| Destination Database | Database Account | The destination database account. See Set up the database account for required permissions. |
| Destination Database | Database Password | The password for the database account. |

If the source database has an IP address whitelist configured, add the DTS server CIDR blocks to the whitelist. Then click Test Connectivity and Proceed.
Warning: Adding DTS server CIDR blocks to a database whitelist or security group rules carries security risks. Take preventive measures, such as strengthening account credentials, limiting exposed ports, authenticating API calls, and regularly auditing whitelist rules. For a more secure connection, use Express Connect, VPN Gateway, or Smart Access Gateway instead of the public Internet.
Configure the objects to migrate and the advanced settings.
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Select all three (Schema Migration, Full Data Migration, and Incremental Data Migration) to maintain service continuity during migration. If you omit Incremental Data Migration, do not write to the source database while migration is running. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if the source and destination have tables with identical names. Use object name mapping to rename destination tables if deletion is not possible.<br><br>Ignore Errors and Proceed: skips the precheck for identical table names. Warning: This may cause data inconsistency. If schemas match, records with duplicate primary keys are not migrated. If schemas differ, only specific columns may be migrated or the task may fail. |
| Source Objects | Select objects from Source Objects and click the rightwards arrow to move them to Selected Objects. |
| Selected Objects | To rename a single object in the destination, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. Note: Renaming an object may cause dependent objects to fail migration.<br>To filter rows by condition, right-click a table and specify a WHERE clause. See Set filter conditions.<br>To select specific DML or DDL operations per object, right-click the object in Selected Objects. |
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | DTS schedules tasks to a shared cluster by default. To use a dedicated cluster, purchase one separately. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | How long DTS retries when a connection to the source or destination fails. Valid range: 10–1440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within the retry window, the task resumes; otherwise, the task fails. Note: If multiple tasks share the same source or destination, the most recently set value applies. DTS charges for the instance during retries; set this based on your business requirements and release the instance promptly when no longer needed. |
| The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries when DDL or DML operations fail. Valid range: 1–1440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write load on the source and destination during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits load during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance. |
| Actual Write Code | The encoding format used when writing data to the destination database. |
| Configure ETL | Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. See What is ETL? |
| Monitoring and Alerting | Whether to configure alerts for this task. Select Yes to set an alert threshold and specify contacts to notify when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting for a new DTS task. Select No to skip alerting. |

Click Next: Save Task Settings and Precheck. DTS runs a precheck before the task starts.
If any check fails, click View Details next to the failed item, fix the issue, and click Precheck Again.
If an alert appears for an item you want to ignore, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alert items may cause data inconsistency.
To review the API call parameters before saving, hover over Next: Save Task Settings and Precheck and click Preview API Call.
Wait until the success rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the instance class.
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| New Instance Class | Instance Class | The instance class determines migration speed. See Specifications of data migration instances. |

Read and select the checkbox to agree to the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start. Monitor task progress on the Task Management page.