Data Transmission Service (DTS) supports migrating data from a self-managed Oracle database to an ApsaraDB RDS for PostgreSQL instance with minimal downtime. Run schema migration, full data migration, and incremental data migration together to keep your source database serving traffic while the migration is in progress.
Prerequisites
Before you begin, make sure that:
- The Oracle database version is 9i, 10g, 11g, 12c, 18c, or 19c.
- The Oracle database is running in ARCHIVELOG mode with archive log files accessible and a suitable retention period configured. For details, see Managing Archived Redo Log Files.
- Supplemental logging is enabled for the Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI both set to Yes. For details, see Supplemental Logging.
- All tables to migrate contain primary keys or UNIQUE NOT NULL indexes.
- The destination ApsaraDB RDS for PostgreSQL instance is created. For details, see Create an ApsaraDB RDS for PostgreSQL instance.
- You have reviewed the capabilities and limits of DTS for Oracle migrations. Use Advanced Database & Application Migration (ADAM) for a database compatibility assessment before migration. For details, see Prepare an Oracle database and Overview.
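The ARCHIVELOG and supplemental logging prerequisites above can be checked, and supplemental logging enabled, with statements along the following lines. This is a sketch using standard Oracle SQL; run it as a privileged user and confirm the exact syntax against your Oracle version's documentation.

```sql
-- Check whether the database runs in ARCHIVELOG mode (expect: ARCHIVELOG).
SELECT log_mode FROM v$database;

-- Check the current supplemental logging settings (expect: YES, YES).
SELECT supplemental_log_data_pk, supplemental_log_data_ui
FROM   v$database;

-- Enable supplemental logging for primary key and unique index columns.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```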
Billing
| Migration type | Task configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
| Migration type | Description |
|---|---|
| Schema migration | Migrates object schemas from Oracle to ApsaraDB RDS for PostgreSQL. Supported object types: table, view, synonym, trigger, stored procedure, stored function, package, and custom type. Note: DTS is incompatible with triggers. Delete triggers from the source database before migration to prevent data inconsistency. For details, see Configure a data synchronization task for a source database that contains a trigger. |
| Full data migration | Migrates all existing data from the Oracle database to ApsaraDB RDS for PostgreSQL. Do not perform DDL operations on objects being migrated during this phase. |
| Incremental data migration | After full data migration completes, DTS polls redo log files from the Oracle database and applies incremental changes to ApsaraDB RDS for PostgreSQL. This keeps your source database operational throughout the migration. |
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (excluding statements with partitioning clauses, subpartitioning clauses, or functions; CREATE TABLE AS SELECT is not supported), ALTER TABLE (ADD COLUMN, ADD INDEX, DROP COLUMN, DROP INDEX, MODIFY COLUMN, RENAME COLUMN), DROP TABLE, RENAME TABLE, TRUNCATE TABLE, CREATE INDEX |
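As an illustration of the DDL scope above, only the first of the following two statements is captured by incremental migration; the CREATE TABLE AS SELECT form is excluded (table and column names here are hypothetical):

```sql
-- Supported: plain CREATE TABLE without partitioning clauses or functions.
CREATE TABLE orders_archive (
  order_id   NUMBER PRIMARY KEY,
  created_at DATE
);

-- Not supported for incremental migration: CREATE TABLE AS SELECT.
CREATE TABLE orders_archive_copy AS SELECT * FROM orders_archive;
```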
Limitations
- During schema migration, DTS migrates foreign keys from the source database to the destination database.
- During full data migration and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. If you perform cascade and delete operations on the source database during migration, data inconsistency may occur.
Source database limitations
- The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
- If the source database is an Oracle RAC database connected over Express Connect, you must specify a virtual IP address (VIP) when configuring the source database.
- If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), use a single VIP rather than a Single Client Access Name (SCAN) IP address when configuring the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.
- If a VARCHAR2 field in Oracle contains an empty string (which Oracle evaluates as null) and the corresponding field in PostgreSQL has a NOT NULL constraint, the migration task fails.
- Tables to migrate must have PRIMARY KEY or UNIQUE constraints with all fields unique. Without these, the destination database may contain duplicate records.
- For Oracle 12c or later, table names cannot exceed 30 bytes.
- If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database.
- For incremental migration, redo logging and archive logging must be enabled, and the logs must be retained long enough for DTS to read them:
  - Incremental data migration only: retain redo logs and archive logs for more than 24 hours.
  - Full data migration plus incremental data migration: retain redo logs and archive logs for at least 7 days. After full data migration completes, you can reduce the retention period to more than 24 hours.

  Failure to meet these retention requirements may cause DTS to lose access to the required logs, resulting in task failure or, in extreme cases, data loss or inconsistency. The DTS Service Level Agreement (SLA) does not cover failures caused by insufficient log retention.
- During schema migration and full data migration, do not perform DDL operations on the objects being migrated. Doing so causes migration failures.
- If you perform only full data migration (without incremental migration), do not write to the source database during the migration. To ensure data consistency, run schema migration, full data migration, and incremental data migration together.
- Do not update LONGTEXT fields during migration. Doing so causes migration failures.
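To find tables that violate the primary key / unique constraint requirement above before you start the task, a query along these lines can help. This is a sketch against Oracle's DBA_TABLES and DBA_CONSTRAINTS views; the owner name APP_SCHEMA is a placeholder for your schema.

```sql
-- List tables in a schema that have neither a primary key (P) nor a
-- unique (U) constraint; such tables risk duplicate records in the
-- destination database.
SELECT t.table_name
FROM   dba_tables t
WHERE  t.owner = 'APP_SCHEMA'
AND NOT EXISTS (
  SELECT 1
  FROM   dba_constraints c
  WHERE  c.owner           = t.owner
  AND    c.table_name      = t.table_name
  AND    c.constraint_type IN ('P', 'U')
);
```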
Other limitations
- Run migrations during off-peak hours. Full data migration uses read and write resources on both the source and destination databases, increasing database load.
- After full data migration, the destination tablespace is larger than the source tablespace because concurrent INSERT operations cause fragmentation.
- DTS automatically retries failed migration tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks. Alternatively, execute the REVOKE statement to revoke write permissions from the DTS accounts on the destination database. Otherwise, a resumed failed task may overwrite destination data with source data.
- Character sets of the source and destination databases must be compatible. Incompatible character sets cause data inconsistency or migration failure.
- Use the schema migration feature of DTS to avoid failures caused by incompatible data types.
- The source and destination databases must use the same time zone.
- DTS does not write the null terminator \0 to ApsaraDB RDS for PostgreSQL. If migrated data contains this character, data inconsistency may occur.
- CHECK constraints in Oracle are converted to NOT NULL constraints in ApsaraDB RDS for PostgreSQL.
- DTS migrates custom type objects to ApsaraDB RDS for PostgreSQL. Built-in objects automatically generated by Oracle are not migrated, because ApsaraDB RDS for PostgreSQL supports these natively.
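As noted above, one way to prevent a failed-but-retrying task from overwriting destination data is to revoke the DTS account's write privileges on the destination. In PostgreSQL this can look like the following sketch, where the account name dts_user and the schema public are placeholders for your own names:

```sql
-- Revoke DML privileges from the DTS account on all tables in the schema.
REVOKE INSERT, UPDATE, DELETE, TRUNCATE
  ON ALL TABLES IN SCHEMA public
  FROM dts_user;
```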
Set up database accounts
Log on to the Oracle database, create an account for DTS to use, and grant the required permissions.
If you already have an account with the permissions in the table below, skip this step.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| ApsaraDB RDS for PostgreSQL | Schema owner permissions | — | — |
To create accounts and grant permissions:
- Self-managed Oracle databases: see Prepare a database account, CREATE USER, and GRANT.
- ApsaraDB RDS for PostgreSQL instances: see Create an account.
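On the self-managed Oracle side, creating a dedicated migration account typically follows this pattern. This is a minimal sketch: dts_migrator and the password are placeholders, and the full set of fine-grained privileges required for incremental migration is listed in Prepare a database account.

```sql
-- Create a dedicated account for DTS (placeholder name and password).
CREATE USER dts_migrator IDENTIFIED BY "YourStrongPassword1";

-- Allow the account to connect and query the objects to migrate.
GRANT CREATE SESSION TO dts_migrator;
GRANT SELECT ANY TABLE TO dts_migrator;

-- Incremental migration also requires broader read access, for example:
GRANT SELECT ANY TRANSACTION TO dts_migrator;
```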
Create a migration task
- Go to the Data Migration Tasks page.
  - Log on to the Data Management (DMS) console.
  - In the top navigation bar, click DTS.
  - In the left-side navigation pane, choose DTS (DTS) > Data Migration.

  Note: Steps may vary depending on the DMS console mode and layout. For details, see Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration Tasks page of the new DTS console.
- From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

  Note: In the new DTS console, select the region from the upper-left corner.
- Click Create Task and configure the source and destination databases.

  Warning: After configuring the source and destination databases, review the limits displayed at the top of the page before proceeding. Skipping this review may cause task failures or data inconsistency.
Source database

| Parameter | Description |
|---|---|
| Task Name | A name for the migration task. DTS assigns a default name. Use a descriptive name to make the task easy to identify. |
| Database Type | Select Oracle. |
| Connection Type | The method used to access the source Oracle database. In this example, Public IP Address is selected. For other access methods, set up the required environment first. See Preparation overview. |
| Instance Region | The region where the source Oracle database resides. |
| Hostname or IP address | The IP address of the source Oracle database. |
| Port Number | The service port of the source Oracle database. Default: 1521. The port must be accessible over the Internet for this example. |
| Oracle Type | The architecture of the source Oracle database. Select Non-RAC Instance to configure the SID parameter, or RAC or PDB Instance to configure the Service Name parameter. This example uses Non-RAC Instance. |
| Database Account | The account for the source Oracle database. For required permissions, see Set up database accounts. |
| Database Password | The password for the database account. |

Destination database

| Parameter | Description |
|---|---|
| Database Type | Select PostgreSQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination ApsaraDB RDS for PostgreSQL instance resides. |
| Instance ID | The ID of the destination ApsaraDB RDS for PostgreSQL instance. |
| Database Name | The name of the destination database. |
| Database Account | The account for the destination instance. For required permissions, see Set up database accounts. |
| Database Password | The password for the database account. |

- If your source database uses an IP address whitelist, add the DTS server CIDR blocks to the whitelist. Then click Test Connectivity and Proceed.

  Warning: Adding DTS server CIDR blocks to a database whitelist or security group may introduce security risks. Before using DTS, take preventive measures such as strengthening username and password security, limiting exposed ports, authenticating API calls, regularly auditing whitelist and security group rules, and removing unauthorized CIDR blocks. Consider connecting the database to DTS over Express Connect, VPN Gateway, or Smart Access Gateway instead of using a public IP address.
- Configure migration objects and settings.

| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration, Full Data Migration, and Incremental Data Migration to ensure service continuity. Select only Schema Migration and Full Data Migration for a one-time migration with downtime. Note: Without incremental data migration, do not write to the source database during migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for table name conflicts before starting. The task fails the precheck if conflicts exist. Use object name mapping to resolve conflicts without deleting or renaming destination tables. Ignore Errors and Proceed: skips the conflict check. Warning: This option may cause data inconsistency. If schemas match, DTS skips records with duplicate primary keys. If schemas differ, only specific columns are migrated or the task fails. |
| Source Objects | Select objects from Source Objects and click the arrow icon to add them to Selected Objects. |
| Selected Objects | Right-click an object to rename it or set WHERE filter conditions. Click Batch Edit to rename multiple objects at once. Note: Renaming an object with the object name mapping feature may cause dependent objects to fail migration. For WHERE conditions, see Set filter conditions. To select the incremental SQL operations per table, right-click a table and choose the SQL operations to include. |
- Click Next: Advanced Settings and configure the following parameters.

| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | By default, DTS schedules the task to a shared cluster. To use a dedicated cluster, purchase one first. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | How long DTS retries if the source or destination database becomes unreachable after the task starts. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this period, the task resumes automatically. Note: If multiple tasks share a source or destination database, the most recently specified retry time applies. DTS charges for the instance during retry periods. Specify the retry time based on your business requirements, and release the DTS instance promptly after the source database and destination instance are released. |
| The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries if DDL or DML operations fail after the task starts. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be less than the Retry Time for Failed Connections value. |
| Enable Throttling for Full Data Migration | Limits DTS resource usage during full data migration to reduce load on database servers. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits DTS resource usage during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance. |
| Actual Write Code | The encoding format for data written to the destination database. |
| Configure ETL | Whether to enable the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. See also: What is ETL? |
| Monitoring and Alerting | Whether to configure alerts for task failures or migration latency exceeding a threshold. Select Yes to specify an alert threshold and contacts. See Configure monitoring and alerting for a new DTS task. |

- Click Next: Save Task Settings and Precheck. To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview API Call.

  DTS runs a precheck before starting the task. The task can start only after it passes the precheck.
  - If the precheck fails, click View Details next to each failed item, fix the issues, and click Precheck Again.
  - If the precheck produces an alert, either fix the issue and rerun the precheck, or click Confirm Alert Details > Ignore > OK > Precheck Again to proceed. Ignoring alerts may cause data inconsistency.
- Wait for the success rate to reach 100%, then click Next: Purchase Instance.
- On the Purchase Instance page, configure the following parameters.

| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: the default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. See Specifications of data migration instances. |

- Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
- Click Buy and Start. Monitor the task progress on the Task Management page.