Use Data Transmission Service (DTS) to migrate data from an ApsaraDB RDS for PPAS instance to a PolarDB for PostgreSQL (Compatible with Oracle) cluster with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration. Select all three types to keep your source instance serving traffic while the migration runs.
Prerequisites
Before you begin, make sure that you have:
- A PolarDB for PostgreSQL (Compatible with Oracle) cluster whose available storage space is larger than the total data size of the source ApsaraDB RDS for PPAS instance. See Create a PolarDB for PostgreSQL (Compatible with Oracle) cluster.
- A database account for the source ApsaraDB RDS for PPAS instance that has the required permissions (see Required permissions).
- A database account for the destination PolarDB for Oracle cluster that has schema owner permissions. See Create database accounts.
Naming consideration: If any database, table, or field names in the source instance contain uppercase letters, enclose those names in double quotation marks (") when creating the corresponding objects in the PolarDB for Oracle cluster.
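For example, for a hypothetical source table named Orders with a column OrderId, the corresponding destination DDL would quote both identifiers. Without the double quotation marks, PostgreSQL folds unquoted names to lowercase:

```sql
-- Hypothetical destination table: the double quotation marks preserve
-- the uppercase letters in "Orders" and "OrderId".
CREATE TABLE "Orders" (
    "OrderId"  integer PRIMARY KEY,
    amount     numeric(10, 2)
);
```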
Migration types
DTS supports three migration types for this scenario. Select all three for a live migration with minimal downtime.
| Migration type | What it migrates | Notes |
|---|---|---|
| Schema migration | Table, view, synonym, stored procedure, stored function, package, and user-defined type schemas | DTS does not migrate triggers. If an object depends on a trigger, data inconsistency may occur. |
| Full data migration | Historical data from the source database | Do not perform DDL operations on migrated objects during schema migration or full data migration. |
| Incremental data migration | Ongoing changes captured from redo log files: INSERT, UPDATE, DELETE | DDL operations are not synchronized. Enables service continuity during migration. |
Recommendation: Select Schema Migration, Full Data Migration, and Incremental Data Migration together. If you select only the first two, do not write to the source instance during migration to avoid data inconsistency.
Billing
| Migration type | Task configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration + full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Required permissions
Grant the following permissions to the database accounts used for migration.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| ApsaraDB RDS for PPAS | Read | Read | Superuser role |
| PolarDB for Oracle cluster | Schema owner | Schema owner | Schema owner |
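As a sketch of how such accounts might be created (the account names, passwords, and schema name below are placeholders, and on ApsaraDB RDS you may need to create privileged accounts through the console rather than with SQL):

```sql
-- On the source ApsaraDB RDS for PPAS instance: a login role with
-- superuser privileges, as required for incremental data migration.
CREATE ROLE dts_migrator LOGIN SUPERUSER PASSWORD 'YourStrongPassword1';

-- On the destination PolarDB for Oracle cluster: make the migration
-- account the owner of the schema that receives the migrated objects.
CREATE ROLE dts_writer LOGIN PASSWORD 'YourStrongPassword2';
ALTER SCHEMA app_schema OWNER TO dts_writer;
```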
Before you start
Performance impact
DTS consumes read resources on the source database and write resources on the destination database during full and incremental data migration. This increases the load on both databases.
To reduce the risk of slow queries, out-of-memory (OOM) errors, or service interruption:
- Run the migration when CPU utilization on both the source and destination databases is below 30%.
- Make sure the PolarDB for Oracle cluster specs meet or exceed the source ApsaraDB RDS for PPAS instance specs (see Specification mapping).
Factors that increase load: slow SQL queries on the source, tables without primary keys, deadlocks on the destination.
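Because ApsaraDB RDS for PPAS is PostgreSQL-based, a standard catalog query can help you spot long-running statements on the source before you start. This is a sketch using `pg_stat_activity`; column names can vary slightly across versions:

```sql
-- List sessions whose current statement has run for more than one minute.
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '1 minute'
ORDER BY runtime DESC;
```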
Pre-migration constraints
Review these constraints before starting the migration task. If your setup does not meet these requirements, migration cannot proceed.
- Each data migration task migrates data from a single database. To migrate multiple databases, create a separate task for each.
- Source tables must have a PRIMARY KEY or UNIQUE constraint with unique fields. Without such a constraint, the destination database may contain duplicate records (a check query follows this list).
- DTS migrates foreign keys during schema migration. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source database during this window, data inconsistency may occur.
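The following sketch finds tables that would violate the primary key requirement. Run it on the source instance; it lists user tables that have neither a PRIMARY KEY nor a UNIQUE constraint:

```sql
-- List user tables without a PRIMARY KEY ('p') or UNIQUE ('u') constraint.
SELECT n.nspname AS schema_name,
       c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
        SELECT 1
        FROM pg_constraint con
        WHERE con.conrelid = c.oid
          AND con.contype IN ('p', 'u')
      );
```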
Operational restrictions
Follow these rules while the migration task is running.
- Do not perform DDL operations on objects that are being migrated. This applies during schema migration, full data migration, and incremental data migration.
- If a migration task fails, DTS automatically resumes it. Stop or release the task before you switch your workload to the destination cluster. Otherwise, writes from the resumed task may overwrite data in the destination.
Specification mapping
Select a PolarDB for Oracle cluster with specs at or above your ApsaraDB RDS for PPAS instance. Insufficient CPU or memory after migration can cause slow queries or OOM errors.
If you have specific requirements for connection count or IOPS, see Specifications of compute nodes.
| PPAS instance type | PPAS CPU and memory | Recommended PolarDB for Oracle cluster type | Recommended CPU and memory |
|---|---|---|---|
| rds.ppas.t1.small | 1 core, 1 GB | polar.o.x4.medium | 2 cores, 8 GB |
| ppas.x4.small.2 | 1 core, 4 GB | polar.o.x4.medium | 2 cores, 8 GB |
| ppas.x4.medium.2 | 2 cores, 8 GB | polar.o.x4.medium | 2 cores, 8 GB |
| ppas.x8.medium.2 | 2 cores, 16 GB | polar.o.x4.large | 4 cores, 16 GB |
| ppas.x4.large.2 | 4 cores, 16 GB | polar.o.x4.large | 4 cores, 16 GB |
| ppas.x8.large.2 | 4 cores, 32 GB | polar.o.x4.xlarge | 8 cores, 32 GB |
| ppas.x4.xlarge.2 | 8 cores, 32 GB | polar.o.x4.xlarge | 8 cores, 32 GB |
| ppas.x8.xlarge.2 | 8 cores, 64 GB | polar.o.x8.xlarge | 8 cores, 64 GB |
| ppas.x4.2xlarge.2 | 16 cores, 64 GB | polar.o.x8.2xlarge | 16 cores, 128 GB |
| ppas.x8.2xlarge.2 | 16 cores, 128 GB | polar.o.x8.2xlarge | 16 cores, 128 GB |
| ppas.x4.4xlarge.2 | 32 cores, 128 GB | polar.o.x8.4xlarge | 32 cores, 256 GB |
| ppas.x8.4xlarge.2 | 32 cores, 256 GB | polar.o.x8.4xlarge | 32 cores, 256 GB |
| rds.ppas.st.h43 | 60 cores, 470 GB | polar.o.x8.8xlarge | 64 cores, 512 GB |
Create a migration task
Step 1: Open the Data Migration Tasks page
- Log on to the Data Management (DMS) console.
- In the top navigation bar, click DTS.
- In the left-side navigation pane, choose DTS (DTS) > Data Migration.
You can also go directly to the Data Migration Tasks page of the new DTS console. Console layout may vary. See Simple mode and Configure the DMS console based on your business requirements.
Step 2: Select a region
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Step 3: Configure source and destination databases
Click Create Task, then configure the parameters below.
Source database (ApsaraDB RDS for PPAS)
| Parameter | Value |
|---|---|
| Task name | DTS auto-generates a name. Specify a meaningful name to identify the task. Task names do not need to be unique. |
| Database type | Select PostgreSQL. ApsaraDB RDS for PPAS does not appear as an option. |
| Access method | Select Express Connect, VPN Gateway, or Smart Access Gateway. |
| Instance region | The region where the source ApsaraDB RDS for PPAS instance resides. |
| Replicate data across Alibaba Cloud accounts | Select No for same-account migration. |
| Connected VPC | The virtual private cloud (VPC) connected to the source instance. |
| IP address | The private IP address of the ApsaraDB RDS for PPAS instance. |
| Port number | The service port of the source instance. Default: 3433. |
| Database account | The account for the source instance. |
| Database password | The password for the source account. |
Destination database (PolarDB for Oracle)
| Parameter | Value |
|---|---|
| Database type | Select PolarDB (Compatible with Oracle). |
| Access method | Select Alibaba Cloud Instance. |
| Instance region | The region where the destination PolarDB for Oracle cluster resides. |
| Instance ID | The ID of the destination PolarDB for Oracle cluster. |
| Database name | The name of the destination database in the cluster. |
| Database account | The account for the destination cluster. See Required permissions. |
| Database password | The password for the destination account. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances that host self-managed databases. For self-managed databases in data centers or on third-party clouds, add the DTS CIDR blocks to the database whitelist manually. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
Adding DTS CIDR blocks to a whitelist or security group introduces security exposure. Before proceeding, take precautions such as using strong passwords, limiting exposed ports, authenticating API calls, and auditing whitelist rules regularly. After the task completes or is released, remove the DTS CIDR blocks from the whitelist.
Step 5: Select objects and configure settings
Basic settings
| Parameter | Description |
|---|---|
| Task stages | Select Schema Migration, Full Data Migration, and Incremental Data Migration for a live migration. Select only the first two for an offline migration (no writes to the source during migration). |
| Processing mode of conflicting tables | Precheck and Report Errors: The precheck fails if the destination already contains tables with the same names as the source. Use object name mapping to rename destination objects if needed. Ignore Errors and Proceed: Skips the check. If schemas differ, only some columns may migrate or the task may fail. Data inconsistency is likely — use with caution. |
| Capitalization of object names in destination instance | Controls the case of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source objects | Select objects from the Source Objects list and click the right arrow icon to move them to Selected Objects. Note: The time field supports the TIMESTAMP data type. |
| Selected objects | To rename a single object, right-click it in the Selected Objects section. To rename multiple objects at once, click Batch Edit in the upper-right corner. See Map object names. Note: Renaming an object may cause dependent objects to fail migration. |
Advanced settings
| Parameter | Description |
|---|---|
| Set alerts | No: No alerts. Yes: Set alert thresholds and contacts to receive notifications when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting when you create a DTS task. |
| Retry time for failed connection | The time window during which DTS retries failed connections. Valid values: 10 to 1440 minutes. Default: 120 minutes. Set this to at least 30 minutes. If DTS reconnects within the window, the task resumes automatically. Note: If multiple tasks share the same source or destination, the most recently set retry window takes precedence. DTS instance charges apply during retry periods. |
| Configure ETL | Yes: Enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No: Skip ETL configuration. See also What is ETL?. |
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
DTS runs a precheck automatically. The task cannot start until the precheck passes.
- If an item fails: click View Details next to the failed item, resolve the issue, and rerun the precheck.
- If an alert is triggered:
  - If the alert cannot be ignored: resolve the issue and rerun the precheck.
  - If the alert can be ignored: click Confirm Alert Details, click Ignore in the dialog box, click OK, and then click Precheck Again. Ignoring alerts may lead to data inconsistency.
Step 7: Purchase the migration instance
Wait for the precheck success rate to reach 100%, then click Next: Purchase Instance.
On the Purchase Instance page, set the Instance Class parameter. The instance class determines migration speed. See Specifications of data migration instances.
Step 8: Start the task
- Read and select the checkbox for Data Transmission Service (Pay-as-you-go) Service Terms.
- Click Buy and Start.
The migration task appears in the task list. Monitor its progress there.
Cut over to the destination cluster
After incremental data migration reaches a stable state (migration latency approaches zero), you can cut over. Complete the following steps before switching your workload.
- Handle sequences. After cutover, sequences in the destination do not automatically continue from the maximum values of the corresponding source sequences. Query the maximum sequence values in the source before you switch, then set them as the initial values in the destination. Run the following command on the source ApsaraDB RDS for PPAS instance to generate the `setval` statements:

  ```sql
  do language plpgsql $$
  declare
    nsp name;
    rel name;
    val int8;
  begin
    for nsp, rel in
      select nspname, relname
      from pg_class t2, pg_namespace t3
      where t2.relnamespace = t3.oid and t2.relkind = 'S'
    loop
      execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
      raise notice '%', format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val + 1);
    end loop;
  end;
  $$;
  ```

  Run the generated `setval` statements on the destination PolarDB for Oracle cluster.
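  For example, if a hypothetical sequence public.order_id_seq has a last_value of 1041 on the source, the script prints `select setval('public.order_id_seq'::regclass, 1042);`, which you would then run on the destination.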
- Stop the migration task. Before redirecting your application to the destination, stop or release the DTS migration task. If the task is still running and DTS automatically resumes it after a failure, resumed writes from the source will overwrite data in the destination.
- Switch your application connection to the PolarDB for Oracle cluster endpoint. To enable read/write splitting and reduce load on the cluster, use the cluster endpoint. See View or apply for an endpoint.
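Before you route production traffic, a quick consistency spot check is worthwhile. The following is a minimal sketch that compares row counts for a single table; the schema and table names are hypothetical, and a row count is only a coarse signal, not a full data comparison:

```sql
-- Run the same statement on the source PPAS instance and on the
-- destination PolarDB for Oracle cluster, then compare the results.
-- app_schema."Orders" is a hypothetical schema-qualified table name.
SELECT count(*) AS row_count
FROM app_schema."Orders";
```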
What's next
- Specifications of compute nodes: right-size your PolarDB for Oracle cluster for connection count and IOPS requirements.
- Map object names: rename objects during migration to resolve naming conflicts.
- Billing overview: understand DTS pricing for incremental migration tasks.