
Data Transmission Service:Migrate data from a PolarDB-X 2.0 instance to a self-managed Oracle database

Last Updated:Mar 28, 2026

Data Transmission Service (DTS) lets you migrate data from a PolarDB-X 2.0 instance to a self-managed Oracle database. Choose schema migration and full data migration for a one-time cutover, or add incremental data migration to keep the source and destination in sync during the transition window, minimizing downtime.

Prerequisites

Before you begin, ensure that you have:

  • A self-managed Oracle database with available storage space larger than the total data size of the source PolarDB-X 2.0 instance

  • A schema created in the Oracle database (a schema is automatically created when you create a user in Oracle)

  • Database accounts with the required permissions on both the source and destination databases (see Required permissions)
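Because an Oracle schema is created implicitly when its user is created, the destination schema can be prepared with statements along these lines. This is a sketch: the user name DTSTEST, the password, and the USERS tablespace are placeholders for your own values.

```sql
-- Run as a privileged user (for example, SYSTEM) on the destination Oracle database.
-- Creating the user implicitly creates a schema with the same name.
CREATE USER DTSTEST IDENTIFIED BY "YourStrongPassword1"
  DEFAULT TABLESPACE USERS
  QUOTA UNLIMITED ON USERS;

-- Allow the account to connect and to own objects in its schema.
GRANT CONNECT, RESOURCE TO DTSTEST;
```

The user's default tablespace must have enough free space to hold the migrated data, per the storage prerequisite above.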

Choose a migration type

Select the migration types that match your scenario before you start.

  • One-time cutover with no writes during migration: select Schema Migration and Full Data Migration.

  • Minimize downtime by keeping the source and destination in sync during the transition: select Schema Migration, Full Data Migration, and Incremental Data Migration.

If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during migration to maintain data consistency.

Limitations

Source database limitations

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records. Add primary key or unique constraints before migration.

  • If you rename tables or columns during migration, a single task supports up to 5,000 tables. Tasks that exceed this limit fail with a request error. Split the objects into multiple tasks, or migrate the entire database.

  • Tables whose names contain uppercase letters support schema migration only. Full and incremental data migration are not supported for these tables; plan a manual data transfer for them.

  • The binlog_row_image parameter must be set to full (binary logging is enabled by default). Otherwise, the precheck fails and the task cannot start. Set binlog_row_image to full. For details, see Parameter settings.
DTS migrates foreign keys from the source database during schema migration.
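Because PolarDB-X 2.0 is MySQL-compatible, the binary log settings above can be verified with standard statements. A sketch, run with any account that can read system variables:

```sql
-- Both results must satisfy the precheck:
-- log_bin should be ON, and binlog_row_image should be FULL.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_row_image';
```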

General limitations

  • Migrate during off-peak hours to reduce load on both database servers. Full data migration uses read and write resources on the source and destination.

  • Because DTS performs concurrent INSERT operations during full data migration, the tables in the destination database become fragmented. As a result, after full data migration completes, the used tablespace in the destination is larger than that in the source.

  • Write data to the destination only through DTS during migration to prevent data inconsistency.

  • At most one column's attributes can be changed. Basic data types cannot be changed.

  • DTS creates a database named dts_health_check and a table named ha_health_check in the source instance. DTS updates this table regularly to advance the binary log position.

Billing

  • Schema migration and full data migration: The instance configuration fee is free of charge.

  • Incremental data migration: The instance configuration fee is charged. For details, see Billing overview.

  • Internet traffic fee: Charged only when data is migrated out of Alibaba Cloud over the Internet. For details, see Billing overview.

SQL operations supported for incremental migration

DTS supports the following DML operations for incremental data migration: INSERT, UPDATE, and DELETE.

Required permissions

Grant the following permissions to the database accounts used for migration.

  • Source PolarDB-X 2.0 instance: the SELECT permission for schema migration and full data migration; the REPLICATION SLAVE and REPLICATION CLIENT permissions, plus the SELECT permission on the objects to be migrated, for incremental data migration.

  • Destination self-managed Oracle database: owner permissions on the schema, for all migration types.

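On the MySQL-compatible source, the incremental-migration permissions above can be granted with statements along these lines. A sketch: the account name dtsuser and the database name mydb are placeholders for your own values.

```sql
-- Grant read access to the objects to be migrated,
-- plus the replication privileges needed to read the binary log.
GRANT SELECT ON mydb.* TO 'dtsuser'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dtsuser'@'%';
FLUSH PRIVILEGES;
```

For the destination, use an Oracle account that owns the target schema, as described above.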

Migrate data

Step 1: Open the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

Navigation may vary based on your DMS console mode and layout. For details, see Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration Tasks page of the new DTS console.

Step 2: Select the region

From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.

In the new DTS console, select the region in the upper-left corner.

Step 3: Configure source and destination databases

Click Create Task. In the Create Task wizard, configure the following parameters.

Source database

  • Task Name: DTS automatically assigns a name to the task. We recommend that you specify a descriptive name that makes the task easy to identify. The name does not need to be unique.

  • Select an existing DMS database instance (optional): Select a registered instance to auto-populate its parameters, or leave this blank and configure the database settings manually.

  • Database Type: Select PolarDB-X 2.0.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the source PolarDB-X 2.0 instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No if the source and destination are under the same account.

  • Instance ID: The ID of the source PolarDB-X 2.0 instance.

  • Database Account: The database account of the source instance. For required permissions, see Required permissions.

  • Database Password: The password for the database account.

Destination database

  • Select an existing DMS database instance (optional): Select a registered instance to auto-populate its parameters, or leave this blank and configure the database settings manually.

  • Database Type: Select Oracle.

  • Access Method: Select Self-managed Database on ECS. If you use a different access method, set up the required network environment first. For details, see Preparation overview.

  • Instance Region: The region where the destination self-managed Oracle database resides.

  • ECS Instance ID: The ID of the Elastic Compute Service (ECS) instance that hosts the Oracle database.

  • Port Number: The service port of the Oracle database. Default value: 1521.

  • Oracle Type: Select the architecture: Non-RAC Instance (requires an SID) or RAC or PDB Instance (requires a Service Name).

  • Database Account: The account of the destination Oracle database. For required permissions, see Required permissions.

  • Database Password: The password for the database account.

Step 4: Test connectivity

In the lower part of the page, click Test Connectivity and Proceed.

Step 5: Configure network access for DTS

Add the CIDR blocks of DTS servers to the security settings of the self-managed database (such as firewall rules) to allow inbound access. Then click Test Connectivity and Proceed.

DTS handles network access automatically in most cases:

  • Alibaba Cloud database instances (ApsaraDB RDS, ApsaraDB for MongoDB, and others): DTS automatically adds its CIDR blocks to the instance IP address whitelist.

  • Self-managed database on a single ECS instance: DTS automatically adds its CIDR blocks to the ECS security group rules. Make sure the ECS instance can reach the database.

  • Self-managed database on multiple ECS instances: Manually add DTS CIDR blocks to the security group rules of each ECS instance.

  • On-premises or third-party cloud databases: Manually add DTS CIDR blocks to the database IP address whitelist.

For the full list of DTS CIDR blocks, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Warning

Adding DTS CIDR blocks to your database or ECS security settings introduces security exposure. Before you proceed, take preventive measures, such as:

  • Strengthening account credentials and limiting exposed ports

  • Enabling API authentication

  • Regularly auditing IP whitelists and security group rules, and removing unauthorized CIDR blocks

  • Connecting to DTS over Express Connect, VPN Gateway, or Smart Access Gateway

Step 6: Select objects and configure migration settings

Configure the following migration parameters.

  • Migration Types: Select the types based on your scenario. See Choose a migration type.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the precheck if the destination contains tables with the same names as the source; use object name mapping to rename conflicting tables that cannot be deleted (see Map object names). Ignore Errors and Proceed skips the check for identical table names; records with matching primary keys are skipped, and schema mismatches may cause partial migration or task failure.

  • Capitalization of Object Names in Destination Instance: Controls the casing of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: Select columns, tables, or databases and click the arrow icon to move them to the Selected Objects section. Selecting tables or columns excludes views, triggers, and stored procedures.

  • Selected Objects: Right-click the destination database and select Edit Schema. In the Schema Name field, enter the Oracle schema that will receive the data, in uppercase letters. To filter rows, right-click a table and specify WHERE conditions (see Use SQL conditions to filter data). Renaming objects with the object name mapping feature may cause dependent objects to fail to migrate.
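A row filter is an ordinary SQL condition, typically entered without the WHERE keyword itself. For example, to migrate only recent rows from a hypothetical orders table, the condition might look like:

```sql
-- Entered in the DTS filter dialog for the orders table (a hypothetical example);
-- only rows matching the condition are migrated.
gmt_create > '2025-01-01 00:00:00'
```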

Step 7: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Select the dedicated cluster used to schedule the task: By default, DTS schedules the task on the shared cluster. Purchase and select a dedicated cluster for predictable performance. See What is a DTS dedicated cluster.

  • Set Alerts: No disables alerting. Yes sends notifications to alert contacts when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting.

  • Retry Time for Failed Connections: How long DTS retries after a connection failure. Valid values: 10 to 1440 minutes. Default: 720 minutes. We recommend a value of more than 30 minutes. If the connection is restored within this window, the task resumes; otherwise, it fails. If multiple tasks share the same source or destination, the most recently set value applies. DTS charges for the instance during the retry period.

  • The wait time before a retry when other issues occur in the source and destination databases: How long DTS retries after DDL or DML operations fail. Valid values: 1 to 1440 minutes. Default: 10 minutes. We recommend a value of more than 10 minutes. This value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits read/write throughput during full data migration to reduce database load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Displayed only when Full Data Migration is selected.

  • Enable Throttling for Incremental Data Migration: Limits throughput during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Displayed only when Incremental Data Migration is selected.

  • Environment Tag: An optional tag that identifies the DTS instance.

  • Configure ETL: Yes enables extract, transform, and load (ETL) processing; enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task and What is ETL?. No disables ETL.

  • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Yes does not write heartbeat SQL statements to the source database; a latency indicator may appear on the DTS instance. No writes heartbeat SQL statements to the source, which may affect features such as physical backup and cloning.

Step 8: Run the precheck

Click Next: Save Task Settings and Precheck.

You can hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters to view the API parameters for this task.

DTS runs a precheck before starting the migration. The task cannot start until the precheck passes.

  • If a check item fails, click View Details next to it, fix the issue, and click Precheck Again.

  • If a check item shows an alert that can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may result in data inconsistency.

Wait until the success rate reaches 100%, then click Next: Purchase Instance.

Step 9: Purchase the migration instance

On the Purchase Instance page, configure the following parameters.

In the New Instance Class section, configure the following parameters:

  • Resource Group: The resource group to which the migration instance belongs. Default: the default resource group. See What is Resource Management?

  • Instance Class: The instance class determines the migration speed. Select a class based on your data volume and latency requirements. See Specifications of data migration instances.

Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box, then click Buy and Start.

The task appears in the task list. Monitor its progress there.