
Data Transmission Service: Synchronize data from a self-managed Oracle database to a PolarDB-X 2.0 instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) synchronizes data from a self-managed Oracle database to a PolarDB-X 2.0 instance in real time. It supports schema synchronization, full data synchronization, and incremental data synchronization, and replicates DML and DDL operations from the source database.

Prerequisites

Before you begin, make sure that:

  • Your Oracle database is one of these versions: 19c, 18c, 12c, 11g, 10g, or 9i

  • The Oracle database runs in ARCHIVELOG mode with accessible archived log files and an appropriate retention period — see Managing Archived Redo Log Files

  • Supplemental logging is enabled on the Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes — see Supplemental Logging

  • The PolarDB-X 2.0 instance has more available storage space than the total data size of the Oracle database

  • You have reviewed the capabilities and limits of DTS for Oracle synchronization, and used Advanced Database & Application Migration (ADAM) for database evaluation — see Prepare an Oracle database and Overview
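
The log-related prerequisites above can be verified with standard Oracle queries. A minimal sketch, to be run as a user with the ALTER DATABASE privilege (for example, SYS):

```sql
-- Check that the database runs in ARCHIVELOG mode (expected result: ARCHIVELOG)
SELECT log_mode FROM v$database;

-- Check supplemental logging status (expected result: YES for both columns)
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

-- If either is not enabled, turn on database-level supplemental logging
-- for primary key and unique index columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```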

Billing

  • Schema synchronization and full data synchronization: Free

  • Incremental data synchronization: Charged — see Billing overview

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way many-to-one synchronization

  • One-way cascade synchronization

For details, see Synchronization topologies.

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE; CREATE VIEW, ALTER VIEW, DROP VIEW; CREATE PROCEDURE, ALTER PROCEDURE, DROP PROCEDURE; CREATE FUNCTION, DROP FUNCTION; CREATE TRIGGER, DROP TRIGGER; CREATE INDEX, DROP INDEX

Permissions required

  • Self-managed Oracle database: fine-grained permissions are required — see Prepare a database account, CREATE USER, and GRANT
Important

To synchronize incremental data from an Oracle database, enable archive logging and supplemental logging first. See the Configure an Oracle database section of "Prepare an Oracle database".
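
The exact privilege set depends on your Oracle version and edition — follow "Prepare a database account". As a purely hypothetical sketch of what such an account setup can look like (the account name dtsuser, the password, and the privilege list below are illustrative assumptions, not the authoritative list; run as SYS):

```sql
-- Hypothetical example account for DTS; the actual grants required
-- are listed in "Prepare a database account"
CREATE USER dtsuser IDENTIFIED BY "Example#Passw0rd";
GRANT CREATE SESSION TO dtsuser;
GRANT SELECT ANY TABLE TO dtsuser;
GRANT SELECT ANY TRANSACTION TO dtsuser;
-- Views that log-based incremental capture typically needs to read
GRANT SELECT ON v_$log TO dtsuser;
GRANT SELECT ON v_$logfile TO dtsuser;
GRANT SELECT ON v_$archived_log TO dtsuser;
```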

Limitations

DTS synchronizes foreign keys during schema synchronization. During full and incremental data synchronization, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. If you run cascade update or delete operations on the source database while a synchronization task is running, data inconsistency may occur.

Source database limitations

  • Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records. Add the constraints before you start the task.

  • In Oracle 12c and later, table names cannot exceed 30 bytes; tables with longer names cannot be synchronized. Rename such tables to stay within the limit.

  • If you edit destination objects (for example, rename tables or columns), you can synchronize at most 1,000 tables per task; exceeding this limit causes a request error. Configure multiple tasks to synchronize the tables in batches, or synchronize the entire database instead.

  • For an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the data synchronization task; otherwise, the task fails or behaves unexpectedly.

  • For an Oracle RAC database, you must use a VIP rather than a Single Client Access Name (SCAN) IP address when you configure the task. After you specify a VIP, node failover of the Oracle RAC database is not supported; plan for the absence of failover support.

  • Redo logs and archive logs must be retained long enough: more than 24 hours for incremental-only tasks, and at least 7 days for tasks that include full data synchronization. After full synchronization completes, you can reduce the retention period to more than 24 hours. If retention is insufficient, DTS cannot obtain the redo logs or archive logs, which can cause task failure, data inconsistency, or data loss, and DTS service level agreements (SLAs) do not apply. Set the retention periods before you start the task, and verify them after full synchronization completes.

  • Do not perform a primary/secondary switchover while a synchronization task is running; otherwise, the task fails. Schedule switchovers outside synchronization windows.

  • Do not update LONGTEXT fields during synchronization; otherwise, the task fails. Pause updates to these fields while the task runs.

  • Do not execute DDL statements during schema synchronization or full data synchronization; otherwise, the task fails. Schedule schema changes outside these phases.

  • Oracle treats VARCHAR2 empty strings as NULL values. If the corresponding destination column has a NOT NULL constraint, synchronization fails. Remove the NOT NULL constraint from the destination column before synchronization.
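
The primary key requirement above can be checked up front. A sketch, assuming you connect as the owner of the schema to be synchronized:

```sql
-- List tables in the current schema that have neither a PRIMARY KEY
-- nor a UNIQUE constraint; add constraints to these before starting the task
SELECT table_name
FROM   user_tables
WHERE  table_name NOT IN (
         SELECT table_name
         FROM   user_constraints
         WHERE  constraint_type IN ('P', 'U')
       );
```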

Other limitations

  • Do not use Oracle Data Pump to write data to the source database during incremental data synchronization; otherwise, data loss may occur. Pause Data Pump operations while incremental synchronization is running.

  • During initial full data synchronization, DTS uses the read and write resources of both the source and destination databases, which increases server load and can degrade performance during peak hours. Run synchronization tasks during off-peak hours.

  • Concurrent INSERT operations during initial full data synchronization cause table fragmentation in the destination database, so the destination tablespace is larger than the source tablespace after initial full data synchronization. This is expected behavior; no action is needed unless storage is constrained.

  • If you use tools other than DTS to write to the destination database during synchronization, data inconsistency may occur; online DDL operations performed through Data Management (DMS) may cause data loss. Use only DTS to write data to the destination database during synchronization.

  • Synchronization latency is the difference between the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML activity occurs on the source, the latency metric drifts and the readings become inaccurate. Run a DML operation on the source to refresh the latency. If you selected an entire database as the object to synchronize, you can create a heartbeat table, which is updated every second.

Create a synchronization task

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

Navigation may vary depending on the DMS console mode. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Synchronization Tasks page of the new DTS console.

Step 2: Select the region

On the right side of the Data Synchronization Tasks page, select the region where the data synchronization instance resides.

In the new DTS console, select the region in the top navigation bar.

Step 3: Configure source and destination databases

Click Create Task. In the Create Data Synchronization Task wizard, configure the following parameters.

Task settings

  • Task Name: The name of the DTS task. DTS generates a name automatically; specify a descriptive name to make the task easy to identify. The name does not need to be unique.

Source Database

  • Select a DMS database instance: Select an existing database instance so that DTS populates the parameters below automatically, or leave this blank and configure the parameters manually.

  • Database Type: Select Oracle.

  • Connection Type: The network method for connecting to the source database. This example uses Self-managed Database on ECS. If you use a different access method, set up the corresponding network environment first — see Preparation overview.

  • Instance Region: The region where the source Oracle database resides.

  • Instance ID: The ID of the Elastic Compute Service (ECS) instance that hosts the Oracle database.

  • Port Number: The service port of the Oracle database. Default: 1521.

  • Oracle Type: The architecture of the source Oracle database. This example uses RAC or PDB Instance. Select Non-RAC Instance to configure an SID, or RAC or PDB Instance to configure a Service Name.

  • Database Account: The account of the source Oracle database.

  • Database Password: The password of the database account.

Destination Database

  • Select a DMS database instance: Select an existing database instance so that DTS populates the parameters below automatically, or leave this blank and configure the parameters manually.

  • Database Type: Select PolarDB-X 2.0.

  • Connection Type: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination PolarDB-X 2.0 instance resides.

  • Database Account: The database account of the PolarDB-X 2.0 instance. The account must have read and write permissions on the instance.

  • Database Password: The password of the database account.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically handles network access based on your database type:

  • Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB): DTS adds its CIDR blocks to the instance whitelist automatically.

  • Self-managed databases on ECS: DTS adds its CIDR blocks to the ECS instance security group rules automatically. Make sure the ECS instance can reach the database.

  • On-premises databases or third-party cloud databases: Add the DTS CIDR blocks to your database whitelist manually — see Add the CIDR blocks of DTS servers.

Warning

Adding DTS CIDR blocks to whitelists or security group rules introduces network exposure. Before proceeding, take preventive measures: strengthen database credentials, restrict exposed ports, authenticate API calls, review whitelist and security group rules regularly, and remove unauthorized CIDR blocks. Alternatively, connect the database to DTS via Express Connect, VPN Gateway, or Smart Access Gateway.

Step 5: Configure synchronization objects and settings

Configure the following parameters.

Synchronization Type

  • Synchronization Type: Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization. After the precheck passes, DTS synchronizes the historical data of the selected objects, which serves as the baseline for incremental synchronization.

Conflict handling

  • Processing Mode for Existing Destination Tables:

    Precheck and Report Errors: checks for tables in the destination database with the same names as source tables. The precheck passes only if no identical table names exist. If names conflict and the destination tables cannot be deleted or renamed, use the object name mapping feature to rename the synchronized tables — see Map object names.

    Ignore Errors and Proceed: skips the identical-name check. During full data synchronization, existing destination records whose primary or unique key values match source records are retained (not overwritten); during incremental data synchronization, they are overwritten. If the source and destination schemas differ, synchronization may fail or only some columns may be synchronized. Use this mode with caution.

Object selection

  • Source Objects: Select objects in the Source Objects section and click the right arrow icon to move them to the Selected Objects section. You can select columns, tables, or databases. If you select tables or columns, DTS does not synchronize other object types such as views, triggers, or stored procedures.

  • Selected Objects: To rename a single object in the destination, right-click it in the Selected Objects section — see the Map the name of a single object section of "Map object names". To rename multiple objects at a time, click Batch Edit — see Map multiple object names at a time. To filter the SQL operations to synchronize for a specific object, right-click the object and select the operations. To filter rows by using WHERE conditions, right-click an object and specify the conditions — see Set filter conditions.

Step 6: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to the shared cluster. To improve stability, purchase a dedicated cluster — see What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: The duration for which DTS retries when the source or destination database is unreachable. Valid values: 10 to 1440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If DTS reconnects within the retry window, it resumes the task; otherwise, the task fails. If multiple tasks share a source or destination database and have different retry windows, the shortest window takes effect. DTS charges for the instance during retries, so set this value based on your business needs and release the instance when it is no longer needed.

  • Retry Time for Other Issues: The duration for which DTS retries when DDL or DML operations fail after the task starts. Valid values: 1 to 1440 minutes. Default: 10 minutes. Set a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits the load on the databases during full data synchronization. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Displayed only when Full Data Synchronization is selected.

  • Enable Throttling for Incremental Data Synchronization: Limits the load on the databases during incremental data synchronization. Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

  • Environment Tag: Tags the DTS instance with an environment label. Select a tag based on your needs.

  • Actual Write Code: The encoding format for data written to the destination database. Select a format based on your needs.

  • Configure ETL: Specifies whether to configure the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements in the code editor — see Configure ETL in a data migration or data synchronization task. Select No to skip. See also What is ETL?

  • Monitoring and Alerting: Configures alerts for task failure or high synchronization latency. Select Yes to set alert thresholds and notification contacts — see Configure monitoring and alerting when you create a DTS task. Select No to skip.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration before saving, hover over the button and click Preview OpenAPI parameters.

DTS runs a precheck before starting the task. The task can only start after the precheck passes.

  • If the precheck fails: click View Details next to each failed item, resolve the issue, and click Precheck Again.

  • If an alert is triggered: if the alert can be ignored, click Confirm Alert Details, then Ignore, then OK, and then Precheck Again. Ignoring an alert may cause data inconsistency.

Step 8: Purchase the instance

Wait for the Success Rate to reach 100%, then click Next: Purchase Instance.

On the purchase page, configure the following parameters.

  • Billing Method: Subscription (pay upfront for a fixed term; more cost-effective for long-term use) or Pay-as-you-go (billed hourly; suitable for short-term use — release the instance when it is no longer needed to avoid charges).

  • Resource Group Settings: The resource group of the data synchronization instance. Default: default resource group — see What is Resource Management?

  • Instance Class: The synchronization speed of the instance. Select a class based on your throughput requirements — see Instance classes of data synchronization instances.

  • Subscription Duration: Available only with the Subscription billing method. Valid options: 1 to 9 months, or 1, 2, 3, or 5 years.

Step 9: Start the task

Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

The task appears in the task list. You can monitor its progress there.

FAQ

What happens if redo logs or archive logs are not retained long enough?

DTS cannot retrieve the log data it needs for incremental synchronization, which causes the task to fail. In some cases, this leads to data inconsistency or data loss. To recover, extend the retention period and restart the task. For incremental-only tasks, keep logs for more than 24 hours. For tasks that include full synchronization, keep logs for at least 7 days — you can reduce the retention to more than 24 hours after full synchronization completes.
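
You can check how much archived log history is actually still available. A sketch, run as a privileged user on the source:

```sql
-- The span between MIN(first_time) and SYSDATE is the effective retention
-- of archived logs that have not yet been deleted
SELECT MIN(first_time) AS oldest_available_log,
       SYSDATE         AS current_time
FROM   v$archived_log
WHERE  deleted = 'NO';
```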

The source database had a primary/secondary switchover while the task was running. What do I do?

The synchronization task fails when a primary/secondary switchover occurs on the source. Restart the task after the switchover completes and the source database is stable.

Synchronization latency is increasing. How do I fix it?

If no DML operations have run on the source for an extended period, the latency metric drifts and may not reflect actual delay. Run a DML operation on the source to refresh it. For long-running tasks where the source database goes idle regularly, create a heartbeat table — when you select an entire database as the sync object, DTS updates the heartbeat table every second to keep the latency metric accurate.
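
A refresh can be any committed DML on a synchronized table. A hypothetical sketch — the table app_orders and column updated_at are placeholder names, not objects DTS requires:

```sql
-- Touch one row of a synchronized table so that DTS observes fresh DML
-- and the latency metric catches up
UPDATE app_orders SET updated_at = SYSDATE WHERE ROWNUM = 1;
COMMIT;
```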

Can I use Oracle Data Pump while incremental synchronization is running?

No. Writing data to the source database with Oracle Data Pump during incremental synchronization causes data loss. Pause Data Pump operations until the synchronization task completes or is paused.

What's next