
Data Transmission Service: Synchronize data from a self-managed Oracle database to a PolarDB for MySQL cluster

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) synchronizes data from a self-managed Oracle database to a PolarDB for MySQL cluster without interrupting your on-premises applications.

Prerequisites

Before you begin, make sure you have:

  • A PolarDB for MySQL cluster whose available storage exceeds the total size of the source Oracle database. See Purchase a pay-as-you-go cluster and Purchase a subscription cluster.

  • A database in the destination PolarDB for MySQL cluster to receive the synchronized data. See Create a database.

  • The source Oracle database running in ARCHIVELOG mode, with archived log files accessible and an appropriate retention period set. See Managing Archived Redo Log Files.

  • Supplemental logging enabled on the source Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. See Supplemental Logging.

  • Familiarity with DTS capabilities and limitations for Oracle sources. Advanced Database & Application Migration (ADAM) can evaluate your database to help plan the migration. See Prepare an Oracle database and Overview.
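Before you configure the task, you can verify the logging prerequisites from a SQL client connected to the source database. The following query reads the standard v$database view:

```sql
-- Verify the logging prerequisites on the source Oracle database.
-- Expect LOG_MODE = 'ARCHIVELOG' and both supplemental logging
-- columns = 'YES' before you start the DTS task.
SELECT log_mode,
       supplemental_log_data_pk,
       supplemental_log_data_ui
FROM   v$database;
```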

Billing

  • Schema synchronization and full data synchronization: free.

  • Incremental data synchronization: charged. See Billing overview.

SQL operations supported

  • DML: INSERT, UPDATE, DELETE.

  • DDL: CREATE TABLE (statements that contain functions are not supported); ALTER TABLE (ADD COLUMN, DROP COLUMN, RENAME COLUMN, ADD INDEX); DROP TABLE; RENAME TABLE; TRUNCATE TABLE; CREATE INDEX (only operations performed by the current database account are synchronized).

Permissions required

  • Self-managed Oracle database: fine-grained permissions. See Prepare a database account, CREATE USER, and GRANT.

  • PolarDB for MySQL cluster: write permissions on the destination database; use a privileged account. See Create and manage a database account.
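As a rough sketch, creating a dedicated synchronization account on the source Oracle database might look like the following. The account name dts_sync, the password, and the grant list are illustrative assumptions only; follow Prepare a database account for the authoritative set of fine-grained permissions.

```sql
-- Illustrative sketch only: the account name and the grants below
-- are examples, not the authoritative permission list.
CREATE USER dts_sync IDENTIFIED BY "YourStrongPassword1";
GRANT CREATE SESSION TO dts_sync;
GRANT SELECT ANY TABLE TO dts_sync;
GRANT SELECT ANY TRANSACTION TO dts_sync;
```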
Important

For incremental data synchronization from Oracle, enable archive logging and supplemental logging on the Oracle database before starting the task. See Configure an Oracle database.
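If supplemental logging is not yet enabled at the required levels, the standard Oracle statements are:

```sql
-- Run as a user with the ALTER DATABASE privilege.
-- Logs primary key and unique index columns for changed rows, which
-- sets SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI to YES.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```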

Limitations

During schema synchronization, DTS synchronizes foreign keys from the source to the destination database. During full and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade update or delete operations on the source during synchronization, data inconsistency may occur.

Source database limitations

  • Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all constrained fields must be unique. Without these constraints, the destination database may contain duplicate records.

  • If your Oracle database is version 12c or later, table names cannot exceed 30 bytes.

  • If you select individual tables (not an entire database) and need to rename tables or columns in the destination: a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks in batches or synchronize the entire database instead.

  • If the source database is an Oracle Real Application Cluster (RAC) database connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the data synchronization task.

  • If the self-managed Oracle database is an Oracle RAC database, you can use only a VIP rather than a Single Client Access Name (SCAN) IP address when you configure the data synchronization task. After you specify the VIP, node failover of the Oracle RAC database is not supported.

  • If a primary/secondary switchover occurs on the source database while the task is running, the task fails.

  • The task fails if the source contains empty strings of the VARCHAR2 type and the corresponding destination column has a NOT NULL constraint. Oracle processes empty VARCHAR2 strings as null values.
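To find tables in the current schema that lack both constraint types before you select objects, you can query the standard Oracle data dictionary views:

```sql
-- List tables in the current schema that have neither a PRIMARY KEY
-- nor a UNIQUE constraint; these may produce duplicate records at
-- the destination.
SELECT t.table_name
FROM   user_tables t
WHERE  NOT EXISTS (
         SELECT 1
         FROM   user_constraints c
         WHERE  c.table_name = t.table_name
         AND    c.constraint_type IN ('P', 'U'));
```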

Important

Redo logs and archive logs on the source database must be retained for more than seven days. If DTS cannot obtain these logs, the task fails. In exceptional cases, data inconsistency or loss may occur. Retention periods shorter than seven days are not covered by the DTS service level agreement (SLA).

Destination and general limitations

  • PolarDB for MySQL table names are not case-sensitive. If a source Oracle table name contains uppercase letters, PolarDB for MySQL converts them to lowercase before creating the table.

    If the source Oracle database contains table names that differ only in capitalization, data inconsistency or task failure may occur. Use the object name mapping feature to rename conflicting objects during task configuration.
  • Evaluate the impact on source and destination database performance before synchronizing. Run the task during off-peak hours when possible.

  • During full data synchronization, concurrent INSERT operations cause table fragmentation in the destination. After full synchronization completes, the destination tablespace is larger than the source.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized record in the destination versus the current source timestamp. If no DML operations run on the source for an extended period, the latency display may be inaccurate. Run a DML operation on the source to refresh the latency. If you synchronize an entire database, create a heartbeat table — DTS updates it every second to keep the latency accurate.

  • Write data to the destination database only through DTS during synchronization. Writing from other sources causes data inconsistency.

  • If DDL statements fail to execute in the destination database, the task continues to run. View failed DDL statements in the task logs. See View task logs.

  • Character sets of the source and destination databases must be compatible. Incompatible character sets may cause data inconsistency or task failure.

  • Use the schema synchronization feature of DTS to create objects in the destination. Otherwise, the synchronization task may fail due to incompatible data types.

  • The time zones of the source and destination databases must match.
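Because PolarDB for MySQL lowercases table names, you can spot source table names that will collide after conversion by grouping the data dictionary by the lowercased name:

```sql
-- Find source table names that differ only in capitalization; these
-- collide once PolarDB for MySQL converts them to lowercase. Rename
-- conflicting objects with the object name mapping feature.
SELECT LOWER(table_name) AS lowercased_name,
       COUNT(*)          AS conflicts
FROM   user_tables
GROUP  BY LOWER(table_name)
HAVING COUNT(*) > 1;
```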

Configure a synchronization task

Step 1: Open the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

    The navigation path may vary based on the DMS console mode. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Synchronization Tasks page in the new DTS console.
  4. On the right side of the page, select the region where the data synchronization instance resides.

    In the new DTS console, select the region in the top navigation bar.

Step 2: Configure source and destination databases

Click Create Task. In the Create Data Synchronization Task wizard, configure the following parameters.

Task name

  • Task Name: DTS generates a name automatically. Specify a descriptive name to help you identify the task. The name does not need to be unique.

Source database

  • Select an existing DMS database instance: (optional) select an existing database to have DTS populate the parameters below automatically. If you do not select one, configure the parameters manually.

  • Database Type: select Oracle.

  • Access Method: the access method for the source database. This example uses Self-managed Database on ECS. For other access methods, set up the required environment first. See Preparation overview.

  • Instance Region: the region where the source Oracle database resides.

  • ECS Instance ID: the ID of the Elastic Compute Service (ECS) instance that hosts the source Oracle database.

  • Port Number: the service port of the source Oracle database. Default: 1521.

  • Oracle Type: the architecture of the source Oracle database: Non-RAC Instance (requires an SID) or RAC or PDB Instance (requires a Service Name). This example uses RAC or PDB Instance.

  • Database Account: the account for the source Oracle database. See Permissions required.

  • Database Password: the password for the database account.

Destination database

  • Select an existing DMS database instance: (optional) select an existing database to have DTS populate the parameters below automatically. If you do not select one, configure the parameters manually.

  • Database Type: select PolarDB for MySQL.

  • Access Method: select Alibaba Cloud Instance.

  • Instance Region: the region where the destination PolarDB for MySQL cluster resides.

  • PolarDB Cluster ID: the ID of the destination PolarDB for MySQL cluster.

  • Database Account: the database account for the destination PolarDB for MySQL cluster. See Permissions required.

  • Database Password: the password for the database account.

  • Encryption: whether to encrypt the connection to the destination database. See Configure SSL encryption.

Step 3: Test connectivity

Click Test Connectivity and Proceed.

  • Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB): DTS automatically adds the CIDR blocks of DTS servers to the instance whitelist.

  • Self-managed databases hosted on Elastic Compute Service (ECS) instances: DTS automatically adds the DTS CIDR blocks to the security group rules of the ECS instance. Ensure that the ECS instance can access the database.

  • Databases in data centers or from third-party cloud providers: manually add the DTS CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.

Warning

Adding DTS CIDR blocks to your whitelist or security group rules introduces security risks. Before proceeding, take preventive measures such as: strengthening credentials, restricting exposed ports, authenticating API calls, regularly auditing whitelist rules, and removing unauthorized CIDR blocks. Alternatively, connect through Express Connect, VPN Gateway, or Smart Access Gateway.

Step 4: Select objects and configure synchronization settings

Configure the following synchronization parameters.

Synchronization settings

  • Synchronization Types: Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization. After the precheck completes, DTS synchronizes historical data from the source to the destination as the basis for subsequent incremental synchronization.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors: DTS checks for tables with identical names in the source and destination. If identical names exist, the precheck fails and the task cannot start; if the destination tables cannot be deleted or renamed, use the object name mapping feature to rename objects before starting. Ignore Errors and Proceed: DTS skips the identical-name precheck. During full synchronization, existing destination records with the same primary key or unique key are retained; during incremental synchronization, they are overwritten. If the schemas differ, synchronization may be partial or the task may fail.

  • Capitalization of Object Names in Destination Instance: controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: select databases, tables, or columns from Source Objects and click the right arrow to move them to Selected Objects.

  • Selected Objects: if the destination database account is not a privileged account, or the source schema name does not comply with PolarDB for MySQL naming conventions, right-click the schema in Selected Objects and set Schema Name to the target database name. To rename a table after synchronization, right-click the table and specify the new name. See Map object names. To filter rows, right-click a table and specify filter conditions. See Set filter conditions. If you rename an object by using object name mapping, other objects that depend on it may fail to synchronize.

Step 5: Configure advanced settings

Click Next: Advanced Settings and configure the following.

Data verification

For data verification configuration, see Configure data verification.

Advanced settings

  • Dedicated Cluster for Task Scheduling: by default, DTS schedules the task to the shared cluster. For improved task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

  • Select the engine type of the destination database: the storage engine for the destination database. Default: InnoDB. X-Engine is an online transaction processing (OLTP) database storage engine.

  • Retry Time for Failed Connections: how long DTS retries failed connections after the task starts. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set this to 30 minutes or more. If DTS reconnects within this window, the task resumes; otherwise, it fails. If multiple tasks share the same source or destination database, the shortest retry window takes precedence. DTS charges you during the retry period.

  • Retry Time for Other Issues: how long DTS retries failed DDL or DML operations. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set this to 10 minutes or more. This value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: limits the load on the destination during full synchronization. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Displayed only when Full Data Synchronization is selected.

  • Enable Throttling for Incremental Data Synchronization: limits the load on the destination during incremental synchronization. Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

  • Environment Tag: a tag to identify the DTS instance. Select one based on your requirements.

  • Actual Write Code: the encoding format for writing data to the destination database.

  • Configure ETL: whether to enable the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL.

  • Monitoring and Alerting: whether to configure alerts for the task. Select Yes to set an alert threshold and notification contacts; DTS notifies them if the task fails or synchronization latency exceeds the threshold. See Configure monitoring and alerting.

Step 6: Save settings and run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task can start. If the precheck fails:

  • Click View Details next to each failed item, resolve the issue, then rerun the precheck.

  • For alert items: if the item cannot be ignored, resolve it and rerun. If the item can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring an alert may result in data inconsistency.

Step 7: Purchase the synchronization instance

Wait for the Success Rate to reach 100%, then click Next: Purchase Instance.

On the buy page, configure the following.

  • Billing Method: Subscription: pay upfront; more cost-effective for long-term use. Pay-as-you-go: billed hourly; suitable for short-term use. Release a pay-as-you-go instance when you no longer need it to stop billing.

  • Resource Group Settings: the resource group for the synchronization instance. Default: default resource group. See What is Resource Management?

  • Instance Class: DTS provides instance classes with varying synchronization speeds. See Instance classes of data synchronization instances.

  • Subscription Duration: available only for the Subscription billing method. Options: 1 to 9 months, 1 year, 2 years, 3 years, or 5 years.

Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog box, click OK.

The task appears in the task list. Monitor its progress there.

What's next