
Data Transmission Service: Migrate data from PolarDB-X 2.0 to MaxCompute

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to a MaxCompute project. DTS supports one-time full migration and continuous incremental migration, so you can choose a strategy based on your acceptable downtime.

How migration works

DTS supports three migration types, which you can combine:

Migration type | What it transfers | Supports incremental CDC
Schema migration | Table definitions and indexes | No
Full data migration | All existing rows at a point in time | No
Incremental data migration | Change data (INSERT, UPDATE, DELETE, ADD COLUMN) after full migration completes | Yes

Naming conventions in MaxCompute

DTS uses specific naming conventions for tables created in MaxCompute:

  • Schema migration: DTS adds the _base suffix to the source table name. For example, if the source table is named customer, the destination table in MaxCompute is named customer_base. The destination table suffixed with _base is known as a full baseline table.

  • Full data migration: Historical data is migrated to the corresponding _base table (for example, data from customer is migrated to customer_base). This serves as the baseline for subsequent incremental synchronization.

  • Incremental migration: DTS creates an incremental log table in MaxCompute. The table name is the destination table name (without the _base suffix) followed by _log (for example, customer_log). Incremental data is migrated to this table in real time.
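
For example, assuming a hypothetical source table named customer with two columns, the objects involved look as follows. This is only an illustration: the destination tables also pick up the metadata columns defined by the Additional Column Rule that you select when configuring the task.

  -- Hypothetical source table on the PolarDB-X 2.0 instance
  CREATE TABLE customer (
    id   BIGINT NOT NULL PRIMARY KEY,
    name VARCHAR(64)
  );

  -- Tables DTS creates in the destination MaxCompute project:
  --   customer_base  full baseline table (schema migration + full data migration)
  --   customer_log   incremental log table (incremental data migration)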

Choose a migration strategy

Goal | Migration types to select
One-time transfer with acceptable downtime | Schema migration + Full data migration
Continuous sync with minimal downtime | Schema migration + Full data migration + Incremental data migration
Important

If you select full data migration only, stop all writes to the source database during migration to prevent data inconsistency between the source and destination. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

SQL operations supported by incremental data migration

Operation type | Supported statements
DML | INSERT, UPDATE, DELETE
DDL | ADD COLUMN (ADD COLUMN operations that include attribute columns are not supported)

Prerequisites

Before you begin, make sure that you have:

  • A source PolarDB-X 2.0 instance that contains the objects to migrate.

  • A destination MaxCompute project in the target region.

  • An AccessKey pair for the Alibaba Cloud account that owns the MaxCompute project. You enter the AccessKey ID and AccessKey Secret when you configure the destination database.

Limitations

Review these limitations before starting the migration task.

Source database limitations

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Tables without these constraints may produce duplicate records in MaxCompute. A query for finding such tables follows this list.

  • Read-only instances of Enterprise Edition PolarDB-X 2.0 are not supported.

  • If you rename tables or columns during migration (object name mapping), a single task supports up to 1,000 tables. For more than 1,000 tables, split into multiple tasks or migrate the entire database instead.

  • TABLEGROUP and databases or schemas with the Locality attribute are not supported.

  • Tables with reserved word names (for example, select) are not supported.
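
The following query is a sketch for finding tables that lack a PRIMARY KEY or UNIQUE constraint on the source instance. It assumes MySQL-compatible information_schema views and a hypothetical database named mydb; adjust both to your environment.

  -- List tables in mydb that have neither a PRIMARY KEY nor a UNIQUE constraint
  SELECT t.table_schema, t.table_name
  FROM information_schema.tables AS t
  LEFT JOIN information_schema.table_constraints AS c
    ON  c.table_schema = t.table_schema
    AND c.table_name   = t.table_name
    AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  WHERE t.table_schema = 'mydb'
    AND t.table_type = 'BASE TABLE'
    AND c.constraint_name IS NULL;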

Binlog requirements for incremental data migration

If you include incremental data migration, configure the following binlog settings on the source PolarDB-X 2.0 instance before starting:

Setting | Required value | Notes
Binary logging | Enabled | Required for DTS to read change data
binlog_row_image | full | DTS rejects any other value during precheck
Binary log retention (incremental-only tasks) | More than 24 hours | Insufficient retention may cause task failure or data loss; the DTS SLA does not apply in this case
Binary log retention (full + incremental tasks) | At least 7 days | After full migration completes, you can reduce retention to more than 24 hours
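
Before you start the task, you can verify these settings on the source instance with standard MySQL-compatible statements. The checks below are a sketch; binlog retention on PolarDB-X 2.0 is typically managed at the instance level, so confirm the retention period in the console as well.

  -- Confirm that binary logging is enabled
  SHOW GLOBAL VARIABLES LIKE 'log_bin';

  -- Confirm that binlog_row_image is set to full
  SHOW GLOBAL VARIABLES LIKE 'binlog_row_image';

  -- Check the classic MySQL retention setting, if exposed by your instance
  SHOW GLOBAL VARIABLES LIKE 'expire_logs_days';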

Operational restrictions

  • During schema migration and full data migration: do not perform DDL operations (schema changes) on the source database. The task fails if DDL operations are detected.

  • During full and incremental migration: DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Cascade update or delete operations on the source database may cause data inconsistency in the destination.

  • If you change the network type of the PolarDB-X 2.0 instance during migration, update the network connection settings in the DTS migration task as well.

Other limits

  • Evaluate the performance of the source and destination databases before you start data migration. Perform data migration during off-peak hours. Full data migration consumes read and write resources on the source and destination databases, which can increase the database load.

  • DTS attempts to resume failed migration tasks that are created within the last seven days. Therefore, before you switch your business to the destination instance, you must end or release the task, or use the revoke command to revoke the write permissions of the DTS account on the destination database. This prevents data in the destination from being overwritten after the task is automatically resumed.

  • If an instance fails, DTS technical support tries to recover the instance within 8 hours. During recovery, operations such as restarting the instance and adjusting parameters may be performed. When parameters are adjusted, only the parameters of the DTS instance are modified; the parameters of the database are not changed.

  • DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log offset.

Permissions required

Grant the following permissions to the database accounts used by DTS:

Database | Schema migration | Full data migration | Incremental data migration
PolarDB-X 2.0 | SELECT | SELECT | REPLICATION SLAVE, REPLICATION CLIENT, and SELECT on objects to be migrated
MaxCompute | Read and write | Read and write | Read and write

For instructions on granting permissions to the PolarDB-X 2.0 account, see Account permission issues during data synchronization.
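
As a quick reference, the statements below sketch one way to grant these permissions to a hypothetical DTS account named dtsmigration using MySQL-compatible syntax. The account-management workflow for your instance may differ, so treat the linked topic as authoritative.

  -- Create a dedicated account for DTS (hypothetical name and password)
  CREATE USER 'dtsmigration'@'%' IDENTIFIED BY 'your_password';

  -- SELECT on the objects to be migrated (replace mydb with your database)
  GRANT SELECT ON mydb.* TO 'dtsmigration'@'%';

  -- Replication privileges required for incremental data migration
  GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dtsmigration'@'%';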

Billing

Migration type | Task configuration fee | Internet traffic fee
Schema migration + Full data migration | Free | Free when using Cloud Instance access. If you set the destination access method to Public IP Address, internet traffic fees apply. See Billing overview.
Incremental data migration | Charged. See Billing overview. | Same as above.

Create a migration task

The following steps use the DTS console. You can also use the DMS console — navigate to Data + AI > DTS (DTS) > Data Migration, then follow the same steps from step 2 onward.

Note

The actual DMS console layout may vary. See Simple mode and Customize the layout and style of the DMS console for more information.

Step 1: Open the Data Migration page

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will reside.

Step 2: Create a task and configure source and destination databases

Warning

After you select the source and destination instances, read the Limits at the top of the page to confirm that the migration task can be created and run successfully.

  1. Click Create Task.

  2. Configure the source database:

    Parameter | Description
    Task Name | DTS generates a name automatically. Enter a descriptive name for easy identification. The name does not need to be unique.
    Select Existing Connection | If the PolarDB-X 2.0 instance is already registered with DTS, select it from the drop-down list to pre-fill the parameters below. Otherwise, configure the parameters manually.
    Database Type | Select PolarDB-X 2.0.
    Access Method | Select Cloud Instance.
    Instance Region | Select the region where the source PolarDB-X 2.0 instance resides.
    Cross-account | Select No for migrations within the same Alibaba Cloud account.
    Instance ID | Select the ID of the source PolarDB-X 2.0 instance.
    Database Account | Enter the database account. See Permissions required for the required permissions.
    Database Password | Enter the password for the database account.
  3. Configure the destination database:

    Parameter | Description
    Database Type | Select MaxCompute.
    Access Method | Select Cloud Instance.
    Instance Region | Select the region where the destination MaxCompute project resides.
    Project | Enter the name of the destination MaxCompute project.
    AccessKey ID of Alibaba Cloud Account | Enter the AccessKey ID created in Prerequisites.
    AccessKey Secret of Alibaba Cloud Account | Enter the AccessKey Secret.
  4. Click Test Connectivity and Proceed.

  5. Click OK to grant DTS the necessary permissions to access the destination MaxCompute project.

Step 3: Configure objects to migrate

On the Configure Objects page, configure the following parameters:

Parameter | Description
Migration Types | Select the migration types based on your strategy. See Choose a migration strategy. If you skip Schema Migration, create the destination tables in MaxCompute manually and enable object name mapping in Selected Objects.
Processing Mode for Existing Destination Tables | Precheck and Report Errors: the precheck fails if the destination already has tables with the same names. Ignore Errors and Proceed: skips the name conflict check. During full migration, DTS skips records with conflicting primary keys and keeps the existing destination record. During incremental migration, DTS overwrites the existing destination record. If the source and destination schemas differ, the task may migrate only some columns or fail.
Additional Column Rule | DTS adds metadata columns to destination tables. Select New Rule or Old Rule based on your schema. Evaluate potential name conflicts before selecting.
Partition Definition of Incremental Data Table | Select a partition name if needed. See Partitions.
Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names in the destination. See Specify the capitalization of object names in the destination instance.

In the Source Objects section, select the objects to migrate. Move them to the Selected Objects section.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following:

Parameter | Description
Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. To improve stability, purchase and specify a dedicated cluster. See What is a DTS dedicated cluster.
Retry Time for Failed Connections | How long DTS retries a failed database connection before marking the task as failed. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set to at least 30 minutes. During retries, DTS instance fees continue to accrue.
Retry Time for Other Issues | How long DTS retries a failed DDL or DML operation. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set to at least 10 minutes. This value must be less than Retry Time for Failed Connections.
Enable Throttling for Full Data Migration | Limits the read and write load on the source and destination during full data migration. Configure QPS to the source database, RPS (rows per second), and migration speed (MB/s). Available only when Full Data Migration is selected.
Enable Throttling for Incremental Data Migration | Limits the load on the destination during incremental data migration. Configure RPS and migration speed (MB/s). Available only when Incremental Data Migration is selected.
Whether to delete SQL operations on heartbeat tables | Yes: DTS does not write to the heartbeat table. A latency indicator may appear for the DTS instance. No: DTS writes to the heartbeat table. This may affect physical backup and cloning of the source database.
Environment Tag | Optional. Tag to identify the environment, such as production or test.
Configure ETL | Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task.
Monitoring and Alerting | Whether to configure alerts. If the task fails or migration latency exceeds the threshold, alert contacts receive notifications. Select Yes to configure the alert threshold and notification settings. See Configure monitoring and alerting.

Step 5: Run the precheck

Click Next: Save Task Settings and Precheck.

  • If the precheck passes, proceed to the next step.

  • If the precheck fails, click View Details, fix the reported issues, and rerun the precheck.

  • For alert items: either fix the issue or click Confirm Alert Details > Ignore > OK > Precheck Again if the alert is acceptable.

Step 6: Purchase the instance and start the task

  1. After the precheck shows Success Rate at 100%, click Next: Purchase Instance.

  2. On the Purchase Instance page, configure:

    Parameter | Description
    Resource Group | The resource group to which the migration instance belongs. Default: default resource group. See What is Resource Management?.
    Instance Class | The instance class determines migration speed. See Instance classes of data migration instances.
  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Monitor the migration task

Go to the Data Migration page to monitor progress:

  • Full data migration only: The task stops automatically when complete. Status shows Completed.

  • Incremental data migration included: The task runs continuously and never stops automatically. Status shows Running.

Before switching business workloads to MaxCompute, end or release the task, or use the revoke command to remove the DTS account's write permissions on the destination database. This is necessary because DTS attempts to resume failed migration tasks created within the last seven days, and you must prevent the task from automatically resuming and overwriting data in the destination.
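
If the task writes to MaxCompute through a RAM user or a separate Alibaba Cloud account rather than the project owner, revoking that account's write privileges is one way to achieve this. The statements below are only a sketch using MaxCompute security syntax with hypothetical names (project dts_dest, table customer_base, account dts_account@example.com); ending or releasing the task is the simpler option.

  -- Run in the destination MaxCompute project (hypothetical names)
  USE dts_dest;

  -- Revoke write-related privileges on a migrated table from the DTS account
  REVOKE Alter, Update, Drop ON TABLE customer_base FROM USER ALIYUN$dts_account@example.com;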

Incremental log table structure

When DTS writes incremental data to MaxCompute, each incremental log table includes the following metadata fields in addition to the source table columns.

Note

Run set odps.sql.allow.fullscan=true; in MaxCompute before querying incremental log tables.

Field | Description
record_id | A unique, incrementally increasing ID for each log record. For UPDATE operations, DTS splits the change into two records (before and after the update) that share the same record_id.
operation_flag | The type of operation: I (INSERT), D (DELETE), or U (UPDATE).
utc_timestamp | The timestamp of the binary log entry, in UTC.
before_flag | Whether the row contains the column values before the update. Values: Y or N.
after_flag | Whether the row contains the column values after the update. Values: Y or N.
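
As an example of how these fields are typically used, the following MaxCompute SQL sketch rebuilds the latest state of a hypothetical customer table (columns id and name) by merging customer_base with customer_log. It assumes record_id can be cast to a number for ordering and ignores the partition column and any columns added by the Additional Column Rule; adapt it to your schema.

  -- Allow full table scans for ad hoc queries on the partitioned log table
  SET odps.sql.allow.fullscan=true;

  -- Latest row per primary key: newest after-image wins, deletes are dropped
  SELECT id, name
  FROM (
      SELECT id, name, operation_flag,
             ROW_NUMBER() OVER (PARTITION BY id ORDER BY record_id DESC) AS rn
      FROM (
          -- Baseline rows rank lowest (record_id 0)
          SELECT id, name, 'I' AS operation_flag, CAST(0 AS BIGINT) AS record_id
          FROM customer_base
          UNION ALL
          -- After-images of inserts and updates, plus delete markers
          SELECT id, name, operation_flag, CAST(record_id AS BIGINT) AS record_id
          FROM customer_log
          WHERE after_flag = 'Y' OR operation_flag = 'D'
      ) merged
  ) ranked
  WHERE rn = 1 AND operation_flag <> 'D';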

Data type mappings

For data type mappings between PolarDB-X 2.0 and MaxCompute, see Data type mappings for initial schema synchronization.

What's next