
Data Transmission Service:Migrate data from PolarDB-X 2.0 to DataHub

Last Updated:Mar 30, 2026

Use Data Transmission Service (DTS) to stream data changes from a PolarDB-X 2.0 instance into a DataHub project. DTS supports schema migration and incremental data migration for this path, letting your source database continue serving traffic while data flows to DataHub in near real time.

Before you begin

Before you begin, make sure you have:

  • A PolarDB-X 2.0 instance to use as the source.

  • A DataHub project created in the destination region.

  • A source database account with the permissions listed in Required permissions.

How it works

DTS connects to the PolarDB-X 2.0 instance, reads the binary logs, and publishes change events to the target DataHub project. The migration runs in two phases:

  • Schema migration: DTS replicates the table schemas (including foreign keys) from the source to DataHub before data flows.

  • Incremental data migration: After schema migration completes, DTS continuously captures INSERT, UPDATE, and DELETE operations and ADD COLUMN DDL statements from the binary log and writes them to DataHub. The source instance keeps serving application traffic throughout.
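Conceptually, each change captured in the incremental phase can be modeled as a small event record carrying the operation type and row images. The sketch below is illustrative only: the field names and the single-character operation flag are assumptions for demonstration, not the exact record schema DTS writes to DataHub.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeEvent:
    """Illustrative shape of one captured binlog change (field names assumed)."""
    operation: str                 # "INSERT", "UPDATE", or "DELETE"
    table: str                     # source table name
    before: Optional[dict] = None  # row image before the change (UPDATE/DELETE)
    after: Optional[dict] = None   # row image after the change (INSERT/UPDATE)

def to_record(event: ChangeEvent) -> dict:
    """Flatten a change event into a record-like dict for the destination topic."""
    row = event.after if event.after is not None else event.before
    # A single-character operation flag, in the spirit of DTS's additional columns.
    return {"operation_flag": event.operation[0], **(row or {})}

evt = ChangeEvent(operation="INSERT", table="orders", after={"id": 1, "amount": 9.5})
print(to_record(evt))  # {'operation_flag': 'I', 'id': 1, 'amount': 9.5}
```

This is why binlog_row_image must be full (see Limitations): without complete before and after row images in the binary log, UPDATE and DELETE events cannot be reconstructed into full destination records.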

Billing

  • Schema migration: the link configuration fee is free; the data transfer fee is not charged in this example.
  • Incremental data migration: the link configuration fee is charged; for data transfer fees, see Billing overview.

Limitations

Source database

  • Bandwidth: The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
  • Unsupported instance types: Read-only instances of Enterprise Edition PolarDB-X 2.0 are not supported.
  • Table constraints: Tables must have a PRIMARY KEY or UNIQUE constraint with all fields unique. Otherwise, the destination may contain duplicate records.
  • Table count: If you select tables as migration objects and rename them in the destination, a single task supports up to 1,000 tables. For larger migrations, split the tables across multiple tasks or migrate at the database level.
  • Binary logging (required for incremental migration): Binary logging must be enabled and binlog_row_image must be set to full. The precheck fails if either condition is not met.
  • Binary log retention: For incremental-only migration, retain binary logs for more than 24 hours. For full + incremental migration, retain binary logs for at least 7 days; after full migration completes, you can reduce retention to more than 24 hours. If the binary logs expire before DTS can read them, the task fails and data inconsistency or loss may occur. The DTS SLA does not cover failures caused by insufficient log retention.
  • DDL during migration: Do not run DDL operations that change database or table schemas during schema migration or full data migration. The task fails if DDL is detected.
  • Constraint checks: DTS temporarily disables constraint checks and foreign key cascade operations at the session level during migration. Cascade updates or deletes in the source during this period may cause data inconsistency.
  • Network type changes: If you change the network type of the PolarDB-X 2.0 instance during migration, update the network connection settings in the DTS task to match.
  • Full-only migration: Do not write to the source during full-only migration. For data consistency, select both schema migration and incremental data migration.
  • Unsupported objects: Tables in a TABLEGROUP or a database/schema with the Locality attribute cannot be migrated. Tables whose names are SQL reserved words (for example, select) are also not supported.
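Several of the binary-log requirements above are mechanical enough to sanity-check before the DTS precheck runs. The sketch below encodes those rules as a plain function; the variable names mirror SHOW VARIABLES output, and the function is illustrative only, not part of any DTS tooling.

```python
def check_binlog_prereqs(variables: dict, retention_hours: int,
                         migration_type: str) -> list:
    """Return a list of problems based on the binlog rules described above.

    `variables` mimics SHOW VARIABLES output, for example
    {"log_bin": "ON", "binlog_row_image": "FULL"}.
    """
    problems = []
    if variables.get("log_bin", "OFF").upper() != "ON":
        problems.append("binary logging is disabled")
    if variables.get("binlog_row_image", "").upper() != "FULL":
        problems.append("binlog_row_image must be FULL")
    if migration_type == "full+incremental":
        # Full + incremental: keep binary logs for at least 7 days.
        if retention_hours < 7 * 24:
            problems.append("retain binary logs for at least 7 days")
    elif retention_hours <= 24:
        # Incremental-only: keep binary logs for more than 24 hours.
        problems.append("retain binary logs for more than 24 hours")
    return problems

print(check_binlog_prereqs({"log_bin": "ON", "binlog_row_image": "FULL"},
                           72, "incremental"))  # []
```

Running the check before creating the task avoids a precheck round trip when a setting is obviously wrong.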

Other limits

  • Migration granularity: Only table-level migration is supported.
  • String field size: The maximum size of a single String field in the destination DataHub project is 2 MB.
  • Off-peak migration: Evaluate source and destination database performance before starting. Running the migration during off-peak hours reduces the impact on database load.
  • Task resumption: DTS automatically retries a failed task for up to 7 days. Before switching workloads to the destination, stop or release the task, or revoke write permissions from the DTS database account, to prevent the resumed task from overwriting destination data.
  • Instance recovery: If a DTS instance fails, the DTS helpdesk attempts recovery within 8 hours. During recovery, DTS may restart the instance or adjust DTS instance parameters (database parameters are not modified). For the list of parameters that may change, see Modify instance parameters.

Note: DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log offset.
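The 2 MB String field limit above can be screened for in application code before rows reach the migration path. A minimal sketch, assuming rows are plain dicts of column values:

```python
MAX_STRING_FIELD_BYTES = 2 * 1024 * 1024  # 2 MB per String field in DataHub

def oversized_string_fields(row: dict) -> list:
    """Return the names of string fields whose UTF-8 size exceeds the limit."""
    return [
        name for name, value in row.items()
        if isinstance(value, str)
        and len(value.encode("utf-8")) > MAX_STRING_FIELD_BYTES
    ]

row = {"id": 1, "payload": "x" * (MAX_STRING_FIELD_BYTES + 1), "note": "ok"}
print(oversized_string_fields(row))  # ['payload']
```

Measuring the UTF-8 byte length, not the character count, matters here: multi-byte characters can push a field over the limit well before it reaches two million characters.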

Required permissions

Grant the source database account the following permissions before creating the task.

  • SELECT (schema migration and incremental data migration): reads table schemas and data during both phases.
  • REPLICATION SLAVE (incremental data migration): connects to the source as a replica and reads binary log events.
  • REPLICATION CLIENT (incremental data migration): runs SHOW MASTER STATUS and SHOW BINARY LOGS to track log positions.

For instructions on granting permissions, see Account permissions required for data synchronization.
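For a MySQL-compatible source such as PolarDB-X 2.0, grants like the ones above are typically issued with standard GRANT statements. The helper below only builds the SQL text; the account name and host are placeholders, not values from this document.

```python
# Permissions per migration type, as listed in the table above.
REQUIRED_PRIVILEGES = {
    "schema": ["SELECT"],
    "incremental": ["SELECT", "REPLICATION SLAVE", "REPLICATION CLIENT"],
}

def grant_statement(migration_type: str, account: str, host: str = "%") -> str:
    """Build a MySQL-compatible GRANT statement; account and host are placeholders."""
    privileges = ", ".join(REQUIRED_PRIVILEGES[migration_type])
    # REPLICATION SLAVE and REPLICATION CLIENT are global privileges,
    # so they are granted ON *.* rather than on a single database.
    return f"GRANT {privileges} ON *.* TO '{account}'@'{host}';"

print(grant_statement("incremental", "dts_user"))
# GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';
```

Restricting the host pattern to the DTS server CIDR blocks instead of `%` is a reasonable hardening step where your environment allows it.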

Data type mappings

For data type mappings applied during schema migration, see Data type mappings for initial schema synchronization.

Create the migration task

The end-to-end configuration involves five steps:

  1. Open the Data Migration page.

  2. Configure source and destination databases.

  3. Select objects and configure settings.

  4. Save settings and run the precheck.

  5. Purchase the instance and start migration.

Step 1: Open the Data Migration page

Use one of the following consoles to open the Data Migration page.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance resides.

DMS console

Steps may vary based on your DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.

Step 2: Configure source and destination databases

  1. Click Create Task.

    Warning

    After selecting the source and destination instances, read the Limits section at the top of the configuration page before proceeding.

  2. Configure the task name and source database.

    • Task Name: DTS generates a name automatically. Use a descriptive name to identify the task. Uniqueness is not required.
    • Select Existing Connection: Select a registered instance from the drop-down list, or configure the connection manually if the instance is not registered. For registered instances, DTS auto-fills the remaining parameters. See Manage database connections.
    • Database Type: Select PolarDB-X 2.0.
    • Connection Type: Select Cloud Instance.
    • Instance Region: Select the region where the source PolarDB-X 2.0 instance resides.
    • Cross-account: Select No (same Alibaba Cloud account).
    • Instance ID: Select the ID of the source PolarDB-X 2.0 instance.
    • Database Account: Enter the database account. See Required permissions for the minimum permissions needed.
    • Database Password: Enter the password for the database account.
  3. Configure the destination database.

    • Select Existing Connection: Select a registered DataHub instance, or configure the connection manually.
    • Database Type: Select DataHub.
    • Connection Type: Select Cloud Instance.
    • Instance Region: Select the region where the destination DataHub project resides.
    • Project: Select the destination DataHub project.
  4. Click Test Connectivity and Proceed.

    DTS server CIDR blocks must be added to the security group or whitelist of the source and destination databases before the connection test can pass. See Add DTS server IP addresses to a whitelist.

Step 3: Select objects and configure settings

Configure objects

On the Configure Objects page, configure the following settings.

  • Migration Types: Select Schema Migration and Incremental Migration. Only these two types are supported for the PolarDB-X 2.0 to DataHub path. If you skip incremental migration, do not write to the source instance during migration.
  • Naming Rules of Additional Columns: DTS adds system columns to each destination table. Select New Rule or Old Rule based on your destination table schema. Verify that the additional column names do not conflict with existing column names before selecting. See Names and definitions of additional columns.
  • Processing Mode of Conflicting Tables: Precheck and Report Errors: the task fails the precheck if destination tables share names with source tables. Use object name mapping to rename conflicting tables before proceeding. Ignore Errors and Proceed: skips the duplicate-name check. If the schemas are identical and primary keys match, full migration skips existing records and incremental migration overwrites them. If the schemas differ, only matching columns are migrated, or the task fails entirely.
  • Capitalization of Object Names in Destination Instance: Controls the capitalization of database, table, and column names in DataHub. Defaults to DTS default policy. See Specify the capitalization of object names in the destination instance.
  • Source Objects: Select databases, tables, or columns from the Source Objects list and click the rightwards arrow icon to add them to Selected Objects. Selecting only tables excludes views, triggers, and stored procedures.
  • Selected Objects: To rename a single object, right-click it in the list (see Map the name of a single object). To rename multiple objects at once, click Batch Edit (see Map multiple object names at a time). Renaming an object may cause dependent objects to fail migration. To filter rows, right-click a table and set a WHERE condition (see Set filter conditions). To limit which SQL operations are migrated, right-click a migration object and select the operations.
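If you rename tables in the destination, the Limitations section caps a single task at 1,000 tables, so a larger selection has to be split across tasks. That split is simple chunking; the sketch below is illustrative, with the 1,000-table constant taken from the limit above.

```python
MAX_TABLES_PER_TASK = 1000  # per-task cap when tables are renamed (see Limitations)

def split_into_tasks(tables: list, cap: int = MAX_TABLES_PER_TASK) -> list:
    """Chunk a table selection so each chunk fits in one migration task."""
    return [tables[i:i + cap] for i in range(0, len(tables), cap)]

tables = [f"orders_{n}" for n in range(2500)]
print([len(chunk) for chunk in split_into_tasks(tables)])  # [1000, 1000, 500]
```

Keeping tables that reference each other in the same chunk avoids the dependent-object failures mentioned above when objects are renamed.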

Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to a shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
  • Retry Time for Failed Connections: How long DTS retries when a connection to the source or destination fails. Range: 10 to 1,440 minutes. Default: 720 minutes. Set a value greater than 30. If multiple tasks share the same source or destination database, the last-configured value takes effect. DTS charges for the instance during retry periods.
  • Retry Time for Other Issues: How long DTS retries when DDL or DML operations fail. Range: 1 to 1,440 minutes. Default: 10 minutes. Set a value greater than 10. This value must be less than Retry Time for Failed Connections.
  • Enable Throttling for Incremental Data Migration: Limits the migration speed to reduce load on the destination database. When enabled, configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when incremental data migration is selected.
  • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Yes: DTS does not write heartbeat SQL operations to the source; the console may display migration latency. No: DTS writes heartbeat SQL operations to the source; this may affect physical backup and cloning of the source database.
  • Environment Tag: Optional tag to identify the instance environment.
  • Configure ETL: Yes: Enable the extract, transform, and load (ETL) feature and enter data processing statements. See Configure ETL in a data migration or data synchronization task and What is ETL?. No: Disable ETL.
  • Monitoring and Alerting: Yes: Configure alert thresholds and notification contacts. Alerts fire when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting. No: Disable alerting.
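The two retry parameters above have documented ranges and an ordering constraint that are easy to validate before saving the task. A minimal sketch; the recommended minimums (greater than 30 and greater than 10 minutes) are noted in comments but not enforced, since they are recommendations rather than hard limits.

```python
def validate_retry_settings(connection_retry_min: int, other_retry_min: int) -> list:
    """Check the retry parameters against the documented ranges and ordering."""
    problems = []
    # Documented range: 10 to 1,440 minutes (values above 30 are recommended).
    if not 10 <= connection_retry_min <= 1440:
        problems.append("Retry Time for Failed Connections must be between 10 and 1440 minutes")
    # Documented range: 1 to 1,440 minutes (values above 10 are recommended).
    if not 1 <= other_retry_min <= 1440:
        problems.append("Retry Time for Other Issues must be between 1 and 1440 minutes")
    # Ordering rule from the table above.
    if other_retry_min >= connection_retry_min:
        problems.append("Retry Time for Other Issues must be less than Retry Time for Failed Connections")
    return problems

print(validate_retry_settings(720, 10))  # []
```

The defaults (720 and 10 minutes) already satisfy every hard constraint, so validation only matters when you tune the values.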

Step 4: Save settings and run the precheck

  1. Click Next: Save Task Settings and Precheck. To review the OpenAPI parameters for this task before saving, hover over the button and click Preview OpenAPI parameters.

  2. Wait for the precheck to complete.

    • If a check item fails, click View Details, fix the reported issue, and click Precheck Again.

    • If a check item shows an alert that can be safely ignored, click Confirm Alert Details > View Details > Ignore > OK, then click Precheck Again. Ignoring alert items may cause data inconsistency.

    The migration task cannot start until all precheck items pass.

Step 5: Purchase the instance and start migration

  1. Wait for Success Rate to reach 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the instance.

    • Resource Group: The resource group for the migration instance. Default: default resource group. See What is Resource Management?.
    • Instance Class: The instance class determines migration speed. Select a class based on your data volume and timing requirements. See Instance classes of data migration instances.
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK.

Verify the task

After the task starts, go to the Data Migration page to monitor its status.

  • Schema migration + incremental migration: The task runs continuously and does not stop automatically. The Status column shows Running.

  • Schema migration only: The task stops automatically. The Status column shows Completed.

What's next

  • To stop the migration and switch workloads to DataHub, stop or release the DTS task before redirecting traffic. If you do not stop the task, DTS may resume it within 7 days and overwrite data in the destination.

  • To modify the migration speed after the task starts, use the throttling settings in the task configuration.

  • For data type mapping details, see Data type mappings for initial schema synchronization.