
Data Transmission Service: Configure a data migration task

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) lets you migrate data between different data sources—including migrations to Alibaba Cloud, migrations between instances within Alibaba Cloud, and database splitting and scale-out scenarios. This topic explains how to configure a data migration task in a DTS dedicated cluster, using an ApsaraDB RDS for MySQL instance as both source and destination.

Prerequisites

Before you begin, make sure you have:

  • A DTS dedicated cluster. See Create a DTS dedicated cluster

  • Source and destination ApsaraDB RDS for MySQL instances. See Create an ApsaraDB RDS for MySQL instance

  • Available storage on the destination instance larger than the total data size of the source instance

  • (If source/destination and DTS dedicated cluster are in different regions) Internet access enabled for the source or destination instance

Billing

You are charged only for DTS dedicated cluster resources. Creating a migration task in a dedicated cluster incurs no additional fee. If you migrate data from an Alibaba Cloud database to another cloud, you are charged for the Internet traffic that the migration generates. See Billing of DTS dedicated clusters.

Migration types

DTS supports three migration types that you combine based on your goal:

Migration type | What it does | When to use it
Schema migration | Copies object schemas (tables, views, triggers, stored procedures, functions) from source to destination. Changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions. Does not migrate user accounts—grant INVOKER the necessary read and write permissions separately. | Always include this unless the destination schema already exists.
Full data migration | Copies all existing data from source to destination. | Include this for initial data seeding. Do not write to the source during a full-only migration.
Incremental data migration | After full migration completes, continuously applies changes from the source to the destination using binary logs. Keeps source and destination in sync without service interruption. | Include this to minimize downtime and keep services running during migration.

Recommended combination: Select all three types (schema migration + full data migration + incremental data migration) to migrate data without interrupting your application. Use full-only migration (schema migration + full data migration) only for one-time migrations where downtime is acceptable.

SQL operations supported for incremental data migration

Operation type | SQL operations
DML | INSERT, UPDATE, DELETE
DDL | ALTER TABLE, ALTER VIEW; CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW; DROP INDEX, DROP TABLE; RENAME TABLE; TRUNCATE TABLE

Required permissions

Grant the database accounts used by DTS the following permissions before starting the task:

Database | Schema migration | Full data migration | Incremental data migration
Source ApsaraDB RDS for MySQL | SELECT | SELECT | Read and write permissions
Destination ApsaraDB RDS for MySQL | Read and write permissions | Read and write permissions | Read and write permissions

If you plan to migrate accounts, the source account also needs SELECT on mysql.user, mysql.db, mysql.columns_priv, and mysql.tables_priv. The destination account needs CREATE USER and GRANT OPTION. System accounts (root, mysql.infoschema, mysql.session, mysql.sys) and accounts that already exist in the destination cannot be migrated.

For instructions on creating accounts and granting permissions, see Create an account on an ApsaraDB RDS for MySQL instance and Modify the permissions of a standard account on an ApsaraDB RDS for MySQL instance.
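On ApsaraDB RDS for MySQL you normally create accounts and grant permissions in the console (see the links above). For reference, a minimal SQL sketch of the grants described in this section follows. The account names, host patterns, and passwords are illustrative, and the exact privilege set that DTS treats as "read and write permissions" may differ, so adjust as needed.

```sql
-- Illustrative accounts; replace names, host patterns, and passwords.
CREATE USER IF NOT EXISTS 'dts_source'@'%' IDENTIFIED BY 'Use-a-strong-password-1';
CREATE USER IF NOT EXISTS 'dts_dest'@'%' IDENTIFIED BY 'Use-a-strong-password-2';

-- Source account: SELECT covers schema migration and full data migration.
-- (Incremental migration additionally needs read and write permissions.)
GRANT SELECT ON *.* TO 'dts_source'@'%';

-- Additional source grants needed only if you migrate accounts:
GRANT SELECT ON mysql.user TO 'dts_source'@'%';
GRANT SELECT ON mysql.db TO 'dts_source'@'%';
GRANT SELECT ON mysql.columns_priv TO 'dts_source'@'%';
GRANT SELECT ON mysql.tables_priv TO 'dts_source'@'%';

-- Destination account: read and write permissions, plus the
-- account-migration extras (CREATE USER and GRANT OPTION).
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON *.* TO 'dts_dest'@'%';
GRANT CREATE USER ON *.* TO 'dts_dest'@'%';
GRANT SELECT ON *.* TO 'dts_dest'@'%' WITH GRANT OPTION;
```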

Limits

During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade or delete operations on the source database during migration, data inconsistency may occur.

Source database:

  • The source server must have sufficient outbound bandwidth, or migration speed decreases.

  • Tables must have primary key or UNIQUE constraints with all fields unique. Without these, the destination database may contain duplicate records.

  • If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. Exceeding this limit causes a request error—split the tables into multiple tasks, or migrate the entire database instead.

  • For incremental data migration: set binlog_format to row, set binlog_row_image to full, and enable binary logging. If the self-managed MySQL database runs in a dual-primary cluster, set log_slave_updates to ON so DTS can read all binary logs.

  • For incremental-only migration: retain binary logs for more than 24 hours. For full data migration + incremental data migration: retain binary logs for at least 7 days. After full migration completes, you can reduce retention to more than 24 hours. Insufficient retention causes DTS to fail to obtain binary logs, which may result in task failure or data loss.

  • Do not perform DDL operations during schema migration or full data migration. Do not write to the source database during full-only migration—doing so causes data inconsistency.

Other:

  • Use the same engine version for source and destination MySQL databases.

  • Run migrations during off-peak hours. Full data migration uses read and write resources on both instances and increases server load.

  • Full data migration causes fragmentation in destination tables due to concurrent INSERT operations. After migration, the destination tablespace is larger than the source tablespace.

  • DTS uses ROUND(COLUMN, PRECISION) to read FLOAT and DOUBLE columns. The default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements.

  • DTS automatically retries failed tasks for up to 7 days. Before switching workloads to the destination database, stop or release the migration task—or run REVOKE to remove DTS write permissions on the destination. Otherwise, the resumed task overwrites destination data with source data.

Special cases:

  • For self-managed MySQL: a primary/secondary failover during an active migration task causes the task to fail.

  • For self-managed MySQL: if no DML operations run on the source for an extended period, migration latency reporting may be inaccurate. Run a DML operation to refresh the latency. If you migrate an entire database, create a heartbeat table that writes data every second.

  • DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log position.

  • For ApsaraDB RDS for MySQL destinations: DTS automatically creates the destination database unless the source database name is invalid. If the name is invalid, create the database manually before configuring the task.
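Two of the limits above can be addressed directly in SQL on a self-managed MySQL source: verifying the binary-log settings, and creating a heartbeat table for whole-database migrations. A sketch follows; the heartbeat table and event names are illustrative, and on ApsaraDB RDS for MySQL you would change the binlog settings in the console instead.

```sql
-- 1) Verify the binary-log settings DTS requires for incremental migration:
SHOW VARIABLES LIKE 'log_bin';            -- expect ON
SHOW VARIABLES LIKE 'binlog_format';      -- expect ROW
SHOW VARIABLES LIKE 'binlog_row_image';   -- expect FULL
SHOW VARIABLES LIKE 'log_slave_updates';  -- expect ON for dual-primary clusters

-- MySQL 8.0: retain binary logs for at least 7 days (604,800 seconds) during
-- full + incremental migration; persist the change in my.cnf as well.
SET GLOBAL binlog_expire_logs_seconds = 604800;

-- 2) Heartbeat table that writes a row every second, so latency reporting
--    stays accurate on an otherwise idle source:
CREATE TABLE IF NOT EXISTS dts_heartbeat (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  ts DATETIME(3) NOT NULL
);
SET GLOBAL event_scheduler = ON;  -- the event scheduler must be running
CREATE EVENT IF NOT EXISTS dts_heartbeat_tick
  ON SCHEDULE EVERY 1 SECOND
  DO INSERT INTO dts_heartbeat (ts) VALUES (NOW(3));
```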

Configure a migration task

The configuration covers four stages: connecting source and destination databases, selecting migration objects, configuring advanced settings, and starting the task.

Step 1: Connect the source and destination databases

  1. Go to the Dedicated Cluster page.

  2. In the top navigation bar, select the region where your DTS dedicated cluster resides.

  3. Find the cluster, then in the Actions column, choose Configure Task > Configure Data Migration Task.

  4. On the configuration page, read the Limits notice at the top before proceeding.

  5. Configure the Source Database parameters:

    Parameter | Description
    Task Name | Auto-generated. Specify a descriptive name to identify the task. Uniqueness is not required.
    Select an existing database connection (optional) | Select an existing connection to auto-populate source parameters. Skip this if configuring manually.
    Database Type | Select MySQL.
    Connection Type | Select Alibaba Cloud Instance.
    Instance Region | Set by the DTS dedicated cluster. Cannot be changed.
    Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration.
    RDS Instance ID | The ID of the source ApsaraDB RDS for MySQL instance. Source and destination can be the same instance.
    Database Account | The account with the permissions listed in Required permissions.
    Database Password | Password for the database account.
    Connection Method | Select Non-encrypted or SSL-encrypted. For SSL encryption, configure SSL on the instance before starting this task.
  6. Configure the Destination Database parameters:

    Parameter | Description
    Select an existing DMS database instance (optional) | Select an existing connection to auto-populate destination parameters. Skip this if configuring manually.
    Database Type | Select MySQL.
    Connection Type | Select Alibaba Cloud Instance.
    Instance Region | Set by the DTS dedicated cluster. Cannot be changed.
    RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance.
    Database Account | The account with the permissions listed in Required permissions.
    Database Password | Password for the database account.
    Connection Method | Select Non-encrypted or SSL-encrypted. For SSL encryption, configure SSL on the instance before starting this task.
  7. Click Test Connectivity and Proceed. DTS automatically adds its server CIDR blocks to the IP whitelist of Alibaba Cloud database instances, or to the security group rules of an Elastic Compute Service (ECS) instance hosting a self-managed database. For self-managed databases in on-premises data centers or third-party clouds, manually add the DTS server CIDR blocks to the database's IP whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

    Warning

    Adding DTS server CIDR blocks to IP whitelists or security group rules introduces security exposure. Before proceeding, take preventive measures: strengthen account and password security, limit exposed ports, authenticate API calls, audit the IP whitelist and security group rules regularly to remove unauthorized CIDR blocks, and consider using Express Connect, VPN Gateway, or Smart Access Gateway to connect the database to DTS.

Step 2: Select migration objects

Configure migration type, conflict handling, and the objects to migrate:

Parameter | Description
Migration Type | Select the combination that fits your goal. For full migration only: select Schema Migration and Full Data Migration. To keep services running during migration: select Schema Migration, Full Data Migration, and Incremental Data Migration.
Processing Mode of Conflicting Tables | Precheck and Report Errors: checks whether the destination contains tables with the same names as the source. The precheck passes only if no name conflicts exist. Use object name mapping to rename conflicting tables in the destination. Ignore Errors and Proceed: skips the name conflict check. If schemas match, DTS skips records with duplicate primary keys. If schemas differ, only specific columns are migrated or the task fails. Use with caution—data inconsistency may occur.
Method to Migrate Triggers in Source Database | Applies only when Schema Migration is selected. Choose a method based on your requirements. See Synchronize or migrate triggers from the source database.
Enable Migration Assessment | Applies only when Schema Migration is selected. Select Yes to check whether source and destination schemas (index lengths, stored procedures, dependent tables) meet migration requirements. Assessment results are visible during the precheck and do not affect precheck pass/fail.
Capitalization of Object Names in Destination Instance | Controls the case of database, table, and column names in the destination. The default is DTS default policy. See Specify the capitalization of object names in the destination instance.
Source Objects | Select objects from the Source Objects list and click the Rightwards arrow icon to add them to Selected Objects. Selectable granularity: columns, tables, or databases. Selecting tables or columns excludes views, triggers, and stored procedures from migration.
Selected Objects | Right-click an object to rename it or set WHERE conditions to filter rows. Click Batch Edit (upper-right) to rename multiple objects at once. See Map object names and Use SQL conditions to filter data. Renaming an object may cause dependent objects to fail migration.
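As a sketch of row filtering, a WHERE condition on a hypothetical orders table might restrict migration to recent, completed rows. The table and column names below are illustrative; see Use SQL conditions to filter data for the exact syntax DTS accepts.

```sql
-- DTS migrates only the rows a standard MySQL WHERE clause would match,
-- i.e. the rows returned by a query like:
SELECT *
FROM orders
WHERE gmt_created >= '2025-01-01 00:00:00'
  AND order_status = 'completed';
```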

Step 3: Configure advanced settings

Click Next: Advanced Settings and configure the following:

Data verification

To verify data consistency after migration, see Enable data verification.

Advanced settings

Parameter | Description
Select the dedicated cluster used to schedule the task | Your DTS dedicated cluster is selected by default.
Set Alerts | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task.
Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database | Applies when you use Data Management (DMS) or gh-ost for online DDL operations on the source. Yes: migrates temporary table data generated by online DDL (may extend migration time for large datasets). No, Adapt to DMS Online DDL: skips temporary table data and migrates only the original DDL; tables in the destination may be locked. No, Adapt to gh-ost: skips temporary table data and migrates only the original DDL; use default or custom regular expressions to filter out gh-ost shadow tables; tables in the destination may be locked.
Important

pt-online-schema-change is not supported. Using it causes the task to fail.

Whether to Migrate Accounts | Select Yes to migrate source database accounts. See the permission requirements in Required permissions.
Method to Migrate Triggers in Source Database | Choose a trigger migration method. See Synchronize or migrate triggers from the source database.
Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1,440 minutes. Default: 720. Set it to at least 30 minutes. If reconnection succeeds within this window, DTS resumes the task. If the same source or destination instance is shared by multiple DTS tasks, the retry time set most recently takes effect for all of them.
Note

When DTS retries a connection, you are charged for the DTS instance. Set this value based on your business requirements, and release the DTS instance promptly after the source and destination instances are released.

The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries after DDL or DML operation failures. Valid values: 1–1,440 minutes. Default: 10. Set it to at least 10 minutes. This value must be smaller than Retry Time for Failed Connections.

Step 4: Run the precheck and start the task

  1. Click Next: Save Task Settings and Precheck. To preview the API parameters used to configure this task, hover over the button and click Preview OpenAPI parameters.

  2. Wait for the precheck to complete. If the precheck fails:

    • For failed items: click View Details, fix the issues, then click Precheck Again.

    • For alert items that cannot be ignored: click View Details, fix the issues, then click Precheck Again.

    • For alert items that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring an alert may result in data inconsistency.

  3. When the success rate reaches 100%, click Next: Select DTS Instance Type.

  4. In the New Instance Class section, set Instance Class for the task. The minimum is 1 DTS unit (DU) and the maximum is the number of remaining available DUs in the cluster.

  5. Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  6. Click Start Task > OK.

To monitor progress, go to the cluster details page and click Cluster Task List in the left-side navigation pane.

What's next

After the task starts and incremental migration is running:

  • Monitor migration latency in Cluster Task List. If latency is high due to inactivity on the source, run a DML operation to refresh it.

  • Before switching application traffic to the destination database, verify data consistency using data verification.

  • Stop or release the migration task before switching workloads, or run REVOKE to remove DTS write permissions on the destination. This prevents a resumed task from overwriting destination data.
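Revoking DTS write access before cutover can be done with a statement like the following. The account name is illustrative; revoke whichever write privileges you granted to the DTS account on the destination in Required permissions.

```sql
-- Remove write privileges from the account DTS uses on the destination so a
-- retried task can no longer overwrite data written by your application:
REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON *.* FROM 'dts_dest'@'%';
```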