Data Transmission Service (DTS) lets you migrate data between different data sources—including migrations to Alibaba Cloud, migrations between instances within Alibaba Cloud, and database splitting and scale-out scenarios. This topic explains how to configure a data migration task in a DTS dedicated cluster, using ApsaraDB RDS for MySQL instances as the source and destination.
Prerequisites
Before you begin, make sure you have:
- A DTS dedicated cluster. See Create a DTS dedicated cluster.
- Source and destination ApsaraDB RDS for MySQL instances. See Create an ApsaraDB RDS for MySQL instance.
- Available storage in the destination instance larger than the total size of the data in the source instance.
- Internet access enabled for the source or destination instance, if the instances and the DTS dedicated cluster reside in different regions.
Billing
You are charged for DTS dedicated cluster resources only. Creating a migration task in a dedicated cluster has no additional fee. If you migrate data from an Alibaba Cloud database to another cloud, you are charged for the generated Internet traffic. See Billing of DTS dedicated clusters.
Migration types
DTS supports three migration types that you combine based on your goal:
| Migration type | What it does | When to use it |
|---|---|---|
| Schema migration | Copies object schemas (tables, views, triggers, stored procedures, functions) from source to destination. Changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions. Does not migrate user accounts—grant INVOKER the necessary read and write permissions separately. | Always include this unless the destination schema already exists. |
| Full data migration | Copies all existing data from source to destination. | Include this for initial data seeding. Do not write to the source during a full-only migration. |
| Incremental data migration | After full migration completes, continuously applies changes from the source to the destination using binary logs. Keeps source and destination in sync without service interruption. | Include this to minimize downtime and keep services running during migration. |
Recommended combination: Select all three types (schema migration + full data migration + incremental data migration) to migrate data without interrupting your application. Use full-only migration (schema migration + full data migration) only for one-time migrations where downtime is acceptable.
SQL operations supported for incremental data migration
| Operation type | SQL operations |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW; CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW; DROP INDEX, DROP TABLE; RENAME TABLE; TRUNCATE TABLE |
Required permissions
Grant the database accounts used by DTS the following permissions before starting the task:
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source ApsaraDB RDS for MySQL | SELECT | SELECT | Read and write permissions |
| Destination ApsaraDB RDS for MySQL | Read and write permissions | Read and write permissions | Read and write permissions |
If you plan to migrate accounts, the source account also needs SELECT on mysql.user, mysql.db, mysql.columns_priv, and mysql.tables_priv. The destination account needs CREATE USER and GRANT OPTION. System accounts (root, mysql.infoschema, mysql.session, mysql.sys) and accounts that already exist in the destination cannot be migrated.
For instructions on creating accounts and granting permissions, see Create an account on an ApsaraDB RDS for MySQL instance and Modify the permissions of a standard account on an ApsaraDB RDS for MySQL instance.
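The permission requirements above can be sketched in SQL. This is a minimal sketch, not the exact statements the console runs for you: the account names, host pattern, and database name (`dts_source`, `dts_dest`, `mydb`) are placeholders, and you should scope the grants to your own environment.

```sql
-- Sketch only: account names, hosts, and database names are placeholders.
-- Source account: SELECT covers schema migration and full data migration.
CREATE USER 'dts_source'@'%' IDENTIFIED BY '<password>';
GRANT SELECT ON *.* TO 'dts_source'@'%';

-- Destination account: read and write permissions on the target database.
CREATE USER 'dts_dest'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON mydb.* TO 'dts_dest'@'%';

-- Additional grants required only if you migrate accounts:
GRANT SELECT ON mysql.user         TO 'dts_source'@'%';
GRANT SELECT ON mysql.db           TO 'dts_source'@'%';
GRANT SELECT ON mysql.columns_priv TO 'dts_source'@'%';
GRANT SELECT ON mysql.tables_priv  TO 'dts_source'@'%';
GRANT CREATE USER ON *.* TO 'dts_dest'@'%' WITH GRANT OPTION;
```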
Limits
During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade or delete operations on the source database during migration, data inconsistency may occur.
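The session-level behavior described above is roughly equivalent to the following, shown here only to illustrate the effect—the actual mechanism is internal to DTS:

```sql
-- Effect sketch: foreign key checks are disabled only on the migration
-- session, not globally.
SET SESSION foreign_key_checks = 0;

-- Because cascades are not replayed, a cascading delete on the source
-- (for example, deleting a parent row whose children have ON DELETE CASCADE)
-- removes child rows on the source but not on the destination.
```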
| Category | Limit |
|---|---|
| Source database | The source server must have sufficient outbound bandwidth. Otherwise, the migration speed decreases. |
| | Tables to migrate must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records. |
| | If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. Exceeding this limit causes a request error. Split the tables into multiple tasks, or migrate the entire database instead. |
| | For incremental data migration: enable binary logging, set binlog_format to row, and set binlog_row_image to full. If a self-managed MySQL source runs in a dual-primary cluster, also set log_slave_updates to ON so that DTS can read all binary logs. |
| | For incremental-only migration: retain binary logs for more than 24 hours. For full data migration plus incremental data migration: retain binary logs for at least 7 days. After full migration completes, you can reduce retention to more than 24 hours. Insufficient retention prevents DTS from obtaining binary logs, which may result in task failure or data loss. |
| | Do not perform DDL operations during schema migration or full data migration. Do not write to the source database during full-only migration—doing so causes data inconsistency. |
| Other | Use the same engine version for the source and destination MySQL databases. |
| | Run migrations during off-peak hours. Full data migration consumes read and write resources on both instances and increases server load. |
| | Full data migration causes fragmentation in destination tables because of concurrent INSERT operations. After migration, the destination tablespace is larger than the source tablespace. |
| | DTS uses ROUND(COLUMN, PRECISION) to read FLOAT and DOUBLE columns. The default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements. |
| | DTS automatically retries failed tasks for up to 7 days. Before switching workloads to the destination database, stop or release the migration task, or run REVOKE to remove DTS write permissions on the destination. Otherwise, the resumed task overwrites destination data with source data. |
| Special cases | For self-managed MySQL: a primary/secondary failover during an active migration task causes the task to fail. |
| | For self-managed MySQL: if no DML operations run on the source for an extended period, migration latency reporting may be inaccurate. Run a DML operation to refresh the latency. If you migrate an entire database, create a heartbeat table that writes data every second. |
| | DTS periodically executes ``CREATE DATABASE IF NOT EXISTS `test` `` on the source database to advance the binary log position. |
| | For ApsaraDB RDS for MySQL destinations: DTS automatically creates the destination database unless the source database name is invalid. If the name is invalid, create the database manually before configuring the task. |
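You can check the binary log prerequisites from the table above, and set up the heartbeat table suggested for self-managed sources, with statements like the following. This is a sketch: the heartbeat table and event names are illustrative, and the event requires the MySQL event scheduler.

```sql
-- Verify binary log prerequisites for incremental migration.
SHOW VARIABLES LIKE 'log_bin';           -- expect ON
SHOW VARIABLES LIKE 'binlog_format';     -- expect ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- expect FULL

-- Optional heartbeat for self-managed sources with little DML activity.
-- Table and event names are illustrative.
CREATE TABLE IF NOT EXISTS dts_heartbeat (
  id INT PRIMARY KEY,
  ts TIMESTAMP(3) NOT NULL
);
SET GLOBAL event_scheduler = ON;
CREATE EVENT IF NOT EXISTS dts_heartbeat_tick
ON SCHEDULE EVERY 1 SECOND
DO REPLACE INTO dts_heartbeat (id, ts) VALUES (1, NOW(3));
```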
Configure a migration task
The configuration covers four stages: connecting source and destination databases, selecting migration objects, configuring advanced settings, and starting the task.
Step 1: Connect the source and destination databases
Go to the Dedicated Cluster page.
In the top navigation bar, select the region where your DTS dedicated cluster resides.
Find the cluster, then in the Actions column, choose Configure Task > Configure Data Migration Task.
On the configuration page, read the Limits notice at the top before proceeding.
Configure the Source Database parameters:
| Parameter | Description |
|---|---|
| Task Name | Auto-generated. Specify a descriptive name to identify the task. Uniqueness is not required. |
| Select an existing database connection (optional) | Select an existing connection to auto-populate source parameters. Skip this if configuring manually. |
| Database Type | Select MySQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | Set by the DTS dedicated cluster. Cannot be changed. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| RDS Instance ID | The ID of the source ApsaraDB RDS for MySQL instance. The source and destination can be the same instance. |
| Database Account | The account with the permissions listed in Required permissions. |
| Database Password | The password for the database account. |
| Connection Method | Select Non-encrypted or SSL-encrypted. For SSL encryption, configure SSL on the instance before starting this task. |
Configure the Destination Database parameters:
| Parameter | Description |
|---|---|
| Select an existing DMS database instance (optional) | Select an existing connection to auto-populate destination parameters. Skip this if configuring manually. |
| Database Type | Select MySQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | Set by the DTS dedicated cluster. Cannot be changed. |
| RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance. |
| Database Account | The account with the permissions listed in Required permissions. |
| Database Password | The password for the database account. |
| Connection Method | Select Non-encrypted or SSL-encrypted. For SSL encryption, configure SSL on the instance before starting this task. |
Click Test Connectivity and Proceed. DTS automatically adds its server CIDR blocks to the IP whitelist of Alibaba Cloud database instances, or to the security group rules of an Elastic Compute Service (ECS) instance hosting a self-managed database. For self-managed databases in on-premises data centers or third-party clouds, manually add the DTS server CIDR blocks to the database's IP whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
Warning: Adding DTS server CIDR blocks to IP whitelists or security group rules introduces security exposure. Before proceeding, take preventive measures: strengthen account and password security, limit exposed ports, authenticate API calls, regularly audit the IP whitelist and security group rules to remove unauthorized CIDR blocks, and consider using Express Connect, VPN Gateway, or Smart Access Gateway to connect the database to DTS.
Step 2: Select migration objects
Configure migration type, conflict handling, and the objects to migrate:
| Parameter | Description |
|---|---|
| Migration Type | Select the combination that fits your goal. For full migration only: select Schema Migration and Full Data Migration. To keep services running during migration: select Schema Migration, Full Data Migration, and Incremental Data Migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: checks whether the destination contains tables with the same names as the source. The precheck passes only if no name conflicts exist. Use object name mapping to rename conflicting tables in the destination. Ignore Errors and Proceed: skips the name conflict check. If schemas match, DTS skips records with duplicate primary keys. If schemas differ, only specific columns are migrated or the task fails. Use with caution—data inconsistency may occur. |
| Method to Migrate Triggers in Source Database | Applies only when Schema Migration is selected. Choose a method based on your requirements. See Synchronize or migrate triggers from the source database. |
| Enable Migration Assessment | Applies only when Schema Migration is selected. Select Yes to check whether source and destination schemas (index lengths, stored procedures, dependent tables) meet migration requirements. Assessment results are visible during the precheck and do not affect precheck pass/fail. |
| Capitalization of Object Names in Destination Instance | Controls the case of database, table, and column names in the destination. The default is DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select the objects to migrate from the Source Objects list and move them to the Selected Objects list. |
| Selected Objects | Right-click an object to rename it or set WHERE conditions to filter rows. Click Batch Edit (upper-right) to rename multiple objects at once. See Map object names and Use SQL conditions to filter data. Renaming an object may cause dependent objects to fail migration. |
Step 3: Configure advanced settings
Click Next: Advanced Settings and configure the following:
Data verification
To verify data consistency after migration, see Enable data verification.
Advanced settings
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | Your DTS dedicated cluster is selected by default. |
| Set Alerts | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task. |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database | Applies when using Data Management (DMS) or gh-ost for online DDL operations on the source. Yes: migrates temporary table data generated by online DDL (may extend migration time for large datasets). No, Adapt to DMS Online DDL: skips temporary table data and migrates only the original DDL. Tables in the destination may be locked. No, Adapt to gh-ost: skips temporary table data and migrates only the original DDL. Use default or custom regular expressions to filter out gh-ost shadow tables. Tables in the destination may be locked. Important: pt-online-schema-change is not supported. Using it causes the task to fail. |
| Whether to Migrate Accounts | Select Yes to migrate source database accounts. See the permissions requirements in Required permissions. |
| Method to Migrate Triggers in Source Database | Choose a trigger migration method. See Synchronize or migrate triggers from the source database. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1,440 minutes. Default: 720. Set this to at least 30 minutes. If reconnection succeeds within this window, DTS resumes the task. If multiple tasks share the source or destination instance, the most recently set value takes effect for all of them. Note: You are charged for the DTS instance while DTS retries a connection, so specify this value based on your business requirements, and release the DTS instance promptly after the source and destination instances are released. |
| The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries after DDL or DML operation failures. Valid values: 1–1,440 minutes. Default: 10. Set to at least 10 minutes. This value must be smaller than Retry Time for Failed Connections. |
Step 4: Run the precheck and start the task
Click Next: Save Task Settings and Precheck. To preview the API parameters used to configure this task, hover over the button and click Preview OpenAPI parameters.
Wait for the precheck to complete. If the precheck fails:
For failed items: click View Details, fix the issues, then click Precheck Again.
For alert items that cannot be ignored: click View Details, fix the issues, then click Precheck Again.
For alert items that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring an alert may result in data inconsistency.
When the success rate reaches 100%, click Next: Select DTS Instance Type.
In the New Instance Class section, set Instance Class for the task. The minimum is 1 DTS unit (DU) and the maximum is the number of remaining available DUs in the cluster.
Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
Click Start Task > OK.
To monitor progress, go to the cluster details page and click Cluster Task List in the left-side navigation pane.
What's next
After the task starts and incremental migration is running:
- Monitor migration latency in Cluster Task List. If latency reporting is inaccurate because the source is inactive, run a DML operation to refresh it.
- Before switching application traffic to the destination database, verify data consistency using data verification.
- Stop or release the migration task before switching workloads, or run REVOKE to remove DTS write permissions on the destination. This prevents a resumed task from overwriting destination data.
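The REVOKE step can be sketched as follows. The account name, host pattern, and database name (`dts_dest`, `mydb`) are placeholders; revoke exactly the write privileges you granted to the DTS destination account.

```sql
-- Remove DTS write access on the destination before cutover.
-- Account and database names are placeholders.
REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER
  ON mydb.* FROM 'dts_dest'@'%';
```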