Use Data Transmission Service (DTS) to migrate data from an ApsaraDB MyBase for MySQL instance to an ApsaraDB RDS for MySQL instance. DTS supports schema migration, full data migration, and incremental data migration. Run all three together to replicate data with minimal application downtime.
Prerequisites
Before you begin, make sure that you have:
- An ApsaraDB MyBase for MySQL instance. For more information, see Create an ApsaraDB MyBase for MySQL instance.
- An ApsaraDB RDS for MySQL instance with available storage space larger than the total data size of the source instance. For more information, see Create an ApsaraDB RDS for MySQL instance.
- Database accounts with the required permissions on both instances. See Permissions required.
Migration types
DTS supports three migration types for this scenario. Run all three together to minimize downtime.
| Migration type | What it does | Cost |
|---|---|---|
| Schema migration | Copies schemas (tables, views, triggers, stored procedures, and stored functions) to the destination | Free |
| Full data migration | Copies all existing data to the destination | Free |
| Incremental data migration | Continuously replicates changes from the source after full migration completes, keeping the destination in sync until cutover | Charged — see Billing overview |
During schema migration, DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and stored functions. User information is not migrated. To call a view, stored procedure, or stored function in the destination database, grant the required read and write permissions to INVOKER.
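Because the SECURITY attribute becomes INVOKER, the account that calls a migrated view or routine must hold the underlying privileges itself. A minimal sketch, assuming a hypothetical account `app_user` and database `mydb` (adjust names to your environment):

```sql
-- Hypothetical account and database names.
-- Grant the invoker the read and write privileges that migrated
-- views, stored procedures, and stored functions rely on:
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON mydb.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
```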
Permissions required
Configure database accounts with the following permissions before creating the migration task.
| Instance | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| ApsaraDB MyBase for MySQL (source) | SELECT | SELECT | Read and write |
| ApsaraDB RDS for MySQL (destination) | Read and write | Read and write | Read and write |
For instructions on creating accounts and granting permissions:
- ApsaraDB MyBase for MySQL: Create a database account and Manage a database account
- ApsaraDB RDS for MySQL: Create an account and Modify the permissions of an account
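On ApsaraDB instances, accounts are typically created and authorized in the console. If you manage privileges with SQL through a privileged account instead, the grants corresponding to the permissions table above might look like the following sketch; the account names `dts_source` and `dts_dest` are hypothetical:

```sql
-- Hypothetical account names; adjust host patterns to your network.
-- Source account: SELECT covers schema and full data migration;
-- incremental data migration needs read and write.
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'dts_source'@'%';

-- Destination account: read and write for all three migration types.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
  ON *.* TO 'dts_dest'@'%';
```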
Limitations
Review the following limitations before configuring the migration task.
Source database requirements
- Tables must have a PRIMARY KEY or UNIQUE constraint with all fields unique. Otherwise, the destination database may contain duplicate records.
- When migrating individual tables (not entire databases), a single migration task supports up to 1,000 tables. For larger migrations, split the tables across multiple tasks or migrate the entire database instead.
- For incremental data migration, configure binary logging as follows:

  | Parameter | Required value |
  |---|---|
  | Binary logging | Enabled |
  | binlog_format | row |
  | binlog_row_image | full |

  Important: Set the binary log retention period based on your migration type:

  - Incremental migration only: retain binary logs for more than 24 hours.
  - Full + incremental migration: retain binary logs for at least 7 days.

  If DTS cannot obtain the required binary logs, the task fails. In exceptional circumstances, data inconsistency or loss may occur. After full data migration completes, you can reduce the retention period to more than 24 hours.
- During schema migration and full data migration, do not run DDL operations that change database or table schemas. This causes the migration task to fail.
- During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Running cascade or delete operations on the source database during this period may cause data inconsistency.
- If you run full data migration only (without incremental), do not write to the source database while migration is running.
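The source-side requirements above can be verified with a few queries before you create the task. A sketch, assuming a hypothetical schema name `mydb`:

```sql
-- Check the binary log configuration on the source:
SHOW VARIABLES LIKE 'log_bin';           -- expect ON
SHOW VARIABLES LIKE 'binlog_format';     -- expect ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- expect FULL

-- Find tables that lack a PRIMARY KEY or UNIQUE constraint
-- (hypothetical schema name 'mydb'):
SELECT DISTINCT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON t.table_schema = c.table_schema
 AND t.table_name   = c.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```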
General limitations
- Use the same MySQL engine version for source and destination to avoid compatibility issues.
- DTS uses `ROUND(COLUMN, PRECISION)` to retrieve values from FLOAT and DOUBLE columns. If you do not specify a precision, DTS applies 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements.
- DTS automatically creates the destination database unless the source database name is invalid. If the source database name is invalid, create the database manually before configuring the task.
- DTS resumes failed tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks. Alternatively, run `REVOKE` to remove the DTS account's write permissions on the destination; otherwise the source data overwrites the destination after the task resumes.
- DTS periodically updates the `dts_health_check.ha_health_check` table in the source database to advance the binary log position.
- Full data migration uses concurrent INSERT operations, which causes table fragmentation in the destination. After full migration, the destination tablespace is larger than the source.
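The `REVOKE` safeguard mentioned above might look like the following, assuming a hypothetical DTS account name; run it on the destination instance before cutover if a failed task cannot be stopped or released:

```sql
-- Hypothetical account name; removes the DTS account's write access
-- to the destination so a resumed task cannot overwrite cutover data:
REVOKE INSERT, UPDATE, DELETE ON *.* FROM 'dts_dest'@'%';
FLUSH PRIVILEGES;
```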
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW; CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW; DROP INDEX, DROP TABLE; RENAME TABLE; TRUNCATE TABLE |
Create a migration task
Step 1: Open the data migration page
Use one of the following methods to open the Data Migration page.
DTS console
- Log on to the DTS console.
- In the left-side navigation pane, click Data Migration.
- In the upper-left corner, select the region where the migration instance will run.
DMS console
The exact navigation path varies based on your DMS console layout. For more information, see Simple mode and Customize the layout and style of the DMS console.
- Log on to the DMS console.
- In the top navigation bar, go to Data Development > DTS (DTS) > Data Migration.
- From the drop-down list next to Data Migration Tasks, select the region where the migration instance will run.
Step 2: Create a task
Click Create Task to open the task configuration page.
If the page shows New Configuration Page in the upper-right corner, click it to switch to the new interface. Skip this if Back to Previous Version is already displayed.
Specific parameters differ between the new and previous configuration page versions. Use the new version.
Step 3: Configure source and destination databases
After configuring source and destination databases, read the Limits shown at the top of the page before proceeding.
Configure the following parameters for the source and destination databases.
Source database (ApsaraDB MyBase for MySQL)
| Parameter | Value |
|---|---|
| Task Name | Enter a descriptive name. Task names do not need to be unique. |
| Select a DMS database instance | Leave blank and configure the fields below. |
| Database Type | MySQL |
| Access Method | Alibaba Cloud Instance |
| Instance Region | The region where the source ApsaraDB MyBase for MySQL instance resides |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. For cross-account migration, select Yes and follow the instructions in Configure a DTS task across Alibaba Cloud accounts. |
| RDS Instance ID | The ID of the source ApsaraDB MyBase for MySQL instance |
| Database Account | The source database account. See Permissions required. |
| Database Password | The password for the database account |
| Encryption | Select Non-encrypted or SSL-encrypted. To use SSL encryption, enable it on the source instance first. See Use a cloud certificate to enable SSL encryption. |
Destination database (ApsaraDB RDS for MySQL)
| Parameter | Value |
|---|---|
| Select a DMS database instance | Leave blank and configure the fields below. |
| Database Type | MySQL |
| Access Method | Alibaba Cloud Instance |
| Instance Region | The region where the destination ApsaraDB RDS for MySQL instance resides |
| Replicate Data Across Alibaba Cloud Accounts | No |
| RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance |
| Database Account | The destination database account. See Permissions required. |
| Database Password | The password for the database account |
| Encryption | Select Non-encrypted or SSL-encrypted |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances.
Adding DTS server CIDR blocks to your whitelist or security group rules creates security risks. Before using DTS, take preventive measures such as: using strong passwords, restricting exposed ports, authenticating API calls, auditing whitelist rules regularly, and blocking unauthorized CIDR blocks. Alternatively, connect the database to DTS over Express Connect, VPN Gateway, or Smart Access Gateway.
If the source or destination is a self-managed database in a data center or hosted by a third-party provider, manually add the DTS server CIDR blocks to the database's IP address whitelist. For self-managed databases hosted on ECS instances, DTS adds the CIDR blocks to the security group rules automatically for single-ECS deployments; for multi-ECS deployments, add the CIDR blocks to each ECS instance's security group rules manually. See Add the CIDR blocks of DTS servers.
Step 5: Configure migration objects
On the Configure Objects page, set the following parameters.
Migration types
Select the migration types to run:
- Schema Migration + Full Data Migration: migrates schema and existing data only; stops when the initial copy is complete.
- Schema Migration + Full Data Migration + Incremental Data Migration (recommended): migrates schema and data, then continuously replicates changes until you stop the task. Use this combination to minimize downtime.
If you run full data migration without incremental, do not write to the source database during migration.
Object configuration parameters
| Parameter | Description |
|---|---|
| Method to Migrate Triggers in Source Database | Available only when both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers from the source database. |
| Enable Migration Assessment | Available only with Schema Migration selected. Set to Yes to check whether source and destination schemas (index lengths, stored procedures, and dependent tables) are compatible. Assessment results appear during the precheck but do not affect the precheck outcome. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if identically named tables exist in the destination. Use object name mapping to rename conflicting tables before proceeding. Ignore Errors and Proceed: skips the conflict check. During full migration, existing records in the destination are retained; during incremental migration, they are overwritten. If schemas differ, only matching columns are migrated. Use with caution. |
| Capitalization of Object Names in Destination Instance | Controls how database, table, and column names are capitalized in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select the objects to migrate and click the arrow icon to add them to Selected Objects. If you select tables or columns, DTS does not migrate views, triggers, or stored procedures. |
| Selected Objects | Right-click an object to rename it or set WHERE filter conditions. Click Batch Edit to rename multiple objects at once. Renaming an object may cause dependent objects to fail migration. |
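A WHERE filter condition on a selected object is typically entered as the condition expression only, without the WHERE keyword. A hypothetical example (column names are illustrative):

```sql
-- Hypothetical filter; only source rows satisfying this condition
-- are migrated for the selected table:
id > 100000 AND created_at >= '2024-01-01'
```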
Click Next: Advanced Settings.
Advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For improved task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Copy the temporary table of the Online DDL tool | Applies when using DMS or gh-ost for online DDL on the source. Yes: migrates temporary table data (may increase latency if the dataset is large). No, Adapt to DMS Online DDL: migrates only the original DDL; may lock destination tables. No, Adapt to gh-ost: migrates only the original DDL; may lock destination tables. Note: pt-online-schema-change is not supported and causes the task to fail. |
| Whether to Migrate Accounts | Set to Yes to migrate source database accounts. Verify account permissions after migration. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Range: 10–1440 minutes. Default: 720 minutes. We recommend that you set the parameter to a value greater than 30 minutes. Charges continue during the retry period. If you specify different retry times across multiple tasks sharing the same source or destination, the value specified most recently takes precedence. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML failures. Range: 1–1440 minutes. Default: 10 minutes. We recommend that you set the parameter to a value greater than 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits migration speed during full migration to reduce load on database servers. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). |
| Enable Throttling for Incremental Data Migration | Limits migration speed during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). |
| Environment Tag | Optional label to identify the DTS instance. |
| Whether to delete SQL operations on heartbeat tables | Yes: DTS does not write heartbeat SQL to the source; latency metrics may be inaccurate. No: DTS writes heartbeat SQL; physical backup and cloning of the source may be affected. |
| Configure ETL | Enables the extract, transform, and load (ETL) feature to transform data during migration. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Set to Yes to receive notifications when the task fails or migration latency exceeds a threshold. See Configure monitoring and alerting. |
Click Next Step: Data Verification to configure data verification. For details, see Configure a data verification task.
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before starting migration. To preview the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters before proceeding.
- If the precheck passes, the task starts automatically.
- If any item fails, click View Details to see the cause, fix the issue, and click Precheck Again.
- If an alert appears for an item that can be safely ignored: click Confirm Alert Details, then Ignore in the dialog, then OK, and run the precheck again. Ignoring an alert may cause data inconsistency.
Step 7: Purchase a migration instance
On the Purchase Instance page, configure the instance class.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Controls migration speed. Select based on your data volume and time constraints. See Instance classes of data migration instances. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start and confirm with OK.
Track migration progress on the Data Migration page.
Performance considerations
DTS full data migration adds measurable load to both the source and destination databases. To reduce impact:
- Schedule migration during off-peak hours.
- Enable throttling under Advanced Settings to limit QPS and migration speed.
Note that after full data migration, the destination tablespace is larger than the source due to table fragmentation from concurrent INSERT operations.
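If the extra space matters, fragmented tables in the destination can be rebuilt after cutover. A sketch with a hypothetical table name; note that rebuilding briefly locks the table, so run it during off-peak hours:

```sql
-- Hypothetical table name; rebuilds the table and reclaims
-- space fragmented by concurrent INSERT operations:
OPTIMIZE TABLE mydb.orders;
```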