
ApsaraDB RDS: Migrate data from a self-managed MySQL database to an ApsaraDB RDS for MySQL instance

Last Updated: Mar 30, 2026

Use Data Transmission Service (DTS) to migrate your self-managed MySQL database—whether on-premises, on an Elastic Compute Service (ECS) instance, or on another cloud—to ApsaraDB RDS for MySQL with minimal to zero downtime.

This guide covers:

  • Choosing a migration solution

  • Preparing the source database and destination RDS instance

  • Configuring and running the migration task

  • Verifying data and switching over your application

Choose a migration solution

DTS supports three migration types that you combine to build a solution:

Migration type What it does
Schema migration Copies the structure of databases, tables, views, triggers, stored procedures, and functions. DTS converts DEFINER to INVOKER in views and stored procedures.
Full data migration Copies all existing data from the source database to the destination RDS instance.
Incremental data migration After full data migration starts, continuously copies new changes from the source database—enabling migration with near-zero downtime.

Combine these types based on your tolerance for downtime:

  • Schema + full + incremental (recommended): Zero downtime. Data is consistent after migration completes, even if writes continue during migration. Incremental migration runs until you stop it manually and is charged while running. Best for production environments with a zero-downtime requirement.

  • Schema + full: Downtime lasts for the duration of the full migration. Data is consistent only if the source database is read-only (quiesced) during migration; writes during migration cause inconsistency. Free. Best for test environments where downtime is acceptable.

Billing

Schema migration, full data migration, and public network traffic are free. The following items are charged:

  • Incremental data migration: Charged while running. Not charged when paused or failed.

  • Data verification: Charged based on the amount of data verified. See data verification fees.

Prerequisites

Before you begin, ensure that you have:

  • A source MySQL database running version 5.1, 5.5, 5.6, 5.7, or 8.0

  • A destination RDS for MySQL instance with more available storage space than the source database
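To confirm the storage prerequisite, you can estimate the size of the source data from information_schema. This is a rough on-disk estimate; actual space requirements depend on indexes, row formats, and free space within data files:

```sql
-- Approximate size of each database on the source, in GB.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
GROUP BY table_schema;
```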

Limitations

Review the following limitations before configuring the migration task.

Source database requirements:

  • All tables to be migrated must have a primary key or a unique constraint with unique values. Otherwise, the destination RDS instance may contain duplicate data.

  • Avoid DDL operations (schema changes) during schema migration and full data migration. The migration task fails if DDL operations are detected.

  • If running schema + full migration only (without incremental), avoid writing to the source database during migration to maintain data consistency.

  • The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • For incremental data migration, binary logging must be enabled and configured correctly (see Configure binary logging).

  • If the source database is a dual-primary cluster, avoid primary/secondary switchovers while the migration task is running. The task fails if a switchover occurs.

  • If the source database runs MySQL 8.0.23 or later and contains invisible columns (including auto-generated invisible primary keys for tables without a primary key), make them visible before migration:

    ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;

    See Invisible Columns and Generated Invisible Primary Keys for details.
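The primary-key and invisible-column requirements above can be checked up front. A sketch, assuming the database to be migrated is named your_db (replace with your actual database name):

```sql
-- Tables without a PRIMARY KEY or UNIQUE constraint.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'your_db'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;

-- Invisible columns (MySQL 8.0.23 and later).
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = 'your_db'
  AND extra LIKE '%INVISIBLE%';
```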

Content that cannot be migrated:

  • Parsers defined using comment syntax

  • Data from physical backup and recovery operations or foreign key cascade operations (not recorded in binary logs)

  • Indexes and partitions

Other limitations:

  • Source and destination MySQL versions must match.

  • If the data to be migrated contains four-byte characters (rare characters or emojis), the destination RDS instance and its tables must use the utf8mb4 character set. For schema migration, set character_set_server to utf8mb4 on the destination RDS instance.

  • If column name mappings are configured, a single migration task supports up to 1,000 tables. Create multiple tasks for larger migrations.

  • DTS uses ROUND(COLUMN, PRECISION) to retrieve values from FLOAT and DOUBLE columns. If no precision is specified, FLOAT defaults to 38 digits and DOUBLE defaults to 308 digits. Verify that these precision settings meet your requirements.

  • DTS retries failed tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks—or revoke DTS write permissions on the destination—to prevent stale data from overwriting new data.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted and its parameters may be modified.

  • DTS automatically creates the destination database if the source database name follows ApsaraDB RDS for MySQL naming conventions. If it does not, create the database manually before configuring the migration task.

  • Online DDL operations on the source database using pt-online-schema-change are not supported and will cause the task to fail. Use DMS or gh-ost instead.

  • DTS periodically executes CREATE DATABASE IF NOT EXISTS on the source database (creating a test database) to advance the binary log position.

  • After an HA switchover in the destination RDS instance, run ANALYZE TABLE <table_name> to confirm that data is written to disk and not only in memory.

  • If the source database has Transparent Data Encryption (TDE) enabled, all three migration types are supported. If the EncDB feature is enabled, full data migration is not supported.

  • In incremental data migration, a read-only ApsaraDB RDS for MySQL V5.6 instance cannot be used as the source database because it does not record transaction logs.
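For the utf8mb4 limitation above, you can check the relevant character set settings on the destination RDS instance:

```sql
-- Server character set; should be utf8mb4 when migrating four-byte data.
SHOW VARIABLES LIKE 'character_set_server';

-- Character sets of existing destination databases.
SELECT schema_name, default_character_set_name
FROM information_schema.schemata;
```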

Phase 1: Prepare for migration

Step 1: Authorize DTS to access cloud resources

  1. Open the quick authorization page with your Alibaba Cloud account and click Authorize.

  2. If you see EntityAlreadyExists.Role and EntityAlreadyExists.Role.Policy messages, authorization is already complete.


Step 2: Create database accounts

Account for the source database

Run the following statements on the source database:

-- Replace dts_user and Your_Password123 with actual values.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'Your_Password123';

-- Required for schema migration and full data migration.
GRANT SELECT ON *.* TO 'dts_user'@'%';

-- Required for incremental data migration.
GRANT REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW ON *.* TO 'dts_user'@'%';

-- Required for DTS to create a heartbeat table to advance the binary log position.
GRANT CREATE ON *.* TO 'dts_user'@'%';

FLUSH PRIVILEGES;

The following table summarizes the minimum permissions required for each migration type:

Migration type Required permissions
Schema migration SELECT
Full data migration SELECT
Incremental data migration SELECT; REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW; CREATE (for heartbeat table)
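After running the GRANT statements, you can confirm that the account holds the required privileges:

```sql
SHOW GRANTS FOR 'dts_user'@'%';
```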

Account for the destination RDS instance

  1. In the RDS console, select the region and click the ID of the destination RDS instance.

  2. In the left navigation pane, click Accounts, then click Create Account.

  3. Set Account Type to Privileged Account and complete the remaining parameters.

The destination account requires read and write permissions on the destination RDS instance.

Step 3: Configure access to the source database

Select the access method that matches your source database deployment:

  • On-premises with a public IP address: use Public IP access. Add the DTS server CIDR blocks to the source database IP whitelist.

  • On-premises without a public IP address: use Cloud Enterprise Network (CEN), Database Gateway, or VPN Gateway/Express Connect/Smart Access Gateway (SAG). Add the DTS server CIDR blocks to the IP whitelist and complete the network access configuration for your chosen method.

  • Database on an ECS instance: use ECS instance access. No configuration required.

Configure binary logging for incremental data migration

Skip this step if you are not performing incremental data migration.

Binary logging must be enabled on the source database for incremental data migration. Configure the following parameters, then restart MySQL for the changes to take effect.

Binary logging parameters

Parameter Required value Notes
log_bin mysql_bin (or any path) Enables binary logging
binlog_format row Required for DTS to capture row-level changes
binlog_row_image full Required for MySQL 5.6 and later
server_id Any integer greater than 1 Must be unique in a replication topology
expire_logs_days 7 or greater MySQL versions earlier than 8.0. Default: 0 (never expires)
binlog_expire_logs_seconds 604800 or greater (7 days) MySQL 8.0 and later. Default: 2592000 (30 days)
log_slave_updates ON Dual-primary clusters only

Retain binary logs for at least 7 days. If the retention period is too short, DTS may fail to obtain the required binary logs, and data inconsistency or loss may occur.
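On MySQL 8.0 and later, the retention period can also be adjusted at runtime; SET PERSIST keeps the value across restarts. A sketch, using the 7-day minimum recommended above:

```sql
-- MySQL 8.0 and later: retain binary logs for at least 7 days (604800 seconds).
SET PERSIST binlog_expire_logs_seconds = 604800;

-- MySQL versions earlier than 8.0:
-- SET GLOBAL expire_logs_days = 7;
```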

Configure on Linux

  1. Edit /etc/my.cnf:

    log_bin=mysql_bin
    binlog_format=row
    
    # MySQL earlier than 8.0:
    # expire_logs_days=7
    
    # MySQL 8.0 and later:
    # binlog_expire_logs_seconds=604800
    
    server_id=2
    binlog_row_image=full
    
    # Dual-primary clusters only:
    # log_slave_updates=ON
  2. Restart MySQL:

    # On systemd-based distributions, use: systemctl restart mysqld
    /etc/init.d/mysqld restart

Configure on Windows

  1. Edit my.ini:

    log_bin=mysql_bin
    binlog_format=row
    
    # MySQL earlier than 8.0:
    # expire_logs_days=7
    
    # MySQL 8.0 and later:
    # binlog_expire_logs_seconds=604800
    
    server_id=2
    binlog_row_image=full
    
    # Dual-primary clusters only:
    # log_slave_updates=ON
  2. Restart MySQL:

    rem The Windows service name may differ (for example, MySQL80).
    net stop mysql
    net start mysql
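After restarting on either platform, you can verify that the binary logging settings took effect:

```sql
-- Check the parameters configured above.
SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'binlog_row_image', 'server_id');

-- Confirm that the server is writing binary logs.
SHOW BINARY LOGS;
```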

Phase 2: Configure the migration task

  1. Log on to the DTS console, click Data Migration in the left navigation pane, then click Create Task.

  2. Configure the source database and destination RDS instance.

    Source database

    Parameter Value
    Database Type MySQL
    Access Method Select the method configured in Step 3 (for example, Public IP)
    Instance Region Region where the source database is located
    Domain Name or IP Public endpoint or IP address of the source database
    Port Service port of the source database. Default: 3306
    Database Account Account created in Step 2
    Database Password Password for the account
    Encryption Non-encrypted if SSL is not enabled; SSL-encrypted if SSL is enabled (upload a CA Certificate and set the CA Key)

    Destination RDS instance

    Parameter Value
    Database Type MySQL
    Access Method Alibaba Cloud Instance
    Instance Region Region of the destination RDS instance
    Replicate Data Across Alibaba Cloud Accounts No
    RDS Instance ID ID of the destination RDS instance
    Database Account Privileged account created in Step 2
    Database Password Password for the account
    Encryption Non-encrypted or SSL-encrypted. If SSL-encrypted, enable SSL encryption on the destination RDS instance first
  3. Click Test Connectivity and Proceed. In the dialog box, click Test Connectivity. If the test fails, fix the issue based on the error message before continuing.

  4. Configure the objects to migrate.

    Configure Objects

    On the Configure Objects tab, set the migration types and select the objects to migrate. After configuring the following parameters, click Next: Advanced Settings.

    Parameter Description
    Migration Types Select Schema Migration and Full Data Migration for a full migration. Add Incremental Data Migration to minimize downtime.
    Source Objects Select the databases, tables, or columns to migrate, then click the rightwards arrow icon to add them to Selected Objects.
    Selected Objects Right-click an object to rename it or configure a WHERE filter. Click Batch Edit to rename multiple objects at once.
    Processing Mode of Conflicting Tables Precheck and Report Errors (default): fails the precheck if the destination already contains tables with the same names. Ignore Errors and Proceed: skips this check—during full migration, existing records are retained; during incremental migration, existing records are overwritten.
    Method to Migrate Triggers in Source Database Available when both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers.
    Whether to migrate Event Select Yes to migrate events from the source database. See Synchronize or migrate events.
    Enable Migration Assessment Available when Schema Migration is selected. Checks whether source and destination schemas (index lengths, stored procedures, dependent tables) are compatible. Results are shown during the precheck but do not affect the precheck outcome.
    Capitalization of Object Names in Destination Instance Controls the capitalization of database, table, and column names in the destination. Defaults to DTS default policy. See Specify capitalization of object names.

    (Optional) Advanced Configurations

    On the Advanced Configurations tab, adjust the following settings as needed, then click Next: Data Verification.

    Parameter Description
    Dedicated Cluster for Task Scheduling By default, tasks run on the shared cluster. Purchase a dedicated cluster for improved stability.
    Copy the temporary table of the Online DDL tool If using DMS or gh-ost for online DDL on the source: Yes migrates temporary table data (may increase latency); No, Adapt to DMS Online DDL migrates only original DDL operations; No, Adapt to gh-ost migrates only original DDL from gh-ost.
    Whether to Migrate Accounts Migrates account information from the source. If enabled, select the accounts to migrate and verify account permissions.
    Retry Time for Failed Connections How long DTS retries after a connection failure. Range: 10–1,440 minutes. Default: 720 minutes. Set to at least 30 minutes.
    Retry Time for Other Issues How long DTS retries after DDL or DML failures. Range: 1–1,440 minutes. Default: 10 minutes. Must be less than Retry Time for Failed Connections.
    Enable Throttling for Full Data Migration Limits QPS to the source database, RPS for full migration, and migration speed (MB/s). Use to reduce load on database servers.
    Enable Throttling for Incremental Data Migration Limits RPS and migration speed (MB/s) for incremental migration.
    Configure ETL Enables extract, transform, and load (ETL) processing. See Configure ETL.
    Monitoring and Alerting Sends alerts when the task fails or latency exceeds a threshold. See Configure monitoring and alerting.
    Whether to delete SQL operations on heartbeat tables Yes: does not write heartbeat SQL to the source (migration latency may be displayed). No: writes heartbeat SQL to the source (may affect physical backup and cloning).

    (Optional) Data Verification

    On the Data Verification tab, configure data verification. If you select Full Data Verification, set the parameters described below. If you select Incremental Data Verification, set the Incremental Verification Benchmark to filter the DML operations to verify. To configure alerts for data verification, set Full Data Verification Alert or Incremental Data Verification Alert to Yes, then select and configure alert rules. To receive alert notifications, subscribe to alert messages in CloudMonitor. See Configure alert rules for DTS tasks.

    Verification method Cost Description
    Full Data Verification Charged Compares data between source and destination after full migration
    Incremental Data Verification Charged Compares data during incremental migration
    Schema Verification Free Checks schema compatibility between source and destination

    Select one or more data verification methods based on your business needs.

    1. Full Data Verification

      Configure the following parameters if you select Full Data Verification.

      Parameter Description
      Full Data Verification Full field validation by row sampling: samples a percentage of rows (10–100%) for full-field comparison. Verify based on the number of table rows: compares row counts only (free).
      Full Data Verification Time Rule Only Start Now is supported.
      Timeout Settings for Full Data Verification Set a timeout (1–72 hours) to automatically end the verification task if it runs too long.
      Full calibration reference Default: uses the union of source and destination as the baseline. Source Database: verifies that the destination matches the source. Destination Database: verifies that the source matches the destination.

    2. Incremental Data Verification

      If you select Incremental Data Verification, set the Incremental Verification Benchmark to filter the DML operations to verify.

Phase 3: Run the precheck and start migration

  1. Click Next: Save Task Settings and Precheck. DTS validates your configuration and environment.

  2. Wait for the precheck to complete:

    • If Success Rate reaches 100%, the environment is ready. Review any warnings to confirm they pose no risk, then ignore them and proceed.

    • If the precheck fails, click View Details, fix the issue, and run the precheck again.

  3. Click Next: Purchase Instance.

  4. Select a Resource Group (default: default resource group) and the appropriate DTS instance specification.

  5. Accept the Data Transmission Service (Pay-As-You-Go) Terms of Service, click Purchase and Start, then click Confirm. The migration task starts automatically.

Phase 4: Verify data and switch over

  1. Monitor the migration task status:

    • Tasks without incremental data migration show Status: Completed when finished.

    • Tasks with incremental data migration show Status: Running and do not end automatically.

  2. After full migration completes and incremental migration latency is near zero, verify data consistency in one of two ways:

    • Automatic verification: configure a data verification task in DTS.

    • Manual verification: run the following queries on both the source database and the destination RDS instance and compare the results:

    -- Compare row counts
    SELECT COUNT(*) FROM <your_table>;
    
    -- Compare key business metrics
    SELECT SUM(amount) FROM orders WHERE create_time >= '2024-01-01';
    
    -- Optionally compare table checksums (results are comparable only when
    -- table definitions and row formats match on both sides)
    CHECKSUM TABLE <your_table>;
  3. Switch over your application during off-peak hours:

    1. Stop your application.

    2. Confirm that incremental migration latency has reached zero.

    3. Update your application's database connection strings to the endpoint of the destination RDS instance.

    4. Release the migration task after the switchover is complete.

Supported SQL operations for incremental data migration

Operation type SQL statements
DML INSERT, UPDATE, DELETE
DDL ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Important

RENAME TABLE operations can cause data inconsistency. If you rename a table during migration and selected that table (rather than its parent database) as the migration object, the renamed table's data is not migrated. To avoid this, select the database as the migration object and ensure both the pre-rename and post-rename database names are included in the migration scope.

FAQ

Q: Why does DTS fail to connect with the error "Host 'XXX' is not allowed to connect to this MySQL server"?

This is a Java Database Connectivity (JDBC) connection error: the MySQL server rejects connections from the DTS server's host because no account matches that host. Verify that the account is created for a host pattern that covers the DTS server CIDR blocks (for example, 'dts_user'@'%'), and that the credentials and permissions are correct. Using a privileged account to test the connection can help isolate the issue.

Q: Why can't I select an RDS instance in the China (Fuzhou) region when creating a migration task?

DTS does not support instances in the China (Fuzhou) region. As an alternative, back up a self-managed MySQL 5.7 or 8.0 database to the cloud.