
Data Transmission Service:Migrate data from RDS MySQL to AnalyticDB for MySQL 3.0

Last Updated:Feb 04, 2026

You can use Data Transmission Service (DTS) to migrate data from an RDS for MySQL instance to an AnalyticDB for MySQL 3.0 cluster. This migration lets you quickly build systems for business intelligence (BI), interactive queries, and real-time reporting.

Prerequisites

  • You have created a destination AnalyticDB for MySQL 3.0 cluster. For more information, see Create a cluster.

  • The storage space of the destination AnalyticDB for MySQL 3.0 cluster must be larger than the storage space used by the source RDS for MySQL instance.

Notes

Note
  • During schema migration, DTS does not migrate foreign keys from the source database to the destination database.

  • During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.


Source database limits

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the migration speed is affected.

  • The tables to be migrated must have a primary key or a UNIQUE constraint, and the constrained fields must be unique. Otherwise, duplicate data may exist in the destination database. A query that lists tables without such constraints appears after this list.

  • If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

  • If you perform incremental migration, note the following for binary logs:

    • Binary logging must be enabled. The binlog_format parameter must be set to row, and the binlog_row_image parameter must be set to full. Otherwise, the precheck fails, and the data migration task cannot start. A query sketch for verifying these settings appears after this list.

      Important

      If the source self-managed MySQL database is in a dual-primary cluster where each instance is a primary and a secondary of the other, you must enable the log_slave_updates parameter. This ensures that DTS can obtain all binary logs.

    • The binary logs of an RDS for MySQL instance must be retained for at least 3 days. We recommend that you retain them for 7 days. The binary logs of a self-managed MySQL database must be retained for at least 7 days. Otherwise, the data migration task may fail because DTS cannot obtain the binary logs. In extreme cases, this can cause data inconsistency or data loss. Issues caused by a binary log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

      Note

      For more information about how to set the Retention Period for the binary logs of an RDS for MySQL instance, see Automatically delete binary logs.

  • Source database operation limits:

    • During schema migration and full data migration, do not perform DDL operations to change the schema of databases or tables. Otherwise, the data migration task fails.

      Note

      During full data migration, DTS queries the source database. This creates a metadata lock, which may block DDL operations on the source database.

    • During migration, do not perform DDL operations to add comments, such as ALTER TABLE table_name COMMENT='Table comment';. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To ensure real-time data consistency, select schema migration, full data migration, and incremental data migration.

  • Data changes from operations that are not recorded in binary logs during the migration are not migrated to the destination database. Examples of such operations include data recovery using physical backup and cascade operations.

    Note

    If this occurs, you can perform a full data migration again when your business permits.

  • If the source database is MySQL 8.0.23 or later and the data to be migrated contains invisible columns, data loss may occur because the data in these columns cannot be obtained.

    Note

    You can run the ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; command to make the invisible columns visible. For more information, see Invisible Columns.
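
The following statements are a minimal sketch, run on the source MySQL database, for checking the binary log settings and the primary key requirement described in this list. The excluded system schemas are an assumption; adjust them to your environment.

  -- Verify the binary log prerequisites.
  SHOW VARIABLES LIKE 'log_bin';           -- expected: ON
  SHOW VARIABLES LIKE 'binlog_format';     -- expected: ROW
  SHOW VARIABLES LIKE 'binlog_row_image';  -- expected: FULL
  SHOW VARIABLES LIKE 'log_slave_updates'; -- dual-primary self-managed clusters: ON

  -- List tables that have neither a PRIMARY KEY nor a UNIQUE constraint.
  SELECT t.TABLE_SCHEMA, t.TABLE_NAME
  FROM information_schema.TABLES t
  LEFT JOIN information_schema.TABLE_CONSTRAINTS c
    ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
    AND c.TABLE_NAME = t.TABLE_NAME
    AND c.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE')
  WHERE t.TABLE_TYPE = 'BASE TABLE'
    AND t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
    AND c.CONSTRAINT_NAME IS NULL;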

Other limits

  • Migration of prefix indexes is not supported. If the source database has prefix indexes, the data migration may fail.

  • Migration of INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, and FOREIGN KEY objects is not supported.

  • If online DDL operations that use temporary tables, such as merging multiple tables, are performed on the source database, data loss may occur in the destination database or the migration instance may fail.

  • If a primary key or unique key conflict occurs during the migration:

    • If the table schemas are consistent and a record in the destination database has the same primary key value as a record in the source database:

      • During full migration, DTS keeps the record in the destination database. The record from the source database is not migrated.

      • During incremental migration, DTS does not keep the record in the destination database. The record from the source database overwrites the record in the destination database.

    • If the table schemas are inconsistent, only some columns of data may be migrated, or the migration may fail. Proceed with caution.

  • The destination database must have a custom primary key, or you must configure the Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, the data migration may fail.

  • Due to the limits of AnalyticDB for MySQL, if the disk space usage of a node in the AnalyticDB for MySQL cluster exceeds 80%, the DTS task becomes abnormal and latency occurs. Estimate the required space for the migration objects in advance and ensure that the destination cluster has sufficient storage space.

  • If the destination AnalyticDB for MySQL 3.0 cluster is being backed up while the DTS task is running, the task fails.

  • Before data migration, evaluate the performance of the source and destination databases. We recommend that you perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources of the source and destination databases, which may increase the database load.

  • Full data migration involves concurrent INSERT operations, which cause table fragmentation in the destination database. After the full migration is complete, the table storage space in the destination database is larger than that in the source instance.

  • Confirm whether the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT values with a precision of 38 digits and DOUBLE values with a precision of 308 digits. An illustration of this read behavior appears after this list.

  • DTS attempts to resume a data migration task that failed within the last seven days. Therefore, before you switch your services to the destination instance, you must end or release the task, or use the revoke command to revoke the write permissions of the DTS account on the destination instance. Otherwise, data in the source instance overwrites the data in the destination instance after the task is automatically resumed.

  • If a DDL statement fails to be written to the destination database, the DTS task continues to run. You need to check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs.

  • If the always-encrypted (EncDB) feature is enabled for the RDS for MySQL instance, full data migration is not supported.

    Note

    RDS for MySQL instances with Transparent Data Encryption (TDE) enabled support schema migration, full data migration, and incremental data migration.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.
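
The following query is a minimal illustration of the read behavior described above, using a hypothetical table orders with a column price defined as FLOAT(10,2); the table and column names are assumptions for the example.

  -- DTS reads FLOAT/DOUBLE columns as ROUND(column, precision).
  -- For a column defined as FLOAT(10,2), the read is equivalent to:
  SELECT ROUND(price, 2) AS price FROM orders;

  -- If no precision is explicitly defined, the read behaves like rounding
  -- at 38 digits for FLOAT and 308 digits for DOUBLE.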

Special cases

  • When the source database is a self-managed MySQL database:

    • A primary/secondary switchover in the source database during migration causes the migration task to fail.

    • The latency of DTS is calculated by comparing the timestamp of the last migrated data record with the current timestamp. If no DML operations are performed on the source database for a long time, the latency information may be inaccurate. If the displayed latency is too high, you can perform a DML operation on the source database to update the latency information.

      Note

      If you choose to migrate the entire database, you can also create a heartbeat table. The heartbeat table is updated or written to every second, which keeps the latency information current. A sketch of such a table appears after this section.

    • DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset.

    • If the source database is an Amazon Aurora MySQL instance or another clustered MySQL instance, ensure that the domain name or IP address configured for the task and its resolved result always point to the read/write (RW) node. Otherwise, the migration task may not run as expected.

  • When the source database is an RDS for MySQL instance:

    • For incremental data migration, an RDS for MySQL instance that does not record transaction logs, such as a read-only instance of RDS for MySQL 5.6, cannot be used as the source database.

    • DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset.
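
The following is a minimal sketch of a heartbeat table for a self-managed source, assuming the MySQL event scheduler is enabled and using a hypothetical dts_helper schema. It is not the exact table DTS creates; it only illustrates a table that is written to every second so that binary log records keep advancing.

  -- Hypothetical heartbeat table: one row whose timestamp advances every second.
  CREATE DATABASE IF NOT EXISTS dts_helper;
  CREATE TABLE IF NOT EXISTS dts_helper.heartbeat (
    id TINYINT PRIMARY KEY,
    ts TIMESTAMP(3) NOT NULL
  );
  INSERT INTO dts_helper.heartbeat VALUES (1, NOW(3))
    ON DUPLICATE KEY UPDATE ts = NOW(3);

  -- Requires the event scheduler: SET GLOBAL event_scheduler = ON;
  CREATE EVENT IF NOT EXISTS dts_helper.heartbeat_tick
    ON SCHEDULE EVERY 1 SECOND
    DO UPDATE dts_helper.heartbeat SET ts = NOW(3) WHERE id = 1;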

Billing

  • Schema migration and full data migration: The instance configuration fee is free of charge.

  • Incremental data migration: The instance configuration fee is charged. For more information, see Billing overview.

  • Internet traffic fee: When the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS migrates the schema definitions of the migration objects from the source database to the destination database.

    Note

    This is a heterogeneous data migration. During schema migration, DTS cannot perfectly map all data types. You must carefully evaluate the impact of data type mapping on your business. For more information, see Data type mappings between heterogeneous databases.

  • Full migration

    DTS migrates all historical data of the migration objects from the source database to the destination database.

  • Incremental migration

    After a full migration, DTS migrates incremental data updates from the source database to the destination database. Incremental data migration lets you smoothly migrate data without stopping your self-managed applications.

Supported SQL operations for incremental migration

  • DML: INSERT, UPDATE, and DELETE

    Note

    When data is written to AnalyticDB for MySQL, UPDATE statements are automatically converted to REPLACE INTO statements, as illustrated below. If the primary key is updated, the statement is converted to a DELETE statement and an INSERT statement.

  • DDL: CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, MODIFY COLUMN, and DROP COLUMN
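
The following pair of statements is a minimal illustration of this conversion, using a hypothetical table customer with primary key id; the actual SQL that DTS generates is internal to the service.

  -- Statement executed on the source RDS for MySQL instance:
  UPDATE customer SET city = 'Hangzhou' WHERE id = 42;

  -- Roughly what DTS writes to AnalyticDB for MySQL: a REPLACE INTO
  -- that carries the full row image (the columns here are hypothetical).
  REPLACE INTO customer (id, name, city) VALUES (42, 'Alice', 'Hangzhou');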

Important

A RENAME TABLE operation may cause data inconsistency. For example, if you select only one table as the migration object and rename the table in the source instance during migration, the data of this table is not migrated to the destination database. To prevent this issue, select the entire database to which the table belongs as the migration object when you configure the data migration task. Make sure that the databases to which the table belongs before and after the RENAME TABLE operation are both included in the migration objects.

Warning

If you change the field type of a source table during data migration, the migration task reports an error and stops. For example, suppose the task fails because the field type of the source table `customer` is changed during migration to the destination AnalyticDB for MySQL database. You can resolve the issue by following these steps (a SQL sketch appears after this list):

  1. In AnalyticDB for MySQL 3.0, create a new table named `customer_new` with the same schema as the `customer` table.

  2. Run the INSERT INTO SELECT command to copy the data from the `customer` table to the new `customer_new` table. This ensures that the data in both tables is consistent.

  3. Rename or delete the failed table `customer`, and then rename the `customer_new` table to `customer`.

  4. In the DTS console, restart the data migration task.
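
A minimal sketch of steps 1 through 3, run on the destination AnalyticDB for MySQL 3.0 cluster. The column list and distribution key are assumptions for the example, and the rename syntax may vary by version; verify the DDL against the AnalyticDB for MySQL documentation.

  -- 1. Recreate the table with the corrected schema; the columns shown are hypothetical.
  CREATE TABLE customer_new (
    id   BIGINT NOT NULL,
    name VARCHAR(64),
    city VARCHAR(64),
    PRIMARY KEY (id)
  ) DISTRIBUTE BY HASH(id);

  -- 2. Copy the existing data so that both tables are consistent.
  INSERT INTO customer_new SELECT id, name, city FROM customer;

  -- 3. Move the failed table aside, then put the new table in its place.
  ALTER TABLE customer RENAME TO customer_old;
  ALTER TABLE customer_new RENAME TO customer;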

Database account permissions

  • RDS MySQL

    • Schema migration: SELECT permission.

    • Full migration: SELECT permission.

    • Incremental migration: REPLICATION SLAVE, REPLICATION CLIENT, and SELECT permissions on the objects to be migrated. DTS automatically grants these permissions.

  • AnalyticDB for MySQL 3.0: Read and write permissions for schema migration, full migration, and incremental migration.

For more information about how to create a database account and grant permissions, see the documentation for the source and destination databases.
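
As a hedged illustration of the permissions in the preceding table, the following statements create a migration account on a MySQL source; the account name, host, password, and the mydb object scope are assumptions for the example. On an RDS for MySQL source, DTS grants the replication permissions automatically, so you typically only need to create the account.

  -- Hypothetical migration account; adjust the name, host, and password.
  CREATE USER 'dts_user'@'%' IDENTIFIED BY 'your_password';

  -- SELECT on the objects to be migrated (here: all tables in mydb).
  GRANT SELECT ON mydb.* TO 'dts_user'@'%';

  -- Replication permissions required for incremental migration; these are
  -- global privileges in MySQL, so they are granted ON *.*.
  GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';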

Procedure

  1. Navigate to the migration task list page for the destination region using one of the following methods.

    From the DTS console

    1. Log on to the Data Transmission Service (DTS) console.

    2. In the navigation pane on the left, click Data Migration.

    3. In the upper-left corner of the page, select the region where the migration instance is located.

    From the DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the Data Management (DMS) console.

    2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

    3. To the right of Data Migration Tasks, select the region where the migration instance is located.

  2. Click Create Task to navigate to the task configuration page.

  3. Configure the source and destination databases.

    Warning

    After you select the source and destination instances, we recommend that you carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • To use a database instance that has been added to the system (created or saved), select the desired database instance from the drop-down list. The database information below will be automatically configured.

      Note

      In the DMS console, this parameter is named Select a DMS database instance.

    • If you have not registered the database instance with the system, or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select MySQL.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the source RDS MySQL instance resides.

    Cross-account

    This example shows a migration within the same Alibaba Cloud account. Select No.

    RDS Instance ID

    Select the ID of the source RDS MySQL instance.

    Database Account

    Enter the database account of the source RDS MySQL instance. For information about the required permissions, see Database account permissions.

    Database Password

    Enter the password for the database account.

    Encryption

    Select Non-encrypted or SSL-encrypted based on your database requirements. If you set this parameter to SSL-encrypted, you must enable SSL encryption for the RDS for MySQL instance beforehand. For more information, see Quickly enable SSL encryption using a cloud certificate.

    Destination Database

    Select Existing Connection

    • To use a database instance that has been added to the system (created or saved), select the desired database instance from the drop-down list. The database information below will be automatically configured.

      Note

      In the DMS console, this parameter is named Select a DMS database instance.

    • If you have not registered the database instance with the system, or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select AnalyticDB MySQL 3.0.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the destination AnalyticDB for MySQL 3.0 cluster resides.

    Instance ID

    Select the ID of the destination AnalyticDB for MySQL 3.0 cluster.

    Database Account

    Enter the database account of the destination AnalyticDB for MySQL 3.0 cluster. For information about the required permissions, see Database account permissions.

    Database Password

    Enter the password for the database account.

  4. After you complete the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that the IP address segment of the DTS service is automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add DTS server IP addresses to a whitelist.

    • If the source or destination database is a self-managed database (the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box that appears.

  5. Configure the task objects.

    1. On the Configure Objects page, configure the objects that you want to migrate.


      Migration Types

      • If you want to perform a full migration, select both Schema Migration and Full Migration.

      • To perform a migration without service interruption, select Schema Migration, Full Migration, and Incremental Migration.

      Note
      • If you select Full Migration, all tables, including their schemas and data, can be migrated to the destination database.

      • If you do not select Incremental Migration, do not write new data to the source instance during the data migration to ensure data consistency.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks whether tables with the same names exist in the destination database. If no tables with the same names exist, the precheck is passed. If tables with the same names exist, an error is reported during the precheck, and the data migration task does not start.

        Note

        If a table in the destination database has the same name but cannot be easily deleted or renamed, you can change the name of the table in the destination database. For more information, see Object name mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same names.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and business risks. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key value as a record in the source database:

          • During full migration, DTS keeps the record in the destination database. The record from the source database is not migrated.

          • During incremental migration, DTS does not keep the record in the destination database. The record from the source database overwrites the record in the destination database.

        • If the table schemas are inconsistent, only some columns of data may be migrated, or the migration may fail. Proceed with caution.

      DDL and DML Operations to Be Synchronized

      Select the SQL operations for incremental migration at the instance level. For information about the supported operations, see Supported SQL operations for incremental migration.

      Note

      To select SQL operations for incremental migration at the database or table level, right-click a migration object in the Selected Objects box and select the desired SQL operations in the dialog box that appears.

      Merge Tables

      • If you select Yes, DTS adds the __dts_data_source column to each table to record data sources. For more information, see Enable multi-table merge.

      • If you select No, the default value, DTS does not merge tables.

      Note

      The table merging feature is configured at the task level, not the table level. To merge some tables but not others, you must create two separate data migration tasks.

      Warning

      Do not perform DDL operations to change the schema of the source database or tables. Otherwise, data inconsistency or task failure may occur.

      Capitalization of Object Names in Destination Instance

      You can configure the case sensitivity policy for the names of migrated objects, such as databases, tables, and columns, in the destination instance. By default, the DTS default policy is selected. You can also choose to keep the case of object names consistent with the default policy of the source or destination database. For more information, see Case sensitivity of object names in the destination database.

      Source Objects

      In the Source Objects box, click the objects to migrate, and then click Right arrow to move them to the Selected Objects box.

      Note
      • You can select migration objects at the granularity of database, table, or column. If you select tables as migration objects, other objects such as views, triggers, and stored procedures are not migrated to the destination database.

      • If you select an entire database as the migration object, the following rules apply by default:

        • If a table to be migrated in the source database has a primary key (single-column or multi-column), the primary key column is used as the distribution key.

        • If a table to be migrated in the source database does not have a primary key, an auto-increment primary key column is automatically generated. This may cause data inconsistency between the source and destination databases.

      Selected Objects

      • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Individual table column mapping.

      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

      Note
      • If you use the object name mapping feature, the migration of other objects that depend on the mapped object may fail.

      • To filter data using a WHERE clause, right-click the table to be migrated in the Selected Objects box and set the filter condition in the dialog box that appears. For more information, see Set a filter condition.

      • To select SQL operations for migration at the database or table level, right-click the migration object in the Selected Objects box and select the desired SQL operations in the dialog box that appears.

      • You can also right-click a table to be migrated in the Selected Objects box and use the add column feature. For more information, see Add an additional column.

    2. Click Next: Advanced Settings to configure advanced parameters.


      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on a shared cluster. You do not need to select one. If you want more stable tasks, you can purchase a dedicated cluster to run DTS migration tasks.

      Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database.

      If you use Data Management (DMS) or gh-ost to perform online DDL changes in the source database, you can choose whether to migrate the data from the temporary tables generated by the online DDL changes.

      Important
      • DTS tasks do not support using tools such as pt-online-schema-change to perform online DDL changes. Otherwise, the DTS task fails.

      • DDL operations that change the database or table schema are not allowed during the Schema Migration and Full Data Migration phases, so those phases are not controlled by the online DDL policy. The processing methods for each phase are as follows:

        • Schema Migration: Not controlled by the online DDL policy. Related temporary tables are created.

        • Full Data Migration: Not controlled by the online DDL policy. The migration of temporary tables is not included in the full migration objects. All tables whose names match the regular expression (^_(.+)_(?:gho|new)$ or ^_(.+)_(?:ghc|del|old)$) are filtered out.

        • Incremental Data Migration: Controlled by the online DDL policy.

          • Yes: Migrates data changes from temporary tables (for example, _table_name_gho) generated by online DDL operations.

          • No, Adapt to DMS Online DDL and No, Adapt to gh-ost: Filters out data changes from temporary tables (for example, _table_name_gho) generated by tools such as gh-ost based on regular expression rules.

      • Yes: Migrates the data from the temporary tables generated by online DDL changes.

        Note

        If online DDL changes generate a large amount of data in temporary tables, it may cause task latency.

      • No, Adapt to DMS Online DDL: Does not migrate the data from the temporary tables generated by online DDL changes. It only migrates the original DDL statements executed using Data Management (DMS).

        Note

        This option causes tables in the destination database to be locked.

      • No, Adapt to gh-ost: Does not migrate the data from the temporary tables generated by online DDL changes. It supports custom filtering rules. DTS filters out data changes from temporary tables (for example, _table_name_gho) generated by tools such as gh-ost based on regular expression rules. You can modify the default regular expressions used to match shadow and useless tables as needed:

        • Shadow table: ^_(.+)_(?:gho|new)$

        • Useless table: ^_(.+)_(?:ghc|del|old)$

        Note

        This option causes tables in the destination database to be locked.

      Retry Time for Failed Connections

      After the migration task starts, if the connection to the source or destination database fails, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1440 minutes. We recommend that you set the duration to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified duration, the migration task automatically resumes. Otherwise, the task fails.

      Note
      • For multiple DTS instances that share the same source or destination, the network retry time is determined by the setting of the last created task.

      • Because you are charged for the task during the connection retry period, we recommend that you customize the retry time based on your business needs, or release the DTS instance as soon as possible after the source and destination database instances are released.

      Retry Time for Other Issues

      After the migration task starts, if a non-connectivity issue, such as a DDL or DML execution exception, occurs in the source or destination database, DTS reports an error and immediately begins to retry the operation. The default retry duration is 10 minutes. You can customize the retry time to a value from 1 to 1440 minutes. We recommend that you set the duration to more than 10 minutes. If the related operations succeed within the specified retry duration, the migration task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than the value of Retry Time for Failed Connections.

      Enable Throttling for Full Data Migration

      During full migration, DTS consumes read and write resources on the source and destination databases, which may increase the database load. If required, you can enable throttling for the full migration task. You can set the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters to reduce the load on the source and destination databases.

      Note
      • This configuration item is available only if you select Full Data Migration for Migration Types.

      • You can also adjust the full migration speed after the migration instance is running.

      Enable Throttling for Incremental Data Migration

      If required, you can also choose to set speed limits for the incremental migration task. You can set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the load on the destination database.

      Note
      • This configuration item is available only if you select Incremental Data Migration for Migration Types.

      • You can also adjust the incremental migration speed after the migration instance is running.

      Whether to delete SQL operations on heartbeat tables of forward and reverse tasks

      Choose whether DTS writes heartbeat SQL information to the source database while the instance is running.

      • Yes: Does not write heartbeat SQL information to the source database. The DTS instance may display latency.

      • No: Writes heartbeat SQL information to the source database. This may interfere with source database operations like physical backups and cloning.

      Environment Tag

      You can select an environment tag to identify the instance as needed. No selection is required for this example.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Select whether to set alerts and receive alert notifications based on your business needs.

      • No: Does not set an alert.

      • Yes: Configure alerts by setting an alert threshold and alert notifications. If a migration fails or the latency exceeds the threshold, the system sends an alert notification.

    3. Click Next: Data Validation to configure a data validation task.

      For more information about the data validation feature, see Configure data validation.

    4. Optional: After you complete the preceding configurations, click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, and partition key information (Partition Key, Partitioning Rules, and Partition Lifecycle) for the tables to be migrated in the destination database. A sketch of a resulting table definition appears after the notes below.

      Note
      • This step is available only if you select the Schema Migration option for Migration Types when you configure task objects. You can select All for Definition Status to make modifications.

      • You can select multiple columns for Primary Key Column to form a composite primary key. You must also select one or more columns from the Primary Key Column to use as the Distribution Key and Partition Key. For more information, see CREATE TABLE.
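
      For reference, the following is a minimal sketch of a destination table definition in AnalyticDB for MySQL 3.0 that combines these settings. The table, its columns, the hash distribution key, and the 30-day partition lifecycle are assumptions for the example; see CREATE TABLE for the authoritative syntax.

        -- Hypothetical table: composite primary key, a distribution key drawn
        -- from the primary key, and a time-based partition with a lifecycle.
        CREATE TABLE orders (
          order_id   BIGINT NOT NULL,
          buyer_id   BIGINT NOT NULL,
          order_time DATETIME NOT NULL,
          amount     DECIMAL(16, 2),
          PRIMARY KEY (order_id, order_time)
        )
        DISTRIBUTE BY HASH(order_id)
        PARTITION BY VALUE(DATE_FORMAT(order_time, '%Y%m%d')) LIFECYCLE 30;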

  6. Save the task and run a precheck.

    • To view the parameters for configuring this instance when you call the API operation, move the pointer over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble that appears.

    • If you do not need to view or have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before the migration task starts, DTS performs a precheck. The task starts only after it passes the precheck.

    • If the precheck fails, click View Details next to the failed check item, fix the issue based on the prompt, and then run the precheck again.

    • If a warning is reported during the precheck:

      • For check items that cannot be ignored, click View Details next to the failed item, fix the issue based on the prompt, and then run the precheck again.

      • For check items that can be ignored, you can click Confirm Alert Details, and then click Ignore, OK, and Precheck Again in sequence to skip the alert item and run the precheck again. If you ignore a warning, issues such as data inconsistency may occur and pose risks to your business.

  7. Purchase the instance.

    1. When the Success Rate is 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the link specification for the data migration instance. For more information, see the following table.


      New Instance Class

      Resource Group Settings

      Select the resource group to which the instance belongs. The default value is default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS provides migration specifications with different performance levels. The link specification affects the migration speed. You can select a specification based on your business scenario. For more information, see Data migration link specifications.

    3. After the configuration is complete, read and select Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start. In the OK dialog box that appears, click OK.

      You can view the progress of the migration task on the Data Migration Tasks list page.

      Note
      • If the migration task does not include incremental migration, it stops automatically after the full migration is complete. After the task stops, its Status changes to Completed.

      • If the migration task includes incremental migration, it does not stop automatically. The incremental migration task continues to run. While the incremental migration task is running, the Status of the task is Running.

FAQ

  • During schema synchronization or migration, the following error occurs: only 500 dimension table allowed, current dimensionTableCount: 500. How do I resolve it?

    • Cause: In AnalyticDB for MySQL, the total number of tables includes both tables currently in use and tables in the recycle bin. If this total exceeds the limit, an error appears indicating that the table count limit has been exceeded. For more information, see Limits.

    • Solution:

      1. In your DTS task, check whether the number of tables being synchronized or migrated exceeds the maximum value shown in the error message.

      2. If the number of tables does not exceed the maximum, run the following commands to check whether deleted tables exist in the recycle bin. For more information, see Table recycle bin.

        -- Query the number of tables currently in use by the instance.
        SELECT COUNT(*) FROM INFORMATION_SCHEMA.tables;
        -- Query the number of tables in the recycle bin.
        SELECT COUNT(*) FROM INFORMATION_SCHEMA.KEPLER_META_RECYCLE_BIN;
      3. If you confirm that deleted tables exist in the recycle bin, run the following commands to clean them up.

        Important

        After you purge tables from the recycle bin, they cannot be recovered. Confirm that these tables are no longer needed.

        -- Delete all tables from the table recycle bin.
        PURGE RECYCLE_BIN ALL;
        -- Delete a specific table from the table recycle bin.
        PURGE RECYCLE_BIN TABLE <table name in ADB_RECYCLE_BIN database>;