
Data Transmission Service: Migrate data from an ApsaraDB RDS for MySQL instance to an AnalyticDB for PostgreSQL instance

Last Updated: Aug 08, 2023

This topic describes how to migrate data from an ApsaraDB RDS for MySQL instance to an AnalyticDB for PostgreSQL instance by using Data Transmission Service (DTS).

Supported source databases

You can migrate data from the following types of MySQL databases to an AnalyticDB for PostgreSQL instance:
  • ApsaraDB RDS for MySQL instance
  • Self-managed MySQL databases:
    • Self-managed database with a public IP address
    • Self-managed database that is hosted on an Elastic Compute Service (ECS) instance
    • Self-managed database that is connected over Database Gateway
    • Self-managed database that is connected over Cloud Enterprise Network (CEN)
    • Self-managed database that is connected over Express Connect, VPN Gateway, or Smart Access Gateway
Note In this topic, an ApsaraDB RDS for MySQL instance is used to describe how to configure a data migration task. You can also follow the procedure to configure data migration tasks for other types of MySQL databases.

Prerequisites

The destination AnalyticDB for PostgreSQL instance is created, and its available storage space is larger than the total size of the data in the source ApsaraDB RDS for MySQL instance.

Limits

Note
  • During schema migration, DTS migrates foreign keys from the source database to the destination database.
  • During full data migration and incremental data migration, DTS temporarily disables the foreign key constraint check and cascade operations at the session level. If you perform cascade update or delete operations on the source database during data migration, data inconsistency may occur.
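  For reference, the kind of session-level settings that this behavior corresponds to looks roughly like the following. This is an illustrative sketch only, not the exact commands that DTS executes, and you do not need to run these statements yourself.

    -- MySQL (source side): skip foreign key checks for the current session only.
    SET FOREIGN_KEY_CHECKS = 0;

    -- PostgreSQL-compatible databases such as AnalyticDB for PostgreSQL (destination side):
    -- treat the session as a replica so that foreign key triggers are not fired.
    SET session_replication_role = replica;
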
Limits on the source database
  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
  • If you select tables as the objects to be migrated and you need to edit the tables, such as renaming tables or columns, you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables across multiple tasks or configure a task to migrate the entire database.
  • The following requirements for binary logs must be met (you can verify the current settings with the queries in the sketch after this list):
    • The binary logging feature is enabled. Binary logging is enabled by default for ApsaraDB RDS for MySQL instances. The binlog_row_image parameter must be set to full. Otherwise, error messages are returned during the precheck and the data migration task cannot be started. For more information, see Modify the parameters of an ApsaraDB RDS for MySQL instance.
      Important
      • If the source database is a self-managed MySQL database, you must enable the binary logging feature and set the binlog_format parameter to row and the binlog_row_image parameter to full.
      • If the source database is a self-managed MySQL database deployed in a dual-primary cluster, you must set the log_slave_updates parameter to ON. This ensures that DTS can obtain all binary logs. For more information, see Create an account for a self-managed MySQL database and configure binary logging.
    • If you perform only incremental data migration, the binary logs of the source database must be retained for at least 24 hours. If you perform both full data migration and incremental data migration, the binary logs must be retained for at least seven days. After full data migration is complete, you can reset the retention period to more than 24 hours. If the retention period is too short, DTS may fail to obtain the binary logs and the task may fail. In extreme cases, data inconsistency or data loss may occur. Make sure that you configure the retention period of binary logs in accordance with the preceding requirements. Otherwise, the service reliability and performance guarantees in the DTS Service Level Agreement (SLA) do not apply. For more information about the binary log files of an ApsaraDB RDS for MySQL instance, see View and delete the binary log files of an ApsaraDB RDS for MySQL instance.

  • During data migration, do not perform DDL operations that modify the primary key or add comments, because such operations do not take effect in the destination database. For example, do not execute the ALTER TABLE table_name COMMENT='Table comments'; statement.
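  Before you start the task, you can verify the preceding requirements on the source database. The following queries are a minimal sketch for a standard MySQL 5.7 or 8.0 source; the database name mydb is a placeholder for your actual database.

    -- Check that binary logging is enabled and configured as required
    -- (log_bin = ON, binlog_format = ROW, binlog_row_image = FULL).
    SHOW GLOBAL VARIABLES
    WHERE Variable_name IN ('log_bin', 'binlog_format', 'binlog_row_image', 'log_slave_updates');

    -- List tables in the placeholder database mydb that have neither a PRIMARY KEY
    -- nor a UNIQUE constraint and may therefore cause duplicate records in the destination.
    SELECT t.table_name
    FROM information_schema.tables t
    LEFT JOIN information_schema.table_constraints c
      ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
    WHERE t.table_schema = 'mydb'
      AND t.table_type = 'BASE TABLE'
      AND c.constraint_name IS NULL;
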
Other limits
  • Requirements for the objects to be migrated:
    • Only tables can be selected as the objects to migrate.
    • DTS does not migrate data of the following data types: BIT, VARBIT, GEOMETRY, ARRAY, UUID, TSQUERY, TSVECTOR, TXID_SNAPSHOT, and POINT. The sketch after this list shows a query that finds such columns in the source database.
    • Prefix indexes cannot be migrated. If the source database contains prefix indexes, data may fail to be migrated.
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is complete, the size of the used tablespace of the destination database is larger than that of the source database.
  • If you select one or more tables instead of an entire database as the objects to be migrated, do not use tools such as pt-online-schema-change to perform DDL operations on the tables during data migration. Otherwise, data may fail to be migrated.

    You can use Data Management (DMS) to perform online DDL operations. For more information, see Perform lock-free DDL operations.

  • During data migration, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.
  • Append-optimized (AO) tables cannot be used as destination tables.
  • If you use column mapping to migrate only some columns of a table, or if the schemas of the source and destination tables are inconsistent, the data in the columns that exist in the source database but not in the destination database is lost.
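  The following is a minimal sketch that checks the source MySQL database for columns whose types appear in the unsupported list above, so that you can handle them before the migration starts. The database name mydb is a placeholder, and only the listed types that exist in MySQL are checked.

    -- Find columns in the placeholder database mydb that use data types
    -- from the unsupported list (BIT, GEOMETRY, and POINT exist in MySQL).
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'mydb'
      AND UPPER(data_type) IN ('BIT', 'GEOMETRY', 'POINT');
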
Special cases
If the source database is a self-managed MySQL database, take note of the following items:
  • If you perform a primary/secondary switchover on the source database while the data migration task is running, the task fails.
  • DTS calculates migration latency based on the timestamp of the latest migrated data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for an extended period of time, the reported latency may be inaccurate. If the latency of the migration task is excessively high, you can perform a DML operation on the source database to update the latency.
    Note If you select an entire database as the object to be migrated, you can create a heartbeat table. The heartbeat table is updated or receives data every second. A sketch of such a heartbeat table follows this list.
  • DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move the binary log file position forward.
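  If you decide to add such a heartbeat table, the following is a minimal sketch for the source MySQL database. The table name dts_heartbeat and the event name dts_heartbeat_event are placeholders, and the MySQL event scheduler must be enabled for the event to run.

    -- Placeholder heartbeat table that receives an update every second.
    CREATE TABLE IF NOT EXISTS dts_heartbeat (
      id INT PRIMARY KEY,
      updated_at DATETIME(3) NOT NULL
    );
    INSERT INTO dts_heartbeat (id, updated_at) VALUES (1, NOW(3))
      ON DUPLICATE KEY UPDATE updated_at = NOW(3);

    -- Requires the event scheduler (event_scheduler = ON).
    CREATE EVENT IF NOT EXISTS dts_heartbeat_event
    ON SCHEDULE EVERY 1 SECOND
    DO
      UPDATE dts_heartbeat SET updated_at = NOW(3) WHERE id = 1;
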

Billing

Migration type | Instance configuration fee | Internet traffic fee
Schema migration and full data migration | Free of charge. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.
Incremental data migration | Charged. For more information, see Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Migration types

  • Schema migration

    DTS migrates the schemas of the objects from the source database to the destination database.

    Note In this topic, the source and the destination databases are heterogeneous databases. DTS does not ensure that the schemas of the source and destination databases are consistent after schema migration. We recommend that you evaluate the impact of data type conversion on your business. For more information, see Data type mappings between heterogeneous databases.
  • Full data migration

    DTS migrates the historical data of the objects from the source database to the destination database.

  • Incremental data migration

    After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration allows you to migrate data smoothly without interrupting the services of your applications during the migration.

SQL operations that can be migrated during incremental data migration

Operation type | SQL statement
DML | INSERT, UPDATE, and DELETE
DDL | CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, and DROP COLUMN
Warning If the data type of a field in the source table is changed during data migration, an error message is returned and the data migration task is stopped. You can perform the following steps to troubleshoot the issue:
  1. Check whether the data migration task failed because the data type of a field in a source table, such as the customer table, was changed while DTS was migrating data to the destination AnalyticDB for PostgreSQL instance.
  2. Create a table named customer_new in the destination AnalyticDB for PostgreSQL instance. The customer_new table has the same schema as the customer table.
  3. Execute the INSERT INTO SELECT statement to copy the data of the customer table and insert the data into the customer_new table. This ensures that the data of the two tables is consistent.
  4. Rename or delete the customer table. Then, change the name of the customer_new table to customer.
  5. Restart the data migration task in the DTS console.
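Steps 2 through 4 can be expressed as SQL statements on the destination AnalyticDB for PostgreSQL instance. The following is a minimal sketch that assumes the affected table is named customer and that the changed column amount is being widened from INT to BIGINT; adjust the names and types to match your actual schema.

  -- Step 2: create customer_new with the same schema as customer, then apply the
  -- data type change to the placeholder column amount.
  CREATE TABLE customer_new (LIKE customer);
  ALTER TABLE customer_new ALTER COLUMN amount TYPE BIGINT;

  -- Step 3: copy the existing data so that the two tables are consistent.
  INSERT INTO customer_new SELECT * FROM customer;

  -- Step 4: keep the old table under another name and rename the new table to customer.
  ALTER TABLE customer RENAME TO customer_old;
  ALTER TABLE customer_new RENAME TO customer;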

Permissions required for database accounts

Database | Schema migration | Full data migration | Incremental data migration
ApsaraDB RDS for MySQL | SELECT permission | SELECT permission | SELECT permission on the objects to be migrated, and the REPLICATION SLAVE and REPLICATION CLIENT permissions. DTS automatically grants these permissions to the database account.
AnalyticDB for PostgreSQL | Read and write permissions | Read and write permissions | Read and write permissions
For more information about how to create a database account and grant permissions to the database account, see the following topics:
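The following is a minimal sketch of how these permissions might be granted manually, assuming a dedicated migration account named dts_user already exists on both sides and that mydb and the public schema are placeholders for your actual database and schema. On the ApsaraDB RDS for MySQL side, DTS can grant the replication permissions automatically, as noted in the table above.

  -- Source ApsaraDB RDS for MySQL: SELECT on the objects to be migrated,
  -- plus replication permissions for incremental data migration.
  GRANT SELECT ON mydb.* TO 'dts_user'@'%';
  GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';

  -- Destination AnalyticDB for PostgreSQL: read and write permissions on the destination schema.
  GRANT ALL ON SCHEMA public TO dts_user;
  GRANT ALL ON ALL TABLES IN SCHEMA public TO dts_user;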

Procedure

  1. Go to the Data Migration Tasks page.
    1. Log on to the Data Management (DMS) console.
    2. In the top navigation bar, click DTS.
    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.
  2. From the drop-down list next to Data Migration Tasks, select the region in which the data migration instance resides.
    Note If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.
  3. Click Create Task. On the page that appears, configure the source and destination databases.
    Section | Parameter | Description
    N/A | Task Name | The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to specify a unique task name.
    Source Database | Database Type | The type of the source database. Select MySQL.
    Source Database | Access Method | The access method of the source database. Select Alibaba Cloud Instance.
    Source Database | Instance Region | The region in which the source ApsaraDB RDS for MySQL instance resides.
    Source Database | Replicate Data Across Alibaba Cloud Accounts | Specifies whether to migrate data across Alibaba Cloud accounts. In this example, No is selected.
    Source Database | RDS Instance ID | The ID of the source ApsaraDB RDS for MySQL instance.
    Source Database | Database Account | The database account of the source ApsaraDB RDS for MySQL instance. For information about the permissions that are required for the account, see the Permissions required for database accounts section of this topic.
    Source Database | Database Password | The password of the database account.
    Source Database | Encryption | Specifies whether to encrypt the connection to the source database. Select Non-encrypted or SSL-encrypted based on your business requirements. If you select SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS for MySQL instance before you configure the data migration task. For more information, see Configure SSL encryption for an ApsaraDB RDS for MySQL instance.
    Destination Database | Database Type | The type of the destination database. Select AnalyticDB for PostgreSQL.
    Destination Database | Access Method | The access method of the destination database. Select Alibaba Cloud Instance.
    Destination Database | Instance Region | The region in which the destination AnalyticDB for PostgreSQL instance resides.
    Destination Database | Instance ID | The ID of the destination AnalyticDB for PostgreSQL instance.
    Destination Database | Database Name | The name of the destination database in the AnalyticDB for PostgreSQL instance.
    Destination Database | Database Account | The initial account of the destination AnalyticDB for PostgreSQL instance. Note: You can also enter an account that has the RDS_SUPERUSER permission. For more information, see Manage users and permissions.
    Destination Database | Database Password | The password of the database account.

  4. In the lower part of the page, click Test Connectivity and Proceed.
    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the IP address whitelist of the instance.
    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance. You must make sure that the ECS instance can access the database.
    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the IP address whitelist of the database to allow DTS to access the database. For more information, see the "CIDR blocks of DTS servers" section of the Add the CIDR blocks of DTS servers to the security settings of on-premises databases topic.
    Warning If the CIDR blocks of DTS servers are automatically or manually added to the IP address whitelist of the database instance or ECS security group rules, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhance the security of your account and password, limit the ports that are exposed, authenticate API calls, regularly check the IP address whitelist or ECS security group rules and forbid unauthorized CIDR blocks, and connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
  5. Configure the objects to be migrated and advanced settings.
    • Basic Settings
      Parameter or setting | Description
      Migration Type
      • To perform only full data migration, select Schema Migration and Full Data Migration.
      • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.
      Note If you do not select Incremental Data Migration, we recommend that you do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.
      Processing Mode of Conflicting Tables
      • Precheck and Report Errors: checks whether the destination instance contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have the same names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

        Note If the source and destination databases contain tables that have the same names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are migrated to the destination database. For more information, see Map object names.
      • Clear Destination Table: skips the precheck for empty destination tables and clears the data in destination tables before the full data migration task is initialized.

      • Ignore Errors and Proceed: skips the precheck for identical table names in the source database and destination instance.
        Warning If you select Ignore Errors and Proceed, data consistency is not ensured, and your business may be exposed to potential risks.
        • If the source and destination databases have the same schema, DTS does not migrate data records that have the same primary keys as data records in the destination database.
        • If the source and destination databases have different schemas, only some columns are migrated or the data migration task fails.
      DDL and DML Operations to Be Synchronized
      The SQL operations to be migrated during incremental data migration at the instance level. For more information, see the SQL operations that can be migrated during incremental data migration section of this topic.
      Note To select the SQL operations performed on a specific database or table, perform the following steps: In the Selected Objects section, right-click an object. In the dialog box that appears, select the SQL operations that you want to migrate.
      Select Objects

      Select one or more objects from the Source Objects section and click the Rightwards arrow icon to add the objects to the Selected Objects section.

      Note You can select columns, tables, or schemas as the objects to be migrated. If you select tables or columns as the objects to be migrated, DTS does not migrate other objects, such as views, triggers, or stored procedures, to the destination database.
      Rename Databases and Tables
      • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
      Note If you use the object name mapping feature to rename an object, other objects that depend on the object may fail to be migrated.
      Filter the data to be migrated

      You can specify WHERE conditions to filter data. For more information, see Use SQL conditions to filter data.

      Select SQL operations for incremental data migration
      In the Selected Objects section, right-click an object to be migrated. In the dialog box that appears, select the SQL operations that you want to migrate. For more information, see the SQL operations that can be migrated during incremental data migration section of this topic.
    • Advanced Settings
      Parameter | Description
      Set Alerts
      Specifies whether to set alerts for the data migration task. If the task fails or the migration latency exceeds the threshold, the alert contacts receive notifications. Valid values:
      • No: does not set alerts.
      • Yes: sets alerts. In this case, you must also specify the alert threshold and alert contacts.
      Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database
      If you use DMS or the gh-ost tool to perform online DDL operations on the source database, you can specify whether to migrate the temporary tables generated by online DDL operations. Valid values:
      Important You cannot use tools such as pt-online-schema-change to perform online DDL operations on the source database. Otherwise, the DTS task fails.
      • Yes: DTS migrates the data of temporary tables generated by online DDL operations.
        Note If online DDL operations generate a large amount of data, the data migration task may take an extended period of time to complete.
      • No, Adapt to DMS Online DDL: DTS does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL operations that are performed by using DMS are migrated.
        Note If you select No, Adapt to DMS Online DDL, the tables in the destination database may be locked.
      • No, Adapt to gh-ost: DTS does not migrate the data of temporary tables generated by online DDL operations. Only the original DDL operations that are performed by using the gh-ost tool are migrated. You can use the default or custom regular expressions to filter out the shadow tables of the gh-ost tool and tables that are not required.
        Note If you select No, Adapt to gh-ost, the tables in the destination database may be locked.
      Retry Time for Failed Connections
      The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the retry time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
      Note
      • If you set different retry time ranges for multiple data migration tasks that share the same source or destination instance, the value that is set later takes precedence.
      • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at your earliest opportunity after the source database and destination instances are released.
      Enclose Object Names in Quotation Marks
      Specifies whether to enclose object names in quotation marks (see the brief illustration after this table).
      • If you select Yes, DTS automatically encloses the names of schemas, tables, and columns that meet specific requirements in single quotation marks (') or double quotation marks (") during schema migration and incremental data migration in the following scenarios:
        • The business environment of the source database is case-sensitive but the object names of the database contain both uppercase and lowercase letters.
        • A source table name does not start with a letter and contains characters other than letters, digits, and special characters.
          Note A source table name can contain only the following special characters: underscores (_), number signs (#), and dollar signs ($).
        • The names of the schemas, tables, or columns that you want to migrate are keywords, reserved keywords, or invalid characters in the destination database.
      • If you select No, DTS does not enclose object names in quotation marks.
      Configure ETL
      Specifies whether to configure the extract, transform, and load (ETL) feature. If you select Yes, you must enter domain-specific language (DSL) statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task.
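      As background for the Enclose Object Names in Quotation Marks setting, PostgreSQL-compatible databases such as AnalyticDB for PostgreSQL fold unquoted identifiers to lowercase, so mixed-case names are preserved only when they are enclosed in double quotation marks. A brief illustration with a hypothetical table name:

        -- Unquoted identifiers are folded to lowercase:
        CREATE TABLE OrderItems (id INT);    -- creates a table named orderitems
        -- Double quotation marks preserve the original case:
        CREATE TABLE "OrderItems" (id INT);  -- creates a table named OrderItems
        SELECT * FROM "OrderItems";          -- the quoted name must also be used when the table is referenced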
  6. Optional: Configure the fields in the tables that you want to migrate. Specify the primary keys and distribution keys of the tables that you want to migrate to the AnalyticDB for PostgreSQL instance. For more information, see CREATE TABLE.
    Note You can perform this operation only if you select Schema Migration in the previous step.
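    The following is a minimal sketch of what an explicit primary key and distribution key look like in AnalyticDB for PostgreSQL, using a hypothetical orders table. DTS creates the destination tables based on the keys that you configure in this step, so you normally do not execute such a statement yourself.

      -- Hypothetical destination table with an explicit primary key and distribution key.
      CREATE TABLE orders (
        order_id    BIGINT NOT NULL,
        customer_id BIGINT,
        amount      NUMERIC(12,2),
        created_at  TIMESTAMP,
        PRIMARY KEY (order_id)
      )
      DISTRIBUTED BY (order_id);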
  7. In the lower part of the page, click Next: Save Task Settings and Precheck.

    You can move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters to view the parameters to be specified when you call the relevant API operation to configure the DTS task.

    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.
    • If the task fails to pass the precheck, click View Details next to each failed item. After you troubleshoot the issues based on the causes, run a precheck again.
    • If an alert is triggered for an item during the precheck:
      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.
      • If an alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.
  8. Wait until the Success Rate value becomes 100%. Then, click Next: Purchase Instance.
  9. On the Purchase Instance page, configure the Resource Group and Instance Class parameters for the data migration instance. The following table describes the parameters.
    Section | Parameter | Description
    New Instance Class | Resource Group | The resource group to which the data migration instance belongs. Default value: default resource group. For more information, see What is Resource Management?.
    New Instance Class | Instance Class | DTS provides various instance classes that vary in migration speed. You can select an instance class based on your business scenario. For more information, see Specifications of data migration instances.

  10. Read and select the check box to agree to the Data Transmission Service (Pay-as-you-go) Service Terms.
  11. Click Buy and Start to start the data migration task. You can view the progress of the task in the task list.