This topic describes how to synchronize data from an ApsaraDB RDS for MySQL instance to an AnalyticDB for PostgreSQL instance in Serverless mode by using Data Transmission Service (DTS). The data synchronization feature provided by DTS allows you to transmit data for analysis.

Prerequisites

  • A source ApsaraDB RDS for MySQL instance is created.
  • A destination AnalyticDB for PostgreSQL instance in Serverless mode is created.

Supported MySQL database types

The following types of MySQL databases can be synchronized to AnalyticDB for PostgreSQL instances in Serverless mode. In this topic, an ApsaraDB RDS for MySQL instance is used to describe how to configure a data synchronization task. You can also follow the procedure to configure data synchronization tasks for other types of MySQL databases.
  • ApsaraDB RDS for MySQL instance
  • Self-managed database that is hosted on Elastic Compute Service (ECS)
  • Self-managed database that is connected over Express Connect, VPN Gateway, or Smart Access Gateway
  • Self-managed database that is connected over Database Gateway
  • Self-managed database that is connected over Cloud Enterprise Network (CEN)

DTS can also synchronize data from PostgreSQL, SQL Server, and Db2 databases. For more information, see Supported databases.

Precautions

Note By default, DTS disables FOREIGN KEY constraints for the destination database in a data synchronization task. Therefore, cascade update and delete operations performed in the source database are not synchronized to the destination database.
Limits on the source database
  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
  • If you select tables as the objects to be synchronized and you need to edit the tables, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to synchronize the tables, or configure a task to synchronize the entire database.
  • The following requirements for binary logs must be met:
    • The binary logging feature must be enabled and the binlog_row_image parameter must be set to full. For more information about how to enable binary logging, see Modify the parameters of an ApsaraDB RDS for MySQL instance. Otherwise, error messages are returned during the precheck and the data synchronization task cannot be started. (Example queries that verify these settings appear after this list.)
      Important
      • If the source database is a self-managed MySQL database, you must enable the binary logging feature and set the binlog_format parameter to row and the binlog_row_image parameter to full.
      • If the source database is a self-managed MySQL database deployed in a dual-primary cluster, you must set the log_slave_updates parameter to ON. This ensures that DTS can obtain all binary logs. For more information, see Create an account for a self-managed MySQL database and configure binary logging.
    • If you perform only incremental data synchronization, the binary logs of the source database must be retained for at least 24 hours. If you perform both full data synchronization and incremental data synchronization, the binary logs of the source database must be retained for at least seven days. After full data synchronization is complete, you can reset the retention period to more than 24 hours. Otherwise, DTS may fail to obtain the binary logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. Make sure that you configure the retention period of binary logs in accordance with the preceding requirements. Otherwise, the SLA of DTS does not guarantee service reliability or performance. For more information about the binary log files of an ApsaraDB RDS for MySQL instance, see View and delete the binary log files of an ApsaraDB RDS for MySQL instance.

  • During data synchronization, do not perform DDL operations to modify the primary key or add comments because the operations cannot take effect. For example, do not execute the ALTER TABLE table_name COMMENT='Table comments'; statement.
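  The following SQL statements are a minimal sketch of how you might verify these requirements on the source MySQL database before you configure the task. The database name mydb is a hypothetical placeholder.

    -- Check the binary log settings that DTS requires
    -- (expected values: log_bin = ON, binlog_format = ROW, binlog_row_image = FULL).
    SHOW VARIABLES LIKE 'log_bin';
    SHOW VARIABLES LIKE 'binlog_format';
    SHOW VARIABLES LIKE 'binlog_row_image';

    -- List tables that have neither a PRIMARY KEY nor a UNIQUE constraint.
    SELECT t.table_schema, t.table_name
    FROM information_schema.tables t
    LEFT JOIN information_schema.table_constraints c
      ON c.table_schema = t.table_schema
     AND c.table_name = t.table_name
     AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
    WHERE t.table_schema = 'mydb'
      AND t.table_type = 'BASE TABLE'
      AND c.constraint_name IS NULL;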
Other limits
  • Requirements for the objects to be synchronized:
    • Only tables can be selected as the objects to synchronize.
    • DTS does not synchronize data of the following data types: BIT, VARBIT, GEOMETRY, ARRAY, UUID, TSQUERY, TSVECTOR, TXID_SNAPSHOT, and POINT.
    • Prefix indexes cannot be synchronized. If the source database contains prefix indexes, data may fail to be synchronized.
  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses read and write resources of the source and destination databases. This may increase the loads on the database servers.
  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. After initial full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.
  • If you select one or more tables instead of an entire database as the objects to be synchronized, do not use tools such as pt-online-schema-change to perform DDL operations on the tables during data synchronization. Otherwise, data may fail to be synchronized.

    If you use only DTS to write data to the destination database, you can use Data Management (DMS) to perform online DDL operations on source tables during data synchronization. For more information, see Perform lock-free DDL operations.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.
Special cases
If the source database is a self-managed MySQL database, take note of the following items:
  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.
  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for an extended period of time, the synchronization latency may be inaccurate. If the latency of the synchronization task is excessively high, you can perform a DML operation on the source database to update the latency.
    Note If you select an entire database as the object to be synchronized, you can create a heartbeat table that is updated or receives data every second, which keeps the latency statistics accurate. (A sketch of such a table follows this list.)
  • DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position.
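  The following statements are a minimal sketch of such a heartbeat table, assuming a source database named mydb and that the MySQL event scheduler is enabled; the table and event names are hypothetical.

    -- Heartbeat table that receives an update every second so that latency
    -- statistics stay accurate even when the business tables are idle.
    CREATE TABLE mydb.dts_heartbeat (
      id INT PRIMARY KEY,
      updated_at DATETIME NOT NULL
    );
    INSERT INTO mydb.dts_heartbeat (id, updated_at) VALUES (1, NOW());

    -- Requires the event scheduler: SET GLOBAL event_scheduler = ON;
    CREATE EVENT mydb.dts_heartbeat_tick
      ON SCHEDULE EVERY 1 SECOND
      DO UPDATE mydb.dts_heartbeat SET updated_at = NOW() WHERE id = 1;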

Billing

  • Schema synchronization and full data synchronization: free of charge.
  • Incremental data synchronization: charged. For more information, see Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization
  • One-way one-to-many synchronization
  • One-way many-to-one synchronization

SQL operations that can be synchronized

  • DML operations: INSERT, UPDATE, and DELETE
  • DDL operation: ADD COLUMN
    Note The CREATE TABLE operation is not supported. To synchronize data from a new table, you must add the table to the selected objects. For more information, see Add an object to a data synchronization task.
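  For example, assuming a source table named orders (hypothetical), the first statement below is synchronized whereas the second is not:

    -- Synchronized: ADD COLUMN is a supported DDL operation.
    ALTER TABLE orders ADD COLUMN remark VARCHAR(64);

    -- Not synchronized: CREATE TABLE is not supported. Add the new table to the
    -- selected objects of the task instead.
    CREATE TABLE orders_archive (id BIGINT PRIMARY KEY, created_at DATETIME);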

Term mappings

  • Database (MySQL) maps to Schema (AnalyticDB for PostgreSQL).
  • Table (MySQL) maps to Table (AnalyticDB for PostgreSQL).
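For example, assuming a source MySQL database named mydb that contains a table named orders (both hypothetical), the synchronized data can be queried in the destination instance as follows:

  -- The MySQL database mydb appears as a schema in AnalyticDB for PostgreSQL;
  -- the table keeps its original name.
  SELECT * FROM mydb.orders LIMIT 10;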

Procedure

  1. Go to the Data Synchronization page of the new DTS console.
    Note You can also log on to the DMS console. In the top navigation bar, click DTS. Then, in the left-side navigation pane, choose DTS (DTS) > Data Synchronization.
  2. In the top navigation bar, select the region where the data synchronization instance resides.
  3. Click Create Task. On the Create Task page, configure the source and destination databases.
    • Task Name: The task name that DTS automatically generates. We recommend that you specify a descriptive name that makes the task easy to identify. You do not need to use a unique task name.
    Source Database
    • Select Instance: Select an existing ApsaraDB RDS for MySQL instance. This parameter is optional.
    • Database Type: Select MySQL.
    • Access Method: Select Alibaba Cloud Instance.
    • Instance Region: The region where the source ApsaraDB RDS for MySQL instance resides.
    • Replicate Data Across Alibaba Cloud Accounts: Select No in this example.
    • RDS Instance ID: The ID of the source ApsaraDB RDS for MySQL instance.
    • Database Account: The database account of the source ApsaraDB RDS for MySQL database. The account must have the REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT permissions. (A sketch of granting these permissions follows this list.)
    • Database Password: The password of the database account.
    • Encryption: Select Non-encrypted or SSL-encrypted based on your requirements. If you want to select SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS for MySQL instance before you configure the data synchronization task. For more information, see Configure SSL encryption for an ApsaraDB RDS for MySQL instance.
    Destination Database
    • Select Instance: Select an existing AnalyticDB for PostgreSQL instance in Serverless mode. This parameter is optional.
    • Database Type: Select AnalyticDB for PostgreSQL.
    • Access Method: Select Alibaba Cloud Instance.
    • Instance Region: The region where the destination AnalyticDB for PostgreSQL instance in Serverless mode resides.
    • Instance ID: The ID of the destination AnalyticDB for PostgreSQL instance in Serverless mode.
    • Database Name: The name of the destination database in the AnalyticDB for PostgreSQL instance in Serverless mode.
    • Database Account: The initial account of the destination AnalyticDB for PostgreSQL instance in Serverless mode.
      Note You can also enter an account that has the RDS_SUPERUSER permission. For more information, see Manage users and permissions.
    • Database Password: The password of the database account.
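    The required source permissions can be granted as follows if you manage the account yourself, for example on a self-managed MySQL database; the account name dts_user and the password are hypothetical placeholders.

      CREATE USER 'dts_user'@'%' IDENTIFIED BY 'your_password';
      -- Grant the REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT
      -- permissions that DTS needs on the source database.
      GRANT REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, SELECT ON *.* TO 'dts_user'@'%';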
  4. In the lower part of the page, click Test Connectivity and Proceed.
    1. Select objects for the task and configure advanced settings.
      Task Stages: Incremental Data Synchronization is automatically selected. You must also select Schema Synchronization and Full Data Synchronization. After the precheck is complete, DTS synchronizes the historical data of the selected objects from the source instance to the destination instance. The historical data serves as the basis for subsequent incremental synchronization.
      Processing Mode of Conflicting Tables:
      • Precheck and Report Errors: checks whether the source and destination databases contain tables that share the same names. If the source and destination databases do not contain identical table names, the precheck is passed. Otherwise, an error is returned during the precheck, and the data synchronization task cannot be started.
        Note You can use the object name mapping feature to rename the tables that are synchronized to the destination database. You can use this feature if the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed. For more information, see Map object names.
      • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
        Warning If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.
        • If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database:
          • During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.
          • During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.
        • If the source and destination databases have different schemas, data may fail to be initialized. In this case, only specific columns are synchronized or the data synchronization task fails.
      DDL and DML Operations to Be Synchronized: The DDL and DML operations that you want to synchronize. For more information, see SQL operations that can be synchronized.
      Note To select the SQL operations performed on a specific database or table, right-click an object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to synchronize.
      Select Objects: Select one or more objects from the Source Objects section and click the rightwards arrow icon to add the objects to the Selected Objects section.
      Note You can select only tables as the objects to be synchronized.
      Rename Databases and Tables:
      • To rename an object that you want to synchronize to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
      Filter data: You can specify WHERE conditions to filter data. An example condition follows this list. For more information, see Use SQL conditions to filter data.
      Select the SQL operations to be synchronized: In the Selected Objects section, right-click an object. In the dialog box that appears, select the DML and DDL operations that you want to synchronize. For more information, see SQL operations that can be synchronized.
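      For example, a filter condition for a hypothetical orders table might look like the following sketch; the column names are assumptions, and the condition is entered without the WHERE keyword.

        -- Only rows that match this condition are synchronized.
        order_date >= '2024-01-01' AND status <> 'canceled'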
    2. Click Next: Advanced Settings.
      Set Alerts: Specify whether to set alerts for the data synchronization task. If the task fails or the synchronization latency exceeds the threshold, the alert contacts will receive notifications.
      • Select No if you do not want to set alerts.
      • Select Yes to set alerts. In this case, you must also set the alert threshold and alert contacts.
      Replicate Temporary Tables When DMS Performs DDL Operations: If you use DMS to perform online DDL operations on the source database, specify whether to synchronize the temporary tables generated by the online DDL operations.
      • Yes: DTS synchronizes the data of temporary tables generated by online DDL operations.
        Note If online DDL operations generate a large amount of data, the data synchronization task may be delayed.
      • No: DTS does not synchronize the data of temporary tables generated by online DDL operations. Only the original DDL data of the source database is synchronized.
        Note If you select No, the tables in the destination database may be locked.
      Retry Time for Failed Connection: Specify the retry time range for failed connections. If a data synchronization task is disconnected, DTS immediately retries a connection within the specified time range. Valid values: 10 to 1440. Unit: minutes. Default value: 120. We recommend that you set the retry time range to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.
      Note
      • If you set different retry time ranges for multiple data synchronization tasks that have the same source or destination database, the shortest retry time range that is set takes precedence.
      • If DTS retries a connection, you are charged for the data synchronization task. We recommend that you specify the retry time based on your business needs and release the data synchronization task in a timely manner after the source and destination instances are released.
      Enclose Object Names in Quotation Marks: Specify whether to enclose object names in quotation marks. If you select Yes and the following conditions are met, DTS encloses object names in single quotation marks (') or double quotation marks (") during schema synchronization and incremental data synchronization.
      • The business environment of the source database is case-sensitive, and the database name contains both uppercase and lowercase letters.
      • A source table name does not start with a letter and contains characters other than letters, digits, and specific special characters.
        Note A source table name can contain only the following special characters: underscores (_), number signs (#), and dollar signs ($).
      • The names of the schemas, tables, or columns that you want to synchronize are keywords of the destination database, reserved keywords, or invalid characters.
        Note If you select Yes, after DTS synchronizes data to the destination database, you must specify the object name in quotation marks to query the object.
      Configure ETL: Specify whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values: Yes and No.
  5. In the lower part of the page, click Next: Configure Database and Table Fields. On the page that appears, set the primary key columns and distribution key columns of the tables that you want to synchronize to the destination AnalyticDB for PostgreSQL instance.
    (Figure: setting the primary key columns and distribution key columns for tables in the destination AnalyticDB for PostgreSQL instance.)
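    For reference, a table created in the destination instance whose primary key column id also serves as the distribution key might look like the following sketch; the schema, table, and column names are hypothetical.

      CREATE TABLE mydb.orders (
        id          BIGINT NOT NULL,
        customer_id BIGINT,
        order_date  TIMESTAMP,
        PRIMARY KEY (id)
      )
      DISTRIBUTED BY (id);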
  6. In the lower part of the page, click Next: Save Task Settings and Precheck.
    Note
    • Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.
    • If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.
    • If an alert is generated for an item during the precheck, perform the following operations based on the scenario:
      • In scenarios in which you cannot ignore the alert item, click View Details next to the failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.
      • In scenarios in which you can ignore the alert item, click Confirm Alert Details next to the failed item. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur and your business may be exposed to potential risks.
  7. Wait until the success rate becomes 100%. Then, click Next: Purchase Instance.
  8. On the Purchase Instance page, configure the Billing Method and Instance Class parameters for the data synchronization instance. The following table describes the parameters.
    In the New Instance Class section, configure the following parameters:
    • Billing Method:
      • Subscription: You pay for the instance when you create the instance. The subscription billing method is more cost-effective than the pay-as-you-go billing method for long-term use.
      • Pay-as-you-go: A pay-as-you-go instance is charged on an hourly basis. The pay-as-you-go billing method is suitable for short-term use. If you no longer require a pay-as-you-go instance, you can release it to reduce costs.
    • Instance Class: DTS provides several instance classes that provide different synchronization speeds. You can select an instance class based on your business scenario. For more information, see Specifications of data synchronization instances.
    • Subscription Duration: If you select the subscription billing method, set the subscription duration and the number of instances that you want to create. The subscription duration can be one to nine months or one to three years.
      Note This parameter is displayed only if you select the subscription billing method.
  9. Read and select the check box for Data Transmission Service (Pay-as-you-go) Service Terms.
  10. Click Buy and Start to start the data synchronization task. You can view the progress of the task in the task list.

Troubleshooting

  • If an error is repeatedly reported during schema synchronization even after you confirm that the table schemas are consistent, Submit a ticket.
  • VACUUM operations are not automatically performed during data synchronization because they may affect subsequent data write speeds. We recommend that you periodically perform VACUUM operations on the destination databases (an example follows this list).
  • If an exception occurs during full data synchronization, you must clear the data in the destination table and write data again.
  • AnalyticDB for PostgreSQL instances in Serverless mode deliver good data synchronization performance when a large amount of data is written to a single table, but poor performance when small amounts of data are written to hot rows or across multiple tables. If your business involves the latter scenarios, we recommend that you Submit a ticket for parameter optimization and performance improvement.
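  For example, a periodic maintenance statement that you might run on a synchronized table in the destination database; the schema and table names are hypothetical.

    -- Reclaims dead-row space and refreshes planner statistics.
    VACUUM ANALYZE mydb.orders;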