When you write data to a table, you can use the INSERT and REPLACE INTO statements with batch packaging to improve data write performance. When you use batch packaging, take note of the following items:

  • We recommend that each INSERT or REPLACE INTO statement write more than 1,000 rows of data. However, the total size of the data written by a single statement cannot exceed 16 MB. A sketch of batched statements is provided after this list.
  • When data is written by using batch packaging, the write latency of a single batch is relatively high.
  • If an error occurs, retry the write operation to make sure that the data is written. Duplicate data caused by the retries can then be eliminated based on the primary key of the table.
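
For example, the following statements are a minimal sketch of batch packaging. The customer table and its columns are hypothetical placeholders for your own schema, and the batches are kept to a few rows for readability; in practice, each statement should carry more than 1,000 rows.

    -- Hypothetical table: customer(id BIGINT PRIMARY KEY, name VARCHAR, city VARCHAR).
    -- Pack many rows into one statement; keep the total statement size under 16 MB.
    INSERT INTO customer (id, name, city) VALUES
      (1, 'Alice', 'Hangzhou'),
      (2, 'Bob',   'Beijing'),
      (3, 'Carol', 'Shanghai');

    -- REPLACE INTO packs rows the same way. Rows whose primary key already
    -- exists are overwritten, which also makes retries after an error
    -- idempotent: duplicates are eliminated based on the primary key.
    REPLACE INTO customer (id, name, city) VALUES
      (1, 'Alice', 'Hangzhou'),
      (2, 'Bob',   'Shenzhen');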

Update data

AnalyticDB for MySQL provides multiple data update methods. We recommend that you use the following methods:

  • If data is updated at a high frequency based on primary keys and entire rows are overwritten, all columns of a row are updated. In this case, use REPLACE INTO to update data in batches.
  • If data is updated at a low frequency based on primary keys, use the REPLACE INTO or UPDATE statement.
  • If data is updated at a low frequency based on conditions, use the UPDATE statement. Example statements are provided after this list.
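
The following statements are a minimal sketch of these update methods, reusing the hypothetical customer table from the previous example:

    -- High-frequency, full-row overwrite based on the primary key:
    -- REPLACE INTO rewrites entire rows and can be batched.
    REPLACE INTO customer (id, name, city) VALUES
      (1, 'Alice', 'Ningbo'),
      (2, 'Bob',   'Suzhou');

    -- Low-frequency update based on the primary key.
    UPDATE customer SET city = 'Nanjing' WHERE id = 3;

    -- Low-frequency update based on a condition.
    UPDATE customer SET city = 'Nanjing' WHERE city = 'Ningbo';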

Delete data, partitions, or tables

AnalyticDB for MySQL provides multiple methods to delete data, partitions, or tables. We recommend that you use the following methods:

  • If data is deleted at a low frequency based on primary keys, use DELETE FROM table_name WHERE pk='xxx' to delete data.
  • If data is deleted at a low frequency based on conditions, use the DELETE statement.
  • To delete a specified subpartition, use TRUNCATE TABLE db_name.table_name PARTITION partition_name.
  • To delete all data from a specified table, including all of its partitions, use TRUNCATE TABLE db_name.table_name. Example statements are provided after this list.
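
A minimal sketch of these delete methods follows. The database name adb_demo, the customer table, and the partition value 20230601 are hypothetical:

    -- Low-frequency delete based on the primary key.
    DELETE FROM customer WHERE id = 1;

    -- Low-frequency delete based on a condition.
    DELETE FROM customer WHERE city = 'Suzhou';

    -- Delete a specified subpartition.
    TRUNCATE TABLE adb_demo.customer PARTITION 20230601;

    -- Delete all data from the table, including all of its partitions.
    TRUNCATE TABLE adb_demo.customer;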

Import data in batches and in real time

  • Batch import: This method is applicable to scenarios where large amounts of data are imported. You can query the original data during the import process and switch to the new data in one step after the import is complete. If the import fails, the new data can be rolled back without affecting the original data.

    For example, we recommend that you use INSERT OVERWRITE INTO SELECT to import data in batches from MaxCompute or Object Storage Service (OSS) to AnalyticDB for MySQL. For more information, see Import OSS data to AnalyticDB for MySQL by using external tables.

    Note When you import data to a single table in batches, the import tasks are executed in series in the system. When you import data to multiple tables in batches, the import tasks are executed in parallel. By default, two tasks are executed in parallel.

    For example, when you import data from MaxCompute to AnalyticDB for MySQL, data in different partitions of the same MaxCompute table is queued and imported in series.

  • Real-time import: This method is applicable to scenarios where small amounts of data are imported, such as tables that contain millions of rows. For example, we recommend that you use INSERT INTO to import data from ApsaraDB RDS for MySQL or self-managed MySQL databases hosted on Elastic Compute Service (ECS) instances to AnalyticDB for MySQL. For more information, see Import OSS data to AnalyticDB for MySQL by using external tables. Example statements for both import methods are provided after the following note.

Note When you import data to AnalyticDB for MySQL, the import task consumes resources in AnalyticDB for MySQL. If you need to import data while queries are running, we recommend that you perform the import operation when the queries per second (QPS) value is low.
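
The following statements are a minimal sketch of both import methods. The target table customer and the external tables oss_customer and rds_customer are hypothetical; the external-table definitions are omitted here and are described in the linked topics.

    -- Batch import: replace the data in the target table with the query
    -- result. The original data remains queryable until the import
    -- succeeds, and the new data is discarded if the import fails.
    -- oss_customer is a hypothetical external table that maps to OSS data.
    INSERT OVERWRITE INTO customer
    SELECT id, name, city FROM oss_customer;

    -- Real-time import: append small amounts of data to the target table.
    -- rds_customer is a hypothetical external table that maps to a table
    -- in ApsaraDB RDS for MySQL.
    INSERT INTO customer
    SELECT id, name, city FROM rds_customer;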