This topic describes how to troubleshoot memory consumption issues on an ApsaraDB RDS for MySQL instance.

Background information

The memory usage and the buffer pool hit ratio are two important metrics that measure the overall memory consumption of your RDS instance. If the memory usage is excessively high, the risk of memory exhaustion arises. If the buffer pool hit ratio is abnormally low, a large number of requested data pages are not found in the buffer pool. As a result, ApsaraDB RDS must read data from the disk. This increases I/O operations and query latencies.

View the memory consumption

The ApsaraDB RDS console provides various methods that can be used to view the memory consumption:

  • Monitoring and alerting

    In the left-side navigation pane, click Monitoring and Alerts. On the page that appears, click the Standard Monitoring tab. On the Standard Monitoring tab, click Resource Monitoring to view the memory usage, and click Engine Monitoring to view the buffer pool hit ratio.

  • Autonomy service

    In the left-side navigation pane, choose Autonomy Service > Dashboard. On the page that appears, click the Performance Trends tab. Then, you can view the MySQL CPU Utilization/Memory Usage metric and the InnoDB Buffer Pool Hit Ratio metric.

You can also use the PERFORMANCE_SCHEMA storage engine to configure memory instruments. This allows you to aggregate memory usage statistics into memory summary tables. For more information, see Memory Summary Tables.

  • If you want to enable memory monitoring when your RDS instance starts, add the performance_schema = on setting to the my.cnf file. Then, restart your RDS instance. The setting takes effect after the restart.
  • If you want to enable memory monitoring when your RDS instance is running, run the following command:
    update performance_schema.setup_instruments set enabled = 'yes' where name like 'memory%';
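    Memory instruments that are enabled at runtime start to count only the memory that is allocated after they are enabled. To confirm that the instruments are enabled, you can run a query similar to the following:

    select name, enabled from performance_schema.setup_instruments where name like 'memory/%' limit 10;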

The following tables aggregate the memory consumption from various dimensions:

  • The memory_summary_by_account_by_event_name table aggregates memory events by account and event name. An account is the combination of a user and a host.
  • The memory_summary_by_host_by_event_name table aggregates memory events by host and event name.
  • The memory_summary_by_thread_by_event_name table aggregates memory events by thread and event name.
  • The memory_summary_by_user_by_event_name table aggregates memory events by user and event name.
  • The memory_summary_global_by_event_name table aggregates memory events globally by event name.
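For example, you can run a query similar to the following against the memory_summary_global_by_event_name table to list the events that currently consume the most memory:

    select event_name, current_number_of_bytes_used from performance_schema.memory_summary_global_by_event_name order by current_number_of_bytes_used desc limit 10;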

Common reasons for excessively high memory usage

In normal cases, the InnoDB buffer pool consumes the most memory. The maximum memory that can be consumed by the buffer pool varies based on the parameter settings of the buffer pool. In addition, most of the memory is dynamically allocated and adjusted when the requests are being processed. The memory consumption includes the memory that is consumed by in-memory temporary tables, prefetch caches, table caches, hash indexes, and row lock objects. For more information about the memory usage and parameter limits, see How MySQL Uses Memory.

Multi-statement queries

MySQL allows you to combine multiple SQL statements, separated by semicolons (;), into a single query that is sent to MySQL in one request. MySQL processes these SQL statements one by one. However, some memory is released only after all of these SQL statements are executed.

If a large number of SQL statements are sent in a single request, the cumulative memory that is consumed by the objects used to execute these SQL statements significantly increases. The increase can reach a few hundred MB. This may exhaust the available memory of the MySQL process.

In normal cases, if a large number of SQL statements are sent at a time, you can detect a sudden increase in network traffic by using the monitoring and alerting feature or the SQL Explorer feature. We recommend that you do not run multi-statement queries.
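The following example shows what a multi-statement query looks like. The table and column names are for illustration only. All of the statements are sent to MySQL in a single request, and some memory is released only after the last statement is executed:

    insert into t1 (c1) values (1); update t1 set c1 = 2 where c1 = 1; select count(*) from t1;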

Buffer pool issues

The data pages of all tables are stored in the buffer pool. If the requested data pages are hit in the buffer pool, ApsaraDB RDS does not need to perform physical I/O operations and can execute SQL statements at high speeds. The buffer pool uses the least recently used (LRU) caching algorithm to manage data pages. In addition, the buffer pool tracks all dirty pages in a separate list that is named the flush list.

The default size of the InnoDB buffer pool is set to 75% of the memory capacity that is provided by your RDS instance.
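You can view the configured buffer pool size and estimate the hit ratio from the status counters of your RDS instance. The hit ratio is approximately 1 - Innodb_buffer_pool_reads/Innodb_buffer_pool_read_requests, where Innodb_buffer_pool_reads counts the requests that had to be read from the disk:

    show global variables like 'innodb_buffer_pool_size';
    show global status like 'Innodb_buffer_pool_read%';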

The following common issues that are related to the buffer pool may occur:

  • Data pages are not sufficiently pre-warmed. This increases query latencies and lowers the buffer pool hit ratio. This issue may occur after you restart your RDS instance or when a large amount of cold data is read. Before you upgrade your RDS instance or launch a sales promotion, we recommend that you sufficiently pre-warm data pages.
  • Excessive dirty pages are accumulated. For example, if a dirty page is not flushed for a long period of time and the gap between its oldest modification log sequence number (LSN) and the current LSN exceeds 76% of the redo log capacity, user threads are blocked to synchronously flush dirty pages. This significantly decreases the performance of your RDS instance. To fix this issue, you can balance the write loads to prevent excessively high write throughput, tune the parameters that control how dirty pages are flushed, or upgrade your RDS instance.
  • Your RDS instance provides a large memory capacity, but the innodb_buffer_pool_instances parameter of your RDS instance is set to a small value. In this case, if the QPS is high, contention for locks in the buffer pool becomes severe. We recommend that you set the innodb_buffer_pool_instances parameter to 8, 16, or a larger value.
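To check how many dirty pages have accumulated relative to the total size of the buffer pool, you can compare the following status counters. A persistently high ratio indicates that dirty pages are not flushed at the same speed at which they are generated:

    show global status where Variable_name in ('Innodb_buffer_pool_pages_dirty', 'Innodb_buffer_pool_pages_total');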

Temporary table issues

The size of an in-memory temporary table is limited by the tmp_table_size and max_heap_table_size parameters. If the size of an in-memory temporary table exceeds the limit, the in-memory temporary table is converted into an on-disk temporary table. If a large number of temporary tables are created over a number of connections, the memory usage of your RDS instance suddenly increases. MySQL 8.0 provides a new TempTable engine. This engine limits the total memory that can be consumed by the in-memory temporary tables created by all threads to the value of the temptable_max_ram parameter. The default value of this parameter is 1 GB. If the total size exceeds the value of this parameter, subsequent allocations for temporary tables are made on the disk.
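You can check how many temporary tables are created and how many of them are converted into on-disk temporary tables by running the following command. A high ratio of Created_tmp_disk_tables to Created_tmp_tables indicates that a large number of temporary tables spill to the disk:

    show global status like 'Created_tmp%';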

Other issues

If an excessively large number of tables are created on your RDS instance or the QPS is high, the table cache may consume a significant amount of memory. We recommend that you do not create a large number of tables. We also recommend that you do not set the table_open_cache parameter to an excessively large value.

By default, the adaptive hash index consumes 1/64 of the buffer pool size. If you query or write large fields of the binary large object (BLOB) type, memory is dynamically allocated for these large fields. This also increases the memory usage of your RDS instance.
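You can check whether the adaptive hash index is enabled by running the following command. The INSERT BUFFER AND ADAPTIVE HASH INDEX section in the output of the SHOW ENGINE INNODB STATUS statement also reports the size of the adaptive hash index:

    show global variables like 'innodb_adaptive_hash_index';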

The memory usage of your RDS instance may increase due to a number of other issues. If the memory usage abnormally increases or the memory is exhausted, see How MySQL Uses Memory or submit a ticket.