This topic describes typical resource consumption issues related to slow queries. You can view the amount of resources consumed by slow queries on the query details page and in VisualPlan. For more information, see Slow query table, Query details page, and VisualPlan.

Slow queries that consume memory resources

You can first use slow query tables to find queries that took a long time to complete within a specific period. Then, on the query details page, sort the stage-level statistics by the Peak Memory field to find the stages with the highest peak memory usage. Finally, locate those stages in VisualPlan and examine the corresponding SQL statements to determine which operators consume the most memory. For more information, see Slow query table.
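
If you prefer to query the slow query table directly instead of browsing the console, the first step can be expressed as a SQL statement such as the following sketch. The table name slow_query_log and the columns query_id, sql_text, query_start_time, execution_duration_ms, and peak_memory_bytes are placeholders for this example only; use the actual schema described in the Slow query table topic.

```sql
-- Placeholder table and column names; replace them with the schema that the
-- "Slow query table" topic documents for your cluster.
SELECT query_id,
       sql_text,
       execution_duration_ms,
       peak_memory_bytes
FROM slow_query_log
WHERE query_start_time >= '2024-01-01 00:00:00'
  AND query_start_time <  '2024-01-02 00:00:00'
ORDER BY execution_duration_ms DESC
LIMIT 20;
```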

The following scenarios consume large amounts of memory resources, as illustrated by the query sketch after this list:
  • GROUP BY operations are performed in a stage, and the GROUP BY field has a large number of distinct values. In this scenario, a large amount of memory is consumed during computation to cache the values of the GROUP BY field.
  • JOIN operations are performed in a stage. When a hash join is used, the data of one table is cached in memory. The larger that table, the more memory is occupied.
  • SORT operations are performed in a stage. When data is sorted, the data is cached in memory on the compute nodes. The more data is sorted, the more memory is occupied.
  • Window functions are executed in a stage. When a window function is executed, the data that it processes is cached in memory. The more data the window function processes, the more memory is occupied.
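
The following sketch shows query shapes that typically match these scenarios. The tables orders and users and their columns are hypothetical and serve only to illustrate the patterns; actual memory usage, and whether a hash join is chosen, depends on your data and the optimizer.

```sql
-- Hypothetical tables (orders, users) used only to illustrate the patterns.

-- High-cardinality GROUP BY: every distinct user_id is held in memory during aggregation.
SELECT user_id, COUNT(*) AS order_cnt FROM orders GROUP BY user_id;

-- Hash join: the build-side table is cached in memory; a large build side raises peak memory.
SELECT o.order_id, u.user_name
FROM orders o
JOIN users u ON o.user_id = u.user_id;

-- Large sort: the rows to be ordered are buffered on the compute nodes.
SELECT order_id, order_amount FROM orders ORDER BY order_amount DESC;

-- Window function: the rows of each partition are buffered while the window is evaluated.
SELECT order_id,
       ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY order_time DESC) AS rn
FROM orders;
```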

Slow queries that consume CPU resources

You can first use slow query tables to find queries that consumed a large amount of CPU resources within a specific period. Then, on the query details page, sort the stage-level statistics by the Operator Cost field to find the stages with the highest operator cost. Finally, locate those stages in VisualPlan and examine the corresponding SQL statements to determine which operators consume the most CPU resources. For more information, see Slow query table.
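
As with memory, the first step can be expressed against the slow query table. The sketch below reuses the placeholder slow_query_log table from the earlier example; cpu_time_ms stands in for whatever CPU-time column the actual table exposes.

```sql
-- Placeholder table and column names; see the "Slow query table" topic for the actual schema.
SELECT query_id, sql_text, cpu_time_ms
FROM slow_query_log
WHERE query_start_time >= '2024-01-01 00:00:00'
  AND query_start_time <  '2024-01-02 00:00:00'
ORDER BY cpu_time_ms DESC
LIMIT 20;
```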

The following scenarios consume large amounts of CPU resources, as illustrated by the query sketch after this list:
  • Filter conditions are not pushed down to the storage layer. By default, AnalyticDB for MySQL creates indexes on all columns. If indexes can be used to filter data, CPU consumption is reduced. However, indexes cannot be used for some filter conditions, for example, conditions that apply a function to a column. In this case, AnalyticDB for MySQL must evaluate the condition against every scanned row, which consumes a large amount of CPU resources.
  • Joins contain filter conditions. If a join contains a filter condition, AnalyticDB for MySQL first joins the two tables and then filters the joined data. In this case, indexes cannot be used. The larger the joined result set, the more CPU resources the filter operation consumes.
  • Join conditions are not specified. If no join condition is specified, a Cartesian product of the left and right tables is computed. The number of generated rows is the product of the row counts of the two tables, which consumes a large amount of CPU resources.
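
The sketch below illustrates these query shapes on hypothetical orders and users tables. Whether a specific condition can be pushed down to the storage layer depends on your table schema and engine version, so treat the examples as patterns rather than exact behavior.

```sql
-- Hypothetical tables (orders, users) used only to illustrate the patterns.

-- Function on a filtered column: the index on order_time cannot be used, so the
-- condition is evaluated against every scanned row.
SELECT COUNT(*) FROM orders WHERE DATE_FORMAT(order_time, '%Y-%m') = '2024-01';

-- Rewriting the condition as a range may allow it to be pushed down to the index.
SELECT COUNT(*) FROM orders
WHERE order_time >= '2024-01-01' AND order_time < '2024-02-01';

-- Filter on joined data: the condition references both tables, so the tables are
-- joined first and the filter is evaluated on the joined rows without index support.
SELECT COUNT(*)
FROM orders o
JOIN users u ON o.user_id = u.user_id
WHERE o.order_amount * u.discount > 100;

-- No join condition: a Cartesian product is produced whose row count is the
-- product of the two tables' row counts.
SELECT COUNT(*) FROM orders o, users u;
```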

Slow queries that consume disk I/O resources

You can first use slow query tables to find queries that had a long scan time within a specific period. Then, on the query details page, sort the stage-level statistics by the Scan Time field to find the stages with the longest scan time. Finally, locate those stages in VisualPlan and check the number of input rows of the Tablescan operator to determine whether a large amount of data is scanned from a table. For more information, see Slow query table.
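
Outside the console, you can also inspect the execution plan of a suspect statement with EXPLAIN to see where table scans occur and how much data they are expected to read. The example below is a sketch on a hypothetical orders table, and the exact plan output depends on your AnalyticDB for MySQL version.

```sql
-- Inspect the plan of a suspect statement; check the table scan nodes for the
-- expected amount of scanned data and whether the filter is pushed down.
EXPLAIN
SELECT user_id, COUNT(*) AS order_cnt
FROM orders                         -- hypothetical table
WHERE order_time >= '2024-01-01'
GROUP BY user_id;
```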

The following scenarios consume large amounts of disk I/O resources, as illustrated by the query sketch after this list:
  • Filter conditions filter out only a small proportion of the data, which reduces the efficiency of index-based filtering.
  • Filter conditions are not pushed down, which results in a full scan of a source table.
  • Filter conditions are pushed down, but their filter ranges are broad, so a large amount of data must still be scanned.
  • A large number of list partitions are scanned.
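
The sketch below contrasts these patterns on a hypothetical orders table that is assumed to be partitioned by order_date. Actual scan volumes depend on your data distribution and partition design.

```sql
-- Hypothetical orders table, assumed to be partitioned by order_date.

-- Low-selectivity filter: the condition matches most rows, so the index narrows
-- the scan very little and most of the table is still read.
SELECT COUNT(*) FROM orders WHERE status = 'finished';

-- Broad filter range: the condition is pushed down, but a one-year range still
-- touches many partitions and a large amount of data.
SELECT COUNT(*) FROM orders
WHERE order_date >= '2023-01-01' AND order_date < '2024-01-01';

-- Narrowing the range prunes partitions and reduces the amount of scanned data.
SELECT COUNT(*) FROM orders
WHERE order_date >= '2023-12-01' AND order_date < '2023-12-08';
```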