This topic describes how to analyze and optimize slow SQL queries.

Each SQL query consumes resources, such as I/O, of the database on which it runs. A slow SQL query occupies database resources for a long period of time. For example, if a slow SQL query occupies 30% of the resources of a database for one minute, only 70% of the resources can be allocated to the other SQL queries that are executed on the same database during that minute. When all resources of the database are consumed, new SQL queries have to queue. In this case, a single slow SQL query can affect your entire business.

How does PolarDB-X execute an SQL query?

Before you analyze and optimize slow SQL queries, you need to understand how an SQL query is executed. An SQL query is executed at three layers: the client layer, the compute layer, and the storage layer. The execution process involves data transfer between the client and compute nodes, and between compute nodes and data nodes.

The following figure shows the execution process of an SQL query.
  • At the compute layer, the SQL statement is parsed, optimized, scheduled, and executed. These operations must be performed on compute nodes and cannot be pushed down to data nodes.
  • At the storage layer, the SQL statements that are pushed down from the compute nodes are executed. On data nodes, data is scanned, computed, and returned to compute nodes with a private remote procedure call (RPC).

Based on how PolarDB-X works and how an SQL statement is executed, the following factors affect the efficiency with which SQL statements are executed:

  1. Data amount
    • The amount of data that is returned to the client.
    • The amount of data that is stored on data nodes. If a large amount of data is stored on the data nodes, a large number of I/O operations are performed. In this case, the I/O resources of the data nodes may become insufficient and cause performance bottlenecks.
  2. Data retrieval methods
    • Whether the queried data is retrieved from the cache or from disks.
    • Whether a global index is used on the data nodes. If the query hits a global index, the system retrieves the queried data efficiently.
    • Whether the predicates in the query filter rows by using the specified global index. If they do, the system retrieves the queried data efficiently.
  3. Data processing methods
    • Data processing methods include sorting, subqueries, aggregation, and joins. Generally, PolarDB-X stores the obtained data in a temporary table and then processes the data.
    • If a large amount of data is processed on a compute node, large amounts of CPU resources of the compute node are consumed, which makes data processing slower.
    • The join algorithm that is used. If an inappropriate join algorithm is used, data is processed in an inefficient manner. For information about join algorithms, see Join algorithms.

Optimization methods

  1. Store data in a location that can be accessed efficiently

    For example, assume that a table queried by specific SQL statements contains a large number of rows and its schema cannot be further optimized. If the queried data is frequently accessed, small in size, and rarely changed, you can store it in the cache of your application or in a Redis cache. This way, the data can be accessed efficiently.

  2. Push join operations down to data nodes
    • If you need to perform a large number of join operations on large tables in your business, you can configure sharding strategies for the tables. Make sure that the shard keys and the join keys are consistent. PolarDB-X converts the join clauses to physical SQL statements and then pushes the physical SQL statements down to the data nodes. This way, the join clauses are executed on table shards and the workloads on tables are distributed to table shards.
    • If slow queries occur due to join operations between large tables and small tables, and the small tables contain a small amount of rarely changed data, you can convert the small tables to broadcast tables. PolarDB-X converts join clauses between broadcast tables and tables of other types to physical SQL statements and pushes the statements down to data nodes. A minimal sketch of these two options follows this list.
    • If the table sharding strategies of your tables cannot be modified, you can create global secondary indexes based on the join keys. This way, the join operations can be pushed down to data nodes and are performed based on the index tables on the data nodes.
  3. Reduce the scope of data that is scanned
    • Specify predicate expressions in SQL statements as conditions to filter data rows. This way, you can reduce the number of rows that are scanned and improve the query efficiency. For example, you can specify timestamps as conditions to filter data.
    • Specify suitable indexes. If an index table of a large table contains the data that is queried, the queried data can be directly returned from the index table. This way, the system does not scan the base table for the queried data. This reduces the resource consumption of the SQL query.
  4. Specify an appropriate data processing algorithm for your SQL execution plan

    You can choose an algorithm such as HashJoin, SortMergeJoin, or BKAJoin based on your business scenario. If an inappropriate algorithm is used, the resource consumption of SQL queries increases.
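
The following sketch illustrates the first two join push-down options: two large tables that are sharded by the join key, and a small dictionary table that is declared as a broadcast table. The table names, column names, and the DRDS-mode sharding syntax (dbpartition by, broadcast) are assumptions for illustration; use the partitioning syntax that matches your database mode.

-- Hypothetical tables; DRDS-mode sharding syntax is assumed.
-- orders and order_detail are sharded by the join key order_id, so joins
-- between them can be pushed down to the data nodes.
create table orders (
  order_id   bigint not null,
  seller_id  bigint not null,
  gmt_create datetime not null,
  primary key (order_id)
) dbpartition by hash(order_id);

create table order_detail (
  detail_id  bigint not null,
  order_id   bigint not null,
  amount     decimal(10, 2),
  primary key (detail_id)
) dbpartition by hash(order_id);

-- A small, rarely changed table is declared as a broadcast table so that
-- joins against it can also be pushed down.
create table seller_dict (
  seller_id   bigint not null,
  seller_name varchar(64),
  primary key (seller_id)
) broadcast;

-- Both joins below can be converted to physical SQL and executed on the
-- data nodes, because the join keys match the shard keys or the joined
-- table is a broadcast table.
select o.order_id, d.amount, s.seller_name
from orders o
join order_detail d on o.order_id = d.order_id
join seller_dict s on o.seller_id = s.seller_id
where o.gmt_create >= '2021-08-01';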

Identify slow SQL queries

  • You can view slow SQL queries in the PolarDB-X console. For more information, see Slow query logs and Performance trend.
  • You can use the SHOW SLOW statement to view logical slow SQL queries and use the SHOW PHYSICAL_SLOW statement to view physical slow SQL queries. These statements return only the most recent 100 logical or physical slow SQL queries.
    • Execute the following statement:

      show slow;

      The following information is returned:

      +------------------+---------------+-----------+---------------------+--------------+------------+--------------------+
      | TRACE_ID         | USER          | HOST      | START_TIME          | EXECUTE_TIME | AFFECT_ROW | SQL                |
      +------------------+---------------+-----------+---------------------+--------------+------------+--------------------+
      | 12f812912f400000 | polardbx_root | 127.0.0.1 | 2021-08-24 12:59:38 |         6475 |          0 | show physical_slow |
      | 12f8128383400000 | polardbx_root | 127.0.0.1 | 2021-08-24 12:59:19 |         1899 |         -1 | show physical_slow |
      +------------------+---------------+-----------+---------------------+--------------+------------+--------------------+
      2 rows in set (0.01 sec)
    • Execute the following statement:

      show physical_slow;

      The following information is returned:

      +----------------+-----------------------------------+---------------------+--------------+------------------+-------------------------+------------------------+------------+-----------------+
      | GROUP_NAME     | DBKEY_NAME                        | START_TIME          | EXECUTE_TIME | SQL_EXECUTE_TIME | GETLOCK_CONNECTION_TIME | CREATE_CONNECTION_TIME | AFFECT_ROW | SQL             |
      +----------------+-----------------------------------+---------------------+--------------+------------------+-------------------------+------------------------+------------+-----------------+
      | TDDL5_00_GROUP | db218249098_sqa_zmf_tddl5_00_3309 | 2021-03-16 13:05:38 |         1057 |             1011 |                       0 |                      0 |          1 | select sleep(1) |
      +----------------+-----------------------------------+---------------------+--------------+------------------+-------------------------+------------------------+------------+-----------------+
      1 row in set (0.01 sec)

Analyze slow queries

The following list describes the issues that can cause slow queries:
  • Business issues: data skew, inappropriate sharding strategies, and an excessive number of returned rows.
  • System issues: performance bottlenecks and network jitter. If your business traffic is high and database resources become insufficient, performance bottlenecks occur.
  • Execution issues: an inappropriate index, an inappropriate join algorithm, or an inappropriate execution order.

Analyze slow queries based on your business requirements

  1. If slow queries are simple queries and each query returns hundreds of thousands or millions of rows, determine whether your business requires all of the returned data. If it does not, you can page through the result set instead, as sketched after the SHOW TABLE INFO example below.
  2. If a specific type of SQL statement takes a long time on specific shards, you can use the SHOW TABLE INFO FROM <table_name> statement to check whether data skew occurs. If data skew occurs, check whether the sharding strategies of your tables can be adjusted.

Execute the following statement:

show table info from test_tb;

The following information is returned:

+----+------------+-----------------+------------+
| ID | GROUP_NAME | TABLE_NAME      | SIZE_IN_MB |
+----+------------+-----------------+------------+
|  0 | ads_000000 | test_tb_o2ud_00 | 0.01562500 |
|  1 | ads_000000 | test_tb_o2ud_01 | 0.01562500 |
|  2 | ads_000000 | test_tb_o2ud_02 | 0.01562500 |
|  3 | ads_000000 | test_tb_o2ud_03 | 0.01562500 |
|  4 | ads_000000 | test_tb_o2ud_04 | 0.01562500 |
+----+------------+-----------------+------------+
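
For the first case in the preceding list, a common remedy is to page through the result set instead of returning all rows at once. The table and column names in the following sketch are hypothetical, and keyset pagination on the primary key is used:

select order_id, seller_id, gmt_create
from orders
where gmt_create >= '2021-08-01'
  and order_id > 20210800001234   -- hypothetical: the largest order_id from the previous page
order by order_id
limit 1000;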

Analyze slow queries based on your system requirements

  1. If slow queries occur during traffic spikes, you can log on to the PolarDB-X console to check the monitoring metrics of compute and storage resources. If these metrics show that compute and storage resources are being fully consumed due to traffic spikes, we recommend that you increase database resources or enable SQL throttling.
  2. If no slow query logs are displayed in the PolarDB-X console when your business is affected by slow queries, check your business settings. If you make sure that your business does not cause slow queries, we recommend that you check the network condition. Check whether your client and database server are deployed in the same virtual private cloud (VPC) or perform packet capture to check the network condition.

Analyze slow queries based on the execution plan

PolarDB-X provides various EXPLAIN statements to check whether slow queries are caused by inappropriate execution plans.

  • View logical execution plans

    Use the EXPLAIN statement to view the logical execution plan of an SQL statement.

    explain select nation, o_year, sum(amount) as sum_profit
    from (
        select n_name as nation, extract(year from o_orderdate) as o_year,
               l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
        from part, supplier, lineitem, partsupp, orders, nation
        where s_suppkey = l_suppkey
          and ps_suppkey = l_suppkey
          and ps_partkey = l_partkey
          and p_partkey = l_partkey
          and o_orderkey = l_orderkey
          and s_nationkey = n_nationkey
          and p_name like '%yellow%'
    ) as profit
    group by nation, o_year
    order by nation, o_year desc
    limit 1;

    The following information is returned:

    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | LOGICAL EXECUTIONPLAN                                                                                                                                                                                                                                                                                                                                                                                       |
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Limit(offset=0, fetch=?2)                                                                                                                                                                                                                                                                                                                                                                                   |
    |   SortAgg(group="nation,o_year", $f2="SUM(amount)")                                                                                                                                                                                                                                                                                                                                                         |
    |     Project(nation="N_NAME", o_year="EXTRACT", amount="__*__ - PS_SUPPLYCOST * L_QUANTITY")                                                                                                                                                                                                                                                                                                                 |
    |       HashJoin(condition="PS_PARTKEY = L_PARTKEY AND P_PARTKEY = L_PARTKEY AND PS_SUPPKEY = L_SUPPKEY AND PS_SUPPKEY = S_SUPPKEY", type="inner")                                                                                                                                                                                                                                                            |
    |         MemSort(sort="N_NAME ASC,EXTRACT DESC")                                                                                                                                                                                                                                                                                                                                                             |
    |           BKAJoin(condition="O_ORDERKEY = L_ORDERKEY", type="inner")                                                                                                                                                                                                                                                                                                                                        |
    |             Project(S_SUPPKEY="S_SUPPKEY", S_NATIONKEY="S_NATIONKEY", N_NATIONKEY="N_NATIONKEY", N_NAME="N_NAME", L_ORDERKEY="L_ORDERKEY", L_PARTKEY="L_PARTKEY", L_SUPPKEY="L_SUPPKEY", L_QUANTITY="L_QUANTITY", __*__="__*__")                                                                                                                                                                            |
    |               HashJoin(condition="L_SUPPKEY = S_SUPPKEY", type="inner")                                                                                                                                                                                                                                                                                                                                     |
    |                 Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                     |
    |                   LogicalView(tables="[000000-000003].lineitem_[00-15]", shardCount=16, sql="SELECT `L_ORDERKEY`, `L_PARTKEY`, `L_SUPPKEY`, `L_QUANTITY`, (`L_EXTENDEDPRICE` * (? - `L_DISCOUNT`)) AS `__*__` FROM `lineitem` AS `lineitem`")                                                                                                                                                               |
    |                 BKAJoin(condition="N_NATIONKEY = S_NATIONKEY", type="inner")                                                                                                                                                                                                                                                                                                                                |
    |                   Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                   |
    |                     LogicalView(tables="[000000-000003].supplier_[00-15]", shardCount=16, sql="SELECT `S_SUPPKEY`, `S_NATIONKEY` FROM `supplier` AS `supplier`")                                                                                                                                                                                                                                            |
    |                   Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                   |
    |                     LogicalView(tables="[000000-000003].nation_[00-15]", shardCount=16, sql="SELECT `N_NATIONKEY`, `N_NAME` FROM `nation` AS `nation` WHERE (`N_NATIONKEY` IN (...))")                                                                                                                                                                                                                      |
    |             Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                         |
    |               LogicalView(tables="[000000-000003].orders_[00-15]", shardCount=16, sql="SELECT `O_ORDERKEY`, EXTRACT(YEAR FROM `O_ORDERDATE`) AS `EXTRACT` FROM `orders` AS `orders` WHERE (`O_ORDERKEY` IN (...))")                                                                                                                                                                                         |
    |         Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                             |
    |           LogicalView(tables="[000000-000003].part_[00-15],partsupp_[00-15]", shardCount=16, sql="SELECT `part`.`P_PARTKEY`, `partsupp`.`PS_PARTKEY`, `partsupp`.`PS_SUPPKEY`, `partsupp`.`PS_SUPPLYCOST` FROM `part` AS `part` INNER JOIN `partsupp` AS `partsupp` ON ((`part`.`P_PARTKEY` = `partsupp`.`PS_PARTKEY`) AND (`part`.`P_NAME` LIKE ?)) WHERE (`partsupp`.`PS_PARTKEY` = `part`.`P_PARTKEY`)") |
    | HitCache:false                                                                                                                                                                                                                                                                                                                                                                                              |
    | Source:PLAN_CACHE
    Note The shardCount parameter in LogicalView indicates the total number of table shards that are accessed.

    Based on the execution plan, you can analyze whether the SQL statement is executed as expected and check whether the global secondary index, aggregation algorithm, and join algorithm that are chosen for the query are suitable. The execution plan directly affects execution efficiency. For more information, see GSI, Optimize and execute aggregation operations, and Optimize and execute JOIN operations. If an inappropriate execution plan is used, you can use the following methods to resolve the issue:

    1. Check the statistical information about the tables on which the SQL statement is executed. An execution plan is generated based on the statistical information about the tables on which an SQL statement is executed. If specific statistical information is missing or outdated, the execution plan that is generated based on the statistical information may not be the optimal execution plan. You can use the ANALYZE TABLE statement to trigger a task to collect statistics of your tables.

      Execute the following statement to check whether the statistical information is correct.

      show statistic; 

      The following information is returned:

      +---------------+------------+-------------+-------------+
      | table_name    | table_rows | column_name | cardinality |
      +---------------+------------+-------------+-------------+
      | tpch_orders   |       8400 | NULL        |        NULL |
      | tpch_lineitem |      10000 | NULL        |        NULL |
      | lineitem      |       8492 | NULL        |        NULL |
      | tpch_part     |       8054 | NULL        |        NULL |
      | aa_lineitem   |       8490 | l_id        |        9143 |
      | part          |       8054 | NULL        |        NULL |
      | not_rt        |       8446 | NULL        |        NULL |
      | aa_orders     |       8308 | o_id        |        9095 |
      | partsupp      |       8609 | NULL        |        NULL |
      | aa_partsupp   |       8667 | ps_id       |        9424 |
      | tpch_partsupp |       8596 | NULL        |        NULL |
      | tpch_customer |       8572 | NULL        |        NULL |
      | orders        |       8423 | NULL        |        NULL |
      | aa_customer   |       9080 | c_id        |        9476 |
      | customer      |       8472 | NULL        |        NULL |
      +---------------+------------+-------------+-------------+
      15 rows in set (0.01 sec)

      Execute the following statement to trigger a task to collect the statistical information about a specified table.

      analyze table lineitem;

      The following information is returned:

      +--------------+---------+----------+----------+
      | TABLE        | OP      | MSG_TYPE | MSG_TEXT |
      +--------------+---------+----------+----------+
      | ads.lineitem | analyze | status   | OK       |
      +--------------+---------+----------+----------+
      1 row in set (1.66 sec)
    2. Create a custom execution plan. Even if all of the correct statistical information is collected, the execution plan that the optimizer dynamically calculates may still not be optimal. In this case, you can use a FORCE INDEX hint to create a better execution plan, for example:
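
      The index name idx_l_suppkey in the following sketch is hypothetical; it can be the name of a local index or of a global secondary index:

      select L_SUPPKEY, count(*)
      from lineitem force index (idx_l_suppkey)
      group by L_SUPPKEY;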
    3. Use a global secondary index. After you confirm that the execution plan is suitable for the query, you can also use a global secondary index to optimize the SQL execution. For information about how to use a global secondary index, see GSI and Intelligent index recommendation.
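
      As a sketch, you can add a global secondary index with covering columns so that queries on the indexed column are served entirely from the index table, without scanning the base table. The index name g_i_suppkey and the covering columns below are chosen for illustration, and the DRDS-mode dbpartition by syntax is assumed; adjust the partition clause to your database mode:

      alter table lineitem
        add global index g_i_suppkey (L_SUPPKEY)
        covering (L_PARTKEY, L_QUANTITY)
        dbpartition by hash(L_SUPPKEY);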
  • View physical execution plans

    Use the EXPLAIN EXECUTE statement to view the execution plan of an SQL statement that is pushed down to the data nodes. You can check whether the query that is performed on the data nodes uses an appropriate local index, and you can use the FORCE INDEX hint to specify one (see the sketch after Example 2). If an appropriate local index is used, the system scans data on the data nodes efficiently.

    Example 1:

    explain execute  select * from lineitem;

    The following information is returned:

    +----+-------------+----------+------------+------+---------------+------+---------+------+------+----------+-------+
    | id | select_type | table    | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra |
    +----+-------------+----------+------------+------+---------------+------+---------+------+------+----------+-------+
    | 1  | SIMPLE      | lineitem | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 1    | 100      | NULL  |
    +----+-------------+----------+------------+------+---------------+------+---------+------+------+----------+-------+
    1 row in set (0.02 sec)

    Example 2:

    explain execute  select L_LINENUMBER from lineitem;

    The following information is returned:

    +----+-------------+----------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
    | id | select_type | table    | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra       |
    +----+-------------+----------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
    | 1  | SIMPLE      | lineitem | NULL       | index | NULL          | PRIMARY | 8       | NULL | 1    | 100      | Using index |
    +----+-------------+----------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
    1 row in set (0.04 sec)
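
    If the physical plan shows a full table scan (the type column is ALL) even though a suitable local index exists, you can force that index and run EXPLAIN EXECUTE again to confirm that the plan changes. The local index name i_l_suppkey in the following sketch is hypothetical:

    explain execute select * from lineitem force index (i_l_suppkey) where L_SUPPKEY = 10;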
  • Analyze query duration

    If the system requires a long period of time to execute an SQL statement each time, you can use the EXPLAIN ANALYZE statement to check which operator consumes a long period of time. This helps you identify the cause of the time-consuming query and find a solution.

    explain analyze select L_SUPPKEY, count(*) from lineitem group by L_SUPPKEY;

    The following information is returned:

    +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | LOGICAL EXECUTIONPLAN                                                                                                                                                                                                                                                                                                                                                                              |
    +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | HashAgg(group="L_SUPPKEY", count(*)="SUM(count(*))"): rowcount = 1000.0, cumulative cost = value = 2.4863085E7, cpu = 108049.0, memory = 35480.0, io = 201.0, net = 4.75, actual time = 0.000 + 0.000, actual rowcount = 1000, actual memory = 0, instances = 1, id = 519                                                                                                                          |
    |   Gather(concurrent=true): rowcount = 1000.0, cumulative cost = value = 2.4860051E7, cpu = 105039.0, memory = 11881.0, io = 201.0, net = 4.75, actual time = 0.000 + 0.000, actual rowcount = 0, actual memory = 0, instances = 0, id = 517                                                                                                                                                        |
    |     LogicalView(tables="[000000-000003].lineitem_[00-15]", shardCount=16, sql="SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)` FROM `lineitem` AS `lineitem` GROUP BY `L_SUPPKEY`"): rowcount = 1000.0, cumulative cost = value = 2.486005E7, cpu = 105038.0, memory = 11881.0, io = 201.0, net = 4.75, actual time = 0.004 + 0.002, actual rowcount = 4892, actual memory = 0, instances = 0, id = 453 |

    Parameter description:

    • rowcount: indicates the estimated number of rows that can be returned by the operator. The estimated number of rows is calculated based on the statistical information about tables.
    • actual rowcount: indicates the actual number of rows that are returned by the operator.
    • actual time: indicates the actual amount of time that is consumed by the operator.

    If the degree of parallelism (DOP) is insufficient, you can increase the DOP to accelerate queries. For more information, see Query executors. You can use the TRACE and SHOW TRACE statements to check the physical SQL statements that are pushed down to data nodes and the execution duration of each physical SQL statement. This helps you determine whether the slow query is caused by execution on compute nodes or on data nodes.

    Execute the following statement:

    trace select L_SUPPKEY, count(*) from lineitem group by L_SUPPKEY limit 1;

    The following information is returned:

    +-----------+----------+
    | L_SUPPKEY | count(*) |
    +-----------+----------+
    |         1 |       12 |
    +-----------+----------+
    1 row in set (0.21 sec)

    Execute the following statement:

    show trace;

    The following information is returned:

    +----+----------------+-----------+-------+------------------+----------------------------------------------------------------------------+---------------+--------------------------+---------------------+---------------------+------+-------------------------------------------------------------------------------------------------------------------------------+----------------------+
    | ID | NODE_IP        | TIMESTAMP | TYPE  | GROUP_NAME       | DBKEY_NAME                                                                 | TIME_COST(MS) | CONNECTION_TIME_COST(MS) | TOTAL_TIME_COST(MS) | CLOSE_TIME_COST(MS) | ROWS | STATEMENT                                                                                                                     | PARAMS               |
    +----+----------------+-----------+-------+------------------+----------------------------------------------------------------------------+---------------+--------------------------+---------------------+---------------------+------+-------------------------------------------------------------------------------------------------------------------------------+----------------------+
    |  0 | 192.168.31.105 |     0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 3             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_13`] |
    |  1 | 192.168.31.105 |    -0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 4             | 0.00                     | 7                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_12`] |
    |  2 | 192.168.31.105 |     0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 2             | 0.00                     | 2                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_14`] |
    |  3 | 192.168.31.105 |     0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 3             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_15`] |
    |  4 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 5             | 0.00                     | 14                  | 2                   |  746 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_04`] |
    |  5 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 4             | 0.00                     | 15                  | 1                   |  682 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_05`] |
    |  6 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 9             | 0.00                     | 15                  | 1                   |  516 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_06`] |
    |  7 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 7             | 0.00                     | 15                  | 0                   |  310 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_07`] |
    |  8 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 2             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_00`] |
    |  9 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 5             | 0.00                     | 7                   | 1                   |  884 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_02`] |
    | 10 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 2             | 0.00                     | 5                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_08`] |
    | 11 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 6             | 0.00                     | 11                  | 0                   |  917 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_01`] |
    | 12 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 7             | 0.00                     | 10                  | 1                   |  837 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_03`] |
    | 13 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 3             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_09`] |
    | 14 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 2             | 0.00                     | 2                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_10`] |
    | 15 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 1             | 0.00                     | 3                   | 1                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_11`] |
    +----+----------------+-----------+-------+------------------+----------------------------------------------------------------------------+---------------+--------------------------+---------------------+---------------------+------+-------------------------------------------------------------------------------------------------------------------------------+----------------------+
    16 rows in set (0.05 sec)