This topic describes how to analyze and optimize slow queries.

Each SQL query consumes I/O resources of the database on which it is executed, and a slow query occupies those resources for a long period of time. For example, if a slow query occupies 30% of the resources of a database for 1 minute, only 70% of the resources can be assigned to the other SQL queries that are executed on the same database during that minute. When all resources of the database are consumed, new SQL queries queue up. In this case, the slow query affects your entire business.

How does PolarDB-X perform an SQL query?

Before you analyze and optimize slow queries, you need to understand how a query is executed. An SQL query passes through three layers: the client layer, the compute layer, and the storage layer. During execution, data is transferred in two phases: between the client and the compute nodes, and between the compute nodes and the data nodes.

Based on the architecture of PolarDB-X and the way SQL statements are executed, the following factors affect the efficiency of Data Query Language (DQL) statements:

  1. The amount of data
    • The amount of data that is returned to the client.
    • The amount of data that is stored on data nodes. If a large amount of data is stored on the data nodes, a large number of I/O operations are performed. In this case, the I/O resources of the data nodes may become insufficient and cause performance bottleneck issues.
  2. Data retrieval methods
    • Whether the queried data is retrieved from the cache or from disks.
    • Whether a global index is used. If a global index is used, the system can retrieve the queried data efficiently.
    • Whether predicates filter rows by using the specified global index. If they do, the system can retrieve the queried data efficiently.
  3. Data processing methods
    • The data processing methods that are used. Data processing methods include sorting, subqueries, aggregation, and joins. In most cases, PolarDB-X creates a temporary table to store the data that is obtained and then processes the data.
    • The amount of data that is processed. If a large amount of data is processed on a compute node, a large amount of the CPU resources of the compute node is consumed. In this case, the data is processed in an inefficient manner.
    • The join algorithm that is used. If an inappropriate join algorithm is used, data is processed in an inefficient manner. For information about join algorithms, see Join algorithms.

Optimization methods

  1. Store data in a location that can be accessed in an efficient manner.

    For example, if a table that is queried by specific SQL statements contains a large number of rows and the schema of the table cannot be optimized, you can store the queried data in the cache of your application or in a Redis cache, provided that the data is frequently accessed, small in size, and rarely changed. This way, the data can be accessed efficiently.

  2. Push join operations to data nodes to be executed
    • If your business scenario requires join operations to be performed on large tables, you need to create appropriate sharding strategies for the tables. Make sure that the shard keys and the join keys are consistent. PolarDB-X converts the join clauses to physical SQL statements and then pushes the physical SQL statements down to data nodes to be executed. This way, the join clauses are executed on table shards and the workloads on tables are distributed to table shards.
    • If slow queries occur due to join operations between large tables and small tables, and the data in the small tables is small in size and rarely changed, you can convert the small tables to broadcast tables. PolarDB-X converts join clauses that join broadcast tables with tables of other types to physical SQL statements and pushes the physical SQL statements down to data nodes to be executed.
    • If the sharding strategies of your tables cannot be modified, you can create global secondary indexes based on the join keys. This way, the join operations can be pushed down to data nodes and are performed based on the index tables on the data nodes. An illustrative DDL sketch of broadcast tables and global secondary indexes is provided after this list.
  3. Reduce the scope of data that is scanned
    • Specify predicate expressions in SQL statements as conditions to filter data rows. This way, you can reduce the number of rows that are scanned and improve the query efficiency. For example, you can specify timestamps as conditions to filter data.
    • Specify suitable indexes. If an index table of a large table contains the data that is queried, the queried data can be directly returned from the index table. This way, the system does not scan the base table for the queried data. This reduces the resource consumption of the SQL query.
  4. Specify an appropriate data processing algorithm for your SQL execution plan

    You must specify an appropriate data processing algorithm for your SQL execution plan. For example, you can use the HashJoin, SortMergeJoin, or BKAJoin algorithm based on your business scenario. If an inappropriate execution plan is used, the resource consumption of SQL queries increases.
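
The following DDL sketch illustrates the broadcast-table and global secondary index approaches described in methods 2 and 3 above. The table names, column names, and index name are hypothetical, and the DRDS-style sharding clauses (BROADCAST and DBPARTITION BY HASH) are assumptions; adapt them to the table mode and DDL syntax that your PolarDB-X edition supports.

    -- All object names below are hypothetical and used only for illustration.

    -- Convert a small, rarely changed table to a broadcast table so that joins
    -- against it can be pushed down to the data nodes.
    CREATE TABLE nation_demo (
      n_nationkey INT NOT NULL PRIMARY KEY,
      n_name      VARCHAR(25)
    ) BROADCAST;

    -- Shard a large table by the join key so that joins on that key are pushed
    -- down to the data nodes.
    CREATE TABLE supplier_demo (
      s_suppkey   INT NOT NULL PRIMARY KEY,
      s_nationkey INT NOT NULL
    ) DBPARTITION BY HASH(s_suppkey);

    -- If the sharding strategy cannot be changed, add a covering global secondary
    -- index on the join key. Queries that hit the index are served from the index
    -- table without scanning the base table.
    ALTER TABLE lineitem_demo
      ADD GLOBAL INDEX g_i_suppkey (l_suppkey) COVERING (l_orderkey, l_quantity)
      DBPARTITION BY HASH(l_suppkey);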

Identify slow queries

  • You can view slow queries in the PolarDB-X console. For more information about slow queries, see Slow query logs and Performance trend.
  • You can use the SHOW SLOW statement to view logical slow queries and use the SHOW PHYSICAL_SLOW statement to view physical slow queries. These statements return only the most recent 100 logical slow queries or physical slow queries.
    mysql> show slow;
    +------------------+---------------+-----------+---------------------+--------------+------------+--------------------+
    | TRACE_ID         | USER          | HOST      | START_TIME          | EXECUTE_TIME | AFFECT_ROW | SQL                |
    +------------------+---------------+-----------+---------------------+--------------+------------+--------------------+
    | 12f812912f400000 | polardbx_root | 127.0.0.1 | 2021-08-24 12:59:38 |         6475 |          0 | show physical_slow |
    | 12f8128383400000 | polardbx_root | 127.0.0.1 | 2021-08-24 12:59:19 |         1899 |         -1 | show physical_slow |
    +------------------+---------------+-----------+---------------------+--------------+------------+--------------------+
    2 rows in set (0.01 sec)
    
    mysql> show physical_slow;
    Empty set (0.01 sec)

Analyze slow queries

The following list describes the issues that can cause slow queries:
  • Business issues: data skew, inappropriate sharding strategies, and an excessive number of returned rows.
  • System issues: performance bottlenecks and network jitter. If your business traffic is high and your database resources become insufficient, performance bottlenecks occur.
  • Execution issues: issues that are caused by using an inappropriate index, inappropriate join algorithm, or inappropriate order based on which clauses are executed.

Analyze slow queries based on your business requirements

  1. If slow queries are simple queries and the data that is returned for each query contains hundreds of thousands or millions of rows, you need to determine whether your business requires all data that is returned.
  2. If the system requires long periods of time to execute a specific type of SQL statement on specific shards, you can use the SHOW TABLE INFO FROM <table_name> statement to check whether data skew occurs, as shown in the following example. If data skew occurs, check whether the sharding strategies of your tables can be modified.
mysql> show table info from test_tb;
+----+------------+-----------------+------------+
| ID | GROUP_NAME | TABLE_NAME      | SIZE_IN_MB |
+----+------------+-----------------+------------+
|  0 | ads_000000 | test_tb_o2ud_00 | 0.01562500 |
|  1 | ads_000000 | test_tb_o2ud_01 | 0.01562500 |
|  2 | ads_000000 | test_tb_o2ud_02 | 0.01562500 |
|  3 | ads_000000 | test_tb_o2ud_03 | 0.01562500 |
|  4 | ads_000000 | test_tb_o2ud_04 | 0.01562500 |
+----+------------+-----------------+------------+

Analyze slow queries based on your system requirements

  1. If slow queries occur during traffic spikes, you can log on to the PolarDB-X console to check the monitoring metrics of compute resources and storage resources. If the monitoring metrics show that compute resources and storage resources are being fully consumed, we recommend that you increase your database resources or enable the SQL throttling feature. For information about SQL throttling, see SQL throttling.
  2. If no slow query logs are displayed in the PolarDB-X console when your business is affected by slow queries, check your business settings. If you confirm that your business does not cause the slow queries, we recommend that you check your network conditions. For example, check whether your client and your database server are deployed in the same virtual private cloud (VPC), or capture packets to analyze the network.

Analyze slow queries based on the execution plan

PolarDB-X provides various EXPLAIN statements to check whether slow queries are caused by inappropriate execution plans.

  • View logical execution plans

    Use the EXPLAIN statement to view the logical execution plan of an SQL statement.

    mysql> explain select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation, extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, orders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partkey = l_partkey and p_partkey = l_partkey and o_orderkey = l_orderkey and s_nationkey = n_nationkey and p_name like '%yellow%' ) as profit group by nation, o_year order by nation, o_year desc limit 1;
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | LOGICAL EXECUTIONPLAN                                                                                                                                                                                                                                                                                                                                                                                       |
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Limit(offset=0, fetch=?2)                                                                                                                                                                                                                                                                                                                                                                                   |
    |   SortAgg(group="nation,o_year", $f2="SUM(amount)")                                                                                                                                                                                                                                                                                                                                                         |
    |     Project(nation="N_NAME", o_year="EXTRACT", amount="__*__ - PS_SUPPLYCOST * L_QUANTITY")                                                                                                                                                                                                                                                                                                                 |
    |       HashJoin(condition="PS_PARTKEY = L_PARTKEY AND P_PARTKEY = L_PARTKEY AND PS_SUPPKEY = L_SUPPKEY AND PS_SUPPKEY = S_SUPPKEY", type="inner")                                                                                                                                                                                                                                                            |
    |         MemSort(sort="N_NAME ASC,EXTRACT DESC")                                                                                                                                                                                                                                                                                                                                                             |
    |           BKAJoin(condition="O_ORDERKEY = L_ORDERKEY", type="inner")                                                                                                                                                                                                                                                                                                                                        |
    |             Project(S_SUPPKEY="S_SUPPKEY", S_NATIONKEY="S_NATIONKEY", N_NATIONKEY="N_NATIONKEY", N_NAME="N_NAME", L_ORDERKEY="L_ORDERKEY", L_PARTKEY="L_PARTKEY", L_SUPPKEY="L_SUPPKEY", L_QUANTITY="L_QUANTITY", __*__="__*__")                                                                                                                                                                            |
    |               HashJoin(condition="L_SUPPKEY = S_SUPPKEY", type="inner")                                                                                                                                                                                                                                                                                                                                     |
    |                 Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                     |
    |                   LogicalView(tables="[000000-000003].lineitem_[00-15]", shardCount=16, sql="SELECT `L_ORDERKEY`, `L_PARTKEY`, `L_SUPPKEY`, `L_QUANTITY`, (`L_EXTENDEDPRICE` * (? - `L_DISCOUNT`)) AS `__*__` FROM `lineitem` AS `lineitem`")                                                                                                                                                               |
    |                 BKAJoin(condition="N_NATIONKEY = S_NATIONKEY", type="inner")                                                                                                                                                                                                                                                                                                                                |
    |                   Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                   |
    |                     LogicalView(tables="[000000-000003].supplier_[00-15]", shardCount=16, sql="SELECT `S_SUPPKEY`, `S_NATIONKEY` FROM `supplier` AS `supplier`")                                                                                                                                                                                                                                            |
    |                   Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                   |
    |                     LogicalView(tables="[000000-000003].nation_[00-15]", shardCount=16, sql="SELECT `N_NATIONKEY`, `N_NAME` FROM `nation` AS `nation` WHERE (`N_NATIONKEY` IN (...))")                                                                                                                                                                                                                      |
    |             Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                         |
    |               LogicalView(tables="[000000-000003].orders_[00-15]", shardCount=16, sql="SELECT `O_ORDERKEY`, EXTRACT(YEAR FROM `O_ORDERDATE`) AS `EXTRACT` FROM `orders` AS `orders` WHERE (`O_ORDERKEY` IN (...))")                                                                                                                                                                                         |
    |         Gather(concurrent=true)                                                                                                                                                                                                                                                                                                                                                                             |
    |           LogicalView(tables="[000000-000003].part_[00-15],partsupp_[00-15]", shardCount=16, sql="SELECT `part`.`P_PARTKEY`, `partsupp`.`PS_PARTKEY`, `partsupp`.`PS_SUPPKEY`, `partsupp`.`PS_SUPPLYCOST` FROM `part` AS `part` INNER JOIN `partsupp` AS `partsupp` ON ((`part`.`P_PARTKEY` = `partsupp`.`PS_PARTKEY`) AND (`part`.`P_NAME` LIKE ?)) WHERE (`partsupp`.`PS_PARTKEY` = `part`.`P_PARTKEY`)") |
    | HitCache:false                                                                                                                                                                                                                                                                                                                                                                                              |
    | Source:PLAN_CACHE

    Note: In the preceding example, the shardCount parameter in LogicalView indicates the total number of table shards that are accessed.

    Based on the execution plan, you can analyze whether the SQL statement is executed as expected and check whether the global secondary index, aggregation algorithm, and join algorithm that are used for the query are suitable. The execution plan directly affects execution efficiency. For more information, see GSI, Optimize and execute aggregation operations, and Optimize and execute JOIN operations. If an inappropriate execution plan is used, you can use the following methods to resolve the issue:

    1. Check the statistics of the tables on which the SQL statement is executed. An execution plan is generated based on these statistics. If some statistics are missing or outdated, the generated execution plan may not be optimal. You can use the ANALYZE TABLE statement to trigger a task that collects the statistics of your tables.
      mysql> show statistic;  // You can check whether the statistical information that is collected is correct.
      +---------------+------------+-------------+-------------+
      | table_name    | table_rows | column_name | cardinality |
      +---------------+------------+-------------+-------------+
      | tpch_orders   |       8400 | NULL        |        NULL |
      | tpch_lineitem |      10000 | NULL        |        NULL |
      | lineitem      |       8492 | NULL        |        NULL |
      | tpch_part     |       8054 | NULL        |        NULL |
      | aa_lineitem   |       8490 | l_id        |        9143 |
      | part          |       8054 | NULL        |        NULL |
      | not_rt        |       8446 | NULL        |        NULL |
      | aa_orders     |       8308 | o_id        |        9095 |
      | partsupp      |       8609 | NULL        |        NULL |
      | aa_partsupp   |       8667 | ps_id       |        9424 |
      | tpch_partsupp |       8596 | NULL        |        NULL |
      | tpch_customer |       8572 | NULL        |        NULL |
      | orders        |       8423 | NULL        |        NULL |
      | aa_customer   |       9080 | c_id        |        9476 |
      | customer      |       8472 | NULL        |        NULL |
      +---------------+------------+-------------+-------------+
      15 rows in set (0.01 sec)
      
      mysql> analyze table lineitem; // You can specify a table name to trigger a task to collect the statistical information about the specified table.
      +--------------+---------+----------+----------+
      | TABLE        | OP      | MSG_TYPE | MSG_TEXT |
      +--------------+---------+----------+----------+
      | ads.lineitem | analyze | status   | OK       |
      +--------------+---------+----------+----------+
      1 row in set (1.66 sec)
    2. Create a custom execution plan. Even if complete and correct statistics are collected, the optimizer, which uses dynamic planning to calculate an execution plan, may still not generate the optimal plan. In this case, you can use a FORCE INDEX hint to specify the execution plan, as shown in the sketch after this list.
    3. Use a global secondary index. After you confirm that the execution plan is suitable for the query, you can also use a global secondary index to optimize the SQL execution. For information about how to use a global secondary index, see GSI and Intelligent index recommendation.
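
    For example, the following statement sketches how a FORCE INDEX hint pins the plan to a specific index, as mentioned in step 2. The index name idx_l_suppkey is hypothetical; replace it with an index or global secondary index that actually exists on your table.

      SELECT L_SUPPKEY, COUNT(*)
      FROM lineitem FORCE INDEX (idx_l_suppkey)  -- idx_l_suppkey is a hypothetical index name
      GROUP BY L_SUPPKEY;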
  • View physical execution plans

    Use the EXPLAIN EXECUTE statement to view the execution plan of an SQL statement that is pushed down to data nodes. You can check whether the query that is performed on the data nodes uses an appropriate local index, and you can use a FORCE INDEX hint to specify an index, as shown in the sketch after the following examples. If an appropriate local index is used, the data nodes can scan for the requested data efficiently.

    mysql> explain execute  select * from lineitem;
    +----+-------------+----------+------------+------+---------------+------+---------+------+------+----------+-------+
    | id | select_type | table    | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra |
    +----+-------------+----------+------------+------+---------------+------+---------+------+------+----------+-------+
    |  1 | SIMPLE      | lineitem | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    1 |      100 | NULL  |
    +----+-------------+----------+------------+------+---------------+------+---------+------+------+----------+-------+
    1 row in set (0.02 sec)
    
    mysql> explain execute  select L_LINENUMBER from lineitem;
    +----+-------------+----------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
    | id | select_type | table    | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra       |
    +----+-------------+----------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
    |  1 | SIMPLE      | lineitem | NULL       | index | NULL          | PRIMARY | 8       | NULL |    1 |      100 | Using index |
    +----+-------------+----------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
    1 row in set (0.04 sec)
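
    The following sketch combines EXPLAIN EXECUTE with a FORCE INDEX hint to check which local index the data nodes use for a filtered query. The WHERE condition and the choice of the PRIMARY index are only for illustration and follow the lineitem table that is used in the preceding examples.

    EXPLAIN EXECUTE
    SELECT L_ORDERKEY, L_QUANTITY
    FROM lineitem FORCE INDEX (PRIMARY)  -- force a scan of the primary key for illustration
    WHERE L_ORDERKEY < 100;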
  • Analyze query duration

    If the system requires a long period of time to execute a specific SQL statement each time, you can use the EXPLAIN ANALYZE statement to identify the operators that consume the most time.

    mysql> explain analyze select L_SUPPKEY, count(*) from lineitem group by L_SUPPKEY;
    +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | LOGICAL EXECUTIONPLAN                                                                                                                                                                                                                                                                                                                                                                              |
    +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | HashAgg(group="L_SUPPKEY", count(*)="SUM(count(*))"): rowcount = 1000.0, cumulative cost = value = 2.4863085E7, cpu = 108049.0, memory = 35480.0, io = 201.0, net = 4.75, actual time = 0.000 + 0.000, actual rowcount = 1000, actual memory = 0, instances = 1, id = 519                                                                                                                          |
    |   Gather(concurrent=true): rowcount = 1000.0, cumulative cost = value = 2.4860051E7, cpu = 105039.0, memory = 11881.0, io = 201.0, net = 4.75, actual time = 0.000 + 0.000, actual rowcount = 0, actual memory = 0, instances = 0, id = 517                                                                                                                                                        |
    |     LogicalView(tables="[000000-000003].lineitem_[00-15]", shardCount=16, sql="SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)` FROM `lineitem` AS `lineitem` GROUP BY `L_SUPPKEY`"): rowcount = 1000.0, cumulative cost = value = 2.486005E7, cpu = 105038.0, memory = 11881.0, io = 201.0, net = 4.75, actual time = 0.004 + 0.002, actual rowcount = 4892, actual memory = 0, instances = 0, id = 453 |

    Parameters:

    • rowcount: the estimated number of rows that can be returned by the operator. The estimated number of rows is calculated based on the statistical information about tables.
    • actual rowcount: the actual number of rows that are returned by the operator.
    • actual time: the actual amount of time that is consumed by the operator.

    Based on the information that is returned by the EXPLAIN ANALYZE statement, you can identify the operator that requires the longest period of time and then resolve the issue. If the degree of parallelism (DOP) is insufficient, you can increase the DOP to accelerate queries. For more information, see Query executors. You can also use the TRACE and SHOW TRACE statements to check the physical SQL statements that are pushed down to data nodes and the execution duration of each physical SQL statement. This way, you can determine whether the slow query is caused by execution on the compute nodes or on the data nodes.

    mysql> trace select L_SUPPKEY, count(*) from lineitem group by L_SUPPKEY limit 1;
    +-----------+----------+
    | L_SUPPKEY | count(*) |
    +-----------+----------+
    |         1 |       12 |
    +-----------+----------+
    1 row in set (0.21 sec)
    
    mysql> show trace;
    +----+----------------+-----------+-------+------------------+----------------------------------------------------------------------------+---------------+--------------------------+---------------------+---------------------+------+-------------------------------------------------------------------------------------------------------------------------------+----------------------+
    | ID | NODE_IP        | TIMESTAMP | TYPE  | GROUP_NAME       | DBKEY_NAME                                                                 | TIME_COST(MS) | CONNECTION_TIME_COST(MS) | TOTAL_TIME_COST(MS) | CLOSE_TIME_COST(MS) | ROWS | STATEMENT                                                                                                                     | PARAMS               |
    +----+----------------+-----------+-------+------------------+----------------------------------------------------------------------------+---------------+--------------------------+---------------------+---------------------+------+-------------------------------------------------------------------------------------------------------------------------------+----------------------+
    |  0 | 192.168.31.105 |     0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 3             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_13`] |
    |  1 | 192.168.31.105 |    -0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 4             | 0.00                     | 7                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_12`] |
    |  2 | 192.168.31.105 |     0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 2             | 0.00                     | 2                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_14`] |
    |  3 | 192.168.31.105 |     0.000 | Query | ADS_000003_GROUP | dskey_ads_000003_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000003 | 3             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_15`] |
    |  4 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 5             | 0.00                     | 14                  | 2                   |  746 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_04`] |
    |  5 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 4             | 0.00                     | 15                  | 1                   |  682 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_05`] |
    |  6 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 9             | 0.00                     | 15                  | 1                   |  516 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_06`] |
    |  7 | 192.168.31.105 |     0.000 | Query | ADS_000001_GROUP | dskey_ads_000001_group#polardbx-storage-1-master#127.0.0.1-3306#ads_000001 | 7             | 0.00                     | 15                  | 0                   |  310 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_07`] |
    |  8 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 2             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_00`] |
    |  9 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 5             | 0.00                     | 7                   | 1                   |  884 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_02`] |
    | 10 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 2             | 0.00                     | 5                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_08`] |
    | 11 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 6             | 0.00                     | 11                  | 0                   |  917 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_01`] |
    | 12 | 192.168.31.105 |     0.000 | Query | ADS_000000_GROUP | dskey_ads_000000_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000000 | 7             | 0.00                     | 10                  | 1                   |  837 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_03`] |
    | 13 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 3             | 0.00                     | 3                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_09`] |
    | 14 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 2             | 0.00                     | 2                   | 0                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_10`] |
    | 15 | 192.168.31.105 |     0.000 | Query | ADS_000002_GROUP | dskey_ads_000002_group#polardbx-storage-0-master#127.0.0.1-3306#ads_000002 | 1             | 0.00                     | 3                   | 1                   |    0 | /*DRDS /127.0.0.1/12f940ed62400000/0// */SELECT `L_SUPPKEY`, COUNT(*) AS `count(*)`
    FROM ? AS `lineitem`
    GROUP BY `L_SUPPKEY` | [`lineitem_MgSG_11`] |
    +----+----------------+-----------+-------+------------------+----------------------------------------------------------------------------+---------------+--------------------------+---------------------+---------------------+------+-------------------------------------------------------------------------------------------------------------------------------+----------------------+
    16 rows in set (0.05 sec)