PolarDB for MySQL 8.0 provides the parallel query feature. The feature is automatically enabled to reduce the response time of queries when the volume of data you query exceeds a specified threshold.
In PolarDB for MySQL 8.0.1, data shards are distributed to different threads at the storage layer. These threads run in parallel and return their results to the leader thread, which merges them and returns the final result to the user. This improves query efficiency.
In PolarDB for MySQL 8.0.2, the linear acceleration of parallel queries is taken to a new level with multi-phase parallel computing. Cost-based optimization enables more flexible parallel execution plans and removes two limitations of parallel queries in PolarDB for MySQL 8.0.1: the performance bottleneck of a single leader thread and imbalanced worker loads. Data shards are still distributed to different threads at the storage layer, but the threads that compute in parallel can also distribute computing to a parallel worker group in the next phase. The parallel computing results are returned to the leader thread and then to the user. This greatly improves parallel execution capabilities.
Parallel queries are built on the parallel processing capabilities of multi-core CPUs. The following figure shows how parallel processing is performed in a PolarDB for MySQL cluster that has 8 CPU cores and 32 GB of memory.
- 188.8.131.52.5 or later
- 184.108.40.206.4.1 or later
- 220.127.116.11.0 or later
For more information about how to view the cluster version, see Query the version number.
Parallel query is applicable to most SELECT statements, such as queries on large tables, multi-table queries that use JOIN statements, and queries on a large amount of data. This feature does not benefit extremely short queries.
- Lightweight analysis
Report queries are complex and time-consuming. The parallel query feature can accelerate individual queries.
- More available system resources
More system resources are required if the parallel query feature is enabled. You can use this feature to improve resource utilization and query efficiency only if your system has sufficient CPU resources, low I/O loads, and sufficient memory resources.
- Combination of online and offline queries
You can connect services to different database nodes by using different endpoints so that the services do not affect each other. For example, if you enable the parallel query feature only for the endpoint of a single read-only node, that endpoint can be dedicated to online analytical processing (OLAP) queries.
- Enable or disable the parallel query feature and set the DOP by configuring a cluster endpoint or system parameters.
- On the Configure Nodes page in the PolarDB console, you can enable or disable the parallel query feature and set the DOP. For more information, see Configure PolarProxy.
- This parameter can be set only if the read/write mode of the cluster endpoint is set to Read Only.
- This setting takes precedence over the max_parallel_degree parameter.
- On the Parameters page in the PolarDB console, set the DOP by using the max_parallel_degree parameter, which specifies the maximum number of threads that each SQL statement can use to execute queries in parallel. If you set this parameter to 0, the parallel query feature is disabled. For more information, see Specify cluster parameters.
Note The setting of this parameter takes effect immediately. You do not need to restart the database.
We recommend that you complete the following parallel query settings:
You can modify global parameters to adjust the DOP at the cluster level, or adjust the DOP for SQL statements in a specific session. For example, you can set the DOP for a specific application by adding a session-level variable to the Java Database Connectivity (JDBC) connection string.
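For example, with MySQL Connector/J the sessionVariables connection property sets session variables at connect time, which can be used to apply a DOP for one application. This is a sketch; the host, port, and database name below are placeholders, and the DOP value is only an example:

```
jdbc:mysql://pc-xxxxxxxx.mysql.polardb.example.com:3306/mydb?sessionVariables=max_parallel_degree=4
```

Every connection opened with this string then runs with a session-level DOP of 4, without changing the cluster-level setting.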
- Increase the value of the max_parallel_degree parameter in small increments. We recommend a value no greater than one quarter of the total number of CPU cores. Enable the parallel query feature only if your cluster has at least eight CPU cores; do not enable it for clusters with lower specifications. For example, set the max_parallel_degree parameter to 2 and check the CPU utilization on the next day. If the CPU utilization is low, increase the value of the max_parallel_degree parameter. If the CPU utilization is high, do not increase the value.
- If your cluster has at least 8 CPU cores, the default value of the DOP parameter is 2. If your cluster has at least 16 CPU cores, the default value is 4. If your cluster has at least 88 CPU cores, the default value is 8.
- In the PolarDB console, the max_parallel_degree parameter is named loose_max_parallel_degree. The loose_ prefix maximizes the compatibility of the feature with other programs and is consistent with the default MySQL settings: it ensures that no error is returned when other programs use the parameter. For more information, see Program Option Modifiers.
- When you enable the parallel query feature, set the innodb_adaptive_hash_index parameter to OFF. If the innodb_adaptive_hash_index parameter is set to ON, the performance of parallel query is degraded.
SET max_parallel_degree = n;
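The recommendations above can be applied with statements like the following sketch. The values are examples only; on PolarDB the global parameter is usually changed on the Parameters page in the console (where it appears as loose_max_parallel_degree) rather than with SET GLOBAL, so whether these statements are permitted depends on your cluster's privileges:

```sql
-- Enable parallel query with a conservative DOP
-- (example value; at most 1/4 of the CPU cores is recommended).
SET GLOBAL max_parallel_degree = 2;

-- Disable the adaptive hash index, which degrades
-- parallel query performance when enabled.
SET GLOBAL innodb_adaptive_hash_index = OFF;
```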
- On the Configure Nodes page in the PolarDB console, you can enable or disable the parallel query feature and set the DOP. For more information, see Configure PolarProxy.
- Use hints.
You can optimize an individual SQL statement by using hints. For example, if the parallel query feature is disabled by the system, you can use hints to accelerate a slow SQL query that is frequently executed. For more information, see Parallel hints.
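For example, a hint can force a given DOP for a single statement. The hint names below are assumptions based on common PolarDB usage and the table name is illustrative; see Parallel hints for the exact syntax supported by your cluster version:

```sql
-- Force parallel execution with up to 8 workers for this query only.
SELECT /*+ PARALLEL(8) */ COUNT(*) FROM orders;

-- Disable parallel execution for this query only.
SELECT /*+ NO_PARALLEL() */ COUNT(*) FROM orders;
```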
- Use a threshold value to specify whether the optimizer can enable parallel query.
PolarDB provides two threshold values to specify whether the optimizer can enable parallel query. Parallel query is enabled by the optimizer if SQL statements meet one of the requirements that are specified by the threshold values.
If the number of scanned rows exceeds the value of records_threshold_for_parallelism, the optimizer enables parallel query. The default value is 10000. If the majority of your queries are simple and involve only a small volume of data, we recommend that you set the threshold value to at least 2000.
Note The number of scanned rows is estimated based on the statistics of tables. This number can be different from the actual number of rows.
If the cost of sequential queries exceeds the value of the cost_threshold_for_parallelism parameter, the optimizer enables parallel query. The default value is 50000.
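The two thresholds can be inspected and adjusted with ordinary variable statements, for example (the new value below is illustrative, chosen for a workload dominated by short queries):

```sql
-- Show the current values of both thresholds.
SHOW VARIABLES LIKE '%threshold_for_parallelism';

-- Raise the row threshold so that small queries stay serial.
SET records_threshold_for_parallelism = 20000;
```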
Parameters and variables

|Parameter|Level|Description|
|---|---|---|
|max_parallel_degree|Global and session|The maximum DOP for an individual query. This parameter specifies the maximum number of workers that are used to run queries in parallel. Note The PolarDB optimizer can use different parallel query execution plans to run the main query and the subqueries in parallel. If the optimizer uses the same plan, the maximum number of workers cannot exceed the value of max_parallel_degree. The total number of workers equals the sum of the workers that are used by the main query and the workers that are used by the subqueries.|
|parallel_degree_policy|Global|Specifies the DOP policy for an individual query. Note For more information about the valid values and DOP policies, see DOP policy.|
|records_threshold_for_parallelism| |If the number of scanned rows exceeds the value of this parameter, the optimizer enables parallel query. Note If the majority of your queries are simple and involve only a small volume of data, we recommend that you set the threshold value to at least 2000.|
|cost_threshold_for_parallelism| |If the cost of sequential queries exceeds the value of this parameter, the optimizer enables parallel query.|

|Variable|Level|Description|
|---|---|---|
|Parallel_workers_created|Global and session|The number of parallel workers that have been created since the start of the session.|
|Gather_records|Global and session|The total number of records that are gathered.|
|PQ_refused_over_total_workers|Global and session|The number of queries that are not run in parallel due to the limit on the total number of workers.|
|Total_running_parallel_workers|Global|The number of parallel workers that are currently running.|
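The status variables listed above can be read with SHOW STATUS, for example:

```sql
-- Session-level counters for the current connection.
SHOW STATUS LIKE 'Parallel_workers_created';

-- Cluster-wide view of currently running parallel workers.
SHOW GLOBAL STATUS LIKE 'Total_running_parallel_workers';
```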
The following tests use 100 GB of data that is generated based on TPC Benchmark H (TPC-H) to test the performance of a PolarDB for MySQL 8.0 cluster. In the test, the PolarDB cluster has a 32-core CPU and 256-GB memory (Dedicated). The max_parallel_degree parameter is set to 32 or 0. For more information, see Parallel query performance in OLAP scenarios.
The test result shows 100% of the SQL queries in TPC-H are accelerated. The query speed is 19 times faster.
View parallel query execution plans
For more information about how to execute the EXPLAIN statement to view parallel query information in execution plans, see View parallel query execution plans.
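For example, running EXPLAIN on a query that qualifies for parallel execution shows parallel operators such as the gather node in the plan. The table below is from the TPC-H schema used in the performance tests above, and the exact plan output depends on the cluster version:

```sql
EXPLAIN
SELECT o_custkey, SUM(o_totalprice)
FROM orders
GROUP BY o_custkey;
```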
- Parallel scans
In a parallel scan, workers scan the data of a table in parallel. Each worker produces a partial result and returns the partial result to the leader thread. The leader thread gathers the results by using the gather node and returns the final result to the client side.
- Parallel joins on multiple tables
If parallel query is enabled, the complete multi-table join operation is sent to workers for parallel processing. The PolarDB optimizer selects the optimal table for parallel scanning and does not perform parallel scanning for the other tables. Each worker produces a partial result and returns the partial result to the leader thread. The leader thread gathers the results by using the gather node and returns the final result to the client side.
- Parallel sorting
The PolarDB optimizer sends the ORDER BY operation to all workers for parallel sorting. Each worker produces a partial result. The leader thread gathers, merges, and sorts all partial results by using the gather merge node, and returns the final sorting result to the client side.
- Parallel grouping
The PolarDB optimizer sends the GROUP BY operation to all workers for parallel grouping. Each worker performs the GROUP BY operation on a portion of data. Each worker produces a partial result of GROUP BY. The leader thread gathers the results from all workers by using the gather node. The PolarDB optimizer checks whether to perform a GROUP BY operation in the leader thread again based on the query plan. For example, if a loose index scan is used to execute a GROUP BY statement, the leader thread does not perform a GROUP BY operation. If the loose index scan is not used, the leader thread performs a GROUP BY operation and returns the final result to the client side.
- Parallel aggregation
If parallel query is enabled, the aggregate function is sent to all parallel workers for parallel aggregation. The optimizer determines whether to perform serial execution, one-phase aggregation, or two-phase aggregation based on the cost.
- One-phase aggregation: The optimizer distributes the aggregation operation to workers. Each worker holds all of the data for the groups that it processes, so second-phase aggregation is not required. Each worker directly computes the final aggregation results for its groups, and the leader thread does not need to aggregate again.
- Two-phase aggregation: In the first phase, each worker involved in the parallel query performs the aggregation. In the second phase, the gather or gather merge node returns the results generated by each worker to the leader thread. Then, the leader thread aggregates the results from all workers to generate the final result.
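As an illustration, an aggregate such as the following is a candidate for two-phase aggregation when rows for the same group are spread across workers. The table and columns are from the TPC-H schema used in the tests above:

```sql
-- First phase: each worker aggregates the rows in its data shards.
-- Second phase: the leader thread merges the per-worker partial
-- results into the final sums and counts per return flag.
SELECT l_returnflag, SUM(l_quantity), COUNT(*)
FROM lineitem
GROUP BY l_returnflag;
```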
- Parallel window function
Based on the cost, the PolarDB optimizer distributes window function computation to workers for parallel execution, and each worker computes a portion of the data. The distribution method is based on the key of the PARTITION BY clause in the window function. If a window function does not contain a PARTITION BY clause, it is computed serially. However, if subsequent operations can still be parallelized, they are distributed to multiple workers for execution based on the cost to ensure maximum parallelization.
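For example, a window function with a PARTITION BY clause allows rows to be distributed to workers by the partition key, so that each worker can evaluate its partitions independently. The query uses TPC-H style names for illustration:

```sql
-- Rows are distributed to workers by the PARTITION BY key
-- (o_custkey); each worker ranks the orders within its partitions.
SELECT o_orderkey, o_custkey,
       RANK() OVER (PARTITION BY o_custkey
                    ORDER BY o_totalprice DESC) AS price_rank
FROM orders;
```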
- Support for subqueries
In a parallel query, you can use one of the following policies to run subqueries:
- Sequential execution in the leader thread
If subqueries do not support parallel processing, the subqueries in the leader thread are performed in sequence. For example, if two tables are joined by using a JOIN clause that references user-defined functions (UDFs), the subqueries are performed in sequence.
- Parallel execution in the leader thread (another group of workers is used)
After a parallel query plan is generated, the execution plan of the leader thread may include subqueries that support parallel processing but cannot be run in advance or run in parallel based on the shared access policy. For example, subqueries that include window functions cannot run in parallel based on the shared access policy. In this case, the leader thread runs these subqueries in parallel by using another group of workers.
- Shared access
After a parallel query plan is generated, if the execution plans of workers reference the subqueries that support parallel processing, the PolarDB optimizer runs the parallel subqueries in advance. This way, the workers can directly access the results of the subqueries.
- Pushed down
After a parallel query plan is generated, if the execution plans of workers reference correlated subqueries, the workers run the correlated subqueries.
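The shared access and pushed-down policies can be illustrated with two common subquery shapes. The tables are TPC-H style names used for illustration:

```sql
-- A non-correlated subquery that supports parallel processing can be
-- run in advance under the shared access policy; the workers then
-- read its result directly.
SELECT o_orderkey
FROM orders
WHERE o_totalprice > (SELECT AVG(o_totalprice) FROM orders);

-- A correlated subquery is pushed down: each worker evaluates the
-- subquery for the rows that it scans.
SELECT o_orderkey
FROM orders o
WHERE o_totalprice > (SELECT AVG(l.l_extendedprice)
                      FROM lineitem l
                      WHERE l.l_orderkey = o.o_orderkey);
```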