PolarDB for MySQL 8.0 provides a parallel query framework. When the amount of queried data reaches a specified threshold, the parallel query feature is automatically enabled, which can significantly reduce query response time.

At the storage layer, data is split into shards that are distributed to different threads. Multiple threads compute in parallel and return intermediate results to the leader thread. The leader thread then merges the results and returns the final result to the client. This improves query efficiency.

Parallel queries are based on the parallel processing capabilities of multi-core CPUs. The following figure shows how parallel processing works in a PolarDB for MySQL Cluster Edition cluster that has 8 CPU cores and 32 GB of memory.

Figure: Parallel query processing in PolarDB

Prerequisites

The cluster runs PolarDB for MySQL 8.0. For more information about how to view the version, see Query the kernel version number.

Scenarios

Parallel queries are applicable to most SELECT statements, such as queries on large tables, multi-table queries that use JOINs, and queries on a large amount of data. The effect of parallel queries is insignificant for short queries.

  • Lightweight analytical workloads

    Report queries are often complex and time-consuming. The parallel query feature can accelerate individual queries.

  • Sufficient system resources

    Parallel queries consume more system resources. Use them to improve resource utilization and query efficiency only when your system has sufficient CPU resources, low I/O load, and sufficient memory.

  • Combine online and offline queries

    You can connect different services to different database nodes through different endpoints so that the services do not affect each other. For example, if you enable the parallel query feature for a single-node endpoint, online analytical processing (OLAP) requests can be routed through that endpoint to the corresponding read-only node.

How to use parallel queries

  • Use system parameters

    PolarDB allows you to use the global parameter max_parallel_degree to specify the maximum number of threads that each SQL statement can use for parallel processing. The default value is 0. You can modify this parameter at any time, even while parallel queries are running, and the change takes effect without restarting the database. For more information, see Specify cluster parameters.

    We recommend that you configure the following parallel query settings:

    • Gradually increase the value of the max_parallel_degree parameter. We recommend a value that does not exceed one quarter of the number of CPU cores. Enable the parallel query feature only when your cluster has at least eight CPU cores; do not enable it for smaller clusters. When you start to use the feature, set max_parallel_degree to 2 and check the CPU usage one day later. If the CPU usage is low, gradually increase the value. If the CPU usage is high, do not increase the value.
    • To disable the parallel query feature, set the max_parallel_degree parameter to 0. If you set max_parallel_degree to 1, the parallel query feature is enabled and the degree of parallelism (DOP) is 1.
    • In the PolarDB console, the max_parallel_degree parameter is displayed with the loose prefix as loose_max_parallel_degree. The prefix keeps the parameter consistent with the default settings in the MySQL configuration file and ensures that programs of other versions that do not recognize the parameter ignore it instead of returning an error. For more information, see Program Option Modifiers.
    • When you enable the parallel query feature, set the innodb_adaptive_hash_index parameter to OFF. If the innodb_adaptive_hash_index parameter is set to ON, the parallel query performance is degraded.
    You can modify the global parameter to adjust the cluster-level DOP, or adjust the DOP for the SQL statements in a specific session. For example, you can set the DOP for a specific application by adding a session-level variable to the Java Database Connectivity (JDBC) connection string, as sketched below.
    SET max_parallel_degree = n;
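
    The following is a minimal sketch of adjusting the DOP at the session level. It assumes that max_parallel_degree is exposed as a standard session system variable, as the SET statement above suggests; the connection string in the last comment assumes MySQL Connector/J and uses a placeholder endpoint.

    -- Raise the DOP for the current session only; other sessions keep the global setting.
    SET max_parallel_degree = 4;

    -- Confirm the session-level and global values.
    SELECT @@max_parallel_degree, @@global.max_parallel_degree;

    -- A JDBC application can apply the same variable at connection time, for example:
    -- jdbc:mysql://<your-endpoint>:3306/<db>?sessionVariables=max_parallel_degree=4
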
  • Use hints

    You can use hints to optimize an individual statement. For example, if the parallel query feature is disabled for the cluster, you can use a hint to accelerate a slow SQL query that is frequently executed. For more information, see Parallel hints.
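
    The following sketch illustrates hint usage on a hypothetical orders table. The PARALLEL hint is assumed to follow the syntax described in Parallel hints; check that topic for the exact form supported by your kernel version. The alternative uses the standard MySQL 8.0 SET_VAR hint to change max_parallel_degree for a single statement.

    -- Request a DOP of 8 for this statement only (hint syntax per Parallel hints; hypothetical table).
    SELECT /*+ PARALLEL(8) */ COUNT(*) FROM orders;

    -- Alternatively, the standard MySQL 8.0 SET_VAR hint adjusts the session variable
    -- for the duration of a single statement.
    SELECT /*+ SET_VAR(max_parallel_degree=8) */ o_custkey, SUM(o_totalprice) AS total
    FROM orders
    GROUP BY o_custkey;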

  • Use a threshold to specify whether the optimizer enables parallel queries

    PolarDB provides two thresholds to specify whether the optimizer enables parallel queries. Parallel queries are enabled by the optimizer when SQL statements meet one of the conditions that are specified by these thresholds.

    • records_threshold_for_parallelism

      If the number of scanned rows in the statement exceeds the value of records_threshold_for_parallelism, the optimizer enables parallel queries. The default value is 10000. If your workload involves a small amount of data or a low concurrency of complex queries, we recommend that you set this threshold to a value greater than or equal to 2000.

      Note The number of scanned rows is estimated based on the statistics of tables. It may be different from the actual number of rows.
    • cost_threshold_for_parallelism

      If the overhead of sequential queries exceeds the value of cost_threshold_for_parallelism, the optimizer enables parallel queries. The default value is 50000.
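
    The following sketch shows how the two thresholds can be adjusted for the current session; the values are illustrative assumptions rather than recommendations.

    -- Allow the optimizer to consider parallel execution for smaller scans in this session.
    SET records_threshold_for_parallelism = 5000;

    -- Require a higher estimated cost before the optimizer chooses a parallel plan.
    SET cost_threshold_for_parallelism = 100000;

    -- Check the current session values.
    SHOW VARIABLES LIKE '%threshold_for_parallelism';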

Parameters and variables

Table 1. System parameters
Parameter Level Description
max_parallel_degree Global and session

The maximum DOP for an individual query. This indicates the maximum number of workers that are used to run queries in parallel.

  • Valid values: 0 to 1024.
  • Default value: 0. This indicates that parallel computing is disabled.
Note The PolarDB optimizer may use different parallel execution plans to run the main query and its subqueries in parallel. If the main query and the subqueries share the same parallel execution plan, the number of workers cannot exceed the value of max_parallel_degree. The total number of workers equals the sum of the workers used by the main query and by the subqueries.
parallel_degree_policy Global Specify a DOP policy for an individual query. Valid values:
  • TYPICAL (default): PolarDB sets the DOP to the same value as max_parallel_degree, regardless of database loads, such as CPU usage.
  • AUTO: PolarDB enables or disables parallel queries based on database loads, such as CPU usage, and selects the DOP based on the costs of queries.
  • REPLICA_AUTO: Only read-only nodes enable or disable parallel queries based on database loads, such as CPU usage, and select the DOP based on the costs of queries. The primary node does not perform parallel queries.
Note For more information about the policies for the DOP, see DOP policy.
records_threshold_for_parallelism Session

If the number of scanned rows in the statement exceeds the value of records_threshold_for_parallelism, the optimizer enables parallel queries.

  • Valid values: 0 to 18446744073709551615.
  • Default value: 10000.
Note If your workload involves a small amount of data or a low concurrency of complex queries, we recommend that you set this threshold to a value greater than or equal to 2000.
cost_threshold_for_parallelism Session

If the overhead of sequential queries exceeds the value of cost_threshold_for_parallelism, the optimizer enables parallel queries.

  • Valid values: 0 to 18446744073709551615.
  • Default value: 50000.
Table 2. Status variables
Variable Level Description
Parallel_workers_created Global and session The number of parallel workers that have been created since the start of the session.
Gather_records Global and session The total number of records that are gathered.
PQ_refused_over_memory_soft_limit Global and session The number of queries that are not performed in parallel due to memory limits.
PQ_refused_over_total_workers Global and session The number of queries that are not performed in parallel due to the limit on the total number of workers.
Total_used_query_memory Global The amount of memory (virtual memory) that is used for the query.
Total_running_parallel_workers Global The number of parallel workers that are running.
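
The following sketch shows how the parameters and status variables listed above can be inspected from a client session, using standard MySQL statements.

    -- List the parallel query parameters and their current values.
    SHOW VARIABLES LIKE '%parallel%';
    SHOW VARIABLES LIKE '%threshold_for_parallelism';

    -- Check how many parallel workers have been created and how many queries fell back
    -- to serial execution because of memory or worker limits.
    SHOW GLOBAL STATUS LIKE 'Parallel_workers_created';
    SHOW GLOBAL STATUS LIKE 'PQ_refused%';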

Performance metrics

The following tests use 100 GB of data that is generated based on TPC Benchmark™ H (TPC-H) to test the performance of a PolarDB for MySQL 8.0 cluster that has 32 CPU cores and 256 GB of memory. The queries are executed with the max_parallel_degree parameter set to 32 and then to 0 for comparison. For more information, see Parallel query performance (OLAP).

The test result shows that 95% of the SQL queries in TPC-H are accelerated. For 70% of the accelerated SQL queries, the query speed is more than eight times that of SQL queries that are not executed in parallel.

Terms

  • Parallel scans

    In a parallel scan, multiple workers scan the data of a table at the same time. Each worker produces a partial result and returns it to the leader thread. The leader thread gathers the results by using the Gather node and returns the final result to the client. (An example query is sketched at the end of this section.)

  • Parallel joins on multiple tables

    When parallel queries are enabled, the complete multi-table JOIN operation is sent to workers for parallel processing. The PolarDB optimizer selects only one table, the optimal candidate, for parallel scanning; all other tables are scanned in a non-parallel manner. Each worker produces a partial result and returns it to the leader thread. The leader thread gathers the results by using the Gather node and returns the final result to the client.

  • Parallel sorting

    The PolarDB optimizer sends the ORDER BY operation to all workers for parallel processing. Each worker produces a partial result. The leader thread gathers, merges, and sorts all partial results by using the Gather Merge node, and returns the final sorting result to the client.

  • Parallel grouping

    The PolarDB optimizer sends the GROUP BY operation to all workers for parallel processing. Each worker performs the GROUP BY operation on a portion of data. Each worker produces a partial result of GROUP BY. The leader thread gathers the results from all workers by using the Gather node. The PolarDB optimizer checks whether to perform a GROUP BY operation in the leader thread again based on the query plan. For example, if a loose index scan is used to run a GROUP BY query, the leader thread does not perform a GROUP BY operation. If the loose index scan is not used, the leader thread performs a GROUP BY operation and returns the final result to the client.

  • Parallel aggregation

    The execution of aggregate functions is sent to all workers. PolarDB performs parallel aggregation in two stages: each worker first performs a partial aggregation on its portion of data, and the leader thread then gathers the partial results from all workers by using the Gather or Gather Merge node and aggregates them into the final result.

  • Support for subqueries
    In parallel queries, subqueries can run by using one of the following policies:
    • Sequential execution in the leader thread

      If a subquery does not support parallel processing, it is executed sequentially in the leader thread. For example, if two tables are joined by using a JOIN condition that references user-defined functions (UDFs), the subquery is executed sequentially.

    • Parallel execution in the leader thread (another group of workers is used)

      After a parallel plan is generated, the execution plan of the leader thread may include subqueries that support parallel processing but cannot be executed in advance or shared by using the shared access policy. For example, subqueries that contain window functions cannot use the shared access policy. In this case, the leader thread executes these subqueries in parallel by using another group of workers.

    • Shared access

      After a parallel plan is generated, if the execution plans of workers reference the subqueries that support parallel processing, the PolarDB optimizer runs these parallel subqueries in advance. This allows the workers to directly access the results of the subqueries.

    • Pushed down

      After a parallel plan is generated, if the execution plans of workers reference correlated subqueries, the workers run these correlated subqueries.
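
The following sketch, referenced in the description of parallel scans above, shows one way to observe parallel execution. The TPC-H-style lineitem table is an assumption for illustration, and the exact EXPLAIN output depends on the kernel version; when the query runs in parallel, the plan is expected to include the Gather or Gather Merge step described above.

    -- Allow a non-zero DOP for the current session (assumes the session variable is settable, as above).
    SET max_parallel_degree = 4;

    -- Inspect the plan of a scan with grouping and aggregation on a hypothetical TPC-H-style table.
    EXPLAIN
    SELECT l_returnflag, l_linestatus, SUM(l_quantity) AS sum_qty
    FROM lineitem
    WHERE l_shipdate <= '1998-09-01'
    GROUP BY l_returnflag, l_linestatus;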