AnalyticDB for MySQL supports two methods for configuring parameters: the SET ADB_CONFIG KEY=VALUE command and hints (syntax: /*+ KEY=VALUE*/). The SET ADB_CONFIG command modifies global configurations and applies the setting to the entire cluster. A hint applies only to the SQL statement that contains it and does not affect global configurations. If you configure a parameter using both the SET ADB_CONFIG command and a hint, the hint takes precedence. This topic describes common configuration parameters in AnalyticDB for MySQL.
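The following statements show a minimal sketch of both methods, using the QUERY_TIMEOUT parameter that is described later in this topic; the table name and values are illustrative only.

```sql
-- Cluster-level configuration: applies to all subsequent queries on the cluster.
SET ADB_CONFIG QUERY_TIMEOUT=600000;

-- Hint: applies only to the statement that carries it and takes precedence
-- over the cluster-level setting for this statement.
/*+ QUERY_TIMEOUT=300000 */
SELECT COUNT(*) FROM sales_orders;
```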
Common Config Parameters
For AnalyticDB for MySQL clusters of minor version 3.2.5 or later, the system validates the data types of configuration values.
To view and update the minor version, go to the Configuration Information section on the Cluster Information page in the AnalyticDB for MySQL console.
Category | Parameter | Description | Example | Documentation Link |
Switchover window | REPLICATION_SWITCH_TIME_RANGE | During a switchover to a new cluster, the old cluster is read-only for 5 to 10 minutes. After connecting to the old cluster, configure REPLICATION_SWITCH_TIME_RANGE to specify the time window in which the old and new clusters switch over. Note: If you do not configure a switchover time window, the old and new clusters automatically switch over after all incremental data in the old cluster is synchronized to the new cluster in real time. |
| |
Limit on IN conditions | MAX_IN_ITEMS_COUNT | Set the maximum number of IN conditions. The value must be a positive integer. For an example, see the sketch after this table.
|
| |
Query timeout period | QUERY_TIMEOUT | Configure the query timeout period for all queries at the cluster level. The value must be an integer greater than 0. Unit: milliseconds (ms). |
| |
Timeout period for INSERT, UPDATE, and DELETE statements | INSERT_SELECT_TIMEOUT | Modify the maximum execution time for INSERT, UPDATE, and DELETE statements at the cluster level. Default value: 24×3600000. The value must be an integer greater than 0. Unit: milliseconds (ms). |
| |
Disable filter condition pushdown |
| Disable filter condition pushdown for specific fields at the cluster level. |
| |
Query execution mode | QUERY_TYPE | Switch the query execution mode of the instance. Valid values:
Note: You cannot change the query execution mode for an AnalyticDB for MySQL Data Warehouse Edition cluster in reserved mode or AnalyticDB for MySQL Enterprise Edition, Basic Edition, and Data Lakehouse Edition clusters. |
| |
Query queue | XIHE_ENV_QUERY_ETL_MAX_CONCURRENT_SIZE | The maximum number of queries that can run concurrently in the LOWEST queue on a single frontend node. Default value: 20. |
| Priority queues and concurrency of interactive resource groups |
XIHE_ENV_QUERY_ETL_MAX_QUEUED_SIZE | The maximum number of queries that can be queued in the LOWEST queue on a single frontend node. Default value: 200. |
| ||
XIHE_ENV_QUERY_LOW_PRIORITY_MAX_CONCURRENT_SIZE | The maximum number of queries that can run concurrently in the LOW queue on a single frontend node. Default value: 20. |
| ||
XIHE_ENV_QUERY_LOW_PRIORITY_MAX_QUEUED_SIZE | The maximum number of queries that can be queued in the LOW queue on a single frontend node. Default value: 200. |
| ||
XIHE_ENV_QUERY_NORMAL_MAX_CONCURRENT_SIZE | The maximum number of queries that can run concurrently in the NORMAL queue on a single frontend node. Default value: 20. |
| ||
XIHE_ENV_QUERY_NORMAL_MAX_QUEUED_SIZE | The maximum number of queries that can be queued in the NORMAL queue on a single frontend node. Default value: 200. |
| ||
XIHE_ENV_QUERY_HIGH_MAX_CONCURRENT_SIZE | The maximum number of queries that can run concurrently in the HIGH queue on a single frontend node. Default value: 40. |
| ||
XIHE_ENV_QUERY_HIGH_MAX_QUEUED_SIZE | The maximum number of queries that can be queued in the HIGH queue on a single frontend node. Default value: 400. |
| ||
Priority of XIHE BSP jobs | QUERY_PRIORITY | The priority of a BSP job. Valid values:
|
| |
Maximum ACUs for XIHE BSP jobs | ELASTIC_JOB_MAX_ACU | The maximum number of ACUs that a single XIHE BSP job can use. Default value: 9. Minimum value: 3. Valid values: [3, maximum resource amount of the Job-type resource group]. |
| |
Timeout period for XIHE BSP job queries | BATCH_QUERY_TIMEOUT | The timeout period for XIHE BSP jobs. Default value: 7200000 ms. The value must be an integer greater than 0. Unit: milliseconds (ms). |
| |
Import from OSS external tables | HIVE_SPLIT_ASYNC_GENERATION_ENABLED | When Hive generates an execution plan, specify whether to asynchronously submit Hive Split tasks to the queue for execution. Valid values:
Note: This parameter is supported only for AnalyticDB for MySQL clusters of version 3.1.10.1 or later. |
| None |
Import from MaxCompute external tables | SQL_OUTPUT_BATCH_SIZE | The number of data entries when importing data in batches. The value must be an integer greater than 0. |
| |
ENABLE_ODPS_MULTI_PARTITION_PART_MATCH | Specify whether to pre-traverse to get the number of records in each MaxCompute partition. |
| ||
ASYNC_GET_SPLIT | When MaxCompute generates an execution plan, specify whether to optimize split loading performance by asynchronously submitting tasks to the queue for execution. Valid values:
Note: This parameter is supported only for AnalyticDB for MySQL clusters of version 3.1.10.1 or later. |
| ||
Automatic scheduling time for BUILD jobs | RC_CSTORE_BUILD_SCHEDULE_PERIOD | Limit the scheduling time range for BUILD tasks, not the task execution time range. The value must be an integer between 0 and 24. |
| |
Scheduling priority for BUILD jobs | ADB_CONFIG RC_BUILD_TASK_PRIORITY_LIST | Configure the scheduling priority for BUILD jobs on single or multiple tables. The priority remains effective until you configure a new priority for the table. The scheduling priority for BUILD jobs is controlled by the |
| |
Elastic import | RC_ELASTIC_JOB_SCHEDULER_ENABLE | Specify whether to enable the elastic import feature. You must manually turn on this switch before you run elastic import jobs. |
| |
REMOTE_CALL function | XIHE_REMOTE_CALL_SERVER_ENDPOINT | The internal service endpoint of the Function Compute service used when you remotely invoke a user-defined function in the Function Compute service using the REMOTE_CALL function. For more information, see Service endpoints. |
| |
XIHE_REMOTE_CALL_SERVER_AK | When invoking UDFs in Function Compute using the REMOTE_CALL function, specify the AccessKey ID of the Alibaba Cloud account or Resource Access Management (RAM) user that owns Function Compute. |
| ||
XIHE_REMOTE_CALL_SERVER_SK | When invoking UDFs in Function Compute using the REMOTE_CALL function, specify the AccessKey secret of the Alibaba Cloud account or RAM user that owns Function Compute. |
| ||
XIHE_REMOTE_CALL_COMPRESS_ENABLED | When invoking UDFs in Function Compute using the REMOTE_CALL function, specify whether to compress data in GZIP format before transmitting it to Function Compute. |
| ||
XIHE_REMOTE_CALL_MAX_BATCH_SIZE | When invoking UDFs in Function Compute using the REMOTE_CALL function, specify the number of data rows sent to Function Compute. |
| ||
Scan concurrency control | ADB_CONFIG SPLIT_FLOW_CONTROL_ENABLED | AnalyticDB for MySQL supports configuring scan concurrency quotas. This prevents node instability caused by excessive scan concurrency and resource consumption during queries. Valid values:
|
| |
ADB_CONFIG NODE_LEVEL_SPLIT_FLOW_CONTROL_ENABLED | AnalyticDB for MySQL supports dynamically adjusting the scan concurrency quota for tasks based on the overall scan concurrency quota of the node. Valid values:
|
| ||
MIN_RUNNING_SPLITS_LIMIT_PER_TASK | The minimum scan concurrency quota for tasks. Default value: 1. Valid values: [ |
| ||
TARGET_RUNNING_SPLITS_LIMIT_PER_TASK | The intermediate scan concurrency quota for tasks. Increase or decrease the scan concurrency quota based on this value. Default value: 32. Valid values: [ |
| ||
MAX_RUNNING_SPLITS_LIMIT_PER_TASK | The maximum scan concurrency quota for tasks. Default value: 64. The value must be greater than |
| ||
ADB_CONFIG WORKER_MAX_RUNNING_SOURCE_SPLITS_PER_NODE | The default scan concurrency quota for storage nodes is 256. An excessively large or small quota affects cluster performance. Do not modify the default value. |
| ||
ADB_CONFIG EXECUTOR_MAX_RUNNING_SOURCE_SPLITS_PER_NODE | The default scan concurrency quota for compute nodes is 256. An excessively large or small quota affects cluster performance. Do not modify the default value. |
| ||
Case sensitivity for logical views | VIEW_OUTPUT_NAME_CASE_SENSITIVE | Set case sensitivity for logical views. Valid values:
|
| |
Execute multiple SQL statements consecutively | ALLOW_MULTI_QUERIES | Specify whether to enable the Multi-Statement feature, which is required before you can execute multiple SQL statements consecutively. Valid values:
|
| |
Enable binary logging | BINLOG_ENABLE |
|
| |
Paging cache (performance optimization for deep paged queries) | PAGING_CACHE_SCHEMA | Specify the database that stores temporary cache tables for paged queries. If not specified, the currently connected internal database is used. |
| Paging cache (performance optimization for deep paged queries) |
PAGING_CACHE_MAX_TABLE_COUNT | Set the maximum number of temporary cache tables. If this number is exceeded, new cache creation fails. Default value: 100. The value must be an integer greater than 0. |
| ||
PAGING_CACHE_EXPIRATION_TIME | Specify the cache time-to-live (TTL) threshold. The system automatically clears paged query caches not accessed within the specified time. Default value: 600. The value must be an integer greater than 0. Unit: seconds (s). |
| ||
PAGING_CACHE_ENABLE | Specify whether to globally disable the Paging Cache feature. Valid values:
|
| ||
Specify table engine | RC_DDL_ENGINE_REWRITE_XUANWUV2 | Globally specify the table engine for the cluster. Valid values:
|
|
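The following statements are an illustrative sketch of the cluster-level syntax for several parameters in the preceding table; the values are placeholders rather than recommendations.

```sql
-- Raise the limit on the number of IN conditions.
SET ADB_CONFIG MAX_IN_ITEMS_COUNT=5000;

-- Set the cluster-level query timeout to 10 minutes (unit: milliseconds).
SET ADB_CONFIG QUERY_TIMEOUT=600000;

-- Allow at most 30 concurrent queries in the LOWEST queue on each frontend node.
SET ADB_CONFIG XIHE_ENV_QUERY_ETL_MAX_CONCURRENT_SIZE=30;

-- Raise the ACU cap for a single XIHE BSP job (default: 9, minimum: 3).
SET ADB_CONFIG ELASTIC_JOB_MAX_ACU=16;
```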
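A similar sketch for the cluster-level Paging Cache parameters; again, the values are placeholders.

```sql
-- Clear paged-query caches that have not been accessed for 30 minutes (unit: seconds).
SET ADB_CONFIG PAGING_CACHE_EXPIRATION_TIME=1800;

-- Allow up to 200 temporary cache tables for paged queries.
SET ADB_CONFIG PAGING_CACHE_MAX_TABLE_COUNT=200;
```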
Common Hints
Category | Parameter | Description | Example | Documentation Link |
Query timeout period | QUERY_TIMEOUT | Configure the query timeout period for a single query at the query level. The value must be an integer greater than 0. Unit: milliseconds (ms). |
| |
Timeout period for INSERT, UPDATE, and DELETE statements | INSERT_SELECT_TIMEOUT | Modify the maximum execution time for INSERT, UPDATE, and DELETE statements at the query level. Default value: 24×3600000. The value must be an integer greater than 0. Unit: milliseconds (ms). |
| |
Disable filter condition pushdown |
| Disable filter condition pushdown for specific fields at the query level. |
| |
Scheduling priority for BUILD jobs | BUILD_TASK_PRIORITY | Modify the scheduling priority for BUILD tasks. Currently, you can only use a hint to configure the BUILD scheduling priority for a single table. Default value: 0. The value must be an integer greater than or equal to 0. A larger value indicates a higher BUILD task scheduling priority. |
| |
Elastic import |
| Select elastic import and configure the Job resource group to execute elastic import tasks. |
| |
Adjust execution plan |
| A hash join builds the hash table from the right table, and a LEFT JOIN does not reorder the left and right tables. If the right table contains a large amount of data, execution can be slow and consume excessive memory. Add this hint to allow the optimizer to convert a LEFT JOIN to a RIGHT JOIN based on resource consumption. Valid values:
|
| |
REORDER_JOINS | AnalyticDB for MySQL enables automatic Join order adjustment by default. You can manually adjust the Join order based on query data characteristics, allowing queries to execute directly according to the Join order in the SQL statement. Valid values:
|
| ||
AGGREGATION_PATH_TYPE | When the two-step aggregation method is used and the number of groups is large, aggregation consumes significant compute resources. Use this hint to skip partial aggregation and perform final aggregation directly. Valid values:
|
| ||
Adjust runtime policy | HASH_PARTITION_COUNT | Each compute task is split into multiple concurrent tasks for execution, fully utilizing compute resources. Use this hint to set the concurrency. Default value: 32. The value must be an integer greater than 0. |
| None |
TASK_WRITER_COUNT | The concurrency for INSERT INTO SELECT import tasks. If the worker load is too high, modify this hint to reduce the concurrency. Default value: 16. The value must be an integer greater than 0. |
| ||
CTE_EXECUTION_MODE | Specify whether to enable CTE execution optimization. When CTE execution optimization is enabled, if a CTE subquery is referenced multiple times, it executes only once, improving the query performance for some queries. Valid values:
|
| ||
Query condition limits | QUERY_MAX_SHUFFLE_DATA_SIZE_MB | The maximum amount of shuffle data allowed for a query. By default, no limit is enforced regardless of how much data a query shuffles. If this parameter is set and the amount of shuffled data exceeds the threshold, the system terminates the query and returns an error. The value must be an integer greater than 0. Unit: MB. For an example, see the sketch after this table. |
| None |
MAX_SELECT_ITEMS_COUNT | Set the maximum number of items in the SELECT list. Default value: 1024. The value must be an integer greater than 0. |
| ||
MAX_IN_ITEMS_COUNT | Set the limit on the number of IN conditions. Default value: 4000. The value must be an integer greater than 0. |
| ||
MAX_WHERE_ITEMS_COUNT | Set the limit on the number of WHERE conditions. Default value: 256. The value must be an integer greater than 0. |
| ||
Priority of XIHE BSP jobs | QUERY_PRIORITY | The priority of a BSP job. Valid values:
|
| |
Maximum ACUs for XIHE BSP jobs | ELASTIC_JOB_MAX_ACU | The maximum number of ACUs that a single XIHE BSP job can use. Default value: 9. Minimum value: 3. Valid values: [3, maximum resource amount of the Job-type resource group]. |
| |
Timeout period for XIHE BSP job queries | BATCH_QUERY_TIMEOUT | The timeout period for XIHE BSP jobs. Default value: 7200000 ms. The value must be an integer greater than 0. Unit: milliseconds (ms). |
| |
OSS external table | OUTPUT_FILENAME | When exporting data using an OSS external table, add this hint before the export statement to customize the file name stored in OSS. |
| |
SQL_OUTPUT_OSS_FILE_HEAD_ENABLE | When exporting data using an OSS external table, the data files exported to OSS do not include column names by default; they contain only data. To export files that include column names, add this hint before the export statement. Valid values:
|
| ||
HIVE_SPLIT_ASYNC_GENERATION_ENABLED | When Hive generates an execution plan, specify whether to asynchronously submit Hive Split tasks to the queue for execution. Valid values:
Note: This parameter is supported only for AnalyticDB for MySQL clusters of version 3.1.10.1 or later. |
| ||
FAST_PARQUET_READER_ENABLE | Specifies whether to fall back to the native Parquet reader when the statistics in Parquet files are incorrect.
|
| ||
OSS_VECTORIZED_PARQUET_READER_ENABLE | If the data contains no NULL values, specify whether to enable vectorized Parquet reading. Valid values:
|
| ||
OSS_VECTORIZED_PARQUET_READER_NULL_SCENE_ENABLE | If the data contains NULL values, specify whether to enable vectorized Parquet reading. Valid values:
|
| ||
HUDI_METADATA_TABLE_ENABLED | Specify whether to load Hudi metadata. Valid values:
|
| ||
HUDI_QUERY_TYPE | Set the method for querying Hudi tables. Valid values:
|
| ||
HUDI_REALTIME_SKIP_MERGE | When querying Hudi table data in snapshot mode, specify whether to skip merging records from base files and log files. Valid values:
|
| ||
HUDI_MAX_SPLIT_SIZE | The maximum split size. Default value: 128. Unit: MB. |
| ||
HUDI_SPLIT_PARALLEL_GENERATION_ENABLED | Specify whether to generate splits in parallel. Valid values:
|
| ||
HUDI_DATA_SKIPPING_ENABLED | When reading data, use statistics from the Hudi MetaTable to skip unnecessary shards, improving query performance. Valid values:
|
| ||
HUDI_SPLIT_ASYNC_GENERATION_ENABLED | During the execution plan generation phase, specify whether to asynchronously generate Hudi splits. Valid values:
|
| ||
ApsaraDB RDS for MySQL external tables | JDBC_SCAN_SPLITS | The number of splits during a Java Database Connectivity (JDBC)-based TableScan. Default value: 1. Valid values: 1 to 100. |
| |
JDBC_SPLIT_COLUMN | The column used to split when reading an ApsaraDB RDS for MySQL external table. |
| ||
JDBC_INSERT_DIRECTLY | When writing data to an ApsaraDB RDS for MySQL external table, data is written to a temporary table first, then to the target table by default. You can modify the hint to write data directly to the target table. However, any errors during the write process may result in dirty data in the target table. Valid values:
|
| ||
Tablestore external tables | OTS-INSERT-AS-UPDATE | Specify whether to use INSERT statements instead of UPDATE statements. Valid values:
|
| |
MaxCompute external tables | ODPS_SPLIT_LIMIT | The maximum number of splits when reading data via Tunnel. Default value: 1000. Valid values: 1 to 1000. |
| |
IGNORE_PARTITION_CACHE | When executing a query with partition conditions, query partition metadata directly from the MaxCompute table instead of using cached partition metadata.
|
| ||
ROWS_PER_SPLIT | The maximum number of data rows a single split can read when reading data via Tunnel. Default value: 500000. Valid values: 10000 to 500000. |
| ||
STORAGE_API_ENABLED | Specifies whether to use the MaxCompute Storage API to read MaxCompute data. The MaxCompute Storage API greatly improves read performance compared with the traditional Tunnel service. Valid values:
|
| ||
SPLIT_BYTE_SIZE | When reading data via MaxCompute Storage API, the maximum size of a single split. Default value: 256. Unit: MB. |
| ||
MAX_BATCH_ROW_COUNT | When reading data via MaxCompute Storage API, the maximum number of rows in a single split. Default value: 1024. |
| ||
PAGE_BATCH_SIZE | When reading data via MaxCompute Storage API, the maximum number of rows each page contains. Default value: 1024. |
| ||
MAX_ALLOCATION_PER_SPLIT | When reading data via MaxCompute Storage API, the maximum peak memory allocated at the split level. Default value: 300. Unit: MB. |
| ||
ASYNC_GET_SPLIT | When MaxCompute generates an execution plan, specify whether to optimize split loading performance by asynchronously submitting tasks to the queue for execution. Valid values:
Note: This parameter is supported only for AnalyticDB for MySQL clusters of version 3.1.10.1 or later. |
| ||
Scan concurrency control | SPLIT_FLOW_CONTROL_ENABLED | AnalyticDB for MySQL supports configuring scan concurrency quotas. This prevents node instability caused by excessive scan concurrency and resource consumption during queries. Valid values:
|
| |
MIN_RUNNING_SPLITS_LIMIT_PER_TASK | The minimum scan concurrency quota for tasks. Default value: 1. Valid values: [ |
| ||
TARGET_RUNNING_SPLITS_LIMIT_PER_TASK | The intermediate scan concurrency quota for tasks. Increase or decrease the scan concurrency quota based on this value. Default value: 32. Valid values: [ |
| ||
MAX_RUNNING_SPLITS_LIMIT_PER_TASK | The maximum scan concurrency quota for tasks. Default value: 64. The value must be greater than |
| ||
Paging cache (performance optimization for deep paged queries) | PAGING_ID | Enable the Paging Cache feature and manually specify a paging ID. |
| Paging cache (performance optimization for deep paged queries) |
PAGING_CACHE_ENABLED | Enable the Paging Cache feature and automatically generate a paging ID.
|
| ||
PAGING_CACHE_VALIDITY_INTERVAL | Set the cache validity period. After this period, the cache becomes invalid. If the cache is not accessed within one hour after invalidation, the system automatically deletes the corresponding temporary cache table. The value must be an integer greater than 0. Unit: seconds (s). Use this parameter with |
| ||
INVALIDATE_PAGING_CACHE | Specify whether to clear the paged query cache for a specified pattern. Valid values:
This parameter must be used together with the |
| ||
Specify table engine | RC_DDL_ENGINE_REWRITE_XUANWUV2 | Specify the table engine for the table created by the current SQL statement. Valid values:
|
| |
Others | RESOURCE_GROUP | Specify the resource group used by the query. If you do not specify a resource group, the resource group bound to the database account is used by default. If the database account is not bound to a resource group, the default resource group is used. |
| None |
DML_EXCEPTION_TO_NULL_ENABLED | If invalid values appear in data written by an INSERT SELECT statement, specify whether to throw an exception. Valid values:
|
| ||
DISPLAY_HINT | Marks queries, for example queries submitted by different business parties, so that they can be easily identified. |
|
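The following statements are an illustrative sketch of the hint syntax for several query-control parameters in the preceding table; the table and resource group names are hypothetical, and the values are placeholders.

```sql
-- Per-query timeout of 5 minutes; takes precedence over the cluster-level QUERY_TIMEOUT.
/*+ QUERY_TIMEOUT=300000 */
SELECT region, COUNT(*) FROM sales_orders GROUP BY region;

-- Terminate the query with an error if it shuffles more than 10 GB of data.
/*+ QUERY_MAX_SHUFFLE_DATA_SIZE_MB=10240 */
SELECT * FROM sales_orders o JOIN customers c ON o.customer_id = c.id;

-- Run the query in a specific resource group instead of the group bound to the database account.
/*+ RESOURCE_GROUP=batch_rg */
INSERT INTO sales_orders_archive SELECT * FROM sales_orders;
```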
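A similar sketch for hints that tune Tunnel-based reads from a MaxCompute external table; the external table name is hypothetical, and the values stay within the documented ranges.

```sql
-- Cap the number of splits generated for the Tunnel-based read (valid values: 1 to 1000).
/*+ ODPS_SPLIT_LIMIT=500 */
SELECT * FROM odps_ext_orders;

-- Cap the number of rows read by a single split (valid values: 10000 to 500000).
/*+ ROWS_PER_SPLIT=100000 */
SELECT * FROM odps_ext_orders;
```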
FAQ
How do I delete an ADB_CONFIG parameter setting?
If you set a configuration parameter using SET ADB_CONFIG key=value;, use the following method to restore it to its default value.
Method | Example |
| |
After the command executes successfully, run SHOW ADB_CONFIG KEY=key; to confirm that the configuration parameter has been restored to its default value.
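For example, assuming that setting a parameter back to its documented default value performs the reset, MAX_IN_ITEMS_COUNT (default value: 4000) can be restored and then verified as follows:

```sql
-- Restore MAX_IN_ITEMS_COUNT to its documented default value.
SET ADB_CONFIG MAX_IN_ITEMS_COUNT=4000;

-- Confirm the current value of the parameter.
SHOW ADB_CONFIG KEY=MAX_IN_ITEMS_COUNT;
```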