
AnalyticDB: Config and hint configuration parameters

Last Updated: Mar 13, 2025

AnalyticDB for MySQL supports configuring parameters through the SET ADB_CONFIG command and hints. The SET ADB_CONFIG command is used to modify global configurations, which apply to the entire cluster after modification. When parameters are modified using hints, the hint only applies to the specified SQL and does not affect the global configuration. If both the SET ADB_CONFIG command and hints are used to configure parameters, the parameters configured using hints take precedence. This topic introduces common configuration parameters in AnalyticDB for MySQL.
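
For example, the following minimal sketch (which assumes a hypothetical table named t and illustrative timeout values) shows how the two scopes interact: the global setting applies to all subsequent queries, and the hint overrides it for a single statement.

-- Set a cluster-wide query timeout of 30 seconds.
SET ADB_CONFIG QUERY_TIMEOUT=30000;

-- Override the timeout for this query only; the hint value takes precedence over the global setting.
/*+ QUERY_TIMEOUT=5000 */ SELECT count(*) FROM t;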

Common config parameters

Category

Parameter

Description

Example

Document link

Switchover window

REPLICATION_SWITCH_TIME_RANGE

During the switchover from the existing cluster to the new cluster, the existing cluster supports only read-only operations for a period of 5 to 10 minutes. After you connect to the existing cluster, you can use the REPLICATION_SWITCH_TIME_RANGE parameter to specify the switchover window.

Note

If the switchover window is not specified, the existing cluster is switched over to a new cluster after the incremental data of the existing cluster is synchronized in real time to the new cluster.

SET ADB_CONFIG REPLICATION_SWITCH_TIME_RANGE=`23:00, 23:30`;

Data Lakehouse Edition scale-in and scale-out

Maximum number of IN conditions

MAX_IN_ITEMS_COUNT

The maximum number of IN conditions. The value must be a positive integer.

  • Default value for AnalyticDB for MySQL clusters of V3.1.8 or earlier: 2000.

  • Default value for AnalyticDB for MySQL clusters of V3.1.9.x to V3.1.10.x: 4000.

  • Default value for AnalyticDB for MySQL clusters of V3.2.1 or later: 20000.

SET ADB_CONFIG MAX_IN_ITEMS_COUNT=4000;

Write and query

Query timeout period

QUERY_TIMEOUT

The timeout period of all queries in a cluster. The value must be an integer that is greater than 0. Unit: milliseconds.

SET ADB_CONFIG QUERY_TIMEOUT=1000;

Timeout period of INSERT, UPDATE, and DELETE statements

INSERT_SELECT_TIMEOUT

The maximum execution duration of the INSERT, UPDATE, and DELETE statements in a cluster. Default value: 24 × 3600000. The value must be an integer that is greater than 0. Unit: milliseconds.

SET ADB_CONFIG INSERT_SELECT_TIMEOUT=3600000;

Limits

Filter conditions without pushdown

  • V3.1.4 and later: FILTER_NOT_PUSHDOWN_COLUMNS

  • Versions earlier than V3.1.4: NO_INDEX_COLUMNS

Disables filter condition pushdown for specific columns in a cluster.

  • V3.1.4 and later: SET ADB_CONFIG FILTER_NOT_PUSHDOWN_COLUMNS=[Schema1.tableName1:colName1|colName2;Schema2.tableName2:colName1|colName2]

  • Versions earlier than V3.1.4: SET ADB_CONFIG NO_INDEX_COLUMNS=[tableName1.colName1;colName2,tableName2.colName1]

Filter conditions without pushdown

Query execution mode

QUERY_TYPE

The query execution mode of a cluster. Valid values:

  • interactive

  • batch

Note

Data Lakehouse Edition reserved mode clusters, Enterprise Edition, Basic Edition, and Data Lakehouse Edition clusters do not support switching the query execution mode of an instance.

SET ADB_CONFIG QUERY_TYPE=interactive;

Query execution mode

Query queues

XIHE_ENV_QUERY_ETL_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for LOWEST queues on a single frontend node. Default value: 20.

SET ADB_CONFIG XIHE_ENV_QUERY_ETL_MAX_CONCURRENT_SIZE=20;

Priority queues and concurrency of interactive resource groups

XIHE_ENV_QUERY_ETL_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for LOWEST queues on a single frontend node. Default value: 200.

SET ADB_CONFIG XIHE_ENV_QUERY_ETL_MAX_QUEUED_SIZE=200;

XIHE_ENV_QUERY_LOW_PRIORITY_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for LOW queues on a single frontend node. Default value: 20.

SET ADB_CONFIG XIHE_ENV_QUERY_LOW_PRIORITY_MAX_CONCURRENT_SIZE=20;

XIHE_ENV_QUERY_LOW_PRIORITY_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for LOW queues on a single frontend node. Default value: 200.

SET ADB_CONFIG XIHE_ENV_QUERY_LOW_PRIORITY_MAX_QUEUED_SIZE=200;

XIHE_ENV_QUERY_NORMAL_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for NORMAL queues on a single frontend node. Default value: 20.

SET ADB_CONFIG XIHE_ENV_QUERY_NORMAL_MAX_CONCURRENT_SIZE=20;

XIHE_ENV_QUERY_NORMAL_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for NORMAL queues on a single frontend node. Default value: 200.

SET ADB_CONFIG XIHE_ENV_QUERY_NORMAL_MAX_QUEUED_SIZE=200;

XIHE_ENV_QUERY_HIGH_MAX_CONCURRENT_SIZE

The maximum number of queries that can be executed for HIGH queues on a single frontend node. Default value: 40.

SET ADB_CONFIG XIHE_ENV_QUERY_HIGH_MAX_CONCURRENT_SIZE=40;

XIHE_ENV_QUERY_HIGH_MAX_QUEUED_SIZE

The maximum number of queries that can be queued for HIGH queues on a single frontend node. Default value: 400.

SET ADB_CONFIG XIHE_ENV_QUERY_HIGH_MAX_QUEUED_SIZE=400;

Query priority

ELASTIC_JOB_MAX_ACU

Data import by using OSS external tables

HIVE_SPLIT_ASYNC_GENERATION_ENABLED

Specifies whether to asynchronously submit tasks that run splits to a queue for execution when Hive generates an execution plan. Valid values:

  • true: Yes.

  • false (default value): No.

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

SET ADB_CONFIG HIVE_SPLIT_ASYNC_GENERATION_ENABLED=true;

Data import by using MaxCompute external tables

SQL_OUTPUT_BATCH_SIZE

The maximum number of entries that can be imported at a time. The value must be an integer that is greater than 0.

SET ADB_CONFIG SQL_OUTPUT_BATCH_SIZE = 6000;

Import and export

ENABLE_ODPS_MULTI_PARTITION_PART_MATCH

Specifies whether to obtain the number of entries in each MaxCompute partition in advance.

SET ADB_CONFIG ENABLE_ODPS_MULTI_PARTITION_PART_MATCH=false;

ASYNC_GET_SPLIT

Specifies whether to asynchronously submit tasks that run splits to a queue for execution when MaxCompute generates an execution plan. Valid values:

  • true: Yes.

  • false (default value): No.

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

SET ADB_CONFIG ASYNC_GET_SPLIT=true;

Automatic scheduling for BUILD tasks

RC_CSTORE_BUILD_SCHEDULE_PERIOD

The time range within which you want to schedule BUILD tasks. This parameter applies to job scheduling, not job execution. The value must be an integer from 0 to 24.

SET ADB_CONFIG RC_CSTORE_BUILD_SCHEDULE_PERIOD=`6,8`;

Configure automatic scheduling for BUILD tasks

Scheduling priority of BUILD tasks

RC_BUILD_TASK_PRIORITY_LIST

The scheduling priorities of BUILD jobs for one or more tables. The configured priorities are effective until you configure new scheduling priorities of BUILD jobs for the tables.

The scheduling priority of BUILD jobs is controlled by the task_priority parameter. The value must be an integer. Default value: 0.

SET ADB_CONFIG RC_BUILD_TASK_PRIORITY_LIST = `<db1_name>.<table1_name>.<task_priority>;<db2_name>.<table2_name>.<task_priority>`;

BUILD

Elastic import

RC_ELASTIC_JOB_SCHEDULER_ENABLE

Enables the elastic import feature.

SET ADB_CONFIG RC_ELASTIC_JOB_SCHEDULER_ENABLE=true;

REMOTE_CALL function

XIHE_REMOTE_CALL_SERVER_ENDPOINT

When you use the REMOTE_CALL function to remotely call user-defined functions in Function Compute, the internal service registration address of Function Compute. For more information, see service registration address.

SET ADB_CONFIG XIHE_REMOTE_CALL_SERVER_ENDPOINT="1234567890000****.cn-zhangjiakou-internal.fc.aliyuncs.com"

User-defined function (UDF)

XIHE_REMOTE_CALL_SERVER_AK

When you use the REMOTE_CALL function to remotely call user-defined functions in Function Compute, the AccessKey ID of the Alibaba Cloud account or RAM user to which Function Compute belongs.

SET ADB_CONFIG XIHE_REMOTE_CALL_SERVER_AK=LTAI5t8sqJn5GhpBVtN8****;

XIHE_REMOTE_CALL_SERVER_SK

When you use the REMOTE_CALL function to remotely call user-defined functions in Function Compute, the AccessKey Secret of the Alibaba Cloud account or RAM user to which Function Compute belongs.

SET ADB_CONFIG XIHE_REMOTE_CALL_SERVER_SK=HlClegbiV5mJjBYBJHEZQOnRF7****;

XIHE_REMOTE_CALL_COMPRESS_ENABLED

Specifies whether to compress data in the GZIP format before the data is transmitted to Function Compute when you use the REMOTE_CALL function to remotely call user-defined functions in Function Compute.

SET ADB_CONFIG XIHE_REMOTE_CALL_COMPRESS_ENABLED=true;

XIHE_REMOTE_CALL_MAX_BATCH_SIZE

The number of data rows that are sent to Function Compute at a time when you use the REMOTE_CALL function to remotely call user-defined functions in Function Compute.

SET ADB_CONFIG XIHE_REMOTE_CALL_MAX_BATCH_SIZE=500000;

Split flow control

SPLIT_FLOW_CONTROL_ENABLED

AnalyticDB for MySQL supports configuring the quota for concurrent split scans to prevent node instability caused by excessive concurrent split scans that occupy too many resources. Valid values:

  • true (default value): Enables split flow control.

  • false: Disables split flow control.

SET ADB_CONFIG SPLIT_FLOW_CONTROL_ENABLED=true;

Split flow control

NODE_LEVEL_SPLIT_FLOW_CONTROL_ENABLED

AnalyticDB for MySQL supports dynamically adjusting the quota for concurrent split scans in a task based on the overall quota for concurrent split scans on a node. Valid values:

  • true: Enables dynamic adjustment of the quota for concurrent split scans in a task.

  • false (default value): Disables dynamic adjustment of the quota for concurrent split scans in a task.

SET ADB_CONFIG NODE_LEVEL_SPLIT_FLOW_CONTROL_ENABLED=true;

MIN_RUNNING_SPLITS_LIMIT_PER_TASK

The minimum quota for concurrent split scans in a task. Default value: 1. Valid values: [1,TARGET_RUNNING_SPLITS_LIMIT_PER_TASK].

SET ADB_CONFIG MIN_RUNNING_SPLITS_LIMIT_PER_TASK=24;

TARGET_RUNNING_SPLITS_LIMIT_PER_TASK

The intermediate quota for concurrent split scans in a task. The actual quota is dynamically adjusted based on the value of this parameter. Default value: 32. Valid values: [MIN_RUNNING_SPLITS_LIMIT_PER_TASK,MAX_RUNNING_SPLITS_LIMIT_PER_TASK].

SET ADB_CONFIG TARGET_RUNNING_SPLITS_LIMIT_PER_TASK=32;

MAX_RUNNING_SPLITS_LIMIT_PER_TASK

The maximum quota for concurrent split scans in a task. Default value: 64. The value must be greater than the value of TARGET_RUNNING_SPLITS_LIMIT_PER_TASK.

SET ADB_CONFIG MAX_RUNNING_SPLITS_LIMIT_PER_TASK=128;

WORKER_MAX_RUNNING_SOURCE_SPLITS_PER_NODE

By default, the quota for concurrent split scans on a storage node is 256. We recommend that you do not change the default value because an excessively large or small quota can affect the cluster performance.

SET ADB_CONFIG WORKER_MAX_RUNNING_SOURCE_SPLITS_PER_NODE=256;

EXECUTOR_MAX_RUNNING_SOURCE_SPLITS_PER_NODE

By default, the quota for concurrent split scans on a compute node is 256. We recommend that you do not change the default value because an excessively large or small quota can affect the cluster performance.

SET ADB_CONFIG EXECUTOR_MAX_RUNNING_SOURCE_SPLITS_PER_NODE=256;

Case-sensitivity of logical view names

VIEW_OUTPUT_NAME_CASE_SENSITIVE

Specifies whether to set the name of a logical view as case-sensitive. Valid values:

  • true: Case-sensitive.

  • false (default value): Case-insensitive.

SET ADB_CONFIG VIEW_OUTPUT_NAME_CASE_SENSITIVE=true;

CREATE VIEW

Consecutive execution of multiple SQL statements

ALLOW_MULTI_QUERIES

Specifies whether to enable the multi-statement feature for the consecutive execution of multiple SQL statements. Valid values:

  • true: Enable.

  • false (default value): Disable.

SET ADB_CONFIG ALLOW_MULTI_QUERIES=true;

Multi-Statement

Enable binary logging

BINLOG_ENABLE

  • You must manually enable the binary logging feature for AnalyticDB for MySQL clusters earlier than V3.2.0.0. Then, you can enable the feature for tables.

  • By default, the binary logging feature is enabled for AnalyticDB for MySQL clusters of V3.2.0.0 or later. You do not need to specify this parameter.

SET ADB_CONFIG BINLOG_ENABLE=true;

Refresh materialized views

Paging cache (performance optimization for deep paged queries)

PAGING_CACHE_SCHEMA

The database that is used to store temporary cache tables for paged queries. If you do not specify this parameter, the connected internal database is used.

SET ADB_CONFIG PAGING_CACHE_SCHEMA=paging_cache;

Paging cache (performance optimization for deep paged queries)

PAGING_CACHE_MAX_TABLE_COUNT

The maximum number of temporary cache tables. If the number of temporary cache tables exceeds the value of this parameter, you cannot create more cache tables. Default value: 32. The value must be an integer that is greater than 0.

SET ADB_CONFIG PAGING_CACHE_MAX_TABLE_COUNT=32;

PAGING_CACHE_EXPIRATION_TIME

The cache expiration time. The cache data of paged queries that is not accessed within the specified time range is automatically cleared. Default value: 3600. The value must be an integer that is greater than 0. Unit: seconds.

SET ADB_CONFIG PAGING_CACHE_EXPIRATION_TIME=3600;

Common hints

Category

Parameter

Description

Example

Document link

Query timeout period

QUERY_TIMEOUT

The timeout period of a single query. The value must be an integer that is greater than 0. Unit: milliseconds.

/*+ QUERY_TIMEOUT=1000 */select count(*) from t;

Write and query

Timeout period of INSERT, UPDATE, and DELETE statements

INSERT_SELECT_TIMEOUT

The maximum execution duration of the INSERT, UPDATE, and DELETE statements in a query. Default value: 24 × 3600000. The value must be an integer that is greater than 0. Unit: milliseconds.

/*+ INSERT_SELECT_TIMEOUT=3600000 */update customer set customer_name ='adb' where customer_id ='2369';

Limits

Filter conditions without pushdown

  • V3.1.4 and later: FILTER_NOT_PUSHDOWN_COLUMNS

  • Versions earlier than V3.1.4: NO_INDEX_COLUMNS

Disables filter condition pushdown for specific columns in a query.

  • V3.1.4 and later: /*+ FILTER_NOT_PUSHDOWN_COLUMNS=[Schema1.table1:colName1|colName2;Schema2.table2:colName1|colName2] */

  • Versions earlier than V3.1.4: /*+ NO_INDEX_COLUMNS=[tableName1.colName1;colName2,tableName2.colName1] */

Filter conditions without pushdown

Scheduling priority of BUILD tasks

build_task_priority

The scheduling priority of BUILD jobs. You can use a hint to configure the scheduling priority of BUILD jobs for only one table at a time. Default value: 0. The value must be an integer that is greater than or equal to 0. A greater value specifies a higher scheduling priority.

/*+ BUILD_TASK_PRIORITY = 2*/ BUILD TABLE db.table;

Configure scheduling priority of BUILD tasks

Elastic import

  • elastic_load

  • elastic_load_configs

Selects elastic import and configures a resource group for the import job.

/*+ ELASTIC_LOAD=true, ELASTIC_LOAD_CONFIGS=[adb.load.resource.group.name=resource_group]*/

SUBMIT JOB INSERT OVERWRITE INTO adb_demo.adb_import_test SELECT * FROM adb_external_db.person;

Execution plan

  • V3.1.8 and later: O_CBO_RULE_SWAP_OUTER_JOIN

  • Versions earlier than V3.1.8: LEFT_TO_RIGHT_ENABLED

The hash join algorithm uses the right table to build a hash table, and LEFT JOIN does not reorder the left and right tables. As a result, a large right table may cause issues such as slow execution and excessive memory consumption. You can add a hint before an SQL statement to allow the optimizer to change LEFT JOIN to RIGHT JOIN based on the resource consumption. Valid values:

  • true: Enable.

  • false (default value): Disable.

  • /*+ O_CBO_RULE_SWAP_OUTER_JOIN=true*/

  • /*+ LEFT_TO_RIGHT_ENABLED=true*/
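
For example, a minimal sketch for a cluster of V3.1.8 or later, assuming hypothetical tables t1 and t2:

/*+ O_CBO_RULE_SWAP_OUTER_JOIN=true*/
SELECT t1.id, t2.amount
FROM t1
LEFT JOIN t2 ON t1.id = t2.id;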

Optimize LEFT JOIN to RIGHT JOIN

reorder_joins

AnalyticDB for MySQL automatically adjusts join orders by default. To execute queries based on the join orders specified in SQL statements, you can disable automatic adjustment and manually adjust join orders based on the characteristics of the queried data. Valid values:

  • true (default value): Enable automatic adjustment of join orders.

  • false: Disable automatic adjustment of join orders.

/*+ REORDER_JOINS=false*/

Manually adjust join orders

aggregation_path_type

When the two-step aggregation method is used, queries that produce a large number of groups in grouping and aggregation scenarios consume a large amount of computing resources. You can set this hint parameter to single_agg to skip partial aggregation and directly perform final aggregation for a query. Valid values:

  • single_agg: Skip partial aggregation.

  • auto (default value): The optimizer determines whether to skip partial aggregation.

/*+ AGGREGATION_PATH_TYPE=single_agg*/

Optimize grouping and aggregation queries

Runtime policy

hash_partition_count

The number of concurrent tasks into which each computing job can be divided. Task division helps make full use of computing resources. You can use this hint parameter to specify the concurrency of tasks. Default value: 32. The value must be an integer that is greater than 0.

/*+ HASH_PARTITION_COUNT=32*/

None

task_writer_count

The number of concurrent import jobs executed by using the INSERT INTO SELECT statement. If a storage node is overloaded, you can use this hint parameter to reduce the number of concurrent import jobs. Default value: 16. The value must be an integer that is greater than 0.

/*+ TASK_WRITER_COUNT=8*/
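
For example, a sketch that assumes hypothetical tables target_table and source_table; the hint reduces the write concurrency of a single import job:

/*+ TASK_WRITER_COUNT=8*/ INSERT INTO target_table SELECT * FROM source_table;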

cte_execution_mode

Specifies whether to enable the common table expression (CTE) execution optimization feature. If a CTE subquery is referenced repeatedly after the CTE execution optimization feature is enabled, the subquery can be executed only once to improve the performance of specific queries. Valid values:

  • shared: Enable the CTE execution optimization feature.

  • inline (default value): Disable the CTE execution optimization feature.

/*+ CTE_EXECUTION_MODE=shared*/

WITH

Limits on query conditions

query_max_shuffle_data_size_mb

The maximum amount of shuffled data for a query. By default, no error is returned regardless of the amount of queried data. If you specify this parameter and the amount of queried data in a query exceeds the specified value, the system terminates the query and returns an error. The value must be an integer that is greater than 0. Unit: MB.

/*+ QUERY_MAX_SHUFFLE_DATA_SIZE_MB=1024*/

None

max_select_items_count

The maximum number of SELECT conditions. Default value: 1024. The value must be an integer that is greater than 0.

/*+ MAX_SELECT_ITEMS_COUNT=2048*/

max_in_items_count

The maximum number of IN conditions. Default value: 4000. The value must be an integer that is greater than 0.

/*+ MAX_IN_ITEMS_COUNT=1000*/

max_where_items_count

The maximum number of WHERE conditions. Default value: 256. The value must be an integer that is greater than 0.

/*+ MAX_WHERE_ITEMS_COUNT=512*/

OSS external tables

output_filename

The name of the OSS object to which you want to export data. When you use an OSS external table to export data, you can add this hint before the data export statement to specify the name of an OSS object.

/*+ OUTPUT_FILENAME=output.csv*/
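
For example, a sketch that assumes a hypothetical OSS external table oss_export_table and a hypothetical local table t; the hint is added before the export statement:

/*+ OUTPUT_FILENAME=output.csv*/ INSERT INTO oss_export_table SELECT * FROM t;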

Export to OSS

sql_output_oss_file_head_enable

Specifies whether to include column names in the data exported to OSS. By default, the data exported to OSS by using an OSS external table does not contain column names. If you want the data to have column names, add the /*+ SQL_OUTPUT_OSS_FILE_HEAD_ENABLE=true*/ hint before the export statement. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ SQL_OUTPUT_OSS_FILE_HEAD_ENABLE=true*/

hive_split_async_generation_enabled

Specifies whether to asynchronously submit tasks that run splits to a queue for execution when Hive generates an execution plan. Valid values:

  • true: Yes.

  • false (default value): No.

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

/*+ HIVE_SPLIT_ASYNC_GENERATION_ENABLED=true*/

Import to Data Lakehouse Edition by using external tables

fast_parquet_reader_enable

Specifies whether to roll back to the native Parquet reader if the statistics in a Parquet file are incorrect. Valid values:

  • true (default value): Yes.

  • false: No.

/*+ FAST_PARQUET_READER_ENABLE=true*/

oss_vectorized_parquet_reader_enable

Specifies whether to enable vectorized reading of Parquet files when the data does not contain NULL values. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ OSS_VECTORIZED_PARQUET_READER_ENABLE=true*/

oss_vectorized_parquet_reader_null_scene_enable

Specifies whether to enable vectorized reading of Parquet files when the data contains NULL values. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ OSS_VECTORIZED_PARQUET_READER_NULL_SCENE_ENABLE=true*/

hudi_metadata_table_enabled

Specifies whether to load Hudi metadata. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ HUDI_METADATA_TABLE_ENABLED=true*/

hudi_query_type

The mode in which you want to query Hudi tables. Valid values:

  • snapshot: Queries all committed or compacted data at a specified time.

  • read_optimized (default value): Queries data after the last commit or compaction.

/*+ HUDI_QUERY_TYPE=snapshot*/

hudi_realtime_skip_merge

Specifies whether to skip the merging of base files and log files when Hudi tables are queried in snapshot mode. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ HUDI_REALTIME_SKIP_MERGE=true*/

hudi_max_split_size

The maximum size of a split. Default value: 128. Unit: MB.

/*+ HUDI_MAX_SPLIT_SIZE=128*/

hudi_split_parallel_generation_enabled

Specifies whether to generate splits in parallel. Valid values:

  • true (default value): Yes.

  • false: No.

/*+ HUDI_SPLIT_PARALLEL_GENERATION_ENABLED=true*/

hudi_data_skipping_enabled

Specifies whether to use the statistics in Hudi MetaTable to skip splits that do not need to be read to improve query performance. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ HUDI_DATA_SKIPPING_ENABLED=true*/

hudi_split_async_generation_enabled

Specifies whether to asynchronously generate Hudi splits during the generation of an execution plan. Valid values:

  • true (default value): Yes.

  • false: No.

/*+ HUDI_SPLIT_ASYNC_GENERATION_ENABLED=true*/

RDS MySQL external tables

jdbc_scan_splits

The number of splits that are used when a TableScan operation is performed over JDBC. Default value: 1. Valid values: 1 to 100.

/*+ JDBC_SCAN_SPLITS=1*/

Import to Data Lakehouse Edition by using external tables

jdbc_split_column

The name of the column that is used to divide splits when you read an ApsaraDB RDS for MySQL external table.

/*+ JDBC_SPLIT_COLUMN=column1*/

jdbc_insert_directly

Specifies whether to directly write data to the destination table. By default, when you use an ApsaraDB RDS for MySQL external table to write data, the data is first written to a temporary table and then to the destination table. If you write data directly to the destination table and errors occur during the write process, the destination table may contain dirty data. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ JDBC_INSERT_DIRECTLY=true*/

Tablestore external tables

ots-insert-as-update

Specifies whether to replace the UPDATE statement with the INSERT statement. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ OTS-INSERT-AS-UPDATE=true*/

Query and import Tablestore data

MaxCompute external tables

odps_split_limit

The maximum number of splits allowed when you use Tunnel to read data. Default value: 1000. Valid values: 1 to 1000.

/*+ ODPS_SPLIT_LIMIT=1000*/

Import to Data Lakehouse Edition by using external tables

ignore_partition_cache

Specifies whether to directly query the partition metadata from the MaxCompute table instead of using the cached partition metadata if a query is executed based on partitioning conditions. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ IGNORE_PARTITION_CACHE=true*/

rows_per_split

The maximum number of rows that a single split can contain when you use Tunnel to read data. Default value: 500000. Valid values: 10000 to 500000.

/*+ ROWS_PER_SPLIT=500000*/

storage_api_enabled

Specifies whether to use the MaxCompute Storage API to read MaxCompute data. The MaxCompute Storage API significantly improves performance compared to the traditional Tunnel method. Valid values:

  • true: Yes.

  • false (default value): No.

/*+ STORAGE_API_ENABLED=true*/

split_byte_size

The maximum size of a single split when data is read by using the MaxCompute Storage API. Default value: 256. Unit: MB.

/*+ SPLIT_BYTE_SIZE=256*/

max_batch_row_count

The maximum number of rows that a single split can contain when data is read by using the MaxCompute Storage API. Default value: 1024.

/*+ MAX_BATCH_ROW_COUNT=1024*/

page_batch_size

The maximum number of rows that each page can contain when data is read by using the MaxCompute Storage API. Default value: 1024.

/*+ PAGE_BATCH_SIZE=1024*/

max_allocation_per_split

The maximum peak memory that can be allocated to a single split when data is read by using the MaxCompute Storage API. Default value: 300. Unit: MB.

/*+ MAX_ALLOCATION_PER_SPLIT=300*/

async_get_split

Specifies whether to asynchronously submit tasks that run splits to a queue for execution when MaxCompute generates an execution plan. Valid values:

  • true: Yes.

  • false (default value): No.

Note

This parameter is supported only for AnalyticDB for MySQL clusters of V3.1.10.1 or later.

/*+ ASYNC_GET_SPLIT=true*/

Split flow control

SPLIT_FLOW_CONTROL_ENABLED

AnalyticDB for MySQL supports configuring the quota for concurrent split scans to prevent node instability caused by excessive concurrent split scans that occupy too many resources. Valid values:

  • true (default value): Enables split flow control.

  • false: Disables split flow control.

/*+ SPLIT_FLOW_CONTROL_ENABLED=true*/ SELECT * FROM table;

Split flow control

MIN_RUNNING_SPLITS_LIMIT_PER_TASK

The minimum quota for concurrent split scans in a task. Default value: 1. Valid values: [1,TARGET_RUNNING_SPLITS_LIMIT_PER_TASK].

/*+ MIN_RUNNING_SPLITS_LIMIT_PER_TASK=10*/SELECT * FROM orders;

TARGET_RUNNING_SPLITS_LIMIT_PER_TASK

The intermediate quota for concurrent split scans in a task. The actual quota is dynamically increased or decreased based on the value of this parameter. Default value: 32. Valid values: [MIN_RUNNING_SPLITS_LIMIT_PER_TASK,MAX_RUNNING_SPLITS_LIMIT_PER_TASK].

/*+ TARGET_RUNNING_SPLITS_LIMIT_PER_TASK=32*/SELECT * FROM adb_test;

MAX_RUNNING_SPLITS_LIMIT_PER_TASK

The maximum quota for concurrent split scans in a task. Default value: 64. The value must be greater than the value of TARGET_RUNNING_SPLITS_LIMIT_PER_TASK.

/*+ MAX_RUNNING_SPLITS_LIMIT_PER_TASK=100*/SELECT * FROM adb_test;

Paging cache (performance optimization for deep paged queries)

paging_id

Enables the paging cache feature and manually specifies a paging_id to identify a set of related paged query caches (that is, a set of queries with the same pattern but different LIMIT and OFFSET parameters).

/*+ PAGING_ID=paging123*/ SELECT * FROM t_order ORDER BY id LIMIT 0, 100;
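
For example, a sketch of two pages of the same query sharing one cache by reusing the paging_id (t_order is the table from the example above); only the LIMIT and OFFSET values differ:

/*+ PAGING_ID=paging123*/ SELECT * FROM t_order ORDER BY id LIMIT 0, 100;
/*+ PAGING_ID=paging123*/ SELECT * FROM t_order ORDER BY id LIMIT 100, 100;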

Paging cache (performance optimization for deep paged queries)

paging_cache_enabled

Enables the paging cache feature and automatically generates a paging_id based on the SQL pattern to identify a set of related paged query caches (that is, a set of queries with the same pattern but different LIMIT and OFFSET parameters). Valid values:

  • true: Enable the paging cache feature.

  • false (default value): Disable the paging cache feature.

/*+ PAGING_CACHE_ENABLED=true*/ SELECT * FROM t_order ORDER BY id LIMIT 0, 100;

paging_cache_validity_interval

The validity period of a cache table. After the validity period expires, the cache table becomes invalid. If the cache table is not accessed within 1 hour after it becomes invalid, the system automatically deletes the cache table. The value must be an integer that is greater than 0. Unit: seconds. This parameter must be used in conjunction with paging_cache_enabled.

/*+ PAGING_CACHE_ENABLED=true,PAGING_CACHE_VALIDITY_INTERVAL=300*/ SELECT * FROM t_order ORDER BY id LIMIT 0, 100;

invalidate_paging_cache

Specifies whether to clear the cache data of paged queries specified by an SQL pattern. Valid values:

  • true: Yes.

  • false (default value): No.

This parameter must be used in conjunction with paging_id or paging_cache_enabled.

/*+ PAGING_CACHE_ENABLED=true,INVALIDATE_PAGING_CACHE=true*/ SELECT * FROM t_order ORDER BY id LIMIT 0, 100;

Others

resource_group

The resource group used by the query. If you do not specify a resource group, the resource group that is associated with the database account is used. If the database account is not associated with a resource group, the default resource group is used.

/*+ RESOURCE_GROUP=my_group_name*/

None

dml_exception_to_null_enabled

Specifies whether to write NULL values without throwing an exception if the data written by the INSERT SELECT statement contains an invalid value. Valid values:

  • true: Do not throw an exception and write NULL values.

  • false (default value): Throw an exception.

/*+ DML_EXCEPTION_TO_NULL_ENABLED=true*/

display_hint

The identifier of the query, such as a string that specifies the user who executes the query.

/*+ DISPLAY_HINT=from_user_123*/

FAQ

How do I delete an ADB_CONFIG configuration?

If you have set a parameter by using the SET ADB_CONFIG key=value; command and want to reset it to its default value, use the following method.

Method

SET ADB_CONFIG REMOVE key;

Example

SET ADB_CONFIG REMOVE QUERY_TIMEOUT; -- Restores the query timeout period to the default value.

After the command is executed, you can run the SHOW ADB_CONFIG KEY=key; command to verify that the parameter has reverted to its default value.
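
For example, a minimal end-to-end sketch, assuming that QUERY_TIMEOUT was previously modified and that 60000 is an illustrative value:

SET ADB_CONFIG QUERY_TIMEOUT=60000;   -- Set a custom query timeout.
SET ADB_CONFIG REMOVE QUERY_TIMEOUT;  -- Remove the custom value and restore the default.
SHOW ADB_CONFIG KEY=QUERY_TIMEOUT;    -- Check the current value of the parameter.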