ApsaraDB for SelectDB variables control query behavior, session settings, and cluster-wide configuration. Variables fall into two categories: those specific to ApsaraDB for SelectDB, and those compatible with the MySQL client protocol.
How variables work
Variables can be set at two scopes:
Session scope: Takes effect only for the current connection. Resets when the session ends.
Global scope: Takes effect for the current session and all subsequent sessions. Only the admin user can set global variables.
When multiple levels are configured, settings take effect in this priority order (highest to lowest):
Session variables
User properties
Global variables
Default values
If a higher-priority setting is not configured, the next level takes effect automatically.
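A minimal sketch of this precedence, assuming MySQL-style @@session/@@global introspection (the numeric values are arbitrary examples):

```sql
-- Admin sets a global default; a session-level setting overrides it for that connection.
SET GLOBAL query_timeout = 1800;   -- new sessions default to 1800 s
SET query_timeout = 60;            -- the current session uses 60 s
SELECT @@session.query_timeout, @@global.query_timeout;
-- This connection times out queries after 60 s; other sessions still use the global 1800 s.
```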
Query variables
Run SHOW VARIABLES to list all variables and their current values:
SHOW VARIABLES;
To filter by name pattern:
SHOW VARIABLES LIKE '%time_zone%';
Set variables
Set a session variable
Session variables take effect immediately and apply only to the current connection:
SET exec_mem_limit = 137438953472;
SET forward_to_master = true;
SET time_zone = "Asia/Shanghai";
Variables also accept constant expressions:
SET exec_mem_limit = 10 * 1024 * 1024 * 1024;
SET forward_to_master = concat('tr', 'u', 'e');
Set a global variable
Global variables take effect for the current session and all subsequent sessions:
SET GLOBAL exec_mem_limit = 137438953472;
Only the admin user can set global variables.
The following variables support both session and global scope:
time_zone, wait_timeout, sql_mode, enable_profile, query_timeout, insert_timeout, exec_mem_limit, batch_size, allow_partition_column_nullable, insert_visible_timeout_ms, enable_fold_constant_by_be
The following variables support global scope only:
default_rowset_type, default_password_lifetime, password_history, validate_password_policy
Override a variable for a single query
Use the SET_VAR hint to set a session variable for a single statement without changing the session state:
SELECT /*+ SET_VAR(exec_mem_limit = 8589934592) */ name FROM people ORDER BY name;
SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
The hint comment must start with /*+ and immediately follow the SELECT keyword.
Time zone variables
ApsaraDB for SelectDB provides two time zone variables:
| Variable | Description | Modifiable |
|---|---|---|
system_time_zone | System time zone of the cluster. Set automatically when the server starts, based on the server's OS time zone. | No |
time_zone | Current time zone of the cluster. Affects time functions and imported data with time zone offsets. | Yes |
Query and set time zone variables
To view both time zone variables:
SHOW VARIABLES LIKE '%time_zone%';
To set the session time zone:
SET time_zone = 'Asia/Shanghai';
To set the cluster time zone for all subsequent sessions:
SET GLOBAL time_zone = 'Asia/Shanghai';
Supported time zone formats
| Format type | Examples |
|---|---|
| Named (IANA tz database) | Asia/Shanghai, America/Los_Angeles |
| UTC offset | +02:30, -10:00 |
| Abbreviations | GMT, UTC (equivalent to +00:00); CST (equivalent to Asia/Shanghai) |
| Zulu | Z (equivalent to +00:00) |
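Each of these formats can be assigned to time_zone directly (a sketch; the offset values are illustrative):

```sql
SET time_zone = 'Asia/Shanghai';   -- named IANA zone
SET time_zone = '+02:30';          -- fixed UTC offset
SET time_zone = 'UTC';             -- abbreviation, equivalent to +00:00
```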
Some data import methods may accept additional time zone formats not listed above. Do not rely on non-standard formats in production — ApsaraDB for SelectDB may change how it handles them at any time.
How time zone settings affect functions and data
Affected functions
The following functions return results based on the current time_zone setting:
| Function | Behavior |
|---|---|
FROM_UNIXTIME | Converts a UTC timestamp to date and time in the configured time zone. With CST, FROM_UNIXTIME(0) returns 1970-01-01 08:00:00. |
UNIX_TIMESTAMP | Converts a date and time to a UTC timestamp. With CST, UNIX_TIMESTAMP('1970-01-01 08:00:00') returns 0. |
CURTIME | Returns the current time in the configured time zone. |
NOW | Returns the current date and time in the configured time zone. |
CONVERT_TZ | Converts a date and time from one time zone to another. |
SHOW LOAD and SHOW BACKENDS output also reflects the configured time zone.
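For example, the same functions return different results after time_zone changes (a sketch using the values from the table above):

```sql
SET time_zone = '+08:00';                      -- CST-equivalent offset
SELECT FROM_UNIXTIME(0);                       -- 1970-01-01 08:00:00
SELECT UNIX_TIMESTAMP('1970-01-01 08:00:00');  -- 0
SET time_zone = '+00:00';
SELECT FROM_UNIXTIME(0);                       -- 1970-01-01 00:00:00
SELECT CONVERT_TZ('1970-01-01 08:00:00', '+08:00', '+00:00');  -- 1970-01-01 00:00:00
```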
Not affected by time zone settings:
The LESS THAN values of time-type partition key columns in CREATE TABLE statements.
Values stored as the DATE or DATETIME type (the stored value itself does not change).
Time type data during import
When importing data of the DATE, DATEV2, DATETIME, or DATETIMEV2 type, ApsaraDB for SelectDB handles time zone conversion as follows:
Data with a time zone suffix (e.g., 2020-12-12 12:12:12+08:00): Converted to the cluster's current time_zone. For example, if time_zone is +00:00, the stored value becomes 2020-12-12 04:12:12.
Data without a time zone suffix (e.g., 2020-12-12 12:12:12): Treated as absolute time and imported as-is with no conversion.
Daylight saving time
Daylight saving time adjustments are embedded in named time zones. For example, America/Los_Angeles shifts from -08:00 to -07:00 in March and back in November.
To avoid daylight saving time shifts, set time_zone to a fixed UTC offset (e.g., -08:00) instead of a named zone like America/Los_Angeles.
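The difference is visible with CONVERT_TZ on dates inside and outside the DST window (a sketch; America/Los_Angeles observes DST roughly from March to November):

```sql
-- Winter: America/Los_Angeles is UTC-8
SELECT CONVERT_TZ('2022-01-15 12:00:00', '+00:00', 'America/Los_Angeles');  -- 2022-01-15 04:00:00
-- Summer: America/Los_Angeles is UTC-7 (DST)
SELECT CONVERT_TZ('2022-07-15 12:00:00', '+00:00', 'America/Los_Angeles');  -- 2022-07-15 05:00:00
-- A fixed offset never shifts:
SELECT CONVERT_TZ('2022-07-15 12:00:00', '+00:00', '-08:00');               -- 2022-07-15 04:00:00
```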
Time zone data sources
On each backend (BE) node, ApsaraDB for SelectDB searches the following locations for time zone data, in order:
The directory specified by the TZDIR environment variable
/usr/share/zoneinfo
The zoneinfo directory in the BE deployment directory
If no time zone data is found in any location, the BE node fails to start.
Best practices
Time zone-sensitive data
Before importing data, decide how to handle time zone information consistently. Data of time types such as DATETIME does not store a time zone — ApsaraDB for SelectDB distinguishes two categories:
Absolute time: No time zone in the data, no time zone configured at import time. Example: 2023-12-12 08:00:00. Stored and queried as-is, unaffected by time_zone.
Time in a specific time zone: Data includes a time zone suffix. Example: 2020-12-12 12:12:12+02:00. The import method determines which time zone setting applies:
INSERT INTO: Governed by the time_zone session variable.
Stream Load or Broker Load: Governed by the timezone request header. If the timezone header conflicts with the time_zone session variable, the timezone header takes precedence.
ApsaraDB for SelectDB converts the literal time zone in the incoming data to the time zone configured by time_zone at import time.
Recommended approach: Set time_zone once before importing, then keep it consistent. Changing time_zone after importing timezone-aware data causes the stored values to be misinterpreted in subsequent queries.
The following example shows what happens when time_zone is changed mid-session:
-- Current time zone: Asia/Hong_Kong
SELECT @@time_zone;
-- +----------------+
-- | @@time_zone |
-- +----------------+
-- | Asia/Hong_Kong |
-- +----------------+
-- Import data with literal time zone +02:00
-- Stored as 2020-12-12 18:12:12 (converted to Asia/Hong_Kong)
INSERT INTO dtv23 VALUES('2020-12-12 12:12:12+02:00');
SELECT * FROM dtv23;
-- +-------------------------+
-- | k0 |
-- +-------------------------+
-- | 2020-12-12 18:12:12.000 |
-- +-------------------------+
-- Changing time_zone does not update stored values
SET time_zone = 'America/Los_Angeles';
-- A second import now uses America/Los_Angeles for conversion
INSERT INTO dtv23 VALUES('2020-12-12 12:12:12+02:00');
SELECT * FROM dtv23;
-- Rows now carry inconsistent time zone semantics
-- +-------------------------+
-- | k0 |
-- +-------------------------+
-- | 2020-12-12 02:12:12.000 | -- imported under America/Los_Angeles
-- | 2020-12-12 18:12:12.000 | -- imported under Asia/Hong_Kong
-- +-------------------------+
Summary of time zone best practices:
Confirm the cluster's time_zone before importing data and do not change it afterward.
When using Stream Load or Broker Load, set the timezone header to the same value as the time_zone session variable.
For absolute time data, import without a time zone suffix. For time zone-aware data, include the time zone suffix and let ApsaraDB for SelectDB handle the conversion.
Daylight saving time accuracy
Named time zone data (including daylight saving time start and end dates) is maintained by the Internet Assigned Numbers Authority (IANA). The data stored on each BE node may not reflect the most recent changes confirmed by IANA for the current year.
If your workload requires the latest daylight saving time schedule, make sure the time zone data source on each BE node contains the latest IANA tz database release. For the latest data, see The tz-announce Archives.
Timeout settings
ApsaraDB for SelectDB supports timeout configuration at multiple levels using the query_timeout and insert_timeout variables and user properties.
Priority order (highest to lowest): Session variables > User properties > Global variables > Default values
| Variable | Scope | Default | Description |
|---|---|---|---|
query_timeout | Session, Global, User property | 900 s (15 min) | Controls the timeout for all query statements. |
insert_timeout | Session, Global, User property | 14400 s (4 h) | Controls the timeout for INSERT statements specifically. When running INSERT, the longer of query_timeout and insert_timeout applies. |
User properties for timeout:
The query_timeout and insert_timeout user properties can be set only by the admin user for a specific user.
These properties do not have quota semantics; they change the default timeout for the specified user.
Changes to user properties take effect after the client reconnects.
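The three levels can be set as follows (a sketch; 'etl_user' is a placeholder user, and the SET PROPERTY statement is assumed to follow the Doris-style syntax):

```sql
-- Session scope: current connection only
SET insert_timeout = 7200;
-- Global scope: admin user only
SET GLOBAL query_timeout = 1800;
-- User property: admin user only; takes effect after the user reconnects
SET PROPERTY FOR 'etl_user' 'query_timeout' = '3600';
```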
Appendix
Supported variables
Boolean variables
Boolean variables enable or disable a feature. Valid values: true and false.
| Variable | Default | Description |
|---|---|---|
delete_without_partition | false | When true, a DELETE statement without a partition clause applies to all partitions automatically. This extends the duration of the DELETE operation and generates a large number of tasks. Keep this disabled unless required. |
disable_colocate_join | false | When true, disables the Colocation Join feature. The execution plan does not attempt Colocation Join operations. |
enable_bucket_shuffle_join | true | When false, disables the Bucket Shuffle Join feature. The execution plan does not attempt Bucket Shuffle Join operations. |
disable_streaming_preaggregations | false | When true, disables streaming pre-aggregation. This variable cannot be modified. |
enable_insert_strict | false | When true, enables strict mode for INSERT statement data imports. |
enable_spilling | false | When true, enables on-disk sorting for large datasets. Temporary data is written to the doris-scratch/ directory on the BE and cleared after the query completes. This feature is experimental — do not enable in production. |
forward_to_master | true | When true, forwards certain SHOW statements to the master FE node for execution. The master FE node returns more detailed information for SHOW FRONTENDS, SHOW BACKENDS, SHOW BROKER, SHOW TABLET, SHOW REPLICA DISTRIBUTION, SHOW REPLICA STATUS, and SHOW PROC. |
enable_profile | false | When true, sends the query profile to the FE after each query. View profiles at fe_host:fe_http_port/query. The page shows the 100 most recent queries after this variable is enabled. |
rewrite_count_distinct_to_bitmap_hll | true | When true, replaces COUNT DISTINCT with BITMAP_UNION_COUNT or HLL_UNION_AGG for bitmap or HyperLogLog (HLL) data. |
allow_partition_column_nullable | true | When true, partition key columns can be defined as NULL. When false, partition key columns must be NOT NULL. |
enable_exchange_node_parallel_merge | false | When true, the exchange node uses multiple threads to merge sorted data in parallel, reducing single-thread merge bottlenecks in large sort operations. |
extract_wide_range_expr | false | When true, enables wide common factor extraction. |
enable_fold_constant_by_be | false | When true, constant folding is performed on the BE via remote procedure call (RPC) requests. When false, constant folding runs on the FE. |
disable_join_reorder | false | When true, disables all automatic Join Reorder algorithms. JOIN operations follow the table order in the SQL statement. |
enable_infer_predicate | true | When true, enables predicate derivation. When false, ApsaraDB for SelectDB uses the original predicate without derivation. |
trim_tailing_spaces_for_external_table_query | false | When true, trims trailing spaces from fields when querying Hive external tables. |
enable_inverted_index_query | true | When true, inverted indexes are used in queries. |
skip_missing_version | false | When true, queries ignore the visible version recorded on the FE and use the BE replica version instead, skipping missing versions. Use only in emergency data recovery scenarios. This variable takes effect only when use_fix_replica is set to -1. |
enable_parquet_lazy_materialization | true | When true, enables late materialization for Parquet Reader. |
enable_orc_lazy_materialization | true | When true, enables late materialization for ORC Reader. |
enable_strong_consistency_read | true | When true, enables strong consistency for data reads across sessions. When false, strong consistency is guaranteed only within a single session. |
truncate_char_or_varchar_columns | false | When true, truncates CHAR and VARCHAR columns to the maximum length defined in the table schema, even if the underlying Parquet or ORC file uses a different length. |
jdbc_clickhouse_query_final | false | When true, appends the FINAL keyword to ClickHouse queries made via the JDBC Catalog feature, for deduplication with the ReplacingMergeTree table engine. |
enable_memtable_on_sink_node | false | When true, creates a memory table on the Data Sink node during import and sends segments to other BE nodes via bRPC streaming, reducing duplicate work across replicas. |
enable_unique_key_partial_update | false | When true, enables partial column update semantics for INSERT INTO on Unique Key model tables using Merge on Write (MOW) mode. If both enable_unique_key_partial_update and enable_insert_strict are true, INSERT INTO can only update existing rows. To insert new rows with partial column updates, set enable_unique_key_partial_update to true and enable_insert_strict to false. |
describe_extend_variant_column | false | When true, displays sub-columns for VARIANT type columns in DESCRIBE output. |
dry_run_query | false | When true, returns only the row count instead of the actual result set. The Data Sink node drops data instead of writing it. Useful for testing large datasets without the overhead of transmitting results. |
drop_table_if_ctas_failed | true | When true, deletes the created table if CREATE TABLE AS SELECT fails due to a write error. |
show_user_default_role | false | When true, includes the implicit role of each user in SHOW ROLES output. |
group_by_and_having_use_alias_first | false | When true, GROUP BY and HAVING clauses resolve column aliases before searching the FROM clause. |
enable_file_cache | false | When true, enables the block file cache. This variable takes effect only if enable_file_cache=true is also set in the BE's be.conf file. |
enable_auto_analyze | true | When true, enables automatic statistics collection. |
enable_cost_based_join_reorder | false | When true, enables cost-based Join Reorder to automatically adjust table order in JOIN operations. |
enable_fallback_to_original_planner | false | When true, automatically falls back to the original optimizer when the new optimizer encounters unsupported scenarios. |
enable_odbc_transcation | false | When true, enables transaction support for writing data to a JDBC Catalog. |
enable_parallel_outfile | false | When true, enables parallel scan for SELECT INTO OUTFILE. |
enable_sql_cache | false | When true, enables SQL statement result caching. |
enable_unicode_name_support | false | When true, supports table and column names containing Unicode characters (including Chinese). |
enable_vectorized_engine | true | When true, enables the vectorized engine to accelerate queries. |
experimental_enable_nereids_planner | true | When true, enables the new optimizer (Nereids). This feature is experimental. |
experimental_enable_pipeline_engine | true | When true, enables the Pipeline execution engine. This feature is experimental. |
experimental_enable_single_replica_insert | false | When true, selects one replica as the primary for data import. Other replicas pull data files from the primary after the write completes, reducing redundant compute across replicas. |
faster_float_convert | false | When true, uses a faster floating-point number conversion algorithm. Note that this algorithm may change the output format. |
enable_page_cache | true | When true, enables the page-level cache. |
strict_mode | false | When true, enables strict mode for data import. |
Configuration variables
Configuration variables accept a range of values beyond a simple true/false.
| Variable | Default | Description |
|---|---|---|
auto_broadcast_join_threshold | 0.8 | The maximum ratio of hash table memory to overall execution memory allowed for Broadcast Join. When the hash table exceeds this ratio, ApsaraDB for SelectDB switches to Shuffle Join. Set to -1 to disable broadcast entirely. Use a query hint such as join[shuffle] to explicitly control the join method. | |
batch_size | 1024 | The number of rows per packet transmitted between nodes during a query. Larger values increase throughput for large scans but may increase latency and memory usage for small queries. Recommended range: 1024–4096. | |
default_order_by_limit | -1 | The default number of rows returned after ORDER BY. -1 means no limit (up to the maximum LONG value). | |
exec_mem_limit | 2 GB | The maximum memory for a single query. Accepts units: bytes (default), KB, MB, GB, TB, PB. Limits memory per fragment instance — not the total memory for the query across the cluster. If a Memory Exceed Limit error occurs, increase this value exponentially (4 GB, 8 GB, 16 GB). | |
lower_case_table_names | 0 | Controls table name case sensitivity. 0: Case-sensitive. 1: Case-insensitive; names are stored and compared in lowercase. 2: Case-insensitive; names are stored as specified in CREATE TABLE but compared in lowercase. Must be set in fe.conf during cluster initialization and cannot be changed afterward with SET. | |
max_pushdown_conditions_per_column | -1 | The maximum number of predicate conditions pushed down to the storage engine per column. -1 uses the value from be.conf. A positive value overrides be.conf for the current session. | |
max_scan_key_num | -1 | The maximum number of scan keys a scan node can split per query. -1 uses the value from be.conf. A positive value overrides be.conf for the current session. | |
parallel_exchange_instance_num | -1 | The number of exchange nodes an upstream node uses to receive data from downstream nodes. -1 matches the number of downstream instances. A positive value lower than the downstream instance count limits the number of exchange nodes, useful for reducing overhead in aggregate queries with small aggregated output. | |
analyze_timeout | 43200 s | The timeout for ANALYZE statements, in seconds. | |
auto_analyze_end_time | — | The end time for the automatic statistics collection window. | |
auto_analyze_start_time | — | The start time for the automatic statistics collection window. | |
auto_analyze_table_width_threshold | 70 | Tables with more columns than this threshold are excluded from automatic statistics collection. | |
div_precision_increment | 4 | The number of additional decimal digits added to the result of / division operations. | |
external_agg_bytes_threshold | — | The memory threshold for aggregate operations. When exceeded, in-memory data is spilled to disk. | |
external_sort_bytes_threshold | — | The memory threshold for sort operations. When exceeded, in-memory data is spilled to disk. | |
external_table_auto_analyze_interval_in_millis | 86400000 ms | The minimum interval between automatic statistics collections for external tables. | |
fragment_transmission_compression_codec | lz4 | The compression format used during data transmission between nodes. | |
huge_table_auto_analyze_interval_in_millis | 0 | The minimum interval between automatic statistics collections for large tables (more than 5x the size specified by huge_table_lower_bound_size_in_bytes). | |
huge_table_default_sample_rows | 4194304 | The number of rows sampled for large tables during automatic statistics collection. | |
huge_table_lower_bound_size_in_bytes | 0 | Tables larger than this threshold are sampled during automatic statistics collection. | |
ignore_runtime_filter_ids | (empty) | Runtime filter IDs to ignore. | |
max_filter_ratio | 0 | The maximum ratio of rows that can be filtered out during import (for example, rows that fail validation). | |
max_instance_num | 64 | The maximum number of concurrent Pipeline tasks per SQL query when parallel_pipeline_task_num is not set. | |
parallel_pipeline_task_num | 0 | The number of concurrent Pipeline tasks per SQL query. When 0, ApsaraDB for SelectDB sets this to half the CPU cores on each BE node, up to max_instance_num. | |
runtime_filter_type | IN or Bloom filter | The type of runtime filter used. Options: Bloom filter, MinMax filter, IN predicate, IN or Bloom filter, bitmap filter. | |
runtime_filter_mode | GLOBAL | The pushdown policy for runtime filters. Options: OFF, LOCAL, GLOBAL. | |
runtime_filter_wait_time_ms | 1000 ms | The duration the left table's scan node waits for each runtime filter. | |
runtime_filters_max_num | 10 | The maximum number of Bloom filters per query. | |
runtime_bloom_filter_min_size | 1048576 bytes (1 MB) | The minimum size of a Bloom filter. | |
runtime_bloom_filter_max_size | 16777216 bytes (16 MB) | The maximum size of a Bloom filter. | |
runtime_bloom_filter_size | 2097152 bytes (2 MB) | The default size of a Bloom filter. | |
runtime_filter_max_in_num | 1024 | The maximum number of rows in the right table for which an IN predicate can be generated. | |
parallel_fragment_exec_instance_num | 1 | The number of instances processing scan ranges on each BE node. Increasing this value improves throughput for large queries but increases CPU, memory, and disk I/O usage. | |
query_timeout | 900 s | The timeout for all query statements in the current connection. For INSERT statements, use insert_timeout instead. | |
insert_timeout | 14400 s | The timeout for INSERT statements. If insert_timeout is less than query_timeout, query_timeout applies instead (for backward compatibility). | |
send_batch_parallelism | — | The default concurrency for batch data sends during INSERT operations. Capped at the BE configuration item max_send_batch_parallelism_per_job. | |
sql_mode | — | Controls SQL syntax compatibility and data validation strictness. See SQL Mode. | |
sql_select_limit | (no limit) | The default maximum number of rows returned by SELECT, including SELECT within INSERT. | |
time_zone | Value of system_time_zone | The current session's time zone. Affects time function results. See Time zone variables. | |
wait_timeout | 28800 s (8 h) | The idle connection timeout. ApsaraDB for SelectDB closes connections that have been idle longer than this period. | |
default_rowset_type | beta | The default storage format for compute nodes. Valid values: alpha, beta. | |
use_v2_rollup | — | Controls whether queries use ROLLUP indexes in Segment V2 format. Used for verification during Segment V2 migration. | |
prefer_join_method | — | The preferred join method when Broadcast Join and Shuffle Join have equal cost. Valid values: broadcast, shuffle. | |
insert_visible_timeout_ms | 10000 ms | The time ApsaraDB for SelectDB waits for a transaction to commit and data to become visible after an INSERT completes. Minimum value: 1000 ms. | |
cpu_resource_limit | -1 (no limit) | Limits the number of scan threads a query can use on a single node. Overrides parallel_fragment_exec_instance_num. Overridden by the cpu_resource_limit user property. This feature is experimental. | |
return_object_data_as_binary | — | When true, returns bitmap and HLL data in binary format in SELECT results. For SELECT INTO OUTFILE, CSV output encodes as Base64; Parquet output stores as a byte array. Example in Java: stmt.execute("set return_object_data_as_binary=true");. For more information, see samples. | |
block_encryption_mode | NULL | The block encryption mode. AES_128_ECB uses the Advanced Encryption Standard (AES) algorithm. SM4_128_ECB uses the ShangMi 4 (SM4) algorithm. See the variable description for the full list of supported modes. | |
default_password_lifetime | 0 (never expires) | The default password expiration period in days. Applies only to users with PASSWORD_EXPIRE DEFAULT policy. | |
password_history | 0 (no limit) | The number of previous passwords retained. Applies only to users with PASSWORD_HISTORY DEFAULT policy. | |
validate_password_policy | NONE (0) | The password strength policy. STRONG (2) requires passwords of at least 8 characters containing at least 3 of: uppercase letters, lowercase letters, digits, and special characters (`~ ! @ # $ % ^ & * ( ) _ + \| < > , . ? / : ; ' [ ] { } "`). |
rewrite_or_to_in_predicate_threshold | 2 | The minimum number of OR operators before rewriting to an IN predicate. | |
file_cache_base_path | random | The storage path for the block file cache on the BE. Defaults to a randomly selected path from the BE configuration. | |
topn_opt_limit_threshold | 1024 | The LIMIT clause threshold for enabling top-N query optimizations (dynamic filtering, two-phase result retrieval, key-ordered reads). Optimizations apply when n <= threshold. | |
use_fix_replica | -1 (disabled) | When set to 0 or higher, always uses the replica with that serial number rank (0 = smallest). |
Compatible and reserved variables
These variables have no functional effect in ApsaraDB for SelectDB. They exist for compatibility with MySQL clients or JDBC connection pools.
| Variable | Purpose |
|---|---|
SQL_AUTO_IS_NULL | Compatibility with the c3p0 JDBC connection pool. |
auto_increment_increment | MySQL client compatibility. Does not affect auto-increment column behavior. |
auto_increment_offset | MySQL client compatibility. Does not affect auto-increment column behavior. |
autocommit | MySQL client compatibility. |
character_set_client | MySQL client compatibility. |
character_set_connection | MySQL client compatibility. |
character_set_results | MySQL client compatibility. |
character_set_server | MySQL client compatibility. |
collation_connection | MySQL client compatibility. |
collation_database | MySQL client compatibility. |
collation_server | MySQL client compatibility. |
have_query_cache | MySQL client compatibility. |
init_connect | MySQL client compatibility. |
interactive_timeout | MySQL client compatibility. |
language | MySQL client compatibility. |
max_allowed_packet | Compatibility with the c3p0 JDBC connection pool. |
net_buffer_length | MySQL client compatibility. |
net_read_timeout | MySQL client compatibility. |
net_write_timeout | MySQL client compatibility. |
query_cache_size | MySQL client compatibility. |
query_cache_type | Compatibility with the c3p0 JDBC connection pool. |
sql_safe_updates | MySQL client compatibility. |
tx_isolation | MySQL client compatibility. |
tx_read_only | MySQL client compatibility. |
transaction_read_only | MySQL client compatibility. |
transaction_isolation | MySQL client compatibility. |
version | MySQL client compatibility. |
performance_schema | Compatibility with MySQL JDBC 8.0.16 and later. |
codegen_level | Sets the LLVM codegen level. Has no effect. |
license | Displays the ApsaraDB for SelectDB license. |
resource_group | Reserved. |
system_time_zone | Set to the system time zone at cluster initialization. Cannot be modified. |
version_comment | Displays the ApsaraDB for SelectDB version. Cannot be modified. |
dump_nereids_memo | Used for regression testing. |
group_concat_max_len | Compatibility with some business intelligence (BI) tools. |
be_number_for_test | No practical effect. |
workload_group | Reserved. |
References
The tz-announce Archives — Latest IANA time zone database releases