
Lindorm SQL FAQ

Last Updated: Mar 28, 2026

Common issues and solutions for using Lindorm SQL with LindormTable (wide table engine).

Note

All issues on this page apply only to LindormTable.

Query issues

Why do some queries return "This query may be a full table scan"?

LindormTable blocks queries that cannot use a primary key or index to narrow the scan range. These queries perform a full table scan, which degrades database performance and stability.

The matching rules follow the leftmost prefix rule — the same rule MySQL uses for composite indexes. The system matches the columns in your WHERE clause against the primary key (or index key) columns starting from the leftmost column. If your query skips the first column, the key is not used and a full table scan results.

For example, if the table test has a composite primary key (p1, p2, p3) and you run:

SELECT * FROM test WHERE p2 < 30;

The query skips p1, so LindormTable cannot use the primary key. The entire table is scanned to satisfy the p2 < 30 condition.

To fix or avoid this:

  • Include the primary key's first column in the WHERE clause, following the leftmost prefix rule.

  • Redesign the table's primary key. See How to design a primary key for a wide table.

  • Create a secondary index on the columns you query. See Secondary indexes.

  • For multidimensional queries across multiple columns, create a search index. See Search indexes.

  • To force execution of the inefficient query, add the /*+ _l_allow_filtering_ */ hint:

    SELECT /*+ _l_allow_filtering_ */ * FROM test WHERE p2 < 30;

Important

Forcing a full table scan risks degrading overall database performance and stability. Use this approach only after assessing the impact.
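For comparison, queries against the same example table test (composite primary key (p1, p2, p3)) do use the primary key as long as the constrained columns form a leftmost prefix of the key:

```sql
-- Uses the primary key: the leftmost column p1 is constrained,
-- so the scan is narrowed to p1 = 10 before filtering on p2.
SELECT * FROM test WHERE p1 = 10 AND p2 < 30;

-- Also valid: (p1, p2) is a leftmost prefix of the composite key.
SELECT * FROM test WHERE p1 = 10 AND p2 = 20;
```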

Why does a GROUP BY query fail with a "subPlan groupby keys" error?

Error message:

The diff group keys of subPlan is over lindorm.aggregate.subplan.groupby.keys.limit=..., it may cost a lot memory so we shutdown this SubPlan

The GROUP BY operation produced too many groups. Large numbers of groups consume excessive memory and increase instance load, so LindormTable shuts down the subplan.

To resolve this:

  • Add filter conditions to reduce the number of groups before aggregation.

  • For multidimensional aggregation scenarios, create a search index to offload the work. See Introduction to search indexes.

  • To increase the group count threshold, contact Lindorm technical support (DingTalk ID: s0s3eg3).

Important

Raising the group count threshold increases memory consumption and may affect instance stability. Evaluate the impact before making changes.
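As a sketch of the first option, assuming a hypothetical table named metrics with columns device_id and ts (not from this page), narrowing the scanned rows before aggregation keeps the number of groups bounded:

```sql
-- Unfiltered: grouping over every device ever written may exceed the limit.
-- SELECT device_id, COUNT(*) FROM metrics GROUP BY device_id;

-- Filtered: restrict the time range first so far fewer groups are produced.
SELECT device_id, COUNT(*)
FROM metrics
WHERE ts >= 1700000000000 AND ts < 1700086400000
GROUP BY device_id;
```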

Why does SELECT * on a dynamic columns table fail with a "Limit not set" error?

Error message:

Limit of this select statement is not set or exceeds config when select all columns from table with property DYNAMIC_COLUMNS=true

Tables with dynamic columns have no fixed schema and can contain a large and unpredictable number of columns. Running SELECT * without a row limit causes high I/O and increases instance load, so LindormTable requires a LIMIT clause on such queries.

Add a LIMIT clause to the SELECT statement:

SELECT * FROM test LIMIT 10;

Why does a query fail with "Code grows beyond 64 KB"?

The Lindorm SQL engine uses Just-In-Time (JIT) compilation: it generates bytecode from the query's physical plan and compiles it at runtime. This error means the bytecode for a generated method exceeds the 64 KB limit imposed by the Java Virtual Machine (JVM).

The most common cause is a predicate in the SQL query statement that is too long or too complex, resulting in bytecode that is too large to be executed.

Simplify the predicate expressions in the SQL statement. Break complex conditions into smaller parts or rewrite the logic to reduce bytecode size.
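For example, a long chain of OR conditions on one column can often be rewritten as a single IN list, or split into several shorter queries whose results are merged in the application; a sketch with a hypothetical table t (names are illustrative only):

```sql
-- A very long OR chain can generate one oversized compiled method:
-- SELECT * FROM t WHERE c = 1 OR c = 2 OR c = 3 /* ... thousands more ... */;

-- Rewriting it as an IN list, or issuing several shorter queries and
-- merging results client-side, can keep the generated bytecode small.
SELECT * FROM t WHERE c IN (1, 2, 3);
```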

Why does a query fail with "The estimated memory used by the query exceeds the maximum limit"?

The SQL engine consumes significant memory when processing result sets — during aggregation, sorting, or deduplication. Because Lindorm SQL is designed for high-concurrency online workloads, it limits each query to 8 MB of memory by default. Exceeding this limit triggers a memory overflow exception.

Step 1 — Diagnose before acting:

Check the execution plan to determine whether aggregation and sorting operators are pushed down to the storage engine or executed in the SQL engine. See Interpret an execution plan.

  • If heavy operators run in the SQL engine, query optimization is the right fix (see Option 1).

  • If operators are already pushed down and the query is already optimized, raise the memory limit (see Option 2).

Option 1 — Optimize the query (preferred):

Push aggregation and sorting down to the storage engine using indexes, and tighten filter conditions to reduce the amount of data the SQL engine processes.

Option 2 — Raise the memory limit:

If the query is already optimized and you need a higher limit, adjust QUERY_MAX_MEM (in bytes) using ALTER SYSTEM. For example, to double the default 8 MB (8388608 bytes) to 16 MB:

ALTER SYSTEM SET QUERY_MAX_MEM = 16777216;

Check the current value with the SHOW VARIABLES statement.

If your SQL engine version is earlier than 2.9.6.0, contact Lindorm technical support (DingTalk ID: s0s3eg3) to increase the limit.

Important

In high-concurrency environments, raising QUERY_MAX_MEM increases memory pressure on the cluster and can trigger a forced Full GC, reducing responsiveness across the entire cluster. Evaluate query throughput and concurrency carefully before increasing this value.

Index and schema issues

Why does creating a secondary index fail with "Executing job number exceed, max job number = 8"?

Each Lindorm instance allows a maximum of 8 concurrent secondary index build tasks. If 8 tasks are already running, new index creation attempts fail.

Avoid creating many secondary indexes at the same time. If you need to create a large number of indexes in bulk, contact Lindorm technical support (DingTalk ID: s0s3eg3).

After deleting a column, why does re-adding a column with the same name fail with "column is under deleting"?

After you delete a column, LindormTable asynchronously cleans up the column's data from memory, hot storage, and cold storage. The system prevents re-adding a column with the same name until cleanup completes — to avoid dirty data caused by type mismatches or data collisions.

The cleanup runs in the background and can take a long time. To accelerate it, run the following statements on the table (replace dt with your table name):

-- Flush residual in-memory data to storage.
ALTER TABLE dt FLUSH;

-- Run compaction to merge and remove deleted data.
ALTER TABLE dt COMPACT;

After the cleanup completes, re-add the column.

Important

  • FLUSH is supported starting from SQL engine version 2.7.1. Check your version in the SQL version guide.

  • Both FLUSH and COMPACT are asynchronous. A successful statement execution does not mean the cleanup is complete.

  • Running COMPACT on a table with a large data volume consumes significant system resources. Avoid running it during peak business hours.

After creating a secondary index, why does writing data fail with a "User-Defined-Timestamp" error?

Error message:

Performing put operations with User-Defined-Timestamp in indexed column on MULTABLE_LATEST table is unsupported

When you write data with an explicit custom timestamp — for example, using the /*+ _l_ts */ hint in an UPSERT statement — both the primary table and the secondary index table must have their mutability set to MUTABLE_ALL. However, Lindorm defaults new tables and indexes to MUTABLE_LATEST for performance reasons. Writing with a custom timestamp to an index table with MUTABLE_LATEST mutability triggers this error.

The MUTABILITY property cannot be changed after an index table is created. You must delete the existing index, update the primary table's mutability, and then recreate the index.

  1. Disable and drop the existing secondary index:

    -- Disable the index.
    ALTER INDEX IF EXISTS <original_secondary_index_name> ON <primary_table_name> DISABLED;
    
    -- Drop the index.
    DROP INDEX IF EXISTS <original_secondary_index_name> ON <primary_table_name>;

    See Delete a secondary index.

  2. Set the primary table's mutability to MUTABLE_ALL:

    ALTER TABLE IF EXISTS <primary_table_name> SET MUTABILITY='MUTABLE_ALL';

  3. Create a new secondary index. See CREATE INDEX.

Note

  • For details on writing data with a custom timestamp, see Use HINTs to set timestamps for multi-version data management.

  • For details on how secondary index mutability interacts with custom timestamps, see Update an index with a custom timestamp.
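Once both the primary table and the index are MUTABLE_ALL, writes with a custom timestamp succeed. A sketch of such an UPSERT, assuming the _l_ts hint takes a millisecond timestamp in the form shown below — verify the exact hint syntax in Use HINTs to set timestamps for multi-version data management:

```sql
-- Write one row with an explicit version timestamp (assumed hint syntax;
-- column names p1 and c1 are illustrative placeholders).
UPSERT /*+ _l_ts(1700000000000) */ INTO <primary_table_name> (p1, c1) VALUES (1, 'a');
```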

Batch operations

Why are batch updates not supported, or why does the error "Update's WHERE clause can only contain PK columns" occur?

Only single-row updates are supported by default. For information on how to enable batch updates, see Batch operations FAQ.

How do I enable batch deletes?

For information on how to enable and configure batch deletes, see Batch operations FAQ.