Lindorm: Summary of Issues

Last Updated: Mar 30, 2026

This page covers common issues when using LindormTable, organized by category. Each section explains the cause and provides steps to resolve the issue.

Issue summary

  • Connection

  • Minor version updates

  • Storage and compaction

  • Data management

  • Data queries

  • Monitoring

  • HBase compatibility

  • Batch operations

Connection

Why does Lindorm-cli fail to connect to LindormTable?

Check each of the following:

  • The connection string uses the correct endpoint and port (see the port table below).

  • The network is reachable; for example, verify connectivity to the endpoint and port using telnet.

  • Your client IP address has been added to the instance whitelist.

  • The username and password are correct.

What are the common port numbers for LindormTable?

Port    Protocol                        Description
30060   Avatica protocol                SQL port
33060   MySQL protocol                  SQL port
30020   HBase-compatible protocol       Wide-table port (Java access)
9042    Cassandra-compatible protocol   CQL port

Minor version updates

What is the impact of a minor version update? How long does it take?

A minor version update performs a rolling restart — one node at a time. During each restart, Regions go offline briefly and then come back online. After the update, the system rebalances load automatically.

Impact: Low-load instances are minimally affected. High-load or latency-sensitive instances may see brief disruptions. Schedule updates during off-peak hours.

Duration: 5–30 minutes per node, depending on the number of Regions and current load.

Storage and compaction

What does compaction do?

Compaction cleans up data that has expired per its TTL (time-to-live), removes delete markers, archives eligible hot data to cold storage, and compresses data to reduce storage usage.

How often does compaction run automatically?

The default interval is 20 days. In TTL (time-to-live) scenarios, the default is min(TTL value, 20 days).

To change the interval, use either method:

  • SQL: set the COMPACTION_MAJOR_PERIOD table property with ALTER TABLE (see the example after the note below).

  • Lindorm Insight: use table change management to modify the table property.

Note

COMPACTION_MAJOR_PERIOD is in milliseconds (ms).
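
For example, to set the interval to 2 days, mirroring the ALTER TABLE statement shown later on this page (<tablename> is a placeholder for your table name):

-- Set the major compaction interval to 2 days (172800000 ms)
ALTER TABLE <tablename> SET 'COMPACTION_MAJOR_PERIOD'='172800000';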

Can I trigger compaction manually?

Yes. Run a major compaction with SQL, as shown below.
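
A minimal example, using the same COMPACT statement that appears in the compression section below (<tablename> is a placeholder):

-- Manually trigger major compaction for one table
ALTER TABLE <tablename> COMPACT;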

Important

Manually triggering compaction on high-load instances, large tables, or tables with hot/cold data separation carries risk. After triggering, monitor the maximum number of files per Region. Too many files can cause backpressure on writes. For alerts about the maximum number of files per Region, see monitoring and alerting best practices.

What is the business impact of compaction?

Compaction runs on multiple threads. The number of threads depends on instance specification — higher specs process faster. Lower specifications may queue many tasks. When CPU is available, compaction improves read performance and frees storage while having minimal effect on write throughput.

To check compaction status, view Compaction queue length under LindormTable metrics — cluster load in instance monitoring:

  • Normal: The value steadily decreases, or rises periodically and then falls.

  • Abnormal: The value keeps rising or stays flat for more than a day.

If CPU utilization is below 40%: LindormTable 2.6.5 and later auto-adjust compaction parameters. Update to a newer minor version to get this behavior.

If CPU utilization exceeds 40%: Add more LindormTable nodes.

Why does storage keep growing even after I set TTL?

Cause: If the compaction queue has a large backlog, data cleanup lags behind data expiration.

Resolution:

  1. Check Compaction queue length in instance monitoring under LindormTable metrics — cluster load.

  2. If the queue is backlogged, follow the CPU-based guidance above: update to LindormTable 2.6.5 or later when CPU utilization is below 40%, or add LindormTable nodes when it exceeds 40%.

  3. If the queue is empty but storage is still growing, I/O load may be low. Manually trigger compaction or reduce the compaction interval. For example, to set the interval to 2 days:

     ALTER TABLE <tableName> SET 'COMPACTION_MAJOR_PERIOD'='172800000';

Note

COMPACTION_MAJOR_PERIOD is in milliseconds. The default interval is 20 days; in TTL scenarios, min(TTL value, 20 days).

How do I reduce storage space using compression?

Set the table's compression algorithm to ZSTD and its data block encoding to INDEX, then run major compaction.

Important

Tables created via SQL already have these settings applied by default. No action is needed.

SQL

Connect using Lindorm-cli or Lindorm Insight and run:

-- Set compression to ZSTD and encoding to INDEX
ALTER TABLE <tablename> SET 'COMPRESSION' = 'ZSTD', 'DATA_BLOCK_ENCODING' = 'INDEX';
ALTER TABLE <tablename> COMPACT;
-- For tables with many Regions, wait for the queue to drain

HBase API

alter 'ns:tablename', {NAME => 'family', DATA_BLOCK_ENCODING => 'INDEX', COMPRESSION => 'ZSTD'}
major_compact 'ns:tablename'
# For tables with many Regions, wait for the queue to drain

Lindorm Insight

Use table change management to set compression to ZSTD. Then go to the table Overview page and scroll to the bottom to monitor compaction progress.

Track progress via Compaction queue length in instance monitoring.

What do I do when disk capacity is full?

Take any of the following actions:

  • Drop or truncate tables that are no longer needed (DROP TABLE or TRUNCATE TABLE).

  • Scale up hot storage capacity (cloud-disk instances only; see the scaling table below).

  • Change the number of LindormTable nodes.

Important

Do not use DELETE to free space when the disk is full. LindormTable prioritizes write throughput — a DELETE writes a delete marker rather than immediately removing data. Physical removal only happens during the next compaction. When the disk is already full, even delete markers cannot be written, so compaction cannot purge the data. Use DROP TABLE or TRUNCATE TABLE instead.

Why can I not delete data when the disk is full?

Cause: LindormTable prioritizes write throughput. A DELETE operation does not remove data immediately — it writes a delete marker that hides the data from queries. Physical removal only happens during compaction. When the disk is full, the system blocks all writes, including delete markers. Because no delete marker can be written, compaction has nothing to purge and cannot free space.

Resolution: Use DROP TABLE or TRUNCATE TABLE to free space immediately, or scale up hot storage capacity.
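
A minimal sketch of the immediate-release options (<tablename> is a placeholder):

-- Remove the table and all of its data immediately
DROP TABLE <tablename>;
-- Or keep the table definition but clear all rows immediately
TRUNCATE TABLE <tablename>;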

Does LindormTable support scaling nodes or disk capacity?

Changing node count is an engine-level operation. Scaling disk capacity is an instance-level operation.

Operation                             Local-disk instances   Cloud-disk instances
Change number of LindormTable nodes   Supported              Supported
Scale hot storage capacity            Not supported          Supported
Scale cold storage capacity           Not supported          Supported
Note

Scaling down requires copying data and takes time.

Data management

What units do common table properties use?

Property                    Parameter                 Unit                Notes
Major compaction interval   COMPACTION_MAJOR_PERIOD   Milliseconds (ms)   2 days = 172800000 ms
Timestamp                   -                         Milliseconds (ms)   Some hints use seconds (s), e.g., /*+ _l_ts_(%s) */
TTL (time-to-live)          -                         Seconds (s)         -

How do I set NUMREGIONS when creating a table?

If NUMREGIONS is not specified, the table starts with 1 partition. As a starting point, set NUMREGIONS to number of server nodes × 4.

Partitions split automatically when:

  • Partition data reaches 8 GB, or

  • Combined read/write QPS for the partition exceeds 1,000 (the system detects the hot spot and decides whether to split)

For better hot-spot self-healing, use LindormTable 2.4.x or later. Update to a newer minor version if needed.
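
A sketch of pre-splitting at creation time for a 4-node instance (the schema is hypothetical, and the WITH-clause syntax for NUMREGIONS is an assumption; check the CREATE TABLE reference for the exact form):

-- Hypothetical example: 4 server nodes × 4 = 16 initial partitions
CREATE TABLE sensor (
    device_id VARCHAR NOT NULL,
    ts BIGINT NOT NULL,
    value DOUBLE,
    PRIMARY KEY (device_id, ts)
) WITH (NUMREGIONS = '16');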

What happens when I run ALTER TABLE?

ALTER TABLE closes and reopens all Regions of the table. The impact is minimal. If your application requires millisecond read latency, schedule this operation during off-peak hours.

Why does my write fail with a column size limit error?

Error:

com.alibaba.lindorm.client.exception.IllegalDataException: Column [xxx] is too big, max length is 2097152 bytes but has 7621168 bytes.

Cause: The default maximum cell size is 2 MB (2,097,152 bytes). VARBINARY columns have no size limit. For other limits, see quotas and limits.

Resolution: If load is low, you can temporarily increase the limit (not recommended for production):

ALTER TABLE <tablename> SET 'MAX_NONPK_LEN'='4194304';  -- unit: bytes

Stay within these bounds based on node memory:

Node memory   Maximum MAX_NONPK_LEN
32 GB         5 MB
64 GB         10 MB

What are the methods and considerations for deleting data?

LindormTable supports two deletion methods:

  • TRUNCATE TABLE: Clears all data in a table immediately.

  • Row deletion by primary key: Deletes specific rows using full primary keys. Range deletes are not supported — query the full primary key first, then delete using exact conditions (see the sketch after this list).

After deletion, if your application is read-latency sensitive, manually trigger major compaction. Otherwise, wait for the next scheduled compaction cycle.
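
A minimal sketch of the query-then-delete pattern (table and column names are hypothetical):

-- Find the full primary keys of the rows to remove
SELECT device_id, ts FROM sensor WHERE device_id = 'dev-001' AND ts < 1700000000000;
-- Then delete each row by its exact, complete primary key
DELETE FROM sensor WHERE device_id = 'dev-001' AND ts = 1699999999000;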

How do I verify that data has moved to cold storage?

Compare the results of two queries using the same primary key:

  1. Run a full query to retrieve all data.

  2. Run a hot-data-only query using a HINT.

If both queries return the same result, the data is still in hot storage. If they differ — and the data is missing from the hot-data query — the data has moved to cold storage.

Before querying, check the cold storage and hot storage sizes on the table Overview page in Lindorm Insight. Compare sizes before and after archiving.
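
A sketch of the two-query comparison. The hot-data hint name below is a hypothetical placeholder; check the hot/cold separation documentation for the exact hint:

-- Full query: reads both hot and cold storage
SELECT * FROM sensor WHERE device_id = 'dev-001';
-- Hot-data-only query (/*+ _l_hot_only_(true) */ is an assumed hint name)
SELECT /*+ _l_hot_only_(true) */ * FROM sensor WHERE device_id = 'dev-001';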

Why has my data not moved to cold storage?

See why data has not moved to cold storage after compaction.

Common causes:

  • Flush not performed: Data must be flushed to disk before compaction can archive it. Run flush first.

  • Compaction backlog: Check Compaction queue length in instance monitoring under LindormTable metrics — cluster load. If the value stays above 0 and keeps growing, a backlog exists. Scale out or upgrade to resolve it.

  • Custom timestamp: Data written with a custom or special timestamp may not be eligible for cold storage archiving.

Data queries

Why does a secondary index query not return NULL values?

When using primary key reordering or multi-column indexes, the system skips index entries where the first non-primary-key column is NULL. Only rows with actual (non-NULL) values in that column appear in the index table.
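
A sketch of the behavior (the table, index, and CREATE INDEX syntax here are illustrative assumptions):

-- Hypothetical table with a secondary index on a nullable column
CREATE TABLE users (id VARCHAR NOT NULL, city VARCHAR, name VARCHAR, PRIMARY KEY (id));
CREATE INDEX idx_city ON users (city);
-- A row whose city is NULL gets no entry in the index table, so an
-- index-backed lookup only ever sees rows with non-NULL city values:
SELECT * FROM users WHERE city = 'Hangzhou';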

What causes unexpected query results?

See common reasons why query results do not match expectations.

Monitoring

What does the cold storage token metric mean?

Cold storage is for infrequently accessed archived data — minimize reads from it. The cold storage token metric tracks rate limiting for cold storage access. A continuously decreasing token count means some requests were throttled.

What are the recommended monitoring configurations?

See monitoring and alerting best practices.

Table-level monitoring FAQs

Why do monitoring metrics not update after I rename a table?

Only Wide Table Engine > Table-level monitoring reflects the rename. Other metrics — such as system-level metrics — remain unchanged.

Why can I not find my table in table monitoring?

Extend the time range (for example, from 1 hour to 24 hours). If the table still does not appear, it had no read or write activity during that period, so no monitoring data was reported.

HBase compatibility

What is the difference between SQL tables and HBase tables?

SQL tables have fixed schemas with column names and types defined at creation, and support only SQL operations. HBase tables have no fixed schema, support dynamic columns, and are written through HBase APIs (though they can be read via SQL).

Dimension      SQL table                               HBase table
Creation       SQL commands                            hbase shell or HBase sync tools
Schema         Fixed — column types strictly defined   None — dynamic columns supported
Write access   SQL API only                            HBase API only
Read access    SQL API                                 SQL API (see Htype mapping docs)

To check whether a table is a SQL table or HBase table:

SHOW TABLE VARIABLES FROM <database_name> LIKE 'IS_HBASE_LIKE';

  • true: HBase table

  • false: SQL table

Does ApsaraDB for HBase Performance-enhanced Edition support SQL?

Yes. ApsaraDB for HBase Performance-enhanced Edition uses the LindormTable engine (compatible with HBase or Cassandra) and supports SQL. Connect using Lindorm-cli:

./lindorm-cli -url jdbc:lindorm:table:url=http://ld-bp17j28j2y7pm****-proxy-lindorm-pub.lindorm.rds.aliyuncs.com:30060 -username xxx -password xxx
# After connecting
lindorm:default> show databases;
Note

Before connecting, verify network connectivity using telnet and add your client IP to the whitelist.

What should I know before using an open-source HBase client?

Open-source HBase clients do not support authentication or multi-zone deployments. Before connecting to LindormTable, install the HBase SDK.

Batch operations

How do I enable batch deletion?

Warning

Normal deletion rarely causes performance issues. Large-scale deletion accumulates many delete markers, which increases scan overhead and can cause query timeouts. See query timeout after batch deletion.

Note

Batch deletion requires LindormTable 2.8.2.13 or later. See the LindormTable version guide and minor version updates.

-- Enable batch deletion
ALTER SYSTEM SET `lindorm.allow.range.delete`=TRUE;
-- Verify the setting
SHOW SYSTEM VARIABLES LIKE 'lindorm.allow.range.delete';
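
Once enabled, a range delete might look like the following (table and column names are hypothetical):

-- Hypothetical range delete: removes all rows for one device older than a cutoff
DELETE FROM sensor WHERE device_id = 'dev-001' AND ts < 1700000000000;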

Why does batch update fail with "Update's WHERE clause can only contain PK columns"?

Cause: Single-row updates are enabled by default. Batch updates are disabled.

Resolution: Enable batch updates using SQL. Batch update requires LindormTable 2.8.2.13 or later. See the LindormTable version guide and minor version updates.

-- Enable batch update
ALTER SYSTEM SET `lindorm.allow.batch.update`=TRUE;
-- Verify the setting
SHOW SYSTEM VARIABLES LIKE 'lindorm.allow.batch.update';
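
Once enabled, a batch update whose WHERE clause filters on a non-primary-key column might look like this (names are hypothetical):

-- Hypothetical batch update: the WHERE clause uses a non-PK column
UPDATE sensor SET status = 'archived' WHERE region = 'cn-hangzhou';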

If batch updating a table with a secondary index causes query timeouts, see query timeout after batch update with a secondary index.

Query timeout after batch deletion

Cause: LindormTable prioritizes write throughput. DELETE writes a delete marker instead of immediately removing data — deleted data is hidden from reads but physically remains on disk until compaction. Large-scale deletions accumulate many delete markers. For example, a range scan with 100,000 valid rows alongside 1,000,000 deleted rows and 1,000,000 delete markers forces the system to scan approximately 2,100,000 records to return valid results, significantly increasing read latency.

Resolution: Run compaction to permanently remove delete markers and expired data. Compaction can be triggered automatically or manually — see how compaction works.

Query timeout after batch update with a secondary index

Cause: A secondary index is a separate index table. Its primary key is [indexed column value] + [primary table RowKey]. When primary table records are updated, LindormTable automatically deletes old index entries (by writing delete markers) and inserts new ones. Large-scale updates to indexed columns accumulate many delete markers in the index table. Queries using that index must scan all RowKeys for the target value and skip deleted entries — if few entries are valid, scan overhead rises sharply. For example, a range scan with 100,000 valid rows alongside 1,000,000 deleted rows and 1,000,000 delete markers forces the system to scan approximately 2,100,000 records to return valid results.

Resolution: Run compaction to permanently remove delete markers and expired data. Compaction can be triggered automatically or manually — see how compaction works.