
Lindorm: Engines

Last Updated: Mar 28, 2026

Lindorm provides five specialized engines — the wide table engine, time series engine, search engine, compute engine, and streaming engine — each optimized for a specific data workload. The engines are compatible with open-source APIs including HBase, Cassandra, OpenTSDB, Apache Solr, Elasticsearch, Apache Spark, Apache Kafka, and HDFS, and support SQL queries across multiple engines.

Each engine scales independently, so you can right-size resources for each workload without over-provisioning. You can enable multiple engines on a single instance.

Choose an engine

Use the following comparison to match your data type and use case to the appropriate engine.

Wide table engine (LindormTable)

  • Compatible APIs: SQL, HBase API, Cassandra Query Language (CQL), Amazon S3 API

  • Best for: Semi-structured and structured data at scale: metadata, orders, bills, user personas, social information, feeds, logs, and trajectories.

  • Key capabilities: Handles tens of millions of concurrent requests and stores up to hundreds of petabytes. Compared with open-source HBase: 3–7x read/write throughput, 1/10 P99 latency, 2x compression ratio, and 50% lower storage cost. Supports global secondary indexes, multi-dimensional queries, dynamic columns, and time to live (TTL). Includes hot/cold data separation. The built-in GanosBase service supports spatial and spatio-temporal data for large-scale historical trajectory queries.

Time series engine (LindormTSDB)

  • Compatible APIs: HTTP API, OpenTSDB API

  • Best for: Device telemetry, IoT sensor data, and operational metrics where data arrives in time order and queries span a time interval.

  • Key capabilities: Dedicated time series compression for higher compression ratios. Supports SQL queries, multi-dimensional timeline queries, aggregation, downsampling, and elastic scaling.

Search engine (LindormSearch)

  • Compatible APIs: SQL, Apache Solr API, Elasticsearch API

  • Best for: Full-text search and complex multi-dimensional queries over large datasets: logs, text, documents, bills, and user personas.

  • Key capabilities: Decoupled storage and compute. Seamlessly indexes data from the wide table and time series engines. Supports full-text search, aggregation, complex multi-dimensional queries, horizontal scaling, a one-write-multiple-read architecture, cross-zone disaster recovery, and TTL.

Compute engine (LDPS)

  • Compatible APIs: Apache Spark API

  • Best for: Batch production of large amounts of data, interactive analytics, machine learning, and graph computing.

  • Key capabilities: Cloud-native distributed computing compatible with Apache Spark community models and APIs. Deeply integrated with Lindorm storage engines to use underlying data features and indexes for efficient distributed job execution.

Streaming engine

  • Compatible APIs: SQL, Apache Kafka API

  • Best for: Real-time streaming data: IoT data processing, application log processing, logistics aging analysis, travel data processing, and real-time trajectory processing.

  • Key capabilities: Stores and performs lightweight computation on streaming data. Combined with the wide table engine's GanosBase service, supports real-time trajectory analysis including geofencing and regional statistics.
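As a rough illustration of the mapping above, engine selection can be sketched as a keyword lookup. This is a hypothetical helper, not part of any Lindorm SDK; the keywords are assumptions drawn loosely from the "Best for" descriptions:

```python
# Hypothetical helper that maps a workload description to a Lindorm engine,
# roughly following the "Best for" descriptions above. Not an official tool.
ENGINE_KEYWORDS = {
    "wide table engine (LindormTable)": ["orders", "bills", "feeds", "metadata", "trajectories"],
    "time series engine (LindormTSDB)": ["telemetry", "sensor", "metrics"],
    "search engine (LindormSearch)": ["full-text", "search", "documents"],
    "compute engine (LDPS)": ["analytics", "graph", "batch"],
    "streaming engine": ["streaming", "kafka", "real-time"],
}

def suggest_engine(workload: str) -> str:
    """Return the first engine whose keywords match the workload description."""
    text = workload.lower()
    for engine, keywords in ENGINE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return engine
    # The wide table engine is the most general-purpose default.
    return "wide table engine (LindormTable)"

print(suggest_engine("IoT sensor telemetry ingested in time order"))
# -> time series engine (LindormTSDB)
```

In practice you can enable several of these engines on one instance, so a single workload may span more than one engine (for example, wide table storage plus search indexing).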

Choose node specifications and quantity

Lindorm supports horizontal scale-out of engine nodes. Adding nodes can resolve capacity-related issues such as high latency and unstable performance.

However, adding nodes alone cannot resolve single-node hotspot issues — you must upgrade the node specification instead. The node specification determines the single-node hotspot handling capacity. Nodes with insufficient specifications may experience excessive load or out-of-memory (OOM) errors under heavy traffic.

To upgrade node specifications, use the Lindorm console. For more information, see Modify instance specifications. For assistance, contact Lindorm technical support (DingTalk ID: s0s3eg3).

Wide table engine (LindormTable)

LindormTable nodes support specifications from 4 cores / 8 GB to 32 cores / 256 GB.

When Product Type is set to Lindorm, the minimum LindormTable specification is 4 cores / 16 GB.
Some performance optimizations require nodes with more than 16 GB of memory, and some write optimizations require at least 3 nodes. Start with at least 3 nodes at 8 cores / 32 GB each (16 cores / 64 GB preferred).

Select a specification based on your per-node request rate and region count:

Request rate (per node) | Region count (per node) | Recommended specification
< 1,000 requests/s      | < 500 regions           | 4 cores / 16 GB
< 20,000 requests/s     | < 1,000 regions         | 8 cores / 32 GB or higher
> 20,000 requests/s     | > 1,000 regions         | 16 cores / 64 GB or higher

Important

Request rate and region count are not the only sizing factors. Choose a higher specification if any of the following apply:

  • Row sizes reach kilobytes or megabytes.

  • SCAN requests use complex filters.

  • Cache hit rate is low — most requests read from disk.

  • The instance contains many tables.

  • CPU utilization stays at 70% or above. For online services, prioritize larger memory to improve cache hit rates. For offline heavy-load tasks (MapReduce, Spark) or very high TPS/QPS, prioritize more CPU cores.
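The sizing table and the caveats above can be combined into a rough rule of thumb. The sketch below is illustrative only (a hypothetical function, thresholds copied from the table), not an official Alibaba Cloud sizing tool:

```python
# Illustrative LindormTable per-node sizing sketch based on the table above.
# Hypothetical helper -- not an official Alibaba Cloud sizing tool.
def recommend_table_spec(requests_per_sec: float, regions: int,
                         needs_uplift: bool = False) -> str:
    """Pick a per-node spec from request rate and region count.

    needs_uplift covers the "Important" caveats above: large rows, complex
    SCAN filters, low cache hit rate, many tables, or sustained CPU >= 70%.
    """
    if requests_per_sec > 20_000 or regions > 1_000:
        spec = "16 cores / 64 GB or higher"
    elif requests_per_sec >= 1_000 or regions >= 500:
        spec = "8 cores / 32 GB or higher"
    else:
        spec = "4 cores / 16 GB"
    if needs_uplift and spec == "4 cores / 16 GB":
        # The caveats call for at least one step up from the baseline spec.
        spec = "8 cores / 32 GB or higher"
    return spec

print(recommend_table_spec(500, 200))       # 4 cores / 16 GB
print(recommend_table_spec(15_000, 800))    # 8 cores / 32 GB or higher
print(recommend_table_spec(25_000, 1_200))  # 16 cores / 64 GB or higher
```

Treat the output as a starting point only; validate with a load test, since row size, filter complexity, and cache hit rate can shift the requirement by a full tier.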

Time series engine (LindormTSDB)

LindormTSDB nodes support specifications from 4 cores / 8 GB to 32 cores / 256 GB.

When Product Type is set to Lindorm, the minimum LindormTSDB specification is 4 cores / 16 GB.

Select a specification based on your write throughput (measurement points per second), assuming a 3-node cluster:

Write throughput (TPS) | Recommended specification per node
< 1.9M points/s        | 4 cores / 16 GB
< 3.9M points/s        | 8 cores / 32 GB
< 7.8M points/s        | 16 cores / 64 GB
< 11M points/s         | 32 cores / 128 GB

These recommendations assume optimal data processing conditions. Actual capacity depends on your business model, batch size, and concurrency. For measured performance data, see Write test results and Query test results.
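The throughput tiers above can be expressed as a simple lookup. This is a hypothetical sketch (function and tier names are assumptions; the thresholds come from the table, which assumes a 3-node cluster under optimal conditions):

```python
# Illustrative LindormTSDB per-node sizing sketch based on the throughput
# table above (3-node cluster, optimal conditions). Hypothetical helper.
TSDB_TIERS = [
    (1_900_000, "4 cores / 16 GB"),
    (3_900_000, "8 cores / 32 GB"),
    (7_800_000, "16 cores / 64 GB"),
    (11_000_000, "32 cores / 128 GB"),
]

def recommend_tsdb_spec(points_per_sec: float) -> str:
    """Return the smallest per-node spec whose tier covers the write rate."""
    for limit, spec in TSDB_TIERS:
        if points_per_sec < limit:
            return spec
    # Beyond the largest published tier, sizing needs a custom assessment.
    return "contact Lindorm technical support for custom sizing"

print(recommend_tsdb_spec(2_500_000))  # 8 cores / 32 GB
```

Because actual capacity depends on batch size and concurrency, leave headroom rather than sizing exactly to a tier boundary.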