
Lindorm: What is Lindorm?

Last Updated: Mar 28, 2026

Lindorm is a cloud-native, multi-modal, hyperconverged database built for Internet of Things (IoT), Internet, and Internet of vehicles workloads. It stores, queries, and analyzes wide table, time series, text, object, stream, and spatial data through a single unified service — eliminating the need to run separate databases for each data type.

Compatible interfaces: SQL · HBase · Cassandra · S3 · TSDB · HDFS · Solr · Kafka

The problem with traditional data architectures

Most data-intensive applications outgrow a single database. Teams end up maintaining separate systems for structured records, time series metrics, full-text search, and object storage. Each component has its own API, scaling model, and failure mode. The result: complex technology stacks, long data synchronization pipelines, and high operational overhead.

5G and IoT workloads amplify this problem. Device fleets generate high-velocity time series alongside metadata records, event streams, and spatial traces — all at once, all requiring low-latency access.

Lindorm addresses this by providing a single storage and query layer for all these data types, with independent elastic scaling of compute and storage.


Core capabilities

  • Multi-modal hyper-convergence: Supports wide table, time series, object, text, queue, and spatial data models in one service. Data is shared across models, and unified SQL enables cross-model federated queries.

  • High cost-effectiveness: Handles tens of millions of concurrent requests at millisecond-level latency. Reduces storage costs through multi-level storage media, intelligent hot and cold data separation, and adaptive compression.

  • Cloud-native elasticity: Compute and storage resources scale independently.

  • Open and compatible: Compatible with SQL, HBase, Cassandra, S3, TSDB, HDFS, Solr, and Kafka. Integrates with the Hadoop, Spark, Flink, and Kafka ecosystems.

For a complete breakdown, see Features and Benefits.
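To make the cross-model federated query idea concrete, the sketch below emulates in plain Python what such a query computes: joining wide table metadata with time series metrics in one statement. The device and metric data are invented for illustration; in Lindorm the unified SQL layer performs this join server-side, without moving data between engines.

```python
# Illustration only: what a cross-model federated query computes.
# A wide table holds device metadata; a time series store holds metrics.
# Lindorm's unified SQL layer would join these server-side; here we
# emulate the join in plain Python to show the semantics.

devices = {  # wide table: device_id -> metadata record
    "dev-1": {"model": "sensor-A", "region": "hz"},
    "dev-2": {"model": "sensor-B", "region": "sh"},
}

metrics = [  # time series: (device_id, timestamp, temperature)
    ("dev-1", 1000, 21.5),
    ("dev-1", 1060, 22.5),
    ("dev-2", 1000, 30.0),
]

def avg_temp_by_region():
    """Equivalent of: SELECT region, AVG(temperature) ... GROUP BY region."""
    sums = {}
    for dev, _ts, temp in metrics:
        region = devices[dev]["region"]
        total, n = sums.get(region, (0.0, 0))
        sums[region] = (total + temp, n + 1)
    return {r: total / n for r, (total, n) in sums.items()}

print(avg_temp_by_region())  # {'hz': 22.0, 'sh': 30.0}
```

Without a shared storage layer, the same result would require an ETL pipeline to copy one dataset into the other system before joining.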

Service architecture

Lindorm uses a compute-storage separation architecture with a shared multi-modal data layer. All engines write to and read from a single distributed storage foundation — LindormDFS — which eliminates per-engine data silos.


Key components:

  • LindormDFS: The cloud-native distributed file system that serves as the unified storage foundation for all engines.

  • Multi-modal engines: Dedicated engines for wide table, time series, search, vector, column store, compute, and AI workloads — all running on LindormDFS.

  • Unified SQL layer: Cross-model federated queries without moving data between systems.

  • Open interfaces: HBase, Cassandra, OpenTSDB, Spark, and HDFS APIs for zero-friction migration of existing workloads.

  • Lindorm Tunnel Service (LTS): Real-time data forwarding and change data capture between engines. Supports data migration, real-time subscription, data lake dumping, data warehouse backflow, multi-active geo-redundancy, and backup and recovery.
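The pattern behind LTS — writers append change events to a log, and each downstream consumer tracks its own read offset — can be sketched as follows. This is a toy in-process model to show why one changelog can serve migration, subscription, and backup readers independently; the real LTS is a managed service, not this class.

```python
# Minimal sketch of the change-data-capture pattern LTS implements:
# writers append change events to an ordered log; each consumer keeps
# its own offset, so different downstream jobs progress independently.

class ChangeLog:
    def __init__(self):
        self.events = []   # ordered change events
        self.offsets = {}  # consumer name -> next index to read

    def append(self, event):
        self.events.append(event)

    def poll(self, consumer, max_events=10):
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:start + max_events]
        self.offsets[consumer] = start + len(batch)
        return batch

log = ChangeLog()
log.append({"op": "put", "row": "dev-1", "value": 42})
log.append({"op": "delete", "row": "dev-2"})

# Two independent consumers, e.g. a data-lake dumper and a backup job.
lake = log.poll("lake")      # sees the first two ops
log.append({"op": "put", "row": "dev-3", "value": 7})
backup = log.poll("backup")  # sees all three ops
lake2 = log.poll("lake")     # sees only the new op
```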

Multi-modal engines

Lindorm runs seven specialized engines on a shared storage foundation. Each engine is optimized for a specific data model and workload pattern while sharing data with other engines through unified SQL.

Wide table engine

Stores and serves wide table and object data. Designed for high-throughput, low-latency access to records with flexible schemas.

Compatible interfaces: SQL, HBase, Cassandra (CQL), S3

Features: Global secondary indexes, multi-dimensional retrieval, dynamic columns, TTL, hot and cold data separation

Performance:

  • Scale: tens of millions of concurrent requests, petabyte-scale storage

  • Throughput: 3–7x that of Apache HBase

  • P99 latency: one-tenth that of Apache HBase

  • Fault recovery: 10x faster than Apache HBase

  • Compression ratio: twice that of Apache HBase, roughly halving overall storage cost

Use cases: Metadata, orders, bills, user personas, social networking, feed streams, logs
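Two of the features listed above — dynamic columns (each row may carry a different column set) and TTL — can be modeled in a few lines of plain Python. The class and data below are purely illustrative; Lindorm enforces schemas and TTL expiry server-side.

```python
import time

# Toy model of a wide table with dynamic columns and per-cell TTL.
# Each row maps column names to (value, expire_at-or-None) pairs.

class WideTable:
    def __init__(self):
        self.rows = {}  # row key -> {column: (value, expire_at or None)}

    def put(self, row, column, value, ttl=None, now=None):
        now = time.time() if now is None else now
        expire = now + ttl if ttl is not None else None
        self.rows.setdefault(row, {})[column] = (value, expire)

    def get(self, row, now=None):
        now = time.time() if now is None else now
        cells = self.rows.get(row, {})
        return {c: v for c, (v, exp) in cells.items()
                if exp is None or exp > now}

t = WideTable()
t.put("user:1", "name", "alice", now=0)              # no TTL
t.put("user:1", "session", "tok123", ttl=60, now=0)  # expires at t=60
t.put("user:2", "last_login", 1700000000, now=0)     # different columns per row

print(t.get("user:1", now=30))   # both cells still visible
print(t.get("user:1", now=120))  # 'session' expired, only 'name' left
```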

Time series engine

Stores and queries measurement data, monitoring metrics, and device operating data. The compression algorithm is designed specifically for time series data and achieves up to a 10:1 compression ratio.

Features: SQL-based management and querying, native Prometheus Query Language (PromQL) support, multi-dimensional queries and aggregate computing, pre-downsampling, continuous queries

Use cases: Industrial IoT, infrastructure monitoring, device telemetry
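Pre-downsampling, listed among the features above, reduces raw points to one aggregate per fixed time bucket so that long-range queries scan far fewer rows. A minimal sketch of the computation (sample data and interval chosen for illustration; the engine performs this at ingestion time):

```python
# Bucket raw (timestamp, value) points by a fixed interval and reduce
# each bucket to one aggregate (mean by default). Illustrative only.

def downsample(points, interval, agg=lambda vs: sum(vs) / len(vs)):
    """points: iterable of (timestamp, value); returns {bucket_start: aggregate}."""
    buckets = {}
    for ts, v in points:
        buckets.setdefault(ts - ts % interval, []).append(v)
    return {start: agg(vals) for start, vals in sorted(buckets.items())}

raw = [(0, 1.0), (10, 3.0), (70, 5.0), (80, 7.0)]
print(downsample(raw, 60))  # {0: 2.0, 60: 6.0}
```

Swapping the `agg` callable (e.g. `max` or `min`) yields the other common downsampling aggregators.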

Search engine

Accelerates retrieval and analysis across multi-modal data using column store and inverted index technologies.

Compatible interfaces: SQL, open source Solr

Features: Full-text search, aggregate computing, complex multi-dimensional queries

Use cases: Logs, bills, user personas
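The inverted index mentioned above is the core structure behind full-text retrieval: a map from each term to the set of documents containing it. A minimal sketch with invented documents (real search engines add tokenization, ranking, and on-disk layouts on top of this idea):

```python
from collections import defaultdict

# Minimal inverted index: term -> set of document ids.

def build_index(docs):
    """docs: {doc_id: text}; returns the term -> doc-id-set mapping."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-query: ids of documents containing every query term."""
    sets = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {1: "payment failed for order", 2: "order shipped", 3: "payment ok"}
idx = build_index(docs)
print(search(idx, "payment order"))  # {1}
```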

Compute engine

A distributed computing service deeply integrated with the Lindorm storage engine. Compute resources are dedicated to and owned by the user.

Compatible interface: Open source Spark

Use cases: Data production, interactive analysis, machine learning, graph computing

Vector engine

Stores, indexes, and retrieves massive amounts of vector data. Supports multiple index algorithms, distance functions, and hybrid retrieval methods.

Key capability: Full-text and vector hybrid retrieval for retrieval-augmented generation (RAG) systems, improving the accuracy of large model responses.

Use cases: Recommendation systems, NLP services, AI chat applications
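Hybrid retrieval can be understood as score fusion: each candidate gets a keyword-match score and a vector-similarity score, combined with a weight. The sketch below uses cosine similarity and a simple term-overlap score on invented data; Lindorm's actual ranking functions and index algorithms are internal to the vector engine.

```python
import math

# Toy full-text + vector hybrid ranking via weighted score fusion.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_rank(query_terms, query_vec, docs, alpha=0.5):
    """docs: {id: (term_set, vector)}; returns doc ids best-first."""
    def score(doc_id):
        terms, vec = docs[doc_id]
        kw = len(query_terms & terms) / max(len(query_terms), 1)
        return alpha * kw + (1 - alpha) * cosine(query_vec, vec)
    return sorted(docs, key=score, reverse=True)

docs = {
    "a": ({"battery", "life"}, [1.0, 0.0]),
    "b": ({"battery"}, [0.6, 0.8]),
    "c": (set(), [0.0, 1.0]),
}
print(hybrid_rank({"battery", "life"}, [1.0, 0.0], docs))  # ['a', 'b', 'c']
```

Tuning `alpha` trades keyword precision against semantic recall, which is the knob RAG systems typically adjust.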

Column store engine

A high-performance, low-cost online column store database engine designed for high-volume write and analytical workloads.

Features: Efficient reads and writes, high-compression storage, high-performance online analysis

Use cases: IoT, Internet of vehicles, logs

AI engine

Provides one-stop, integrated AI inference on resources owned by the user. Use Lindorm SQL to import and deploy pre-trained models for intelligent analysis and processing of multi-modal data at scale.