ApsaraDB for Lindorm (Lindorm) is a cloud native multi-model database service developed by Alibaba Cloud. It provides a wide table engine, a time series engine, a search engine, and a file engine. Lindorm is compatible with the open standards of multiple open source projects, such as Apache HBase, Apache Phoenix, Apache Cassandra, OpenTSDB, Apache Solr, and Hadoop Distributed File System (HDFS). It also provides capabilities such as SQL queries, time series data processing, and text retrieval and analysis. To meet the requirements of dynamic workloads, each engine supports independent auto scaling. The wide table engine and the time series engine support high concurrency and high throughput.

Engines

The features provided by the engines of Lindorm vary based on the engine type. The following sections describe each engine. You can select one or more engines based on your business needs.

Wide table engine
Compatibility: Compatible with the HBase API, Cassandra Query Language (CQL), and Phoenix SQL.
Scenarios: Storage of metadata, orders, bills, user personas, social information, feeds, and logs.
Description: The wide table engine provides distributed storage for large amounts of semi-structured and structured data. It supports global secondary indexes, multi-dimensional searches, dynamic columns, and Time to Live (TTL). It can handle tens of millions of concurrent requests, store petabytes of data, and separate hot and cold data. Compared with open source HBase, the wide table engine provides 2 to 6 times higher read/write performance, 90% lower 99th percentile (P99) latency, a 100% higher compression ratio, and 50% lower storage costs.
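
Because the wide table engine is compatible with the HBase API, data can be read and written with the open source HBase Java client. The following is a minimal sketch, not a definitive implementation: the connection address, the orders table, the cf column family, and the row key are placeholder assumptions, and any authentication settings required by your instance should be taken from the console.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WideTableExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; copy the actual connection address from the console.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "ld-xxxx-proxy-lindorm.lindorm.rds.aliyuncs.com:30020");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("orders"))) {
            // Write one row: row key "order-0001", column family "cf", column "amount".
            Put put = new Put(Bytes.toBytes("order-0001"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("amount"), Bytes.toBytes("42.50"));
            table.put(put);

            // Read the same row back and print the stored value.
            Result result = table.get(new Get(Bytes.toBytes("order-0001")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("amount"));
            System.out.println("amount = " + Bytes.toString(value));
        }
    }
}
```
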
Time series engine
Compatibility: Provides an HTTP API and is compatible with the OpenTSDB API.
Scenarios: Storage and processing of time series data, such as measurement data and device operational data, in Internet of Things (IoT) and monitoring scenarios.
Description: The time series engine is a distributed storage engine for processing large amounts of time series data. It supports SQL queries and provides a compression algorithm dedicated to time series data, with a compression ratio of up to 15:1. The time series engine allows you to perform multi-dimensional queries and aggregate large amounts of time series data by timeline. It also supports downsampling and elastic scaling.
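
Because the time series engine exposes an OpenTSDB-compatible HTTP API, a data point can be written with a standard OpenTSDB-style POST request to /api/put. The sketch below uses the Java 11 HTTP client; the endpoint address and port, the metric name, and the tags are placeholder assumptions, and the actual address of your instance is shown in the console.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TimeSeriesWriteExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; use the HTTP address of your time series engine instance.
        String endpoint = "http://ld-xxxx-tsdb.lindorm.rds.aliyuncs.com:8242/api/put";

        // One data point in OpenTSDB JSON format: metric, Unix timestamp, value, and tags.
        String dataPoint = "[{\"metric\":\"device.temperature\","
                + "\"timestamp\":1700000000,"
                + "\"value\":23.5,"
                + "\"tags\":{\"device_id\":\"sensor-01\",\"room\":\"lab-3\"}}]";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(dataPoint))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // A 2xx status code indicates that the data point was accepted.
        System.out.println("HTTP status: " + response.statusCode());
    }
}
```
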
Search engine
Compatibility: Compatible with the Solr API.
Scenarios: Queries over large amounts of data, such as logs, text, and documents. For example, you can use the search engine to search for bills and user personas.
Description: The search engine is a distributed search engine that uses a decoupled storage and computing architecture. It can seamlessly store the indexes of the wide table engine and the time series engine to accelerate data retrieval. The search engine provides capabilities such as full-text search, aggregation, and complex multi-dimensional queries. It also provides horizontal scaling, write-once-read-many access, cross-zone disaster recovery, and TTL to meet the needs of efficient retrieval over large amounts of data.
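
Because the search engine is compatible with the Solr API, full-text queries can be issued with the open source SolrJ client. The following is an illustrative sketch only: the endpoint, the log_index collection, and the queried fields are assumptions, and your deployment may use a different Solr client or connection mode.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SearchQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; use the Solr-compatible address of your search engine instance.
        String baseUrl = "http://ld-xxxx-solr.lindorm.rds.aliyuncs.com:30070/solr";

        try (HttpSolrClient client = new HttpSolrClient.Builder(baseUrl).build()) {
            // Full-text query against a hypothetical "log_index" collection.
            SolrQuery query = new SolrQuery("message:timeout AND level:ERROR");
            query.setRows(10);

            QueryResponse response = client.query("log_index", query);
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("id") + " -> " + doc.getFieldValue("message"));
            }
        }
    }
}
```
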
File engine
Compatibility: Compatible with Hadoop Distributed File System (HDFS) APIs.
Scenarios: Enterprise-grade data lake storage, Hadoop-based storage foundations, and archiving and compression of historical data.
Description: The file engine provides cloud native storage capabilities and is compatible with the HDFS communication protocol. You can connect to the file engine directly by using open source HDFS clients and use all of its features by calling the open source HDFS APIs. You can also seamlessly connect the file engine to the open source HDFS ecosystem and cloud computing ecosystems. The file engine is developed and optimized based on HDFS. It can store exabytes of data at a low cost and automatically scale up within minutes. It also provides features such as horizontal bandwidth scaling and automatic, transparent data compression (available soon). The file engine is suitable for building enterprise-grade, low-cost data lakes based on HDFS, and its decoupled storage and computing architecture reduces the overall cost.
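
Because the file engine is compatible with the HDFS APIs, it can be accessed with the standard Hadoop FileSystem client. The sketch below writes and lists a file under an assumed /archive directory; the HDFS endpoint and paths are placeholders, and the configuration files for your instance (available in the console) would normally be placed on the classpath.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileEngineExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; copy the actual HDFS connection address from the console.
        URI uri = URI.create("hdfs://ld-xxxx-lindorm.lindorm.rds.aliyuncs.com:8020");
        Configuration conf = new Configuration();

        try (FileSystem fs = FileSystem.get(uri, conf)) {
            // Write a small archive file under a hypothetical /archive directory.
            Path file = new Path("/archive/2024/orders-2023.csv");
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.writeBytes("order_id,amount\n0001,42.50\n");
            }

            // List the directory to confirm that the file is visible.
            for (FileStatus status : fs.listStatus(new Path("/archive/2024"))) {
                System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
            }
        }
    }
}
```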