Lindorm: Storage types

Last Updated: Nov 09, 2023

Lindorm uses LindormDFS as its underlying storage. This decouples the storage resources of Lindorm from its computing resources, and the storage resources of your instance are billed separately. You can scale up the storage of an instance without interrupting your business. The storage capacity of a Lindorm instance is shared among multiple engines within the instance.

Storage types supported by Lindorm

The following describes each storage type that Lindorm supports, its typical latency, the scenarios to which it applies, the engines that support it, and its scalability options.

Standard storage

  • Latency: 3 ms to 5 ms

  • Scenario: Standard storage is applicable to scenarios in which data needs to be accessed in real time. For example, you can use this storage type for feed storage, data exchanges in chats, real-time report processing, and online computing.

  • Supported engines: LindormTable, LindormTSDB, LindormSearch, LindormDFS, and the Lindorm streaming engine

  • Scalability: Optional capacity storage can be purchased.

Performance storage

  • Latency: 0.2 ms to 0.5 ms

  • Scenario: Performance storage is applicable to scenarios in which data access requires low latency. For example, you can use this storage type for bid advertising, user persona creation, customer group selection, real-time searches, and risk management.

  • Supported engines: LindormTable, LindormTSDB, LindormSearch, LindormDFS, and the Lindorm streaming engine

  • Scalability: Optional capacity storage can be purchased.

Capacity storage

  • Latency: 15 ms to 3 s

  • Scenario: Capacity storage is applicable to scenarios in which infrequently accessed data is stored. For example, you can use this storage type to store monitoring logs and historical orders, archive audio and video files, store data in data lakes, and compute data offline.

  • Note: Capacity storage uses high-density disk arrays to provide cost-effective storage services and support high read/write throughput. However, it delivers relatively poor random read performance. Capacity storage is suitable for scenarios in which a large number of write requests and a small number of read requests are processed, or for big data analytics scenarios.

  • Supported engines: LindormTable, LindormDFS, and the Lindorm streaming engine

  • Scalability: N/A

Local SSDs

  • Latency: 0.1 ms to 0.3 ms

  • Scenario: Local SSDs are suitable for online business scenarios, such as online gaming, e-commerce, live streaming, and media. This storage type helps you meet the low latency and high I/O performance requirements of I/O-intensive applications for block storage.

  • Supported engines: LindormTable, LindormTSDB, LindormSearch, and LindormDFS

  • Note: If you select Local SSD for Storage Type when you purchase an instance, you can select only Node Spec of Local Disk and the number of engine nodes.

  • Scalability:

    • Optional capacity storage can be purchased.

    • Local SSDs can be pooled together with attached cloud disks.

    • Heterogeneous replicas are supported.

    • Erasure coding that uses 1.5 replicas is supported.

Local HDDs

  • Latency: 10 ms to 300 ms

  • Scenario: Local HDDs are the preferred storage media for industries, such as the Internet and finance industries, that have high requirements for big data computing, storage, and analytics. These disks are suited for mass storage and offline computing scenarios.

  • Supported engines: LindormTable, LindormTSDB, LindormSearch, and LindormDFS

  • Note: If you select Local HDD for Storage Type when you purchase an instance, you can select only Node Spec of Local Disk and the number of engine nodes.

  • Scalability:

    • The access to attached cloud disks can be accelerated.

    • Heterogeneous replicas are supported.

    • Erasure coding that uses 1.5 replicas is supported.

Important
  • Latency refers only to the storage latency, not to the end-to-end latency.

  • By default, local SSDs and HDDs store three replicas of data for redundancy. To ensure that three data replicas are stored for redundancy when one node fails, you must configure at least three nodes for a Lindorm instance that uses local disks.

  • The usage of cloud storage and the usage of local disks are measured in different ways.

    • The usage of performance storage, standard storage, and capacity storage is measured by logical capacity. For example, if the logical size of a database file is 100 GiB, the capacity that is used to store the file in cloud storage is 100 GiB. The availability and reliability of the data are ensured by LindormDFS. You do not need to account for data replicas when you plan the storage capacity.

    • The usage of local SSDs, local HDDs, and attached cloud disks is measured by physical capacity. You must account for the number of data replicas when you plan the storage capacity. For example, if the logical size of a database file is 100 GiB and three replicas of the file are stored on the local HDDs of the Lindorm instance, the capacity that is used to store the file on the local HDDs is 300 GiB. The availability and reliability of the data are ensured by the multiple replicas that LindormDFS generates. By default, three replicas are generated for data stored on local disks and two replicas are generated for data stored on attached cloud disks for data redundancy, as illustrated in the sketch that follows this note.
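
The following Python sketch illustrates the measurement rules above by estimating the capacity that a 100 GiB data set occupies for each storage class. The function name and the storage type labels are illustrative only; the replica counts are the defaults described in this topic (three replicas on local disks and two replicas on attached cloud disks).

# Capacity-planning sketch based on the measurement rules described above.
# The function and labels are illustrative; replica counts follow the defaults
# stated in this topic (3 for local disks, 2 for attached cloud disks).
def billed_capacity_gib(logical_size_gib: float, storage_type: str) -> float:
    """Estimate the capacity that a data set occupies for billing purposes."""
    # Standard, performance, and capacity storage are measured by logical capacity.
    if storage_type in {"standard", "performance", "capacity"}:
        return logical_size_gib  # replicas are managed by LindormDFS and are not billed
    # Local disks are measured by physical capacity: three replicas by default.
    if storage_type in {"local_ssd", "local_hdd"}:
        return logical_size_gib * 3
    # Attached cloud disks store two replicas by default.
    if storage_type == "attached_cloud_disk":
        return logical_size_gib * 2
    raise ValueError(f"unknown storage type: {storage_type}")

print(billed_capacity_gib(100, "capacity"))   # 100 GiB: measured by logical capacity
print(billed_capacity_gib(100, "local_hdd"))  # 300 GiB: three physical replicas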

Scalability for different storage types

The following describes each scalability option in detail.

Optional capacity storage can be purchased.

You can purchase additional capacity storage to store cold data.

Local SSDs can be pooled together with attached cloud disks.

The storage capacity of a single compute node that uses local SSDs may be insufficient for large-scale businesses, and purchasing more compute nodes only to obtain more storage wastes computing resources. Instead, you can attach cloud disks to a Lindorm instance that uses local SSDs. In this case, the local SSDs of the instance and the attached cloud disks are used together as a storage pool.

The access to attached cloud disks can be accelerated.

You can attach cloud disks to a Lindorm instance that uses local HDDs. Cloud disks provide a lower average latency and higher IOPS than local HDDs. You can use the attached cloud disks separately to store hot data, or use them together with the local HDDs to store heterogeneous replicas.

Heterogeneous replicas are supported.

Lindorm allows you to store the replicas of a data file on a combination of high-performance and cost-effective storage media, which are known as heterogeneous replicas. This way, less high-performance storage capacity is used and storage costs are reduced. In normal cases, read requests access the replicas stored on high-performance storage for a better experience. If the nodes that use high-performance storage are unavailable, read requests fall back to the replicas stored on cost-effective storage to ensure data availability and reliability. Heterogeneous replicas are suitable for scenarios in which high performance is required and occasional latency spikes are acceptable.

Lindorm supports the following combinations of high-performance and cost-effective storage media for heterogeneous replicas:

  • One replica in local SSDs or cloud disks + one replica in capacity storage

  • One replica in cloud disks + two replicas in local HDDs

Note

To activate heterogeneous replicas, contact the technical support of Lindorm (DingTalk ID: s0s3eg3).

Erasure coding that uses 1.5 replicas is supported.

You can enable erasure coding that uses 1.5 replicas for Lindorm instances that use local SSDs or HDDs. After you enable this feature for an instance, the redundancy for data replicas in the instance is reduced from 3 to 1.5. By default, Lindorm uses the RS-4-2 algorithm for data redundancy.

For example, if you enable erasure coding that uses 1.5 replicas for an instance, each piece of data is stored as four data blocks and two parity blocks that are distributed across six storage nodes. An additional storage node is required to ensure data availability when a node fails. Therefore, you must configure at least seven storage nodes for the instance.
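
As a quick check of the 1.5 figure, the following sketch derives the redundancy factor and the minimum node count from the RS-4-2 parameters described above. The variable names are illustrative only.

# RS-4-2 erasure coding: each stripe consists of 4 data blocks and 2 parity blocks.
data_blocks = 4
parity_blocks = 2

# Redundancy factor: total blocks divided by data blocks.
redundancy_factor = (data_blocks + parity_blocks) / data_blocks
print(redundancy_factor)  # 1.5, compared with 3.0 for the default three-replica storage

# A stripe is spread across 6 storage nodes; one additional node is required
# so that data remains available when a node fails.
min_storage_nodes = data_blocks + parity_blocks + 1
print(min_storage_nodes)  # 7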

Note

To enable erasure coding that uses 1.5 replicas, contact the technical support of Lindorm (DingTalk ID: s0s3eg3).