
Lindorm: Storage types

Last Updated: Mar 28, 2026

Lindorm uses LindormDFS as the underlying storage system. Storage resources are decoupled from computing resources, so storage is billed separately and can scale without interrupting your workloads. Storage capacity is shared across all engines within a single Lindorm instance.

Storage classes

Lindorm offers five storage types. Before choosing one, note how each category is billed, because the billing basis directly affects capacity planning and cost.

Billing measurement

| Storage category | Billing basis | Capacity planning |
| --- | --- | --- |
| Performance storage, standard storage, capacity storage | Logical capacity | No replica math needed. A 100 GiB database file consumes 100 GiB of storage. LindormDFS handles redundancy. |
| Local SSDs, local HDDs, attached cloud disks | Physical capacity | Multiply logical size by replica count. A 100 GiB database file stored with three replicas consumes 300 GiB. |

By default, local disk data is stored with three replicas; cloud disk data with two replicas.
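The two billing bases can be illustrated with a short sketch. The function name, parameters, and replica defaults below are assumptions for illustration only, not part of any Lindorm API:

```python
def billed_gib(logical_gib: float, billing_basis: str, replicas: int = 3) -> float:
    """Estimate billed storage for a file of the given logical size.

    billing_basis: "logical" for performance, standard, and capacity storage;
                   "physical" for local disks and attached cloud disks.
    replicas: replica count for physically billed media (3 by default for
              local disks, 2 for cloud disks).
    """
    if billing_basis == "logical":
        # LindormDFS redundancy is not billed: 100 GiB of data costs 100 GiB.
        return logical_gib
    if billing_basis == "physical":
        # Every replica is billed: 100 GiB with three replicas costs 300 GiB.
        return logical_gib * replicas
    raise ValueError(f"unknown billing basis: {billing_basis}")
```

For example, a 100 GiB file costs 100 GiB on standard storage, 300 GiB on local disks with three replicas, and 200 GiB on cloud disks with two replicas.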

Storage type comparison

| Storage type | Storage latency | Use cases | Supported engines | Scalability |
| --- | --- | --- | --- | --- |
| Standard storage | 3–5 ms | Real-time data access for streaming data, chat applications, real-time reporting, and online computing | LindormTable, LindormTSDB, LindormSearch, LindormDFS, and the Lindorm streaming engine | Optional capacity storage can be purchased |
| Performance storage | 0.2–0.5 ms | Latency-sensitive workloads such as ad bidding, user personas, audience segmentation, real-time search, and risk control | LindormTable, LindormTSDB, LindormSearch, LindormDFS, and the Lindorm streaming engine | Optional capacity storage can be purchased |
| Capacity storage | 15 ms–3 s | Infrequently accessed data: monitoring logs, historical orders, audio and video archives, data lake storage, and offline computing | LindormTable, LindormDFS, and the Lindorm streaming engine | N/A |
| Local SSDs | 0.1–0.3 ms | I/O-intensive online workloads such as online gaming, e-commerce, ApsaraVideo Live, and media that require ultra-low latency and high I/O throughput | LindormTable, LindormTSDB, LindormSearch, and LindormDFS | Optional capacity storage can be purchased; local SSDs can be pooled with attached cloud disks; heterogeneous replicas and erasure coding are supported |
| Local HDDs | 10–300 ms | Massive data storage, offline computing, and big data analytics in internet and finance industries | LindormTable, LindormTSDB, LindormSearch, and LindormDFS | Attached cloud disks for acceleration; heterogeneous replicas and erasure coding are supported |
Important

Storage latency values reflect storage-layer latency only, not end-to-end latency.

Capacity storage uses high-density disk arrays to deliver cost-effective storage with high read/write throughput. Random read performance is lower compared to other storage types. Capacity storage is best suited for write-heavy workloads and big data analytics. For details on read behavior, see Capacity storage read throttling.

Local SSDs and local HDDs store three replicas by default. To maintain three replicas when one node fails, configure at least three nodes for any Lindorm instance that uses local disks.

When purchasing a local SSD or local HDD instance, select Node Spec of Local Disk and specify the number of data engine nodes. Other storage options are not available for local disk instances.

Choose a storage type

Match your workload to a storage type using the following guidelines.

| If your workload requires... | Choose |
| --- | --- |
| Sub-millisecond latency for online applications | Local SSDs (0.1–0.3 ms) |
| Sub-millisecond latency without managing local disks | Performance storage (0.2–0.5 ms) |
| Reliable real-time access with moderate latency | Standard storage (3–5 ms) |
| Low-cost storage for infrequently accessed or archival data | Capacity storage (15 ms–3 s) |
| High-density storage for big data analytics with local throughput | Local HDDs (10–300 ms) |

If your per-request latency budget is under 1 ms, start with local SSDs or performance storage. If storage cost is the primary concern and data is accessed infrequently, capacity storage is the most cost-effective option.
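The guidance above can be condensed into a simple selector. The function below, its parameter names, and its thresholds are illustrative assumptions only, not part of any Lindorm SDK:

```python
def recommend_storage(latency_budget_ms: float,
                      managed: bool = True,
                      access: str = "frequent") -> str:
    """Map a latency budget and access pattern to a storage type,
    following the selection table above.

    managed: True if you prefer not to manage local disks yourself.
    access: "frequent" or "infrequent".
    """
    if access == "infrequent":
        # Cold or archival data: cost matters more than latency.
        return "Capacity storage"
    if latency_budget_ms < 1:
        # Sub-millisecond budget: local SSDs, or performance
        # storage when local disks should stay managed for you.
        return "Performance storage" if managed else "Local SSDs"
    if latency_budget_ms <= 5:
        # Moderate real-time latency.
        return "Standard storage"
    # Relaxed latency, high-density analytics.
    return "Local HDDs"
```

For example, a 0.5 ms budget with managed storage maps to performance storage, while infrequently accessed archives map to capacity storage regardless of latency.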

Extension capabilities

Optional capacity storage

Standard storage and performance storage instances can add capacity storage to hold cold data at a lower cost. This lets you keep hot data on faster storage while archiving older data without migrating the instance.

Local SSD and cloud disk pooling

A single compute node with local SSDs may not have enough storage for large-scale workloads, but adding nodes to get more storage wastes computing resources. Instead, attach cloud disks to the instance. The local SSDs and attached cloud disks form a single storage pool.

Cloud disk acceleration for local HDD instances

Attach cloud disks to a local HDD instance to lower average latency and increase IOPS for a subset of your data. Use the attached cloud disks exclusively for hot data, or combine them with local HDDs as heterogeneous replicas.

Heterogeneous replicas

Heterogeneous replicas store different replicas of the same data file on different storage media—one on high-performance storage and another on cost-effective storage. Under normal conditions, reads are served from the high-performance replica. If those nodes are unavailable, reads fall back to the cost-effective replica.

Use heterogeneous replicas when high performance is the priority and occasional request latency spikes are acceptable.

Supported combinations:

  • One replica on local SSDs or cloud disks + one replica on capacity storage

  • One replica on cloud disks + two replicas on local HDDs
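The read path described above can be sketched as a fallback loop. The `Replica` class and `read_with_fallback` function are hypothetical, included only to show the ordering of the read attempts:

```python
class Replica:
    """Hypothetical replica handle: a storage medium plus an availability flag."""

    def __init__(self, medium: str, available: bool = True):
        self.medium = medium
        self.available = available


def read_with_fallback(replicas):
    """Serve reads from the high-performance replica first, falling back
    to the cost-effective replica only when the faster nodes are
    unavailable. `replicas` is ordered fastest-first, e.g. a local SSD
    replica before a capacity storage replica."""
    for replica in replicas:
        if replica.available:
            return replica.medium  # the medium that actually serves the read
    raise RuntimeError("no replica available")
```

Under normal conditions the high-performance replica answers every read; the latency spike mentioned above occurs only on fallback.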

To activate heterogeneous replicas, contact Lindorm technical support (DingTalk ID: s0s3eg3).

Erasure coding (RS-4-2)

Lindorm instances that use local SSDs or local HDDs can enable RS-4-2 erasure coding (Reed-Solomon), which reduces redundancy overhead from 3x replication to 1.5x. RS-4-2 splits each stripe into four data blocks and two parity blocks distributed across six storage nodes, and one additional node is required for recovery, so the instance must have at least seven storage nodes.
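The 1.5x and seven-node figures follow directly from the RS-4-2 stripe shape: four data blocks plus two parity blocks. A minimal check of that arithmetic (nothing here is a Lindorm API):

```python
def rs_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Storage overhead of a Reed-Solomon RS-k-m scheme: total blocks
    written per block of user data."""
    return (data_blocks + parity_blocks) / data_blocks


# RS-4-2: each 4-block stripe is stored as 6 blocks -> 1.5x overhead,
# versus 3.0x for plain three-way replication.
assert rs_overhead(4, 2) == 1.5

# Minimum nodes: one per stripe block (4 data + 2 parity) plus one
# additional node for recovery.
min_nodes = 4 + 2 + 1
assert min_nodes == 7
```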

To enable erasure coding, contact Lindorm technical support (DingTalk ID: s0s3eg3).