Local disks are located on the physical machines that host their associated Elastic Compute Service (ECS) instances and provide local storage for those instances. Local disks are cost-effective and offer high random IOPS, high throughput, and low latency.

Limits

  • All of the local disks for an instance reside on a single physical machine, which increases single point of failure (SPOF) risks. The durability of data stored on a local disk is determined by the reliability of the associated physical machine.
    Warning Data stored on local disks may be lost if a hardware failure occurs on the associated physical machine. We recommend that you store only temporary data on local disks.
    • To ensure data availability, we recommend that you implement data redundancy at the application layer. You can use deployment sets to distribute ECS instances across multiple physical machines for high availability and disaster recovery. For more information, see Create a deployment set.
    • If your applications do not utilize an architecture that prioritizes data reliability, we recommend that you use cloud disks or a backup service with ECS instances to improve data reliability. For more information, see Disks or What is HBR?.
  • After you purchase an ECS instance that has local disks attached, you must log on to the instance to partition and format the local disks. For more information, see Initialize a data disk whose size does not exceed 2 TiB on a Linux instance or Initialize a data disk up to 2 TiB in size on a Windows instance. A minimal example of this step appears after this list.
  • Local disks do not support the following operations:
    • Create a separate local disk.
    • Use a snapshot to create a local disk.
    • Attach a local disk.
    • Detach and release a local disk.
    • Resize a local disk.
    • Re-initialize a local disk.
    • Create a snapshot for a local disk.
    • Use a snapshot to roll back a local disk.
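
The initialization step mentioned above can be scripted. The following is only a minimal sketch, not the official procedure from the linked topics: it assumes a Linux instance with root privileges on which the local disk is exposed as /dev/vdb (a hypothetical device name; confirm the actual name with lsblk before running, because partitioning and formatting the wrong device destroys its data).

```python
import subprocess

DEVICE = "/dev/vdb"              # hypothetical device name; verify with `lsblk`
PARTITION = DEVICE + "1"         # first partition on the disk
MOUNT_POINT = "/mnt/local-disk"  # example mount point

def run(cmd):
    """Run a command and raise an error if it exits with a non-zero status."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a GPT partition table with a single partition that spans the whole disk.
run(["parted", "-s", DEVICE, "mklabel", "gpt", "mkpart", "primary", "0%", "100%"])
# Ask the kernel to re-read the partition table.
run(["partprobe", DEVICE])
# Format the new partition with ext4 and mount it.
run(["mkfs.ext4", PARTITION])
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", PARTITION, MOUNT_POINT])
```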

Disk categories

Note This topic provides information about local disks that are purchased together with ECS instances. For more information about the performance of instance families that are equipped with local SSDs and big data instance families, see Instance families.

Local disks are suited for scenarios that require high storage I/O performance, mass storage, and high cost efficiency. Alibaba Cloud provides the following two categories of local disks:

  • Local Non-Volatile Memory Express (NVMe) SSD
    • Instance families: The following instance families use local NVMe SSDs:
      • Instance families equipped with local SSDs: i4, i4g, i4r, i3, i3g, i2, i2g, i2ne, i2gne, and i1
      • GPU-accelerated compute-optimized instance family: gn5
    • Use scenarios: Instance families equipped with local NVMe SSDs are suited for the following scenarios:
      • I/O-intensive applications that require high I/O performance and low latency, such as online gaming, e-commerce, live streaming, and media
      • Applications that require high storage I/O performance and a high-availability architecture at the application layer, such as NoSQL databases (including Cassandra, MongoDB, and HBase), MPP data warehouses, and distributed file systems
  • Local SATA HDD
    • Instance families: The d3s, d2c, d2s, d1ne, and d1 big data instance families use local SATA HDDs.
    • Use scenarios: Local SATA HDDs are the preferred storage media for industries, such as the Internet and finance industries, that have high requirements for big data computing, storage, and analytics. These disks are suited for mass storage and offline computing scenarios and can meet the high requirements of distributed computing services such as Hadoop in terms of storage performance, storage capacity, and internal bandwidth.

Local NVMe SSDs

Note You can test the bandwidth, IOPS, and latency of local NVMe SSDs to obtain the benchmark performance data and measure the Quality of Service (QoS) of Alibaba Cloud local disks. For more information, see Commands used to test the performance of local disks.
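
For reference, such a test can be driven with a tool like fio. The following is only a minimal sketch, not the official test commands from the linked topic; the device name /dev/nvme1n1 and the fio parameters are placeholder assumptions, and write tests against a raw device destroy its data, so benchmark only disks that hold no data you need.

```python
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical device name; verify with `lsblk` first

# 4 KiB random-read test, a common way to measure read IOPS.
cmd = [
    "fio",
    "--name=local-disk-randread",
    f"--filename={DEVICE}",
    "--direct=1",          # bypass the page cache so the disk itself is measured
    "--ioengine=libaio",
    "--rw=randread",
    "--bs=4k",
    "--iodepth=32",
    "--numjobs=4",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)
```
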
  • The following table describes the performance metrics of the local NVMe SSDs that the d3c compute-intensive big data instance family uses.

    | Performance metric | Single disk performance | ecs.d3c.3xlarge | ecs.d3c.7xlarge | ecs.d3c.14xlarge |
    |---|---|---|---|---|
    | Maximum read IOPS | 8,000 | 8,000 | 16,000 | 32,000 |
    | Maximum read throughput | 4 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |
    | Maximum write throughput | 2 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |
  • The following table describes the performance metrics of the local NVMe SSDs that the i4 instance family uses.

    | Performance metric | ecs.i4.large | ecs.i4.xlarge | ecs.i4.2xlarge | ecs.i4.4xlarge | ecs.i4.8xlarge | ecs.i4.16xlarge | ecs.i4.32xlarge |
    |---|---|---|---|---|---|---|---|
    | Maximum read IOPS | 112,500 | 225,000 | 450,000 | 900,000 | 1,800,000 | 3,600,000 | 7,200,000 |
    | Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s | 48 GB/s |
    | Maximum write throughput | 0.375 GB/s | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s |
  • The following table describes the performance metrics of the local NVMe SSDs that the i4g and i4r instance families use.

    | Performance metric | ecs.i4g.4xlarge and ecs.i4r.4xlarge | ecs.i4g.8xlarge and ecs.i4r.8xlarge | ecs.i4g.16xlarge and ecs.i4r.16xlarge | ecs.i4g.32xlarge and ecs.i4r.32xlarge |
    |---|---|---|---|---|
    | Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 |
    | Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s |
    | Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |
  • The following table describes the performance metrics of the local NVMe SSDs that the i3 instance family uses.

    | Performance metric | ecs.i3.xlarge | ecs.i3.2xlarge | ecs.i3.4xlarge | ecs.i3.8xlarge | ecs.i3.13xlarge | ecs.i3.26xlarge |
    |---|---|---|---|---|---|---|
    | Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 3,000,000 | 6,000,000 |
    | Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 18 GB/s | 36 GB/s |
    | Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 12 GB/s | 24 GB/s |

    Note The performance data in the preceding table represents the highest performance levels of local storage in the i3 instance family. We recommend that you use images that contain Linux kernel 4.10 or later, such as Alibaba Cloud Linux 2 and CentOS 8.x images, to obtain optimal performance.
  • The following table describes the performance metrics of the local NVMe SSDs that the i3g instance family uses.

    | Performance metric | ecs.i3g.2xlarge | ecs.i3g.4xlarge | ecs.i3g.8xlarge | ecs.i3g.13xlarge | ecs.i3g.26xlarge |
    |---|---|---|---|---|---|
    | Maximum read IOPS | 125,000 | 250,000 | 500,000 | 750,000 | 1,500,000 |
    | Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 4.5 GB/s | 9 GB/s |
    | Maximum write throughput | 0.5 GB/s | 1 GB/s | 2 GB/s | 3 GB/s | 6 GB/s |

    Note The performance data in the preceding table represents the highest performance levels of local storage in the i3g instance family. We recommend that you use images that contain Linux kernel 4.10 or later, such as Alibaba Cloud Linux 2 and CentOS 8.x images, to obtain optimal performance.
  • The following table describes the performance metrics of the local NVMe SSDs that the i2 and i2g instance families use.

    | Performance metric | Single disk performance: ecs.i2.xlarge and ecs.i2g.2xlarge | Single disk performance: other i2 and i2g instance types | Overall instance performance |
    |---|---|---|---|
    | Maximum capacity | 894 GiB | 1,788 GiB | 8 × 1,788 GiB |
    | Maximum read IOPS | 150,000 | 300,000 | 1,500,000 |
    | Maximum read throughput | 1 GB/s | 2 GB/s | 16 GB/s |
    | Maximum write throughput | 0.5 GB/s | 1 GB/s | 8 GB/s |
    | Access latency | Within microseconds | Within microseconds | Within microseconds |

    Overall instance performance data in the preceding table applies only to the ecs.i2.16xlarge instance type and represents the highest performance levels of local storage in the i2 instance family.

  • The following table describes the performance metrics of the local NVMe SSDs that the i2ne and i2gne instance families use.

    | Performance metric | ecs.i2ne.xlarge and ecs.i2gne.2xlarge | ecs.i2ne.2xlarge and ecs.i2gne.4xlarge | ecs.i2ne.4xlarge and ecs.i2gne.8xlarge | ecs.i2ne.8xlarge and ecs.i2gne.16xlarge | ecs.i2ne.16xlarge |
    |---|---|---|---|---|---|
    | Maximum capacity | 894 GiB | 1,788 GiB | 2 × 1,788 GiB | 4 × 1,788 GiB | 8 × 1,788 GiB |
    | Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 4,000,000 |
    | Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s |
    | Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |
    | Access latency | Within microseconds | Within microseconds | Within microseconds | Within microseconds | Within microseconds |

    Note To obtain the maximum throughput performance of disks for Linux instances, we recommend that you use the latest versions of Alibaba Cloud Linux 2 images. Otherwise, Linux instances may be unable to deliver their maximum IOPS.
  • The following table describes the performance metrics of the local NVMe SSDs that the i1 instance family uses.

    | Performance metric | Single disk performance | Overall instance performance |
    |---|---|---|
    | Maximum capacity | 1,456 GiB | 2,912 GiB |
    | Maximum IOPS | 240,000 | 480,000 |
    | Write IOPS/Read IOPS | min{165 × Capacity, 240,000} | 2 × min{165 × Capacity, 240,000} |
    | Maximum read throughput | 2 GB/s | 4 GB/s |
    | Read throughput | min{1.4 × Capacity, 2,000} MB/s | 2 × min{1.4 × Capacity, 2,000} MB/s |
    | Maximum write throughput | 1.2 GB/s | 2.4 GB/s |
    | Write throughput | min{0.85 × Capacity, 1,200} MB/s | 2 × min{0.85 × Capacity, 1,200} MB/s |
    | Access latency | Within microseconds | Within microseconds |

    Items in the formulas used to calculate the performance specifications of a single local NVMe SSD (Capacity is measured in GiB; a worked example follows this list):
    • In the formula used to calculate the write IOPS, each GiB of capacity produces a write IOPS of 165, for a maximum of 240,000 IOPS.
    • In the formula used to calculate the write throughput, each GiB of capacity produces a write throughput of 0.85 MB/s, for a maximum of 1,200 MB/s.

    Overall instance performance data in the preceding table applies only to the ecs.i1.14xlarge instance type and represents the highest performance levels for local storage in the i1 instance family.
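
As an illustration of the single-disk formulas in the i1 table, the following minimal sketch computes write IOPS and write throughput for a given capacity in GiB (the function name is hypothetical and only mirrors the formulas above):

```python
def i1_single_disk_write_performance(capacity_gib):
    """Apply the i1 single-disk formulas for write IOPS and write throughput (MB/s)."""
    write_iops = min(165 * capacity_gib, 240_000)             # 165 IOPS per GiB, capped at 240,000
    write_throughput_mbps = min(0.85 * capacity_gib, 1_200)   # 0.85 MB/s per GiB, capped at 1,200 MB/s
    return write_iops, write_throughput_mbps

iops, throughput = i1_single_disk_write_performance(1_456)   # maximum single-disk capacity from the table
print(iops, throughput)  # 240000 1200.0
```

At the maximum single-disk capacity of 1,456 GiB, both formulas reach their caps, which matches the 240,000 maximum IOPS and 1.2 GB/s (1,200 MB/s) maximum write throughput listed in the table.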

Local SATA HDDs

Note You can test the bandwidth, IOPS, and latency of local SATA HDDs to obtain the benchmark performance data and measure the QoS of Alibaba Cloud local disks. For more information, see Commands used to test the performance of local disks.

The following table describes the performance metrics of local SATA HDDs.

| Instance family | Maximum single disk capacity | Maximum overall instance capacity | Maximum single disk throughput | Maximum overall instance throughput | Access latency |
|---|---|---|---|---|---|
| d1 and d1ne | 5,500 GiB | 154,000 GiB | 190 MB/s | 5,320 MB/s | Within milliseconds |
| d2c | 3,700 GiB | 44,400 GiB | 190 MB/s | 2,280 MB/s | Within milliseconds |
| d2s | 7,300 GiB | 219,000 GiB | 190 MB/s | 5,700 MB/s | Within milliseconds |
Note Overall instance performance data in the preceding table applies only to the ecs.d1.14xlarge, ecs.d1ne.14xlarge, ecs.d2c.24xlarge, and ecs.d2s.20xlarge instance types and represents the highest performance levels for local storage in the corresponding instance families.

Billing methods

Local disks are billed along with the instances to which they are attached. For more information, see Subscription and Pay-as-you-go.

Disk initialization sequence

When you create an ECS instance that has local disks attached, all disks of the created instance are initialized based on the following rules:
  • Rule 1: If the image used to create the instance does not contain data disk snapshots, the local disks are initialized prior to the cloud disks that were created together with the instance.
  • Rule 2: If the image used to create the instance contains data disk snapshots, the data disks created from these snapshots are initialized based on the sequence of data disks in the image. The remaining disks are initialized in the order that was specified in Rule 1.
For example, assume that a Linux image that contains the snapshots of two data disks is used to create an ECS instance. The disks on the created instance are initialized in the following sequence (a sketch of this allocation logic appears after the list):
  • If the device names of the two data disks in the image are /dev/xvdb and /dev/xvdc, Alibaba Cloud first allocates /dev/xvdb and /dev/xvdc as device names to the data disks created from the image. The system disk is initialized first. Then, the data disks are initialized in the following sequence: data disk 1 created from the image, data disk 2 created from the image, local disk 1, local disk 2, cloud disk 1, cloud disk 2, ..., and cloud disk N. (Figure: Rule 2 - Schematic diagram 1)
  • If the device names of the two data disks in the image are /dev/xvdc and /dev/xvdd, Alibaba Cloud first allocates /dev/xvdc and /dev/xvdd as device names to the data disks created from the image. Then, the remaining available device names are allocated to the local disks first and then to the other disks in ascending alphabetic order. The system disk is initialized first. Then, the data disks are initialized in the following sequence: local disk 1, data disk 1 created from the image, data disk 2 created from the image, local disk 2, cloud disk 1, cloud disk 2, ..., and cloud disk N. (Figure: Rule 2 - Schematic diagram 2)
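
The following minimal sketch illustrates the allocation and initialization order described above. The function and the disk labels are illustrative only and are not an Alibaba Cloud API; they simply replay the documented rules.

```python
import string

def allocate_device_names(image_data_disk_names, num_local_disks, num_cloud_disks):
    """Reserve the device names recorded in the image for the data disks created
    from the image, then assign the remaining names (/dev/xvdb, /dev/xvdc, ...)
    to local disks first and cloud disks second, in ascending alphabetic order."""
    all_names = [f"/dev/xvd{c}" for c in string.ascii_lowercase[1:]]  # skip /dev/xvda (system disk)
    allocation = {name: f"image data disk {i + 1}"
                  for i, name in enumerate(image_data_disk_names)}
    free_names = [n for n in all_names if n not in allocation]
    others = [f"local disk {i + 1}" for i in range(num_local_disks)] + \
             [f"cloud disk {i + 1}" for i in range(num_cloud_disks)]
    for name, disk in zip(free_names, others):
        allocation[name] = disk
    # After the system disk, disks are initialized in ascending order of device name.
    return [(name, allocation[name]) for name in sorted(allocation)]

# Second scenario above: the image records /dev/xvdc and /dev/xvdd.
for name, disk in allocate_device_names(["/dev/xvdc", "/dev/xvdd"],
                                        num_local_disks=2, num_cloud_disks=2):
    print(name, "->", disk)
# /dev/xvdb -> local disk 1
# /dev/xvdc -> image data disk 1
# /dev/xvdd -> image data disk 2
# /dev/xvde -> local disk 2
# /dev/xvdf -> cloud disk 1
# /dev/xvdg -> cloud disk 2
```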

Lifecycle

A local disk shares the same lifecycle as the instance to which it is attached. For more information, see Instance lifecycle.

Impacts of instance operations on data stored on local disks

The following table describes the impacts of instance operations on data stored on local disks.

| Instance operation | Data stored on local disks | Local disks |
|---|---|---|
| Restart the operating system, restart an instance by using the ECS console, or forcefully restart an instance | Retained | Retained |
| Shut down the operating system, stop an instance by using the ECS console, or forcefully stop an instance | Retained | Retained |
| Automatically recover an instance | Erased | Released |
| Release an instance | Erased | Released |
| A subscription instance expires or you have an overdue payment for a pay-as-you-go instance, and the instance is stopped but not released | Retained | Retained |
| A subscription instance expires or you have an overdue payment for a pay-as-you-go instance, and the instance is stopped and then released | Erased | Released |
| Manually renew an expired subscription instance | Retained | Retained |
| Reactivate a pay-as-you-go instance that was stopped due to an overdue payment | Retained | Retained |

References

For information about retired local SSDs, see Previous-generation disks - local SSDs.

For information about how to handle system events on ECS instances that are equipped with local disks, see O&M scenarios and system events for instances equipped with local disks.