Local disks reside on the physical machines that host their associated Elastic Compute Service (ECS) instances and provide local storage for those instances. Local disks are cost-effective and offer high random IOPS, high throughput, and low latency.
Limits
- All of the local disks attached to an instance reside on a single physical machine, which increases the risk of a single point of failure (SPOF). The durability of data stored on a local disk depends on the reliability of the physical machine that hosts the disk. Warning: Data stored on local disks may be lost if a hardware failure occurs on the associated physical machine. We recommend that you store only temporary data on local disks.
- To ensure data availability, we recommend that you implement data redundancy at the application layer. You can use deployment sets to distribute ECS instances across multiple physical machines for high availability and disaster recovery. For more information, see Create a deployment set.
- If your applications do not utilize an architecture that prioritizes data reliability, we recommend that you use cloud disks or a backup service with ECS instances to improve data reliability. For more information, see Disks or What is HBR?.
- After you purchase an ECS instance that has local disks attached, you must log on to the instance to partition and format the local disks. For more information, see Initialize a data disk whose size does not exceed 2 TiB on a Linux instance or Initialize a data disk up to 2 TiB in size on a Windows instance. A minimal Linux command sketch is shown after this list.
- Local disks do not support the following operations:
- Create a separate local disk.
- Use a snapshot to create a local disk.
- Attach a local disk.
- Detach and release a local disk.
- Resize a local disk.
- Re-initialize a local disk.
- Create a snapshot for a local disk.
- Use a snapshot to roll back a local disk.
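The following sketch shows one way to script the partition, format, and mount steps mentioned in this list for a local disk on a Linux instance. It is a minimal example rather than the official initialization procedure: the device name /dev/vdb and the mount point /mnt/local1 are assumptions, so confirm the actual device name of your local disk (for example, by running lsblk) before you execute anything, because formatting the wrong device destroys its data.

```python
#!/usr/bin/env python3
"""Minimal sketch: partition, format, and mount a local data disk on a Linux
ECS instance. The device name and mount point below are assumptions."""
import subprocess

DEVICE = "/dev/vdb"          # assumed device name of the local disk
PARTITION = DEVICE + "1"     # first partition created on that device
MOUNT_POINT = "/mnt/local1"  # assumed mount point


def run(cmd):
    """Print and run a command, raising an error on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Create a GPT partition table with a single partition that spans the disk.
run(["parted", "-s", DEVICE, "mklabel", "gpt", "mkpart", "primary", "0%", "100%"])
# Format the new partition with ext4 (XFS or another file system also works).
run(["mkfs.ext4", PARTITION])
# Create the mount point and mount the file system.
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", PARTITION, MOUNT_POINT])
```

Because data on local disks is not durable, keep only temporary data on the newly mounted file system, as recommended in the limits above.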
Disk categories
Local disks are suited for scenarios that require high storage I/O performance, mass storage, and high cost efficiency. Alibaba Cloud provides two categories of local disks. The following table describes the categories.
Category | Instance family | Use scenario |
---|---|---|
Local Non-Volatile Memory Express (NVMe) SSD | The d3c, i4, i4g, i4r, i3, i3g, i2, i2g, i2ne, i2gne, and i1 instance families use local NVMe SSDs. | Local NVMe SSDs are suited for I/O-intensive scenarios that require high random IOPS, high throughput, and low latency. |
Local SATA HDD | The d3s, d2c, d2s, d1ne, and d1 big data instance families use local SATA HDDs. | Local SATA HDDs are the preferred storage media for industries such as Internet and finance that have high requirements for big data computing, storage, and analytics. These disks are suited for mass storage and offline computing scenarios and can meet the high requirements of distributed computing services such as Hadoop in terms of storage performance, storage capacity, and internal bandwidth. |
Local NVMe SSDs
- The following table describes the performance metrics of local NVMe SSDs that the d3c compute-intensive big data instance family uses.
Performance metric | Single disk performance | ecs.d3c.3xlarge | ecs.d3c.7xlarge | ecs.d3c.14xlarge |
---|---|---|---|---|
Maximum read IOPS | 8,000 | 8,000 | 16,000 | 32,000 |
Maximum read throughput | 4 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |
Maximum write throughput | 2 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |
- The following table describes the performance metrics of the local NVMe SSDs that the i4 instance family uses.
Performance metric | ecs.i4.large | ecs.i4.xlarge | ecs.i4.2xlarge | ecs.i4.4xlarge | ecs.i4.8xlarge | ecs.i4.16xlarge | ecs.i4.32xlarge |
---|---|---|---|---|---|---|---|
Maximum read IOPS | 112,500 | 225,000 | 450,000 | 900,000 | 1,800,000 | 3,600,000 | 7,200,000 |
Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s | 48 GB/s |
Maximum write throughput | 0.375 GB/s | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s |
- The following table describes the performance metrics of the local NVMe SSDs that the i4g and i4r instance families use.
Performance metric | ecs.i4g.4xlarge and ecs.i4r.4xlarge | ecs.i4g.8xlarge and ecs.i4r.8xlarge | ecs.i4g.16xlarge and ecs.i4r.16xlarge | ecs.i4g.32xlarge and ecs.i4r.32xlarge |
---|---|---|---|---|
Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 |
Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s |
Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |
- The following table describes the performance metrics of the local NVMe SSDs that the i3 instance family uses.
Performance metric | ecs.i3.xlarge | ecs.i3.2xlarge | ecs.i3.4xlarge | ecs.i3.8xlarge | ecs.i3.13xlarge | ecs.i3.26xlarge |
---|---|---|---|---|---|---|
Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 3,000,000 | 6,000,000 |
Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 18 GB/s | 36 GB/s |
Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 12 GB/s | 24 GB/s |
Note: The performance data in the preceding table represents the highest performance levels of local storage in the i3 instance family. We recommend that you use images that contain Linux kernel 4.10 or later, such as Alibaba Cloud Linux 2 and CentOS 8.x images, to obtain optimal performance.
- The following table describes the performance metrics of the local NVMe SSDs that the i3g instance family uses.
Performance metric | ecs.i3g.2xlarge | ecs.i3g.4xlarge | ecs.i3g.8xlarge | ecs.i3g.13xlarge | ecs.i3g.26xlarge |
---|---|---|---|---|---|
Maximum read IOPS | 125,000 | 250,000 | 500,000 | 750,000 | 1,500,000 |
Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 4.5 GB/s | 9 GB/s |
Maximum write throughput | 0.5 GB/s | 1 GB/s | 2 GB/s | 3 GB/s | 6 GB/s |
Note: The performance data in the preceding table represents the highest performance levels of local storage in the i3g instance family. We recommend that you use images that contain Linux kernel 4.10 or later, such as Alibaba Cloud Linux 2 and CentOS 8.x images, to obtain optimal performance.
- The following table describes the performance metrics of the local NVMe SSDs that the i2 and i2g instance families use.
Performance metric | Single disk performance (ecs.i2.xlarge and ecs.i2g.2xlarge) | Single disk performance (other i2 and i2g instance types) | Overall instance performance① |
---|---|---|---|
Maximum capacity | 894 GiB | 1,788 GiB | 8 × 1,788 GiB |
Maximum read IOPS | 150,000 | 300,000 | 1,500,000 |
Maximum read throughput | 1 GB/s | 2 GB/s | 16 GB/s |
Maximum write throughput | 0.5 GB/s | 1 GB/s | 8 GB/s |
Access latency | Within microseconds | Within microseconds | Within microseconds |
① Overall instance performance data in the preceding table applies only to the ecs.i2.16xlarge instance type and represents the highest performance levels of local storage in the i2 instance family.
- The following table describes the performance metrics of the local NVMe SSDs that the i2ne and i2gne instance families use.
Performance metric | ecs.i2ne.xlarge and ecs.i2gne.2xlarge | ecs.i2ne.2xlarge and ecs.i2gne.4xlarge | ecs.i2ne.4xlarge and ecs.i2gne.8xlarge | ecs.i2ne.8xlarge and ecs.i2gne.16xlarge | ecs.i2ne.16xlarge |
---|---|---|---|---|---|
Maximum capacity | 894 GiB | 1,788 GiB | 2 × 1,788 GiB | 4 × 1,788 GiB | 8 × 1,788 GiB |
Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 4,000,000 |
Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s |
Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |
Access latency | Within microseconds | Within microseconds | Within microseconds | Within microseconds | Within microseconds |
Note: To obtain the maximum throughput performance of disks for Linux instances, we recommend that you use the latest versions of Alibaba Cloud Linux 2 images. Otherwise, Linux instances may be unable to deliver their maximum IOPS.
- The following table describes the performance metrics of the local NVMe SSDs that the i1 instance family uses.
Performance metric | Single disk performance | Overall instance performance② |
---|---|---|
Maximum capacity | 1,456 GiB | 2,912 GiB |
Maximum IOPS | 240,000 | 480,000 |
Write IOPS① and read IOPS① | min{165 × Capacity, 240,000} | 2 × min{165 × Capacity, 240,000} |
Maximum read throughput | 2 GB/s | 4 GB/s |
Read throughput① | min{1.4 × Capacity, 2,000} MB/s | 2 × min{1.4 × Capacity, 2,000} MB/s |
Maximum write throughput | 1.2 GB/s | 2.4 GB/s |
Write throughput① | min{0.85 × Capacity, 1,200} MB/s | 2 × min{0.85 × Capacity, 1,200} MB/s |
Access latency | Within microseconds | Within microseconds |
① Items in the formulas used to calculate the performance specifications of a single local NVMe SSD:
- In the formula used to calculate the write IOPS, each GiB of capacity produces a write IOPS of 165, for a maximum of 240,000 IOPS.
- In the formula used to calculate the write throughput, each GiB of capacity produces a write throughput of 0.85 MB/s, for a maximum of 1,200 MB/s.
② Overall instance performance data in the preceding table applies only to the ecs.i1.14xlarge instance type and represents the highest performance levels of local storage in the i1 instance family.
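As a quick illustration of how the ① formulas in the preceding i1 table scale with disk capacity, the following sketch computes the single-disk limits. The coefficients come directly from the table; the function name is ours and purely illustrative.

```python
def i1_single_disk_performance(capacity_gib: float) -> dict:
    """Apply the single-disk formulas from the i1 table.

    Each GiB of capacity contributes 165 IOPS (capped at 240,000),
    1.4 MB/s of read throughput (capped at 2,000 MB/s), and
    0.85 MB/s of write throughput (capped at 1,200 MB/s).
    """
    return {
        "iops": min(165 * capacity_gib, 240_000),
        "read_throughput_mbps": min(1.4 * capacity_gib, 2_000),
        "write_throughput_mbps": min(0.85 * capacity_gib, 1_200),
    }


# A full-size 1,456 GiB i1 local NVMe SSD reaches the caps in the table:
# 240,000 IOPS, 2,000 MB/s read throughput, and 1,200 MB/s write throughput.
print(i1_single_disk_performance(1456))
```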
Local SATA HDDs
The following table describes the performance metrics of local SATA HDDs.
Performance metric | d1 and d1ne (single disk) | d1 and d1ne (overall instance) | d2c (single disk) | d2c (overall instance) | d2s (single disk) | d2s (overall instance) |
---|---|---|---|---|---|---|
Maximum capacity | 5,500 GiB | 154,000 GiB | 3,700 GiB | 44,400 GiB | 7,300 GiB | 219,000 GiB |
Maximum throughput | 190 MB/s | 5,320 MB/s | 190 MB/s | 2,280 MB/s | 190 MB/s | 5,700 MB/s |
Access latency | Within milliseconds | Within milliseconds | Within milliseconds | Within milliseconds | Within milliseconds | Within milliseconds |
Billing methods
Local disks are billed along with the instances to which they are attached. For more information, see Subscription and Pay-as-you-go.
Disk initialization sequence
- Rule 1: If the image used to create the instance does not contain data disk snapshots, the local disks are initialized prior to the cloud disks that were created together with the instance.
- Rule 2: If the image used to create the instance contains data disk snapshots, the data disks created from these snapshots are initialized based on the sequence of data disks in the image. The remaining disks are initialized in the order that was specified in Rule 1.
- For example, if the device names of the two data disks in the image are /dev/xvdb and /dev/xvdc, Alibaba Cloud first allocates /dev/xvdb and /dev/xvdc to the data disks created from the image. The system disk is initialized first, followed by data disk 1 created from the image, data disk 2 created from the image, local disk 1, local disk 2, cloud disk 1, cloud disk 2, and so on through cloud disk N.
- If the device names of the two data disks in the image are /dev/xvdc and /dev/xvdd, Alibaba Cloud first allocates /dev/xvdc and /dev/xvdd to the data disks created from the image. The remaining available device names are then allocated to the local disks first and to the other disks in ascending alphabetic order. The system disk is initialized first, followed by local disk 1, data disk 1 created from the image, data disk 2 created from the image, local disk 2, cloud disk 1, cloud disk 2, and so on through cloud disk N.
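If you want to confirm how device names were actually allocated on a running Linux instance instead of inferring them from the rules above, you can list the block devices. The following is a minimal sketch that wraps lsblk; the columns shown are a common subset, and the output varies by image and instance type.

```python
#!/usr/bin/env python3
"""Minimal sketch: list whole block devices on a Linux ECS instance so that
you can see which device names were assigned to the system disk, image data
disks, local disks, and cloud disks."""
import subprocess

# -d lists whole disks only (no partitions); -o selects the columns to print.
result = subprocess.run(
    ["lsblk", "-d", "-o", "NAME,SIZE,TYPE,MOUNTPOINT"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```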
Lifecycle
A local disk shares the same lifecycle as the instance to which it is attached. For more information, see Instance lifecycle.
Impacts of instance operations on data stored on local disks
The following table describes the impacts of instance operations on data stored on local disks.
Instance operation | Data stored on local disks | Local disk |
---|---|---|
Restart the operating system, restart an instance by using the ECS console, or forcefully restart an instance. | Retained | Retained |
Shut down the operating system, stop an instance by using the ECS console, or forcefully stop an instance. | Retained | Retained |
Automatically recover an instance. | Erased | Released |
Release an instance. | Erased | Released |
The instance is stopped but not yet released after its subscription expires or the payment for the pay-as-you-go instance becomes overdue. | Retained | Retained |
The instance is stopped and then released after its subscription expires or the payment for the pay-as-you-go instance becomes overdue. | Erased | Released |
Manually renew an expired subscription instance. | Retained | Retained |
Reactivate a pay-as-you-go instance that was stopped due to an overdue payment. | Retained | Retained |
References
For information about retired local SSDs, see Previous-generation disks - local SSDs.
For information about how to handle system events on ECS instances that are equipped with local disks, see O&M scenarios and system events for instances equipped with local disks.