Different types of block storage offer varying performance and prices. You can select a block storage product that suits your workload and application requirements. This topic describes the performance metrics and specifications of disks, local disks, and elastic ephemeral disks.
For information about the prices and billing of different types of block storage, see Block storage billing.
For more information about the features and use cases of different types of block storage, see Overview of block storage.
Performance metrics
The key metrics used to measure the performance of block storage products include IOPS, throughput, and access latency. The performance of some block storage products is linked to their capacity. For example, ESSDs at different performance levels require different capacity ranges.
I/O size
I/O size is the amount of data in each read or write operation, such as 4 KiB. The relationship between I/O size, IOPS, and throughput is defined by the following formula: IOPS × I/O size = Throughput. Therefore, the performance metrics that you need to monitor depend on the I/O size of your application.
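To make the relationship concrete, the following minimal Python sketch (the function name is ours, not part of any product API) converts IOPS and I/O size into throughput. Like the tables in this topic, it treats MB/s and MiB/s interchangeably:

```python
def throughput_mbs(iops: float, io_size_kib: float) -> float:
    """Throughput (MB/s) from IOPS x I/O size; 1,024 KiB per MiB."""
    return iops * io_size_kib / 1024

print(throughput_mbs(25_000, 4))   # ~97.7, matching "approximately 100 MB/s"
print(throughput_mbs(4_800, 64))   # 300.0, matching the 64 KiB row for standard SSDs
```

The two printed values correspond to rows of the standard SSD block-size table later in this topic: at a small I/O size, high IOPS still yields modest throughput, while at a large I/O size, far fewer IOPS saturate the throughput limit.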
IOPS (Input/Output Operations per Second): The number of I/O operations that can be processed per second. This metric indicates the read and write processing capability of a block storage device.
If your application involves latency-sensitive random small I/O, such as a database application, you should focus on IOPS performance.
Note: In database applications, data is frequently inserted, updated, and deleted. High IOPS ensures that the system runs efficiently even under the pressure of many random read and write operations. This prevents performance degradation or increased latency caused by I/O bottlenecks.
Throughput: The amount of data that can be successfully transferred per unit of time, measured in MB/s.
If your application involves many sequential reads and writes with large I/O sizes, such as big data analytics, you should focus on throughput.
Note: Offline computing services, such as Hadoop, analyze and process petabytes of data. If the system throughput is low, the overall processing time increases significantly, which affects business efficiency and response speed.
Access latency: The time required for a block storage device to process an I/O operation, measured in seconds (s), milliseconds (ms), or microseconds (μs). High latency can lead to application performance degradation or errors.
If your application is sensitive to latency, such as a database application, you should focus on this metric and use low-latency products such as ESSD AutoPL disks and ESSDs.
Capacity: The amount of storage space, measured in TiB, GiB, MiB, or KiB.
Block storage capacity is calculated in binary units, which represent data sizes in powers of 1,024. For example, 1 GiB = 1,024 MiB. Although capacity is not a performance metric, different capacities can achieve different levels of performance. A larger capacity generally corresponds to a stronger data processing capability. For block storage products of the same type, the I/O performance per unit of capacity is consistent. The overall performance of a disk increases linearly with its capacity until it reaches the maximum performance for that disk type.
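As a one-line illustration of the binary units used for capacity (nothing here is product-specific):

```python
# Binary (IEC) units increase by powers of 1,024.
KiB, MiB, GiB, TiB = 1024, 1024**2, 1024**3, 1024**4
print(GiB // MiB)  # 1024, so 1 GiB = 1,024 MiB
```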
Disk performance
The following table compares the performance of different types of disks.
The actual performance of a disk is limited by both its own specifications and the specifications of the instance to which it is attached. For more information, see Storage I/O performance.
Standard SSDs, ultra disks, and basic disks are previous-generation disks and are gradually being phased out in some regions and zones. We recommend that you use PL0 ESSDs or ESSD Entry disks to replace ultra disks and basic disks, and use ESSD AutoPL disks to replace standard SSDs.
Performance category | Zone-redundant ESSD | ESSD AutoPL | PL3 ESSD | PL2 ESSD | PL1 ESSD | PL0 ESSD | ESSD Entry | Standard SSD | Ultra disk | Basic disk
Single-disk capacity range (GiB) | 10 to 65,536 | 1 to 65,536 | 1,261 to 65,536 | 461 to 65,536 | 20 to 65,536 | 1 to 65,536 | 10 to 32,768 | 20 to 32,768 | 20 to 32,768 | 5 to 2,000 |
Maximum IOPS | 50,000 | 1,000,000 | 1,000,000 | 100,000 | 50,000 | 10,000 | 6,000 | 25,000② | 5,000 | Hundreds |
Maximum throughput (MB/s) | 350 | 4,096 | 4,000 | 750 | 350 | 180 | 150 | 300② | 140 | 30 to 40 |
Formula for calculating single-disk IOPS① | min{1,800 + 50 × Capacity, 50,000} | Baseline performance: max{min{1,800 + 50 × Capacity, 50,000}, 3,000} Provisioned performance: Capacity (GiB) <=3: You cannot set provisioned performance. Capacity (GiB) >=4: [1, min{(1,000 IOPS/GiB × Capacity - Baseline IOPS), 50,000}] Performance burst③: Actual final IOPS - Baseline IOPS - Provisioned IOPS | min{1,800 + 50 × Capacity, 1,000,000} | min{1,800 + 50 × Capacity, 100,000} | min{1,800 + 50 × Capacity, 50,000} | min{1,800 + 12 × Capacity, 10,000} | min{1,800 + 8 × Capacity, 6,000} | min{1,800 + 30 × Capacity, 25,000} | min{1,800 + 8 × Capacity, 5,000} | None |
Formula for calculating single-disk throughput (MB/s) ① | min{120 + 0.5 × Capacity, 350} | Baseline performance: max{min{120 + 0.5 × Capacity, 350}, 125} Provisioned performance: 16 KB × Provisioned IOPS/1,024 Performance burst③: Actual final throughput - Baseline throughput - Provisioned throughput | min{120 + 0.5 × Capacity, 4,000} | min{120 + 0.5 × Capacity, 750} | min{120 + 0.5 × Capacity, 350} | min{100 + 0.25 × Capacity, 180} | min{100 + 0.15 × Capacity, 150} | min{120 + 0.5 × Capacity, 300} | min{100 + 0.15 × Capacity, 140} | None |
Data reliability | 99.9999999% | |||||||||
Average latency of single-channel random writes (ms), Block Size=4K | Millisecond-level④ | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 to 0.5 | 1 to 3 | 0.5 to 2 | 1 to 3 | 5 to 10 |
Baseline performance: The maximum IOPS and throughput that a disk provides upon purchase. The baseline performance increases linearly with the disk capacity. The maximum baseline performance varies based on the disk specifications.
Provisioned performance: Allows you to flexibly configure performance based on your business needs without changing the storage capacity. This feature decouples capacity from performance.
① Formulas for single-disk performance (see the code sketch after these notes):
Formula for calculating the maximum IOPS of a PL0 ESSD: The baseline IOPS is 1,800 and increases by 12 per additional GiB of capacity, up to a maximum of 10,000.
Formula for calculating the maximum throughput of a PL0 ESSD: The baseline throughput is 100 MB/s and increases by 0.25 MB/s per additional GiB of capacity, up to a maximum of 180 MB/s.
② The performance of standard SSDs varies based on the size of data blocks:
When IOPS remains unchanged, a smaller block size results in lower throughput.
When throughput remains unchanged, a smaller block size results in higher IOPS.
I/O size (KiB) | Maximum IOPS | Throughput (MB/s)
4 | Approximately 25,000 | Approximately 100
16 | Approximately 17,200 | Approximately 260
32 | Approximately 9,600 | Approximately 300
64 | Approximately 4,800 | Approximately 300
③ In addition to baseline performance and provisioned performance, an ESSD AutoPL disk can also provide burst performance. You can use CloudLens for EBS to monitor the burst details of an ESSD AutoPL disk in real time, including the burst duration and burst IOPS (total burst I/O). For more information, see Disk analysis.
④ Data written to a zone-redundant ESSD is automatically distributed and stored across multiple zones. It achieves a recovery point objective (RPO) of 0 through physical replication. However, because data must be synchronously written to different zones, the write latency varies between zones in different regions and is higher than that of a PL1 ESSD. You can test the average write latency of a zone-redundant ESSD. For more information, see Test the performance of block storage.
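The formulas in note ① translate directly into code. The following minimal Python sketch (the names and structure are ours, for illustration only) transcribes the single-disk baseline formulas from the table for several ESSD categories:

```python
# Single-disk baseline performance from the formulas above (capacity in GiB).
ESSD_IOPS = {
    "PL0":   lambda cap: min(1800 + 12 * cap, 10_000),
    "PL1":   lambda cap: min(1800 + 50 * cap, 50_000),
    "PL2":   lambda cap: min(1800 + 50 * cap, 100_000),
    "PL3":   lambda cap: min(1800 + 50 * cap, 1_000_000),
    "Entry": lambda cap: min(1800 + 8 * cap, 6_000),
}

ESSD_THROUGHPUT_MBS = {
    "PL0":   lambda cap: min(100 + 0.25 * cap, 180),
    "PL1":   lambda cap: min(120 + 0.5 * cap, 350),
    "PL2":   lambda cap: min(120 + 0.5 * cap, 750),
    "PL3":   lambda cap: min(120 + 0.5 * cap, 4_000),
    "Entry": lambda cap: min(100 + 0.15 * cap, 150),
}

def autopl_baseline_iops(cap_gib: int) -> int:
    # ESSD AutoPL baseline: max{min{1,800 + 50 x Capacity, 50,000}, 3,000}
    return max(min(1800 + 50 * cap_gib, 50_000), 3000)

print(ESSD_IOPS["PL0"](40))              # 2280
print(ESSD_THROUGHPUT_MBS["PL0"](40))    # 110.0
print(autopl_baseline_iops(20))          # 3000 (the 3,000 IOPS floor applies)
```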
Local disk performance
Local disks cannot be created separately. The data reliability of a local disk depends on the reliability of the physical machine to which the local disk is attached. A single point of failure (SPOF) may occur. An SPOF on a physical machine may affect multiple instances. Data stored on local disks is at risk of being lost. Do not store business data that must be retained for a long period of time on local disks. For more information about local disks, see Local disks.
NVMe SSD local disks
The following table describes the performance of NVMe SSD local disks for the d3c instance family for big data.
Metric | Single-disk performance | ecs.d3c.3xlarge | ecs.d3c.7xlarge | ecs.d3c.14xlarge
Maximum read IOPS | 100,000 | 100,000 | 200,000 | 400,000
Maximum read throughput | 4 GB/s | 4 GB/s | 8 GB/s | 16 GB/s
Maximum write throughput | 2 GB/s | 2 GB/s | 4 GB/s | 8 GB/s
Access latency | Microsecond-level (μs)
The following table describes the performance of NVMe SSD local disks for the i5e instance family with local SSDs.
NVMe SSD metric | ecs.i5e.2xlarge | ecs.i5e.4xlarge | ecs.i5e.8xlarge | ecs.i5e.12xlarge | ecs.i5e.16xlarge | ecs.i5e.32xlarge
Maximum read IOPS | 1,400,000 | 2,900,000 | 5,800,000 | 8,700,000 | 11,600,000 | 23,200,000
Maximum read throughput | 7 GB/s | 14 GB/s | 28 GB/s | 42 GB/s | 56 GB/s | 112 GB/s
Maximum write throughput | 4.5 GB/s | 9 GB/s | 18 GB/s | 27 GB/s | 36 GB/s | 72 GB/s
Access latency | Microsecond-level (μs)
The following table describes the performance of NVMe SSD local disks for the i5 instance family with local SSDs.
NVMe SSD metric | ecs.i5.xlarge | ecs.i5.2xlarge | ecs.i5.4xlarge | ecs.i5.8xlarge | ecs.i5.12xlarge | ecs.i5.16xlarge
Maximum read IOPS | 700,000 | 1,400,000 | 2,900,000 | 5,800,000 | 8,700,000 | 11,800,000
Maximum read throughput | 3.5 GB/s | 7 GB/s | 14 GB/s | 28 GB/s | 42 GB/s | 56 GB/s
Maximum write throughput | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s | 24 GB/s | 32 GB/s
Access latency | Microsecond-level (μs)
The following table describes the performance of NVMe SSD local disks for the i5g instance family with local SSDs.
NVMe SSD metric | ecs.i5g.8xlarge | ecs.i5g.16xlarge
Maximum read IOPS | 1,400,000 | 2,900,000
Maximum read throughput | 7 GB/s | 14 GB/s
Maximum write throughput | 4 GB/s | 8 GB/s
Access latency | Microsecond-level (μs)
The following table describes the performance of NVMe SSD local disks for the i5ge instance family with local SSDs.
NVMe SSD metric | ecs.i5ge.3xlarge | ecs.i5ge.6xlarge | ecs.i5ge.12xlarge | ecs.i5ge.24xlarge
Maximum read IOPS | 1,400,000 | 2,900,000 | 5,800,000 | 11,800,000
Maximum read throughput | 7 GB/s | 14 GB/s | 28 GB/s | 56 GB/s
Maximum write throughput | 4 GB/s | 8 GB/s | 16 GB/s | 32 GB/s
Access latency | Microsecond-level (μs)
The following table describes the performance of NVMe SSD local disks for the i4 instance family with local SSDs.
NVMe SSD metric | ecs.i4.large | ecs.i4.xlarge | ecs.i4.2xlarge | ecs.i4.4xlarge | ecs.i4.8xlarge | ecs.i4.16xlarge | ecs.i4.32xlarge
Maximum read IOPS | 112,500 | 225,000 | 450,000 | 900,000 | 1,800,000 | 3,600,000 | 7,200,000
Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s | 48 GB/s
Maximum write throughput | 0.375 GB/s | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s
Access latency | Microsecond-level (μs)
Note: The metrics in the table represent optimal performance. To achieve the best performance, use the latest version of a Linux image, such as Alibaba Cloud Linux 3. This instance family supports only Linux images.
The following table describes the performance of NVMe SSD local disks for the i4g and i4r instance families with local SSDs.
NVMe SSD metric | ecs.i4g.4xlarge and ecs.i4r.4xlarge | ecs.i4g.8xlarge and ecs.i4r.8xlarge | ecs.i4g.16xlarge and ecs.i4r.16xlarge | ecs.i4g.32xlarge and ecs.i4r.32xlarge
Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000
Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s
Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s
Access latency | Microsecond-level (μs)
Note: The metrics in the table represent optimal performance. For best performance, we recommend that you use the latest version of a Linux image, such as Alibaba Cloud Linux 3. This instance family supports only Linux images. For more information, see the Alibaba Cloud Linux 3 image release notes.
The following table describes the performance of NVMe SSD local disks for the i3 instance family with local SSDs.
NVMe SSD metric | ecs.i3.xlarge | ecs.i3.2xlarge | ecs.i3.4xlarge | ecs.i3.8xlarge | ecs.i3.13xlarge | ecs.i3.26xlarge
Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 3,000,000 | 6,000,000
Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 18 GB/s | 36 GB/s
Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 12 GB/s | 24 GB/s
Access latency | Microsecond-level (μs)
Note: The metrics in the table represent optimal performance. To achieve the best performance, use the latest version of a Linux image. This instance family supports only Linux images. For more information, see the Alibaba Cloud Linux 3 image release notes.
The following table describes the performance of NVMe SSD local disks for the i3g instance family with local SSDs.
NVMe SSD metric | ecs.i3g.2xlarge | ecs.i3g.4xlarge | ecs.i3g.8xlarge | ecs.i3g.13xlarge | ecs.i3g.26xlarge
Maximum read IOPS | 125,000 | 250,000 | 500,000 | 750,000 | 1,500,000
Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 4.5 GB/s | 9 GB/s
Maximum write throughput | 0.5 GB/s | 1 GB/s | 2 GB/s | 3 GB/s | 6 GB/s
Access latency | Microsecond-level (μs)
Note: The metrics in the table represent optimal performance. To achieve the best performance, use the latest version of a Linux image. This instance family supports only Linux images. For more information, see the Alibaba Cloud Linux 3 image release notes.
The following table describes the performance of NVMe SSD local disks for the i2 and i2g instance families with local SSDs.
NVMe SSD metric | Single-disk performance (ecs.i2.xlarge and ecs.i2g.2xlarge) | Single-disk performance (other i2 and i2g instance types) | Overall instance performance①
Maximum capacity | 894 GiB | 1,788 GiB | 8 × 1,788 GiB
Maximum read IOPS | 150,000 | 300,000 | 1,500,000
Maximum read throughput | 1 GB/s | 2 GB/s | 16 GB/s
Maximum write throughput | 0.5 GB/s | 1 GB/s | 8 GB/s
Access latency | Microsecond-level (μs)
① The overall instance performance data applies only to the ecs.i2.16xlarge instance type and represents the highest local storage performance of the i2 instance family.
The following table describes the performance of NVMe SSD local disks for the i2ne and i2gne instance families with local SSDs.
NVMe SSD metric | ecs.i2ne.xlarge and ecs.i2gne.2xlarge | ecs.i2ne.2xlarge and ecs.i2gne.4xlarge | ecs.i2ne.4xlarge and ecs.i2gne.8xlarge | ecs.i2ne.8xlarge and ecs.i2gne.16xlarge | ecs.i2ne.16xlarge
Maximum capacity | 894 GiB | 1,788 GiB | 2 × 1,788 GiB | 4 × 1,788 GiB | 8 × 1,788 GiB
Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 4,000,000
Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s
Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s
Access latency | Microsecond-level (μs)
The following table describes the performance of NVMe SSD local disks for the i1 instance family with local SSDs.
NVMe SSD metric | Single-disk performance | Overall instance performance②
Maximum capacity | 1,456 GiB | 2,912 GiB
Maximum IOPS | 240,000 | 480,000
Read/write IOPS① | min{165 × Capacity, 240,000} | 2 × min{165 × Capacity, 240,000}
Maximum read throughput | 2 GB/s | 4 GB/s
Read throughput① | min{1.4 × Capacity, 2,000} MB/s | 2 × min{1.4 × Capacity, 2,000} MB/s
Maximum write throughput | 1.2 GB/s | 2.4 GB/s
Write throughput① | min{0.85 × Capacity, 1,200} MB/s | 2 × min{0.85 × Capacity, 1,200} MB/s
Access latency | Microsecond-level (μs)
① The following examples describe how to calculate the performance of a single disk:
Example of the formula for calculating the write IOPS of a single NVMe SSD local disk: 165 IOPS per GiB, up to a maximum of 240,000 IOPS.
Example of the formula for calculating the write throughput of a single NVMe SSD local disk: 0.85 MB/s per GiB, up to a maximum of 1,200 MB/s.
② The overall instance performance data applies only to the ecs.i1.14xlarge instance type and represents the highest local storage performance of the i1 instance family.
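As a hedged illustration, the i1 formulas in note ① can be evaluated in Python (function names are ours; capacity is in GiB):

```python
# Single-disk i1 NVMe SSD local disk formulas from the table above.
def i1_write_iops(cap_gib: int) -> int:
    return min(165 * cap_gib, 240_000)

def i1_read_throughput_mbs(cap_gib: int) -> float:
    return min(1.4 * cap_gib, 2_000)

def i1_write_throughput_mbs(cap_gib: int) -> float:
    return min(0.85 * cap_gib, 1_200)

cap = 1456  # maximum single-disk capacity from the table
print(i1_write_iops(cap))            # 240000 (capped)
print(i1_read_throughput_mbs(cap))   # 2000.0 (capped)
print(i1_write_throughput_mbs(cap))  # 1200.0 (capped)
```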
SATA HDD local disks
The following table describes the performance of SATA HDD local disks.
SATA HDD metric | d1 and d1ne (single disk) | d1 and d1ne (overall instance) | d2c (single disk) | d2c (overall instance) | d2s (single disk) | d2s (overall instance) | d3s (single disk) | d3s (overall instance)
Maximum capacity | 5,500 GiB | 154,000 GiB | 3,700 GiB | 44,400 GiB | 7,300 GiB | 219,000 GiB | 11,100 GiB | 355,200 GiB |
Maximum throughput | 190 MB/s | 5,320 MB/s | 190 MB/s | 2,280 MB/s | 190 MB/s | 5,700 MB/s | 260 MB/s | 8,320 MB/s |
Access latency | Millisecond-level (ms) | |||||||
The overall instance performance data applies only to the ecs.d1.14xlarge, ecs.d1ne.14xlarge, ecs.d2c.24xlarge, ecs.d2s.20xlarge, and ecs.d3s.16xlarge instance types and represents the highest local storage performance of the corresponding instance families.
Elastic ephemeral disk performance
You can customize the capacity of an elastic ephemeral disk for temporary data storage. For more information about elastic ephemeral disks, see Elastic ephemeral disks.
Two categories of elastic ephemeral disks are available: standard and premium. Standard elastic ephemeral disks are suitable for scenarios with large data volumes and high throughput needs, while premium elastic ephemeral disks are suitable for scenarios requiring small capacity but high IOPS. The following table describes the performance of each type:
Metric | Standard elastic ephemeral disks | Premium elastic ephemeral disks |
Single-disk capacity range (GiB) | 64 to 8,192 | 64 to 8,192 |
Maximum read IOPS per disk | min{100 × Capacity, 820,000} | min{300 × Capacity, 1,000,000}
Maximum write IOPS per disk | min{20 × Capacity, 160,000} | min{150 × Capacity, 500,000}
Maximum read throughput per disk (MB/s) | min{0.8 × Capacity, 4,096} | min{1.6 × Capacity, 4,096}
Maximum write throughput per disk (MB/s) | min{0.4 × Capacity, 2,048} | min{1 × Capacity, 2,048}
Write I/O density① | 20 | 150 |
Read I/O density① | 100 | 300 |
① I/O density = IOPS/Capacity, measured in IOPS/GiB. This metric indicates the IOPS capability that each GiB of capacity provides.
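The limits above can be computed directly from capacity. The following is a minimal sketch, assuming only the min{} formulas in the table (the function name is ours):

```python
# Per-disk performance limits for elastic ephemeral disks (capacity in GiB).
def ephemeral_limits(cap_gib: int, premium: bool = False) -> dict:
    if premium:
        return {
            "read_iops":  min(300 * cap_gib, 1_000_000),
            "write_iops": min(150 * cap_gib, 500_000),
            "read_mbs":   min(1.6 * cap_gib, 4_096),
            "write_mbs":  min(1.0 * cap_gib, 2_048),
        }
    return {
        "read_iops":  min(100 * cap_gib, 820_000),
        "write_iops": min(20 * cap_gib, 160_000),
        "read_mbs":   min(0.8 * cap_gib, 4_096),
        "write_mbs":  min(0.4 * cap_gib, 2_048),
    }

print(ephemeral_limits(1024))                # standard, 1 TiB
print(ephemeral_limits(1024, premium=True))  # premium, 1 TiB
```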
Test block storage performance
You can run benchmarking tools such as fio to test the IOPS, throughput, and latency of block storage devices. For more information, see Test the performance of block storage.
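For a rough self-test, the following hedged Python sketch shells out to fio (which must be installed) and reads IOPS from its JSON output. It writes to a scratch file rather than a raw device so that no data is destroyed; the file path and job parameters are illustrative, not prescriptive:

```python
# Measure 4 KiB random-write IOPS with fio and parse its JSON report.
import json
import subprocess

def fio_randwrite_iops(testfile: str = "/tmp/fio-testfile") -> float:
    result = subprocess.run(
        [
            "fio", "--name=randwrite", f"--filename={testfile}",
            "--size=1G", "--direct=1", "--ioengine=libaio",
            "--rw=randwrite", "--bs=4k", "--iodepth=128", "--numjobs=1",
            "--runtime=30", "--time_based", "--group_reporting",
            "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout)
    return data["jobs"][0]["write"]["iops"]

print(f"4 KiB random-write IOPS: {fio_randwrite_iops():,.0f}")
```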
Troubleshooting slow read/write speeds or high I/O on disks
You can view the monitoring information of your disks in the ECS console, EBS console, or Cloud Monitor console to determine whether the current disk performance meets your business requirements or has reached a performance bottleneck. For more information, see View the monitoring information of a disk.
Check if the disk uses the pay-as-you-go billing method. If it does, the I/O speed of the disk is limited when your account has an overdue payment. The speed is restored after you add funds to your account.
Note: If you do not renew the disk within 15 days after the payment becomes overdue, the disk is automatically released and its data cannot be recovered.
For Linux systems, see Troubleshoot high disk I/O load on a Linux instance to identify programs that cause high IOPS.
When you import data, the performance of both the client and the server affects the read and write speeds.
On the server, you can use the atop tool to monitor Linux system metrics. This tool continuously monitors the usage of various resources on the server. By default, the resource usage information is recorded in the /var/log/atop directory. You can use the atop logs to further investigate the issue.
If the disk performance does not meet your business needs, you can also try to improve the disk performance. For more information, see How to improve disk performance.
How to improve disk performance
If the current performance of a disk does not meet your business requirements, you can try the following methods to improve its performance:
The actual performance of a disk is limited by its own specifications and the specifications of the instance to which it is attached. If the IOPS and bandwidth of an instance type are lower than the performance limits of the disk, you cannot improve the performance by only upgrading the disk. You must also upgrade the instance type. For more information about the limits of instance types on disks, see Instance families.
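A one-line illustration of this constraint (the numbers are placeholders, not real specifications):

```python
disk_max_iops = 50_000      # e.g., a PL1 ESSD at full baseline performance
instance_max_iops = 20_000  # hypothetical IOPS limit of the instance type

effective_iops = min(disk_max_iops, instance_max_iops)
print(effective_iops)  # 20000: upgrading only the disk cannot exceed this
```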
Application scenario | Method to improve performance
If your current disk type, such as standard SSD, cannot meet the higher IOPS or throughput requirements of your growing business, you can change the disk to a higher-performance type, such as a PL1 ESSD. This provides higher IOPS and better response times. This method is suitable for applications that have strict storage performance requirements and experience significant growth in business scale or access volume. | Change the disk category
If you are using an ESSD, you can adjust its performance level based on changes in your business workload. | Modify the performance level of the ESSD
If you are using an ESSD AutoPL disk, you can set provisioned performance or enable performance burst to improve the disk's performance. | Configure provisioned performance or performance burst
If your business requires higher IOPS and has insufficient storage space, we recommend that you resize the disk. For some disk types, such as PL1 ESSDs, the baseline IOPS increases with the capacity. This enhances the disk's data processing capability and improves its performance. This method is suitable for applications with continuously growing data volumes that require both high storage capacity and high IOPS. For example, the IOPS of a PL1 ESSD is calculated using the formula min{1,800 + 50 × Capacity, 50,000}. The IOPS of a 40 GiB PL1 ESSD is 3,800. If you resize the disk to 100 GiB, the IOPS increases to 6,800. | Resize the disk
If you want to manage and optimize storage resource allocation more flexibly and improve disk performance, you can use Logical Volume Manager (LVM). By creating a striped logical volume across multiple disks, you can process read and write operations in parallel. This improves disk performance and is especially suitable for multi-threaded applications and databases that require high concurrency. | Use LVM to create a striped logical volume
To improve IOPS and throughput, or to add data redundancy, you can create a Redundant Array of Independent Disks (RAID) array. For example, you can use RAID 0 to increase read and write speeds, or use RAID 1 or RAID 10 to combine improved performance with data redundancy. | Create a RAID array