
Elastic Compute Service: Block storage performance

Last Updated:Jan 09, 2025

Block storage performance and pricing differ across types. Select the appropriate block storage products based on your specific workloads and application needs. This topic covers the performance metrics and specifications for cloud disks and local disks, along with elastic ephemeral disks.

Performance metrics

Key metrics for block storage performance include IOPS, throughput, and latency. Performance for certain block storage products correlates with capacity. For instance, ESSDs with varying performance levels have specific capacity requirements.

  • I/O size

    I/O size refers to the amount of data in each read/write operation, such as 4 KiB. It links the other two performance metrics: throughput = IOPS × I/O size. As a result, workloads with different I/O sizes should focus on different performance metrics.

  • IOPS (input/output operations per second): Reflects the number of I/O operations a block storage device can process per second, indicating its read/write capability. Unit: operations/second.

    For latency-sensitive random small I/O operations, such as those in database applications, IOPS performance is crucial.

    Note

    In database applications, frequent data insertions, updates, or deletions occur. High IOPS ensures efficient system operation under numerous random read/write operations, preventing performance drops or increased latency due to I/O bottlenecks.

    Common IOPS metrics

    | Metric | Description | Data access method |
    | --- | --- | --- |
    | Total IOPS | The total number of I/O operations per second | Access locations on storage devices in a continuous or non-continuous manner |
    | Random read IOPS | The average number of random read I/O operations per second | Access locations on storage devices in a non-continuous manner |
    | Random write IOPS | The average number of random write I/O operations per second | Access locations on storage devices in a non-continuous manner |
    | Sequential read IOPS | The average number of sequential read I/O operations per second | Access locations on storage devices in a continuous manner |
    | Sequential write IOPS | The average number of sequential write I/O operations per second | Access locations on storage devices in a continuous manner |

  • Throughput: Measures data transfer per second. Unit: MB/s.

    Applications with sequential read/write operations or large I/Os, such as offline big data processing, should prioritize throughput.

    Note

    Offline computing tasks, such as those performed by Hadoop, involve petabyte-scale data analysis and processing. Insufficient throughput can extend overall processing times, impacting business efficiency and response times.

  • Latency: The time taken for a block storage device to process an I/O request. Units: seconds, milliseconds, or microseconds. High latency can degrade performance or cause errors in latency-sensitive applications.

    For applications sensitive to high latency, such as databases, consider using low-latency disks like ESSD AutoPL disks or ESSDs.

  • Capacity: The storage space available. Units: TiB, GiB, MiB, KiB.

    Block storage capacity is expressed in binary units, with 1 GiB equaling 1,024 MiB. Capacity isn't a performance metric but affects performance levels. Larger capacity block storage devices offer greater processing capabilities. Devices within the same category deliver consistent I/O performance per unit capacity, with cloud disk performance linearly increasing with capacity up to the category's single-disk maximum.
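The relationship between these metrics can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the helper name and the sample workloads are not tied to any specific disk category:

```python
def throughput_mbps(iops: int, io_size_kib: int) -> float:
    """Return throughput in MB/s, using throughput = IOPS × I/O size."""
    return iops * io_size_kib / 1024  # KiB/s -> MB/s (binary units)

# A 4 KiB random workload at 25,000 IOPS moves far less data
# than a 1 MiB sequential workload at only 400 IOPS:
small_io = throughput_mbps(25_000, 4)   # ≈ 97.7 MB/s
large_io = throughput_mbps(400, 1024)   # 400.0 MB/s
print(small_io, large_io)
```

This is why random small-I/O workloads (databases) are IOPS-bound while sequential large-I/O workloads (offline analytics) are throughput-bound.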

Disk performance

The table below describes the performance and typical applications for various cloud disk categories.

Important
  • Cloud disk performance is not only bound by the disk's specifications but also by the attached instance's specifications. For more information, see storage I/O performance.

  • Standard SSDs, ultra disks, and basic disks represent previous generation cloud disks and may not be available in certain regions and zones. Consider using PL0 ESSDs or ESSD Entry disks as alternatives to ultra and basic disks, and ESSD AutoPL disks in place of standard SSDs.

Zone-redundant ESSDs, ESSD AutoPL disks, PL3/PL2/PL1/PL0 ESSDs, and ESSD Entry disks belong to the ESSD series; standard SSDs, ultra disks, and basic disks are previous-generation disks. All categories provide 99.9999999% data reliability.

| Performance category | Capacity range per disk (GiB) | Maximum IOPS | Maximum throughput (MB/s) | Single-disk IOPS performance formula | Single-disk throughput performance formula (MB/s) | Average single-channel random write latency (ms, block size = 4 KB) |
| --- | --- | --- | --- | --- | --- | --- |
| Zone-redundant ESSD (public preview) | 10~65,536 | 50,000 | 350 | min{1,800 + 50 × Capacity, 50,000} | min{120 + 0.5 × Capacity, 350} | < 2 when the instance accesses a disk in the same zone; < 4 when the instance accesses a disk in a different zone |
| ESSD AutoPL | 1~65,536 | 1,000,000 | 4,096 | Baseline: max{min{1,800 + 50 × Capacity, 50,000}, 3,000}. Provisioned: not configurable when Capacity ≤ 3 GiB; when Capacity ≥ 4 GiB, the range is [1, min{1,000 IOPS per GiB × Capacity − Baseline IOPS, 50,000}]. Performance burst: Actual final IOPS − Baseline IOPS − Provisioned IOPS | Baseline: max{min{120 + 0.5 × Capacity, 350}, 125}. Provisioned: 16 KB × Provisioned IOPS / 1,024. Performance burst: Actual final throughput − Baseline throughput − Provisioned throughput | 0.2 |
| PL3 ESSD | 1,261~65,536 | 1,000,000 | 4,000 | min{1,800 + 50 × Capacity, 1,000,000} | min{120 + 0.5 × Capacity, 4,000} | 0.2 |
| PL2 ESSD | 461~65,536 | 100,000 | 750 | min{1,800 + 50 × Capacity, 100,000} | min{120 + 0.5 × Capacity, 750} | 0.2 |
| PL1 ESSD | 20~65,536 | 50,000 | 350 | min{1,800 + 50 × Capacity, 50,000} | min{120 + 0.5 × Capacity, 350} | 0.2 |
| PL0 ESSD | 1~65,536 | 10,000 | 180 | min{1,800 + 12 × Capacity, 10,000} | min{100 + 0.25 × Capacity, 180} | 0.3~0.5 |
| ESSD Entry | 10~32,768 | 6,000 | 150 | min{1,800 + 8 × Capacity, 6,000} | min{100 + 0.15 × Capacity, 150} | 1~3 |
| Standard SSD | 20~32,768 | 25,000 | 300 | min{1,800 + 30 × Capacity, 25,000} | min{120 + 0.5 × Capacity, 300} | 0.5~2 |
| Ultra disk | 20~32,768 | 5,000 | 140 | min{1,800 + 8 × Capacity, 5,000} | min{100 + 0.15 × Capacity, 140} | 1~3 |
| Basic disk | 5~2,000 | Hundreds | 30~40 | None | None | 5~10 |

  • Below are examples of using the preceding formulas to calculate single-disk performance for a PL0 ESSD:

    • Maximum IOPS for a PL0 ESSD: Baseline IOPS is 1,800, increasing by 12 IOPS for each additional GiB of storage, up to 10,000 IOPS.

    • Maximum throughput for a PL0 ESSD: Baseline throughput is 100 MB/s, increasing by 0.25 MB/s for each additional GiB of storage, up to 180 MB/s.

  • Standard SSD performance varies with block size. Smaller blocks yield lower throughput but higher IOPS, as the following table shows.

    | I/O size (KiB) | Maximum IOPS | Throughput (MB/s) |
    | --- | --- | --- |
    | 4 | Approximately 25,000 | Approximately 100 |
    | 16 | Approximately 17,200 | Approximately 260 |
    | 32 | Approximately 9,600 | Approximately 300 |
    | 64 | Approximately 4,800 | Approximately 300 |

  • Beyond baseline and provisioned performance, ESSD AutoPL disks also offer burst capability. You can monitor burst details, including burst duration and total burst I/O, in real time with CloudLens for EBS. For more details, see disk analysis.
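As a quick sanity check, the single-disk formulas above can be expressed directly in Python. The function names are purely illustrative, not part of any API:

```python
def pl0_max_iops(capacity_gib: int) -> int:
    # PL0 ESSD: min{1,800 + 12 × Capacity, 10,000}
    return min(1_800 + 12 * capacity_gib, 10_000)

def pl0_max_throughput(capacity_gib: int) -> float:
    # PL0 ESSD: min{100 + 0.25 × Capacity, 180} MB/s
    return min(100 + 0.25 * capacity_gib, 180)

def autopl_baseline_iops(capacity_gib: int) -> int:
    # ESSD AutoPL baseline: max{min{1,800 + 50 × Capacity, 50,000}, 3,000}
    return max(min(1_800 + 50 * capacity_gib, 50_000), 3_000)

def autopl_provisioned_throughput(provisioned_iops: int) -> float:
    # ESSD AutoPL provisioned throughput: 16 KB × Provisioned IOPS / 1,024
    return 16 * provisioned_iops / 1_024

print(pl0_max_iops(100))         # 3000
print(pl0_max_throughput(100))   # 125.0
print(autopl_baseline_iops(40))  # 3800
```

For example, a 100 GiB PL0 ESSD delivers 1,800 + 12 × 100 = 3,000 IOPS, well below the 10,000 IOPS cap.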

Performance of local disks

Warning

Local disks are not standalone; their data reliability hinges on the physical server's reliability, posing a single point of failure risk. A failure in the physical server can impact multiple instances and risk data loss. Avoid using local disks for long-term data storage. For more on local disks, see local disks.

NVMe SSD local disks

  • Performance metrics for local NVMe SSDs used by the d3c compute-intensive big data instance family are described in the following table.

    | Metric | Single-disk performance | ecs.d3c.3xlarge | ecs.d3c.7xlarge | ecs.d3c.14xlarge |
    | --- | --- | --- | --- | --- |
    | Maximum read IOPS | 100,000 | 100,000 | 200,000 | 400,000 |
    | Maximum read throughput | 4 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |
    | Maximum write throughput | 2 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |

  • Performance metrics for local NVMe SSDs used by the i4 instance family are detailed in the table below.

    | NVMe SSD metric | ecs.i4.large | ecs.i4.xlarge | ecs.i4.2xlarge | ecs.i4.4xlarge | ecs.i4.8xlarge | ecs.i4.16xlarge | ecs.i4.32xlarge |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Maximum read IOPS | 112,500 | 225,000 | 450,000 | 900,000 | 1,800,000 | 3,600,000 | 7,200,000 |
    | Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s | 48 GB/s |
    | Maximum write throughput | 0.375 GB/s | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s |

    Note

    The instance family supports only Linux images. We recommend that you use the most recent Linux image versions, such as Alibaba Cloud Linux 3, to obtain optimal performance.

  • Performance metrics for local NVMe SSDs used by the i4g and i4r instance families are presented in the following table.

    | NVMe SSD metric | ecs.i4g.4xlarge and ecs.i4r.4xlarge | ecs.i4g.8xlarge and ecs.i4r.8xlarge | ecs.i4g.16xlarge and ecs.i4r.16xlarge | ecs.i4g.32xlarge and ecs.i4r.32xlarge |
    | --- | --- | --- | --- | --- |
    | Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 |
    | Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s |
    | Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |

    Note

    The performance data in the preceding table represents the highest performance levels of local storage for the instance families. The instance families support only Linux images. We recommend that you use the most recent Linux image versions, such as Alibaba Cloud Linux 3, to obtain optimal performance.

  • Performance metrics for local NVMe SSDs used by the i3 instance family are shown in the table below.

    | NVMe SSD metric | ecs.i3.xlarge | ecs.i3.2xlarge | ecs.i3.4xlarge | ecs.i3.8xlarge | ecs.i3.13xlarge | ecs.i3.26xlarge |
    | --- | --- | --- | --- | --- | --- | --- |
    | Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 3,000,000 | 6,000,000 |
    | Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 18 GB/s | 36 GB/s |
    | Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 12 GB/s | 24 GB/s |

    Note

    The performance data in the preceding table represents the highest performance levels of local storage for the instance families. The instance families support only Linux images. We recommend that you use the most recent Linux image versions, such as Alibaba Cloud Linux 3, to obtain optimal performance.

  • Performance metrics for local NVMe SSDs used by the i3g instance family are outlined in the following table.

    | NVMe SSD metric | ecs.i3g.2xlarge | ecs.i3g.4xlarge | ecs.i3g.8xlarge | ecs.i3g.13xlarge | ecs.i3g.26xlarge |
    | --- | --- | --- | --- | --- | --- |
    | Maximum read IOPS | 125,000 | 250,000 | 500,000 | 750,000 | 1,500,000 |
    | Maximum read throughput | 0.75 GB/s | 1.5 GB/s | 3 GB/s | 4.5 GB/s | 9 GB/s |
    | Maximum write throughput | 0.5 GB/s | 1 GB/s | 2 GB/s | 3 GB/s | 6 GB/s |

    Note

    The performance data in the preceding table represents the highest performance levels of local storage for the instance families. The instance families support only Linux images. We recommend that you use the most recent Linux image versions, such as Alibaba Cloud Linux 3, to obtain optimal performance.

  • Performance metrics for local NVMe SSDs used by the i2 and i2g instance families are described in the table below.

    | NVMe SSD metric | Single-disk performance (ecs.i2.xlarge and ecs.i2g.2xlarge only) | Single-disk performance (other i2 and i2g instance types) | Overall instance performance① |
    | --- | --- | --- | --- |
    | Maximum capacity | 894 GiB | 1,788 GiB | 8 × 1,788 GiB |
    | Maximum read IOPS | 150,000 | 300,000 | 1,500,000 |
    | Maximum read throughput | 1 GB/s | 2 GB/s | 16 GB/s |
    | Maximum write throughput | 0.5 GB/s | 1 GB/s | 8 GB/s |
    | Latency | Within microseconds (μs) | Within microseconds (μs) | Within microseconds (μs) |

    ①: Overall instance performance data is specific to the ecs.i2.16xlarge instance type, representing the peak local storage performance for the i2 family.

  • Performance metrics for local NVMe SSDs used by the i2ne and i2gne instance families are detailed in the following table.

    | NVMe SSD metric | ecs.i2ne.xlarge and ecs.i2gne.2xlarge | ecs.i2ne.2xlarge and ecs.i2gne.4xlarge | ecs.i2ne.4xlarge and ecs.i2gne.8xlarge | ecs.i2ne.8xlarge and ecs.i2gne.16xlarge | ecs.i2ne.16xlarge |
    | --- | --- | --- | --- | --- | --- |
    | Maximum capacity | 894 GiB | 1,788 GiB | 2 × 1,788 GiB | 4 × 1,788 GiB | 8 × 1,788 GiB |
    | Maximum read IOPS | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 4,000,000 |
    | Maximum read throughput | 1.5 GB/s | 3 GB/s | 6 GB/s | 12 GB/s | 24 GB/s |
    | Maximum write throughput | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |

    Latency is within microseconds (μs) for all of the preceding instance types.

  • Performance metrics for local NVMe SSDs used by the i1 instance family are shown in the table below.

    | NVMe SSD metric | Single-disk performance | Overall instance performance |
    | --- | --- | --- |
    | Maximum capacity | 1,456 GiB | 2,912 GiB |
    | Maximum IOPS | 240,000 | 480,000 |
    | Write IOPS and read IOPS | min{165 × Capacity, 240,000} | 2 × min{165 × Capacity, 240,000} |
    | Maximum read throughput | 2 GB/s | 4 GB/s |
    | Read throughput | min{1.4 × Capacity, 2,000} MB/s | 2 × min{1.4 × Capacity, 2,000} MB/s |
    | Maximum write throughput | 1.2 GB/s | 2.4 GB/s |
    | Write throughput | min{0.85 × Capacity, 1,200} MB/s | 2 × min{0.85 × Capacity, 1,200} MB/s |
    | Latency | Within microseconds (μs) | Within microseconds (μs) |

    Below are examples using an NVMe SSD local disk to calculate single disk performance:

    • The write IOPS formula indicates each GiB of capacity contributes 165 write IOPS, up to 240,000 IOPS per disk.

    • The write throughput formula shows each GiB of capacity contributes 0.85 MB/s, up to a maximum of 1,200 MB/s.

    Overall instance performance data pertains only to the ecs.i1.14xlarge instance type, indicating the highest local storage performance for the i1 family.
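The i1 formulas above can likewise be checked with a short sketch (function names are illustrative; the `disks` parameter is an assumption for summing across an instance's local disks):

```python
def i1_write_iops(capacity_gib: int, disks: int = 1) -> int:
    # i1 local NVMe SSD write IOPS: min{165 × Capacity, 240,000} per disk
    return disks * min(165 * capacity_gib, 240_000)

def i1_write_throughput(capacity_gib: int, disks: int = 1) -> float:
    # i1 write throughput: min{0.85 × Capacity, 1,200} MB/s per disk
    return disks * min(0.85 * capacity_gib, 1_200)

# A full-size 1,456 GiB disk hits both per-disk caps:
print(i1_write_iops(1_456))        # 240000 (165 × 1,456 = 240,240, capped)
print(i1_write_throughput(1_456))  # 1200 (0.85 × 1,456 ≈ 1,237.6, capped)
```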

SATA HDD local disks

Performance metrics for local SATA HDDs are provided in the table below.

| Instance family | Maximum capacity per disk (GiB) | Maximum capacity per instance (GiB) | Maximum throughput per disk (MB/s) | Maximum throughput per instance (MB/s) |
| --- | --- | --- | --- | --- |
| d1 and d1ne | 5,500 | 154,000 | 190 | 5,320 |
| d2c | 3,700 | 44,400 | 190 | 2,280 |
| d2s | 7,300 | 219,000 | 190 | 5,700 |
| d3s | 11,100 | 355,200 | 260 | 8,320 |

Latency is within milliseconds (ms) for all of the preceding instance families.

Note

Overall instance performance data applies to the ecs.d1.14xlarge, ecs.d1ne.14xlarge, ecs.d2c.24xlarge, ecs.d2s.20xlarge, and ecs.d3s.16xlarge instance types, which deliver the top local storage performance of their respective families.

Elastic ephemeral disk performance

Note

Customize the capacity of elastic ephemeral disks for temporary data storage. For more about elastic ephemeral disks, see elastic ephemeral disk.

Two categories of elastic ephemeral disks are available: standard and premium. Standard elastic ephemeral disks are suitable for scenarios with large data volumes and high throughput needs, while premium elastic ephemeral disks are suitable for scenarios requiring small capacity but high IOPS. The following table describes the performance of each type:

| Metric | Standard elastic ephemeral disks | Premium elastic ephemeral disks |
| --- | --- | --- |
| Single-disk capacity range (GiB) | 64 to 8,192 | 64 to 8,192 |
| Maximum read IOPS per disk | min{100 × Capacity, 820,000} | min{300 × Capacity, 1,000,000} |
| Maximum write IOPS per disk | min{20 × Capacity, 160,000} | min{150 × Capacity, 500,000} |
| Maximum read throughput per disk (MB/s) | min{0.8 × Capacity, 4,096} | min{1.6 × Capacity, 4,096} |
| Maximum write throughput per disk (MB/s) | min{0.4 × Capacity, 2,048} | min{1 × Capacity, 2,048} |
| Read I/O density① | 100 | 300 |
| Write I/O density① | 20 | 150 |

①: I/O density = IOPS / disk capacity, unit: IOPS/GiB, indicating the IOPS capability per GiB of capacity.
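The I/O density figures are just the per-GiB slopes of the IOPS formulas, which can be sketched as follows (the function name is illustrative):

```python
def ephemeral_read_iops(capacity_gib: int, premium: bool = False) -> int:
    """Maximum read IOPS per elastic ephemeral disk.

    Read I/O density is 100 IOPS/GiB for standard disks (capped at
    820,000) and 300 IOPS/GiB for premium disks (capped at 1,000,000).
    """
    if premium:
        return min(300 * capacity_gib, 1_000_000)
    return min(100 * capacity_gib, 820_000)

print(ephemeral_read_iops(1_024))                # 102400
print(ephemeral_read_iops(1_024, premium=True))  # 307200
```

For the same 1,024 GiB capacity, a premium disk offers three times the read IOPS of a standard disk, matching its higher I/O density.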

Test block storage performance

Assess block storage performance by using the methods described in the following sections.
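Dedicated benchmarking tools are the proper way to measure disk performance. Purely as an illustration of what an IOPS measurement involves, the crude probe below times synchronous 4 KiB writes from Python; all names are illustrative, and the result is only a rough lower bound that also depends on the instance type and file system:

```python
import os
import tempfile
import time

def rough_write_iops(path: str, block_size: int = 4096, ops: int = 200) -> float:
    """Time synchronous writes of block_size bytes and estimate IOPS."""
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(ops):
            os.write(fd, buf)
            os.fsync(fd)  # force each write through to the device
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    os.unlink(path)
    return ops / elapsed

with tempfile.TemporaryDirectory() as tmp:
    probe = os.path.join(tmp, "probe.bin")
    print(f"approximate synchronous write IOPS: {rough_write_iops(probe):.0f}")
```

Note that this measures single-threaded, queue-depth-1 latency rather than the disk's maximum IOPS, which requires concurrent outstanding I/O to reach.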

Troubleshooting slow read/write or high I/O on cloud disks

Monitor cloud disk performance in the ECS console, EBS console, or CloudMonitor console to verify if it meets your business needs or to identify performance bottlenecks. For more information, see view cloud disk monitoring information.

  1. Check whether the cloud disk uses the pay-as-you-go billing method. If it does, disk I/O may be throttled while the account has an overdue payment; throttling is lifted after the account is recharged.

    Note: If the account remains overdue for 15 days, the cloud disk is released and its data cannot be recovered. Be aware of this risk.

  2. For Linux systems, consult how to check the I/O load on Linux systems to identify high IOPS consuming programs.

  3. Data import performance is influenced by both client and server capabilities.

  4. On the server side, use the atop tool to monitor Linux system metrics, which continuously tracks resource usage, recorded by default in the /var/log/atop directory for later analysis.

  5. If cloud disk performance falls short of your business requirements, consider methods to enhance it as outlined in how to improve cloud disk performance.

How to improve cloud disk performance

To boost cloud disk performance, employ one of the following strategies:

Important

Cloud disk performance is constrained by both the disk and instance specifications. If the instance's IOPS and bandwidth are lower than the disk's maximum, upgrading the instance is necessary to enhance performance. For details on instance limitations, see instance family.

| Scenario | Method to improve performance |
| --- | --- |
| The current cloud disk type (such as standard SSD) cannot meet the higher IOPS or throughput requirements brought by business growth. Changing to a higher-performance cloud disk type, such as ESSD PL1, provides higher IOPS and better response time. Suitable for applications with strict storage performance requirements and significant growth in business scale or access volume. | Change cloud disk type |
| You are using an ESSD and want to adjust its performance level based on changes in business workload. | Modify ESSD performance level |
| You are using an ESSD AutoPL disk and want to set provisioned performance or enable performance burst to improve disk performance. | Modify ESSD AutoPL performance configuration |
| Your business requires not only higher IOPS but also more storage space. For some cloud disk types (such as ESSD PL1), baseline IOPS increases with capacity, so scaling out the disk also improves its performance. For example, the IOPS of a 40-GiB ESSD PL1 is 3,800, which increases to 6,800 when the disk is extended to 100 GiB. | Scale out cloud disk |
| You want to flexibly manage and optimize storage resource allocation. By distributing data across multiple logical volumes with Logical Volume Manager (LVM), read and write operations are processed in parallel, improving cloud disk performance. Particularly suitable for multi-threaded applications and databases that require highly concurrent access. | Create logical volume |
| You need to improve IOPS and throughput, optionally with data redundancy. For example, RAID 0 improves read and write speeds, while RAID 1 or RAID 10 improves performance and provides data redundancy. | Create RAID array |