
FAQ about Shared Block Storage

Last Updated: Jul 24, 2018

How do I apply for access?

The service is currently in public beta. You can open a ticket to apply for a free trial.

What is Shared Block Storage?

ECS Shared Block Storage is a block-level data storage service that supports concurrent reads and writes by multiple ECS instances. It offers high concurrency, high performance, and high reliability. A single Shared Block Storage device can be attached to a maximum of eight ECS instances at the same time.

Why do I need Shared Block Storage?

In a traditional cluster architecture, multiple computing nodes must access the same copy of data so that the cluster can continue to provide services even when one or more nodes fail. If data files are stored on Shared Block Storage devices under the unified management of a cluster file system, data consistency is guaranteed during concurrent reads and writes from multiple front-end computing nodes.

What is Shared Block Storage designed for?

Shared Block Storage is designed for the high availability architectures of enterprise-level applications. It provides shared access to block storage devices in shared-everything architectures, such as high availability server clusters and Oracle databases deployed with Oracle RAC. The Oracle RAC architecture is common among government, enterprise, and financial customers.

How do I use Shared Block Storage?

Shared Block Storage devices are provided as bare block devices and do not come with a cluster file system. You must install a cluster file system separately to manage them.

If you attach Shared Block Storage devices to multiple ECS instances but use a conventional file system to manage them, disk space allocation conflicts and data file inconsistencies may occur:

  • Disk space allocation conflicts

    If a Shared Block Storage device is attached to multiple instances and one of those instances (Instance A) writes data to a file, only Instance A's space allocation record is updated; the records of the other instances are not. When another instance (Instance B) then writes data, it may allocate disk space that has already been allocated by Instance A, resulting in a disk space allocation conflict.

  • Data file inconsistencies

    After an instance (Instance A) reads and caches data, another process on Instance A that requests the same data reads it directly from the cache. If the same data is modified on another instance (Instance B) during this period, Instance A is unaware of the change and continues to serve the stale data from its cache, resulting in a data inconsistency.

To avoid these issues, you can manage block devices by using a cluster file system, such as GFS or GPFS. For typical Oracle RAC business scenarios, we recommend that you use Oracle ASM for unified management of storage volumes and file systems. A brief cluster file system example follows.
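
As a minimal sketch, assuming the shared device appears as /dev/vdb on each instance and that a cluster stack (for example, corosync with DLM) named mycluster has already been configured, a GFS2 file system could be created and mounted as follows. The device path, cluster name, and file system name here are placeholders, not values from this product:

    # On one instance only: create a GFS2 file system on the shared device.
    # -p lock_dlm selects the distributed lock manager, -t takes
    # <clustername>:<fsname>, and -j 8 creates one journal per attaching
    # node (up to the eight-instance limit).
    mkfs.gfs2 -p lock_dlm -t mycluster:sharedfs -j 8 /dev/vdb

    # On every instance: mount the shared file system.
    mkdir -p /mnt/shared
    mount -t gfs2 /dev/vdb /mnt/shared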

Can I attach Shared Block Storage to instances across regions and zones?

A Shared Block Storage device can be attached only to instances in the same zone of the same region. If the device is attached to more than one instance, all of those instances must be in the same zone as the device.

How many Shared Block Storage devices can be attached to one ECS instance?

Up to 16 data disks, including Shared Block Storage devices, can be attached to a single ECS instance.

What types of Shared Block Storage are available?

Two Shared Block Storage types are currently available: SSD Shared Block Storage and Ultra Shared Block Storage. Their specifications are as follows.

SSD Shared Block Storage:

  • Maximum single disk capacity: 32 TiB
  • Random read/write IOPS*: 30,000
  • Sequential read/write throughput*: 512 MBps
  • Single disk performance formulas: IOPS = min{1600 + 40 * capacity, 30000}; throughput = min{100 + 0.5 * capacity, 512} MBps
  • Access latency: 0.5–2 ms

Ultra Shared Block Storage:

  • Maximum single disk capacity: 32 TiB
  • Random read/write IOPS*: 5,000
  • Sequential read/write throughput*: 160 MBps
  • Single disk performance formulas: IOPS = min{1000 + 6 * capacity, 5000}; throughput = min{50 + 0.15 * capacity, 160} MBps
  • Access latency: 1–3 ms

Both types:

  • Expected price: Free of charge during the public beta
  • Data reliability: 99.9999999%
  • Multi-node attachment: Up to 8 instances

* The maximum IOPS and throughput figures above are the values measured in stress tests against a bare device attached to two or more instances at the same time.
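
As a worked example, assuming the capacity term in the formulas is expressed in GiB, a 500 GiB SSD Shared Block Storage device would deliver:

    IOPS = min{1600 + 40 * 500, 30000} = min{21600, 30000} = 21600
    Throughput = min{100 + 0.5 * 500, 512} MBps = min{350, 512} MBps = 350 MBps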

What are the product features?

  • Shared access: Supports concurrent attachment by a maximum of eight instances.

  • High performance: Delivers a maximum of 30,000 random read/write IOPS, 50% higher than SSD cloud disks, and 512 MBps of sequential throughput, 100% higher than SSD cloud disks.

  • Large capacity: 32 TiB for a single disk, and 128 TiB for a single instance.

  • Security and reliability: Assures 99.9999999% data reliability and supports automatic snapshot policies.

What billing methods are available?

Shared Block Storage supports the Pay-As-You-Go and Subscription billing methods. During the public beta period, only Pay-As-You-Go is available, and Shared Block Storage is free of charge in all regions.

What are the commands for testing Shared Block Storage performance?

With two instances in the stress test:

  • Test random write IOPS:

    fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Write_Testing

  • Test random read IOPS:

    fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Read_Testing

  • Test write throughput:

    fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=64k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Write_PPS_Testing

  • Test read throughput:

    fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=64k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Read_PPS_Testing

With four instances in the stress test:

  • Test random write IOPS:

    fio -direct=1 -iodepth=96 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Write_Testing

  • Test random read IOPS:

    fio -direct=1 -iodepth=96 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Read_Testing

  • Test write throughput:

    fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=64k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Write_PPS_Testing

  • Test read throughput:

    fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=64k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Read_PPS_Testing

Note: When you use fio for performance stress tests, the sum of the iodepth values across all clients must not exceed 384. For example, if stress tests run on four instances at the same time, the iodepth of each client must not exceed 96.
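
The commands above exercise a 1 GiB test file named iotest in the current directory. To reproduce the bare-device figures marked with * in the specifications, you can point fio at the raw device instead; the device path below is only a placeholder, and writing to a raw device destroys any data on it:

    # WARNING: this writes directly to the raw device and destroys its data.
    # Replace /dev/vdb with the actual device name of the shared disk.
    fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k \
        -size=1G -numjobs=1 -runtime=1000 -group_reporting \
        -filename=/dev/vdb -name=Rand_Write_Testing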
