Persistent memory on Elastic Compute Service (ECS) instances can be used as memory or as local disks, depending on the instance type. This topic describes how to configure persistent memory as local disks and how to resolve the following issue: an ecs.re7p persistent memory-optimized instance on which persistent memory is used as local disks fails to allocate a memory pool by using the Low-Level Persistence Library (LLPL).
Prerequisites
Persistent memory is supported only by the following instance types and image versions:
Instance types
Instance type on which persistent memory can be used as memory: ecs.re6p-redis
Important: If you use persistent memory as memory on an instance, the following situations occur:
You can use persistent memory immediately after you purchase the instance, without the need to initialize persistent memory.
The persistent memory used as memory does not provide data persistence, and the data stored in the persistent memory is lost when the instance is stopped or restarted.
Instance type on which persistent memory can be used as local disks: ecs.re6p
Important: If you use persistent memory as local disks on an instance, the following situations occur:
You must initialize persistent memory after you purchase the instance before you can use it. For more information, see Configure the usage mode of persistent memory.
The persistent memory used as local disks provides data persistence but may cause data loss. We recommend that you back up data in advance. For more information about local disks, see Local disks.
Image versions
Alibaba Cloud Linux 2
CentOS 7.6 and later
Ubuntu 18.04 and 20.04
Background information
The access latency to persistent memory is higher than that to regular memory. However, the data stored in persistent memory is retained when you stop or restart instances. Persistent memory can be used as memory or local disks.
When persistent memory is used as memory, you can move some data from regular memory to persistent memory, such as non-hot data that does not require high-speed storage access. Persistent memory offers large capacity at a lower price per GiB and helps reduce the total cost of ownership (TCO) per GiB of memory.
When persistent memory is used as local disks, it delivers ultra-high I/O performance and a read/write latency as low as 170 nanoseconds. You can use persistent memory for core application databases that require consistent response time. You can replace Non-Volatile Memory Express (NVMe) SSDs with persistent memory-based local disks to deliver higher IOPS, higher bandwidth, and lower latency and resolve performance bottlenecks.
The reliability of data stored in persistent memory depends on the reliability of persistent memory devices and the physical servers to which these devices are attached. This increases the risks of single points of failure (SPOFs). To ensure the reliability of application data, we recommend that you implement data redundancy at the application layer and use cloud disks for long-term data storage.
Configure persistent memory as a local disk
In this example, an instance that has the following configurations is used:
Instance type: ecs.re6p.2xlarge
Image: Alibaba Cloud Linux 2.1903 LTS 64-bit
Log on to a created instance.
For more information, see Connect to a Linux instance by using a password or key.
Install the utilities that are used to manage persistent memory, and delete all namespaces and labels.
sudo yum install -y ndctl daxctl
sudo ndctl disable-namespace all && sudo ndctl destroy-namespace all    # Delete all namespaces.
sudo ndctl disable-region all && sudo ndctl zero-labels all && sudo ndctl enable-region all    # Delete all labels.
Check the size of persistent memory.
ndctl list -R
The following figure shows an example command output, in which the size parameter indicates the size of persistent memory.
Set the usage mode to fsdax.
sudo ndctl create-namespace --reconfig=namespace0.0 -m fsdax --size={region-size} --force
Note: Replace {region-size} with the size value obtained in the previous step.
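The manual substitution can also be scripted. A minimal sketch that pulls the size field out of the JSON that ndctl list -R prints; the sample JSON below is illustrative (the field values are assumptions, not real output from your instance):

```shell
# Illustrative sample of `ndctl list -R` output for a 128-GiB region.
# On the instance, use the real command output instead:
#   sudo ndctl list -R | grep -oE '"size": ?[0-9]+' | grep -oE '[0-9]+'
sample='[{"dev":"region0","size":137438953472,"type":"pmem"}]'
region_size=$(printf '%s' "$sample" | grep -oE '"size": ?[0-9]+' | head -n 1 | grep -oE '[0-9]+')
echo "$region_size"
```

The extracted value can then be passed to ndctl create-namespace as --size="$region_size" instead of being copied by hand.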
Format and mount the persistent memory (/dev/pmem) device.
sudo mkfs -t ext4 /dev/pmem0 && \
sudo mkdir /mnt/sdb && \
sudo mount -o dax,noatime /dev/pmem0 /mnt/sdb
View the mounted /dev/pmem device.
df -h
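As a quick sanity check that the dax option actually took effect, you can inspect the mount table. A small sketch with a helper function; the mount-table line shown is an assumed example, not captured output (on the instance, feed it the real line, for example has_dax "$(grep '^/dev/pmem0 ' /proc/mounts)"):

```shell
# Check whether a mount-table line carries the dax mount option.
has_dax() {
  case "$1" in
    *dax*) echo yes ;;
    *)     echo no ;;
  esac
}
# Illustrative mount-table line (an assumption, not real output):
has_dax "/dev/pmem0 /mnt/sdb ext4 rw,noatime,dax 0 0"
```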
After the /dev/pmem device is mounted, you can use disk performance test tools to test the performance of the device.
The following table compares the performance of persistent memory-based local disks, local NVMe SSDs, and enhanced SSDs (ESSDs).
Note: The performance data in the following table is for reference only. The results of your own tests prevail.
| Metric | 128-GiB persistent memory | 1,788-GiB NVMe SSD | 800-GiB ESSD at performance level 1 (PL1) |
| --- | --- | --- | --- |
| Read bandwidth | 8 GB/s to 10 GB/s | 2 GB/s to 3 GB/s | 0.2 GB/s to 0.3 GB/s |
| Read/write bandwidth | 8 GB/s to 10 GB/s | 1 GB/s to 2 GB/s | 0.2 GB/s to 0.3 GB/s |
| Write bandwidth | 2 GB/s to 3 GB/s | 1 GB/s to 2 GB/s | 0.2 GB/s to 0.3 GB/s |
| Read IOPS | 1,000,000 | 500,000 | 20,000 to 30,000 |
| Read/write IOPS | 1,000,000 | 300,000 | 20,000 to 30,000 |
| Write IOPS | 1,000,000 | 300,000 | 20,000 to 30,000 |
| Read latency | 300 to 400 nanoseconds | 100,000 nanoseconds | 250,000 nanoseconds |
| Write latency | 300 to 400 nanoseconds | 20,000 nanoseconds | 150,000 nanoseconds |
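One common disk performance test tool is fio. The following is a hypothetical fio job file for a 4 KiB random-read test against the mounted file system; the directory, size, and runtime are assumptions to adapt to your setup, and you would run it with fio pmem-randread.fio:

```ini
; Hypothetical job file: 4 KiB random reads against the pmem-backed mount.
[pmem-randread]
directory=/mnt/sdb
ioengine=libaio
rw=randread
bs=4k
iodepth=32
numjobs=4
size=1G
runtime=60
time_based
group_reporting
```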
Resolve the issue that an instance fails to allocate a memory pool by using LLPL
Problem description
An ecs.re7p instance on which persistent memory is used as local disks fails to allocate a memory pool by using LLPL. The "Failed to create heap. Cannot read unsafe shutdown count" error message is returned, as shown in the following figure.
Cause
By default, unsafe shutdown detection is enabled in the LLPL source code. However, virtualized non-volatile memory (NVM) does not support unsafe shutdown detection. For more information, visit llpl.
Solution
Perform the following steps to disable unsafe shutdown detection in the LLPL source code.
Add the following code to the src/main/cpp/com_intel_pmem_llpl_AnyHeap.cpp file of the LLPL source code:
int sds_write_value = 0;
pmemobj_ctl_set(NULL, "sds.at_create", &sds_write_value);
The following figure shows that the preceding code is added to the file.
Log on to the instance.
For more information, see Connect to a Linux instance by using a password or key.
Run the following command to run a test case by using LLPL:
mvn clean && mvn test -Dtest.heap.path=/mnt/sdb
If the "Failed to create heap. Cannot read unsafe shutdown count" error message is not returned, you can proceed to allocate a memory pool.