NVMe SSDs are among the most widely used semiconductor storage devices today. They interact with the host over the PCIe interface, which greatly improves read/write performance and unlocks the potential of the storage medium itself. NVMe SSDs usually store data on NAND flash internally. The medium itself has issues such as read/write asymmetry and limited endurance. To resolve these issues, NVMe SSDs run a Flash Translation Layer (FTL) internally so that upper-layer applications see the same interface and usage model as a conventional disk.
At 4 KB access granularity, the random read IOPS and random write IOPS of an NVMe SSD are roughly 5,000 times and more than 1,000 times those of a 15,000 rpm HDD, respectively. As non-volatile storage media continue to develop, semiconductor storage will deliver even better performance and I/O QoS capabilities.
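The size of that gap follows from simple mechanics: an HDD pays a seek plus rotational latency for every random access, while an SSD does not. The sketch below estimates it with assumed figures (4 ms average seek time; all numbers are illustrative, not from the original text):

```python
# Rough, illustrative estimate of HDD random IOPS vs. NVMe SSD IOPS.
# Assumptions (hypothetical): 4 ms average seek, 15,000 rpm spindle.
rpm = 15_000
avg_rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution = 2 ms
avg_seek_ms = 4.0                                # assumed average seek time
service_time_ms = avg_seek_ms + avg_rotational_latency_ms
hdd_iops = 1000 / service_time_ms                # ~167 random IOPS

ssd_read_iops = hdd_iops * 5000                  # ratio quoted in the text
print(round(hdd_iops), "HDD IOPS vs.", round(ssd_read_iops), "SSD read IOPS")
```

Even with optimistic seek times, a mechanical drive is limited to a few hundred random IOPS, so a 5,000x ratio puts the SSD well into the hundreds of thousands.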
At present, mainstream NVMe SSDs use NAND flash as the storage medium. Mainstream NAND flash stores three bits per cell, commonly known as triple-level cell (TLC) NAND flash. Recently, the storage density of a single cell increased by 33% to four bits, evolving into quad-level cell (QLC) NAND flash. The continuous evolution of NAND flash keeps driving up SSD storage density: today, a single 3.5-inch SSD can reach a capacity of 128 TB, far beyond that of a conventional disk.
Compared to a disk, the biggest change in an NVMe SSD is the storage medium itself. NVMe SSDs today generally use 3D NAND flash, which consists of an array of memory cells that use a floating gate or a charge trap to store charge; the amount of stored charge encodes the data. Because of capacitance effects, wear, aging, and operating-voltage disturbance, NAND flash inherently suffers from charge leakage that can alter stored data. In essence, NAND flash is an unreliable medium that is prone to bit flips. SSDs turn this unreliable NAND flash into a reliable data storage device through their controllers and firmware.
At the software level, an FTL runs inside the SSD. Its design resembles a log-structured file system: data is recorded by appending to a log, and an LBA-to-PBA address mapping table records how the data is organized. One of the biggest issues with any log-structured system is garbage collection (GC), which compromises the raw I/O performance of NAND flash and is the main source of the serious I/O QoS issues that standard NVMe SSDs face today. The FTL resolves the issue that NAND flash cannot perform in-place writes, uses wear-leveling algorithms to even out wear across the NAND flash, uses data retention algorithms to counter long-term charge leakage, and uses data migration to mitigate read disturb. The FTL is the core technology that enabled the large-scale adoption of NAND flash, and it is an essential component of every SSD.
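The log-structured idea at the heart of the FTL can be sketched in a few lines. This is a minimal toy (not a real FTL, and not the design of any particular vendor): every write appends to the log and updates the LBA-to-PBA map, so an overwritten page becomes garbage that GC must later reclaim.

```python
# Toy sketch of log-structured writes with an LBA-to-PBA mapping table.
class TinyFTL:
    def __init__(self):
        self.log = []            # physical pages, append-only
        self.l2p = {}            # LBA -> PBA mapping table

    def write(self, lba, data):
        pba = len(self.log)      # next free physical page
        self.log.append(data)
        self.l2p[lba] = pba      # remap; the old PBA (if any) is now garbage

    def read(self, lba):
        return self.log[self.l2p[lba]]

ftl = TinyFTL()
ftl.write(7, "v1")
ftl.write(7, "v2")               # out-of-place update: page 0 becomes garbage
print(ftl.read(7))               # -> v2
garbage = len(ftl.log) - len(ftl.l2p)
print(garbage, "page(s) awaiting GC")
```

The gap between the log length and the live mapping entries is exactly the garbage that GC has to copy around and erase, which is where the background I/O interference comes from.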
NAND flash itself has many concurrent units. A NAND flash chip consists of multiple targets, each containing multiple dies. Each die is an independent storage unit, which consists of multiple planes. Multiple planes share the same operation bus and can be combined into one unit for multi-plane concurrent operations. Each plane consists of several blocks. A block is an erase unit, and the size of a block determines the GC granularity at the SSD software level. Each block consists of multiple pages. A page is the smallest write (programming) unit, which is usually 16 KB in size. The SSD internal software (firmware) needs to make full use of these concurrent units to build a high-performance storage drive.
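The hierarchy above (target, die, plane, block, page) also determines chip capacity and GC granularity. A back-of-the-envelope walk down the hierarchy, with hypothetical counts chosen only to illustrate the math:

```python
# Hypothetical NAND geometry (illustrative numbers, not a real datasheet).
targets_per_chip = 2
dies_per_target = 2
planes_per_die = 4
blocks_per_plane = 1024
pages_per_block = 512
page_size_kib = 16           # 16 KiB page, as mentioned in the text

block_bytes = pages_per_block * page_size_kib * 1024   # erase/GC granularity
chip_bytes = (targets_per_chip * dies_per_target * planes_per_die *
              blocks_per_plane * block_bytes)

print(block_bytes // 2**20, "MiB per block (GC unit)")
print(chip_bytes // 2**30, "GiB per chip")
```

Note how large the erase unit is relative to the 16 KiB write unit; that asymmetry is why the FTL must write out of place and collect garbage in bulk.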
A common NVMe SSD has a simple physical hardware structure: a large number of NAND flash packages controlled by an SSD controller system-on-chip (SoC). The FTL software runs on the SoC, and the drive interacts with the host over a multi-queue PCIe bus. To improve performance, enterprise-oriented SSDs carry on-board Dynamic Random Access Memory (DRAM), which caches write data and the FTL mapping table. Enterprise SSDs usually use a flat mapping table for performance, which costs roughly 0.1% of the drive capacity in DRAM. This memory requirement limits the development of large-capacity NVMe SSDs. A practical way around it is to increase the sector size: a standard NVMe SSD uses 4 KB sectors, and to push capacity further, some vendors have adopted 16 KB sectors. Wider adoption of 16 KB sectors will accelerate the rollout of large-capacity NVMe SSDs.
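The ~0.1% figure follows directly from a flat map: one mapping entry per sector. Assuming a 4-byte PBA entry per 4 KiB sector (real controllers vary), the DRAM requirement scales as follows:

```python
# DRAM needed for a flat LBA-to-PBA mapping table.
# Assumption: 4-byte entry per sector (illustrative; controller-specific).
def mapping_table_bytes(capacity_bytes, sector=4096, entry=4):
    return capacity_bytes // sector * entry

TB = 10**12
for cap_tb in (4, 16, 128):
    dram = mapping_table_bytes(cap_tb * TB)
    print(f"{cap_tb} TB drive -> {dram / 2**30:.1f} GiB of mapping DRAM")

# Moving from 4 KiB to 16 KiB sectors cuts the table to a quarter:
print(f"{mapping_table_bytes(128 * TB, sector=16384) / 2**30:.1f} GiB")
```

With 4 KiB sectors the ratio is 4 / 4096, i.e. about 0.1% of capacity, so a 128 TB drive would need over 100 GiB of DRAM for the map alone, which is why larger sectors matter for large-capacity drives.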
For more factors that influence SSD performance, please go to Storage System Design Analysis: Factors Affecting NVMe SSD Performance (1).
The performance figures in NVMe SSD vendor specifications look perfect, and comparisons with conventional disks confirm that NVMe SSDs are indeed far faster. In real applications, however, NVMe SSD performance may fall short of expectations and can appear unstable; the conditions under which an SSD delivers its ideal performance are not obvious. Compared to a conventional disk, SSD performance is affected by many more factors. To analyze them, we first need to understand the main parts of an SSD. As shown in the following figure, these include the host CPU, PCIe interconnect bandwidth, the SSD controller and FTL software, back-end NAND flash bandwidth, and the NAND flash medium itself. The main factors affecting SSD performance can be divided into hardware factors, software factors, and other objective factors.
In this article, we will analyze the hardware, software, and other objective factors that influence SSD performance, and then discuss how to improve SSD performance based on that analysis.
Flash storage technology is rapidly developing. Centered on flash storage, the flash media, SSD controller, storage system software, and storage hardware platform are all evolving. The value that flash memory brings to data storage is obvious. The use of flash memory in IDCs is a trend. NVMe SSD performance is affected by many factors. At the software level, we can optimize the I/O pattern to improve the performance of the SSD and the overall storage system.
Alibaba is deploying Alibaba Open Channel SSD as part of its next-generation data center infrastructure, a major milestone following its announcement of the Open Channel storage architecture at the FAST 2018 conference.
Alibaba Open Channel SSD, or AOC SSD, is Alibaba's 3rd homebrew storage product and the first productionized Open Channel SSD in the industry. Its deployment not only marks a key step of Alibaba's next-generation storage architecture, but also lays ground for an innovative Open Channel ecosystem.
At the heart of AOC SSD is AliFSC, Alibaba's first customized high-performance storage controller.
AliFSC is a high-performance controller customized for Alibaba's Open Channel SSD requirements. It features a 6-core, 16-channel design with a PCIe Gen3 x8 interface, and it works with all major 3D TLC NAND components. QLC NAND is also supported, with preliminary QLC firmware development already under way.
In addition to basic I/O performance, AOC SSD further optimized QoS capability along with Fusion Engine software, resulting in much lower read/write latencies for high-priority applications.
This topic describes how to modify the performance level of a running ESSD cloud disk in the ECS console.
When you create an ECS instance, you can set the ESSD cloud disk as the system disk or data disk. You can also create a separate ESSD cloud disk.
After the performance level is modified, the ESSD cloud disk is billed according to the new performance level.
You can also call the API action ModifyDiskSpec to complete this task.
This topic describes the local SSD-equipped instance type families i2, i2g, and i1, and lists the specific instance types within each family.
ECS is an online computing service that offers elastic and secure virtual cloud servers for all your cloud hosting needs, featuring Intel Xeon E5-2682 v4 (Broadwell) processors, local SSDs capable of 240,000 IOPS, and a maximum packet throughput of 7 million PPS.