This topic describes how to use the Logical Volume Manager (LVM) to create a Logical Volume (LV) that consists of multiple cloud disks on a Linux ECS instance.

Prerequisites

You have created multiple cloud disks and attached them to the ECS instance. For more information, see Create a pay-as-you-go disk and Attach a cloud disk.

Background information

LVM (Logical Volume Manager) is a disk and partition management mechanism for Linux ECS instances. LVM creates a logical layer on top of hard disks and partitions so that disks and partitions can be managed more flexibly. You can dynamically resize an LV without losing existing data, and data on existing LVs is not affected when you add new data disks.
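
For example, a minimal sketch of such a resize, assuming a VG named lvm_01 and an ext4-formatted LV named lv01 (the names used later in this topic) and that at least 1 TiB of free space is available in the VG:

    # Grow the LV by 1 TiB; requires enough free space in the VG.
    lvextend -L +1T /dev/lvm_01/lv01
    # Grow the ext4 file system to fill the enlarged LV; this works while the LV is mounted.
    resize2fs /dev/lvm_01/lv01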

Notice: A snapshot backs up the data of only a single cloud disk. When LVM is used, data inconsistencies may occur when you roll back the cloud disks.

Step 1: Create a Physical Volume (PV)

  1. Remotely connect to the ECS instance as the root user. For more information, see ECS instance creation overview.
  2. Run the lsblk command to view information about all cloud disks on the ECS instance.

    If output similar to the following is displayed, you can use LVM to create an LV that consists of the five data disks (/dev/vdb through /dev/vdf).

    root@lvs06:~# lsblk
    NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
     vda    252:0    0  40G  0 disk
     └─vda1 252:1    0  40G  0 part /
     vdb    252:16   0   1T  0 disk
     vdc    252:32   0   1T  0 disk
     vdd    252:48   0   1T  0 disk
     vde    252:64   0   1T  0 disk
     vdf    252:80   0   1T  0 disk
  3. Run the pvcreate command to create a PV.
    pvcreate <Data disk name 1> ... <Data disk name N>
    To create PVs from multiple data disks, separate the device names with spaces.
    root@lvs06:~# pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf
      Physical volume "/dev/vdb" successfully created.
      Physical volume "/dev/vdc" successfully created.
      Physical volume "/dev/vdd" successfully created.
      Physical volume "/dev/vde" successfully created.
      Physical volume "/dev/vdf" successfully created.
  4. Run the lvmdiskscan command to view information about the PVs that were created on the ECS instance. (An optional check with pvs is sketched after this procedure.)
    root@lvs06:~# lvmdiskscan | grep LVM
      /dev/vdb  [       1.00 TiB] LVM physical volume
      /dev/vdc  [       1.00 TiB] LVM physical volume
      /dev/vdd  [       1.00 TiB] LVM physical volume
      /dev/vde  [       1.00 TiB] LVM physical volume
      /dev/vdf  [       1.00 TiB] LVM physical volume
      5 LVM physical volume whole disks
      0 LVM physical volumes
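
In addition to lvmdiskscan, LVM provides the pvs and pvdisplay commands for checking PVs. The following is an optional sketch, not output captured from the example instance:

    # Print a compact one-line summary for each PV.
    pvs
    # Print detailed attributes of a single PV, for example /dev/vdb.
    pvdisplay /dev/vdb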

Step 2: Create a Volume Group (VG)

  1. Run the vgcreate command to create a VG.
    vgcreate <VG name> <PV name 1> ... <PV name N>

    To add multiple PVs to the VG, separate the PV names with spaces. You can customize the VG name. In this example, the VG is named lvm_01.

    root@lvs06:~# vgcreate lvm_01 /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf
      Volume group "lvm_01" successfully created
  2. Run the vgextend command to add a PV to the VG.
    vgextend <VG name> <PV name 1> ... <PV name N>

    If you add new PVs to lvm_01, a command output similar to the following is displayed. (A sketch of extending the VG with a newly attached disk appears after this procedure.)

    root@lvs06:~# vgextend lvm_01 /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf
      Volume group "lvm_01" successfully extended
  3. Run the vgs or vgdisplay command to view the VG information.
    root@lvs06:~# vgs
      VG     #PV #LV #SN Attr   VSize  VFree
      lvm_01   6   0   0 wz--n- <6.00t <6.00t
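
If you attach another cloud disk to the instance later, you can extend the VG with it. The following is a minimal sketch that assumes the new disk appears as the hypothetical device /dev/vdg:

    # Initialize the new disk as a PV, then add it to the existing VG.
    pvcreate /dev/vdg
    vgextend lvm_01 /dev/vdg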

Step 3: Create an LV

  1. Run the lvcreate command to create an LV.
    lvcreate [-L <LV size>] [-n <LV name>] <VG name>

    If you create a 5 TiB LV, the command output is as follows (alternative lvcreate options are sketched after this procedure):

    root@lvs06:~# lvcreate -L 5T -n lv01 lvm_01
      Logical volume "lv01" created.
    Note
    • LV size: The LV size must be smaller than the free space of the VG. The unit can be MiB, GiB, or TiB.
    • LV name: You can customize the name.
    • VG name: the name of an existing VG.
  2. Run the lvdisplay command to view the LV details.
    root@lvs06:~# lvdisplay
      --- Logical volume ---
      LV Path                /dev/lvm_01/lv01
      LV Name                lv01
      VG Name                lvm_01
      LV UUID                svB00x-l6Ke-ES6M-ctsE-9P6d-dVj2-o0h***
      LV Write Access        read/write
      LV Creation host, time lvs06, 2019-06-06 15:27:19 +0800
      LV Status              available
      # open                 0
      LV Size                5.00 TiB
      Current LE             1310720
      Segments               6
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0
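
Besides a fixed size specified with -L as in step 1, lvcreate can size the LV by extents or stripe it across the PVs. The following alternatives are sketches under assumed requirements, not part of the example above:

    # Use all remaining free space in the VG.
    lvcreate -l 100%FREE -n lv01 lvm_01
    # Or stripe the LV across the 5 PVs (64 KiB stripes) for higher sequential throughput.
    lvcreate -i 5 -I 64 -l 100%FREE -n lv01 lvm_01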

Step 4: Create and mount a file system

  1. Run the mkfs command to create a file system on the LV.
    mkfs.<File system format> <LV path>

    If you create an ext4 file system, the command output is as follows:

    root@lvs06:~# mkfs.ext4 /dev/lvm_01/lv01
    mke2fs 1.44.1 (24-Mar-2018)
    Creating filesystem with 1342177280 4k blocks and 167772160 inodes
    Filesystem UUID: 2529002f-9209-4b6a-9501-106c1145c***
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848, 512000000, 550731776, 644972544
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (262144 blocks): done
    Writing superblocks and filesystem accounting information:
    done
  2. (Optional) Create a mount point such as /media/lv01.
    You can also mount the file system on an existing directory.
    mkdir /media/lv01
  3. Run the mount command to mount the file system.
    root@lvs06:~# mount /dev/lvm_01/lv01 /media/lv01
  4. Run the df command to check whether the LV is mounted. (A sketch of making the mount persistent appears after this procedure.)
    root@lvs06:~# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    udev                      12G     0   12G   0% /dev
    tmpfs                    2.4G  3.7M  2.4G   1% /run
    /dev/vda1                 40G  3.6G   34G  10% /
    tmpfs                     12G     0   12G   0% /dev/shm
    tmpfs                    5.0M     0  5.0M   0% /run/lock
    tmpfs                     12G     0   12G   0% /sys/fs/cgroup
    tmpfs                    2.4G     0  2.4G   0% /run/user/0
    /dev/mapper/lvm_01-lv01  5.0T   89M  4.8T   1% /media/lv01
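
A mount created with the mount command does not persist across reboots. The following is a minimal sketch of making it persistent, assuming the LV path /dev/lvm_01/lv01 and the mount point /media/lv01 from the steps above:

    # Append an fstab entry for the LV, then verify that the entry mounts cleanly.
    echo '/dev/lvm_01/lv01 /media/lv01 ext4 defaults 0 0' >> /etc/fstab
    mount -a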