
How do I install the NVMe driver for a custom image?

Last Updated: Nov 23, 2025

To improve storage performance using Non-Volatile Memory Express (NVMe) on ECS instances created from custom images, you must verify and install the NVMe driver. This topic details the steps to install the driver to ensure your instance boots correctly and operates stably.

NVMe is a high-speed interface protocol designed for solid-state storage (such as Flash SSDs). It delivers lower latency and higher bandwidth compared to traditional protocols like SCSI and virtio-blk.

Applicable scenarios

Perform the operations in this topic if you encounter the following issues:

  • The Image Check reports a missing NVMe driver when you import an image to Alibaba Cloud.

  • You are unable to select a specific custom image when purchasing an NVMe-enabled ECS instance. This occurs when the image's NVMe attributes do not match the instance type requirements.


Procedure

  1. Launch a temporary instance from your existing custom image, then connect to it. This instance serves as an intermediate environment in which you update the driver.

    Important

    You are charged for the intermediate ECS instance. Release the instance after you create the new custom image.

  2. Verify and install the NVMe driver. Select your configuration method below.

    Automatic configuration

    The Cloud Assistant ecs_nvme_config plugin automates the NVMe driver configuration for supported operating systems.

    Supported operating systems

    The ecs_nvme_config plugin supports the following operating systems:

      • Alibaba Cloud Linux

      • Anolis OS

      • CentOS 6.6 and higher

      • CentOS Stream

      • Debian 9 and higher

      • Ubuntu 16 and higher

      • OpenSUSE 42 and higher

      • SUSE Linux Enterprise Server (SLES) 11.4 and higher

      • Red Hat Enterprise Linux

      • Fedora

      • Rocky Linux

      • AlmaLinux

    1. Run the following command to check whether the Cloud Assistant Agent is installed and the ecs_nvme_config plugin is available:

      acs-plugin-manager --list


      • If no output is returned, you must install Cloud Assistant Agent.

      • If the plugin list is returned and includes ecs_nvme_config, proceed to the next step.

    2. Configure NVMe-related settings.

      1. Use the ecs_nvme_config plugin to check if the instance has the NVMe module and supports configuration:

        sudo acs-plugin-manager --exec --plugin ecs_nvme_config --params --check
        • If the following message appears, the driver is already installed. You can proceed directly to creating the image without further configuration.

          [SUCCESS]  Summary: Your image can Runnig on nvme instance
        • If you receive an error stating the module is missing, proceed to the fix step below.

          [ERROR]  1.initrd/initramfs not has nvme module, Please run acs-plugin-manager --exec --plugin ecs_nvme_config --params -f/--fix to enable nvme;
      2. Run the following command to install the driver and configure parameters:

        sudo acs-plugin-manager --exec --plugin ecs_nvme_config --params --fix
      3. Reboot the instance to apply changes:

        sudo reboot
      4. After the instance restarts, run the check command again to verify the installation:

        sudo acs-plugin-manager --exec --plugin ecs_nvme_config --params --check

        Sample success output:

        [OK]  1.initrd/initramfs already contain nvme module;
        
        [OK]  2.fstab file looks fine and does not contain any device names;
        
        [OK]  3.The nvme parameters already included.
        
        [SUCCESS]  Summary: Your image can Runnig on nvme instance
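The check → fix → reboot → re-check flow above can be scripted. A minimal sketch in POSIX sh; `nvme_check_result` is a hypothetical helper that only classifies the plugin's output text, and the commented driver loop assumes the Cloud Assistant agent commands shown in the steps above:

```shell
#!/bin/sh
# Classify ecs_nvme_config --check output: "fix" when an [ERROR] line asks
# you to run --fix, "ok" when the summary reports success, else "unknown".
nvme_check_result() {
    case "$1" in
        *"[ERROR]"*)   echo fix ;;
        *"[SUCCESS]"*) echo ok ;;
        *)             echo unknown ;;
    esac
}

# Example driver loop (requires the Cloud Assistant agent; run as root):
#   out=$(acs-plugin-manager --exec --plugin ecs_nvme_config --params --check 2>&1)
#   if [ "$(nvme_check_result "$out")" = fix ]; then
#       acs-plugin-manager --exec --plugin ecs_nvme_config --params --fix
#       reboot
#   fi
```

After the reboot, run the check again and confirm that the classification is `ok` before creating the image.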

    Manual configuration (CentOS/Alibaba Cloud Linux)

    1. Check whether the kernel loaded the NVMe driver:

      cat /boot/config-`uname -r` | grep -i nvme | grep -v "^#"


      • If the output contains CONFIG_BLK_DEV_NVME=y, the kernel supports NVMe natively. You can proceed directly to Automatic configuration.

      • If the output contains CONFIG_BLK_DEV_NVME=m, the driver is compiled as a module. Proceed to the next step.

    2. Check the initial RAM filesystem (initramfs) for the NVMe driver.

      sudo lsinitrd /boot/initramfs-`uname -r`.img | grep -i nvme | awk '{print $NF}'


      • If the command returns output (e.g., nvme.ko, nvme-core.ko): The driver is present. You can proceed directly to Step 4.

      • If no output is returned: The driver is missing. Proceed to the next step (Step 3) to rebuild the initramfs.

    3. Configure initramfs to include the NVMe driver.

      sudo mkdir -p /etc/dracut.conf.d
      echo 'add_drivers+=" nvme nvme-core "' | sudo tee /etc/dracut.conf.d/nvme.conf > /dev/null
      sudo dracut -v -f
      Note

      If dracut is not installed in your operating system, run the sudo yum -y install dracut command to install it.

    4. Configure NVMe I/O timeout parameters in GRUB.

      Note
      • Configuring the io_timeout parameter prevents I/O failures caused by NVMe device timeouts. By increasing this value to the maximum supported limit, you ensure the system continues processing I/O requests without premature failure.

      • In most Linux distributions, the io_timeout parameter defaults to 30 seconds. To check whether your kernel accepts the maximum value of 4,294,967,295 seconds, run echo 4294967295 > /sys/module/nvme_core/parameters/io_timeout or echo 4294967295 > /sys/module/nvme/parameters/io_timeout as the root user. If the command fails with -bash: echo: write error: Invalid argument, your kernel caps the value at 255 seconds; use 255 instead.

      Method 1: Add parameters via grubby
      1. Check if grubby is available.

        which grubby
      2. Run the grubby command to update the kernel arguments:

        sudo grubby --update-kernel=ALL --args="nvme_core.io_timeout=4294967295 nvme_core.admin_timeout=4294967295"
      Method 2: Add parameters via the GRUB config file
      1. Open the grub file:

        sudo vi /etc/default/grub
      2. Press the I key to enter Insert mode. Locate the GRUB_CMDLINE_LINUX= line, add the nvme_core.io_timeout and nvme_core.admin_timeout parameters and set them both to 4294967295.

        The following figure shows an example on how to add the parameters.


        Note

        If the GRUB configuration file already contains the preceding parameter settings, you do not need to add the parameters again.

      3. Press the Esc key to exit Insert mode. Then, enter :wq and press the Enter key to save and close the file.

      4. Apply the GRUB configurations.

        Select a command based on the boot mode of the ECS instance.

        • Legacy BIOS boot

          sudo grub2-mkconfig -o /boot/grub2/grub.cfg
        • Unified Extensible Firmware Interface (UEFI) boot

          1. View the content of the GRUB configuration file:

            cat /boot/efi/EFI/centos/grub.cfg
          2. Select a command based on the output of the preceding command.

            • If the file points to another config file (e.g., /boot/grub/grub.cfg), run:


              sudo grub2-mkconfig -o /boot/grub2/grub.cfg
            • Otherwise, update the UEFI config directly:

              sudo grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
    5. Run the check command again to confirm the NVMe driver is properly configured in the initramfs:

      sudo lsinitrd /boot/initramfs-`uname -r`.img | grep -i nvme | awk '{print $NF}'

      If the command returns the driver filenames (e.g., nvme.ko), the configuration is complete. The operating system is now ready to boot on NVMe-enabled ECS instance types.

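The manual checks above can be wrapped in a small script. A sketch in POSIX sh; `nvme_kernel_support` and `build_nvme_args` are hypothetical helper names, and the commented commands assume the same tools (dracut, grubby) and paths used in the steps above:

```shell
#!/bin/sh
# Report how a kernel config file provides NVMe support:
# "builtin" for CONFIG_BLK_DEV_NVME=y, "module" for =m, "none" otherwise.
nvme_kernel_support() {
    if   grep -q '^CONFIG_BLK_DEV_NVME=y' "$1"; then echo builtin
    elif grep -q '^CONFIG_BLK_DEV_NVME=m' "$1"; then echo module
    else echo none
    fi
}

# Build the kernel command-line fragment for grubby --args, given a timeout
# in seconds (4294967295, or 255 on kernels that cap the value).
build_nvme_args() {
    echo "nvme_core.io_timeout=$1 nvme_core.admin_timeout=$1"
}

# Example (run as root on the intermediate instance):
#   if [ "$(nvme_kernel_support /boot/config-$(uname -r))" = module ]; then
#       echo 'add_drivers+=" nvme nvme-core "' > /etc/dracut.conf.d/nvme.conf
#       dracut -v -f
#   fi
#   grubby --update-kernel=ALL --args="$(build_nvme_args 4294967295)"
```

The helpers only classify output and compose strings, so you can verify the decision logic without touching /boot.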

    Manual configuration (Ubuntu/Debian)

    1. (Optional) Check the NVMe drivers included in initrd:

      lsinitramfs /boot/initrd.img-`uname -r` | grep -i nvme

      The following output indicates that the NVMe driver is loaded in the initrd of the Ubuntu operating system.

    2. Add NVMe-related io_timeout parameters to the GRUB file.

      Note
      • Configuring the io_timeout parameter prevents I/O failures caused by NVMe device timeouts. By increasing this value to the maximum supported limit, you ensure the system continues processing I/O requests without premature failure.

      • In most Linux distributions, the io_timeout parameter defaults to 30 seconds. To check whether your kernel accepts the maximum value of 4,294,967,295 seconds, run echo 4294967295 > /sys/module/nvme_core/parameters/io_timeout or echo 4294967295 > /sys/module/nvme/parameters/io_timeout as the root user. If the command fails with -bash: echo: write error: Invalid argument, your kernel caps the value at 255 seconds; use 255 instead.

      1. Open the /etc/default/grub file:

        sudo vi /etc/default/grub
      2. Press the I key to enter Insert mode. On the GRUB_CMDLINE_LINUX= line, add the nvme_core.multipath, nvme_core.io_timeout, and nvme_core.admin_timeout parameters. Then, set nvme_core.multipath to n and nvme_core.io_timeout and nvme_core.admin_timeout both to 4294967295.

        The following figure shows the correct configuration.

        Note

        If the GRUB file already contains the preceding parameter settings, you do not need to add the parameters again.

      3. Press the Esc key to exit Insert mode. Then, enter :wq and press the Enter key to save and close the file.

    3. Apply the GRUB configurations.

      Run one of the following commands based on the boot mode of the ECS instance:

      • The following command is applicable to Ubuntu and Debian operating systems, regardless of the boot mode.

        sudo update-grub2
      • Legacy BIOS boot

        sudo grub-mkconfig -o /boot/grub/grub.cfg
      • UEFI boot

        sudo grub-mkconfig -o /boot/efi/EFI/debian/grub.cfg
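Instead of editing /etc/default/grub interactively in vi, the parameter change can be applied with sed. A sketch in POSIX sh; `add_nvme_params` is a hypothetical helper — back up the real file and test on a copy first:

```shell
#!/bin/sh
# Append the NVMe parameters to the GRUB_CMDLINE_LINUX line of the given
# file, skipping the edit if io_timeout is already configured (idempotent).
add_nvme_params() {
    f="$1"
    params='nvme_core.multipath=n nvme_core.io_timeout=4294967295 nvme_core.admin_timeout=4294967295'
    grep -q 'nvme_core\.io_timeout' "$f" && return 0
    sed -i "s/^GRUB_CMDLINE_LINUX=\"\(.*\)\"/GRUB_CMDLINE_LINUX=\"\1 ${params}\"/" "$f"
}

# Example (back up the real file first, then regenerate the GRUB config):
#   sudo cp /etc/default/grub /etc/default/grub.bak
#   add_nvme_params /etc/default/grub
#   sudo update-grub2
```

The idempotency check means rerunning the script never duplicates the parameters, which keeps the kernel command line clean across repeated runs.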
  3. Create a custom image from the instance where you installed the driver. Then modify the attributes and tags of the image to enable NVMe support.

    Important

    If you do not explicitly set the NVMe driver property to Supported, the system will not recognize the image as NVMe-compatible. Consequently, you will remain unable to select NVMe-capable instance types when creating instances from this image.

  4. (Optional) Redeploy your workload using the new, NVMe-enabled custom image. For example, create an instance from the custom image or a shared image. During the creation process, make sure that you select an instance type that supports the NVMe protocol.

    Note

    After verifying the deployment, delete the original custom image to avoid unnecessary charges for idle resources. For details, see Delete a custom image.
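As an alternative to setting the image attribute in the console, the change can be made from the command line. A sketch, assuming the Alibaba Cloud CLI (aliyun) is installed and that the Features.NvmeSupport parameter of the ECS ModifyImageAttribute API is available; `build_modify_cmd` is a hypothetical helper that only composes the call, and the region and image IDs are placeholders:

```shell
#!/bin/sh
# Compose the Alibaba Cloud CLI call that marks an image as NVMe-capable.
# Assumption: the Features.NvmeSupport parameter of ModifyImageAttribute
# accepts "supported" / "unsupported".
build_modify_cmd() {
    echo "aliyun ecs ModifyImageAttribute --RegionId $1 --ImageId $2 --Features.NvmeSupport supported"
}

# Example (placeholder IDs; run the printed command after reviewing it):
#   build_modify_cmd cn-hangzhou m-0123456789example
```

Composing the command as a string lets you review it before execution, which is useful when the image ID comes from a script variable.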
