The network interface controller (NIC) multi-queue feature allows a NIC to process data packets in multiple receive (RX) and transmit (TX) queues in parallel. When you use the NIC multi-queue feature, you must configure Interrupt Request (IRQ) Affinity to assign interrupts for different queues to specific CPUs, instead of allowing the interrupts to be assigned to arbitrary CPUs. This helps reduce contention among CPUs and improve network performance. This topic describes how to configure IRQ Affinity and change the number of queues on NICs on a Linux Elastic Compute Service (ECS) instance.
Prerequisites
The NIC multi-queue feature is supported by the instance type of the Linux ECS instance.
For information about the instance types that support the NIC multi-queue feature, see Overview of instance families. If the number of NIC queues for an instance type is greater than one, the instance type supports the NIC multi-queue feature.
The NIC multi-queue feature is supported by the image of the Linux ECS instance.
Note: Earlier public images that contain kernel versions earlier than 2.6 may not support the NIC multi-queue feature. We recommend that you use the latest public images.
By default, IRQ Affinity is enabled in all images except Red Hat Enterprise Linux images, and no additional configuration is required. Red Hat Enterprise Linux images support IRQ Affinity but do not enable it by default. Perform the operations that are described in this topic to configure IRQ Affinity for instances that use Red Hat Enterprise Linux images.
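Whether IRQ Affinity is already in effect can be checked directly from the /proc file system. The following is a minimal sketch; IRQ numbers and interrupt names vary by instance type and NIC driver:

```shell
# Print the CPU list that each IRQ is currently allowed to run on.
# On images where IRQ Affinity is preconfigured, NIC queue interrupts
# show narrow CPU lists instead of the full CPU range.
for irq in /proc/irq/[0-9]*; do
    list=$(cat "$irq/smp_affinity_list" 2>/dev/null) || continue
    printf 'IRQ %s -> CPUs %s\n' "${irq##*/}" "$list"
done
```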
Procedure
This section describes how to use the ecs_mq script to configure IRQ Affinity for a Linux ECS instance that uses a Red Hat 9.2 image.
Connect to the Linux ECS instance.
For more information, see Connect to a Linux instance by using a password or key.
(Optional) Disable the irqbalance service.
The irqbalance service dynamically modifies IRQ Affinity configurations and conflicts with the ecs_mq script. We recommend that you disable the irqbalance service.
systemctl stop irqbalance.service
Run the following command to download the package that contains the new version of the ecs_mq script:
wget https://ecs-image-tools.oss-cn-hangzhou.aliyuncs.com/ecs_mq/ecs_mq_2.0.tgz
Compared with the old version, the new version of the ecs_mq script provides the following benefits:
Preferentially binds interrupts for a NIC to CPUs on the Non-Uniform Memory Access (NUMA) node with which the Peripheral Component Interconnect Express (PCIe) interface of the NIC is associated.
Optimizes the logic for tuning multiple network devices.
Binds interrupts for NICs of different specifications based on the ratio of the number of NIC queues to the number of CPUs.
Optimizes the mechanism of binding interrupts based on the positions of CPU siblings.
Resolves the high latency issue that may occur in memory access across NUMA nodes.
Uses the new version of the ecs_mq script by default and provides commands that you can use to switch between the old and new versions:
The ecs_mq_rps_rfs old command switches to the old version of the ecs_mq script.
The ecs_mq_rps_rfs new command switches to the new version of the ecs_mq script.
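The NUMA-local binding described in the first benefit above can be inspected manually. The following sketch assumes an interface named eth0; a value of -1 means the platform does not report NUMA locality for the device:

```shell
# Show the NUMA node of the PCIe device behind eth0, which is the node
# whose CPUs the new ecs_mq script prefers when binding queue interrupts.
node_file=/sys/class/net/eth0/device/numa_node
if [ -r "$node_file" ]; then
    echo "eth0 PCIe device is on NUMA node: $(cat "$node_file")"
else
    echo "No NUMA information for eth0 (no such NIC, or not a PCIe device)"
fi
```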
Note: In network performance tests, the new version of the ecs_mq script outperforms the old version by 5% to 30% in most PPS and BPS metrics.
Run the following command to extract the ecs_mq script:
tar -xzf ecs_mq_2.0.tgz
Run the following command to change the working path:
cd ecs_mq/
Run the following command to install the environment that is required to run the ecs_mq script:
bash install.sh <Operating system name> <Major version number of the operating system>
Note: Replace <Operating system name> and <Major version number of the operating system> with actual values. For example, to install the environment on Red Hat Enterprise Linux 9.2, run the following command:
bash install.sh redhat 9
Run the following command to start the ecs_mq script:
systemctl start ecs_mq
After the script is started, IRQ Affinity is automatically enabled.
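To confirm that the bindings took effect, the NIC queue interrupts can be listed together with their affinity. The "virtio" pattern below assumes a virtio-net NIC, which is common on ECS instances; substitute the interrupt name that /proc/interrupts reports for your driver:

```shell
# List each virtio NIC queue interrupt and the CPU(s) it is bound to.
# With IRQ Affinity applied, each queue IRQ maps to a distinct CPU list.
awk -F: '/virtio/ {gsub(/ /, "", $1); print $1}' /proc/interrupts |
while read -r irq; do
    echo "IRQ $irq -> CPUs $(cat "/proc/irq/$irq/smp_affinity_list")"
done
```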
Change the number of NIC queues
You can perform the following steps to change only the number of NIC queues, without configuring IRQ Affinity. In this example, an ECS instance that runs Alibaba Cloud Linux 3, which supports the NIC multi-queue feature, is used. The instance has two elastic network interfaces (ENIs) named eth0 and eth1: eth0 is the primary ENI and eth1 is a secondary ENI.
Connect to the Linux ECS instance.
For more information, see Connect to a Linux instance by using a password or key.
Run the following command to check whether the NIC multi-queue feature is enabled on the primary ENI eth0:
ethtool -l eth0
If two Combined fields are displayed in the command output, the NIC multi-queue feature is supported on the primary ENI. You can proceed to the next step to configure the number of queues on the primary ENI.
Channel parameters for eth0:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       2    # The ENI supports up to two queues.
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       1    # One queue is currently in effect on the ENI.
Run the following command to configure the primary ENI to use two queues:
sudo ethtool -L eth0 combined 2
Run the following command to check whether the NIC multi-queue feature is enabled on the secondary ENI eth1:
ethtool -l eth1
If two Combined fields are displayed in the command output, the NIC multi-queue feature is supported on the secondary ENI. You can proceed to the next step to configure the number of queues on the secondary ENI.
Channel parameters for eth1:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       4    # The ENI supports up to four queues.
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       1    # One queue is currently in effect on the ENI.
Run the following command to configure the secondary ENI to use four queues:
sudo ethtool -L eth1 combined 4
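Queue counts set with ethtool -L take effect immediately but do not persist across reboots. One way to reapply them at boot, assuming a systemd-based image, is a oneshot unit such as the hypothetical example below; the unit name, ethtool path, interface name, and queue count are placeholders to adapt:

```ini
# /etc/systemd/system/nic-queues.service (example name)
[Unit]
Description=Set the combined queue count for eth0
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -L eth0 combined 2

[Install]
WantedBy=multi-user.target
```

Enable the unit with systemctl enable nic-queues.service so that the setting is reapplied on each boot.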
Test results indicate that under identical packet forwarding rate and network bandwidth conditions, two queues outperform a single queue by 50% to 100% and performance improvements are much more significant when four queues are used. For information about how to test network performance, see Best practices for testing network performance.
References
For more information about IRQ Affinity, see IRQ Affinity.