Elastic Compute Service:NIC multi-queue

Last Updated: Feb 09, 2026

Network Interface Controller (NIC) multi-queue enables you to configure multiple transmit (Tx) and receive (Rx) queues on a NIC, with each queue processed by a separate CPU core. This improves network I/O throughput and reduces latency by distributing packet processing across multiple CPU cores.

Benefits

Traditional single-queue NICs use only one CPU core to process all network packets, creating a bottleneck. The single core becomes overloaded while other cores remain idle, causing increased latency and packet loss.

Multi-queue NICs distribute network traffic across multiple CPU cores, fully utilizing your multi-core architecture. Test results show performance improvements of 50–100% with two queues and significantly higher gains with four queues.

Key benefits:

  • Better CPU utilization: Distribute network traffic across multiple cores

  • Higher throughput: Process multiple packets simultaneously, especially under high loads

  • Lower latency: Reduce congestion by distributing packets across queues

  • Fewer packet drops: Prevent packet loss in high-traffic scenarios

Note

Improper configuration (incorrect queue count or CPU affinity settings) can degrade performance. Follow the guidance in this topic to optimize your setup.

How it works

Queue architecture

An Elastic Network Interface (ENI) supports multiple Combined queues. Each Combined queue pairs one Rx (receive) queue with one Tx (transmit) queue and is processed by an independent CPU core.

  • Rx queue: Handles incoming packets, which are distributed across queues by rules such as round-robin polling or flow-based hashing

  • Tx queue: Handles outgoing packets, which are scheduled based on factors such as ordering or priority
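
On a Linux instance, this queue pairing is visible in sysfs. A quick check (a sketch that assumes the interface is named eth0):

    # Each rx-N/tx-N directory pair corresponds to one Combined queue
    ls /sys/class/net/eth0/queues/
    # Example output for two Combined queues: rx-0  rx-1  tx-0  tx-1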

IRQ Affinity

Each queue has an independent interrupt. Interrupt Request (IRQ) Affinity distributes interrupts for different queues to specific CPU cores, preventing any single core from becoming overloaded.

IRQ Affinity is enabled by default in all images except Red Hat Enterprise Linux (RHEL). For RHEL configuration, see Configure IRQ Affinity.
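
You can inspect the current interrupt-to-CPU mapping on a Linux instance. A quick sketch (interrupt names vary by driver, and the IRQ number below is a placeholder):

    # Per-CPU interrupt counts for each queue; names vary by driver
    # (for example, virtio0-input.0 and virtio0-output.0 on virtio-net)
    grep -iE 'virtio|eth0' /proc/interrupts
    # CPU mask that a given IRQ may be delivered to (27 is a placeholder IRQ number)
    cat /proc/irq/27/smp_affinity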

Prerequisites

Before you configure multi-queue settings, ensure that you have:

  • An ECS instance that supports the NIC multi-queue feature (see Instance family overview).

  • The appropriate permissions to modify ENI settings.

  • (Optional) For IRQ Affinity configuration on RHEL: public network access to download the ecs_mq script.

Instance type support

To check if an instance type supports multi-queue:

  1. View the NIC queues column in the instance type tables.

    • Values greater than 1 indicate multi-queue support

    • The value indicates the maximum number of queues per ENI

  2. Call the DescribeInstanceTypes API operation to query queue metrics; a CLI example follows the table:

| Parameter | Description |
| --- | --- |
| PrimaryEniQueueNumber | Default number of queues for the primary ENI |
| SecondaryEniQueueNumber | Default number of queues for secondary ENIs |
| MaximumQueueNumberPerEni | Maximum number of queues allowed per ENI |
| TotalEniQueueQuantity | Total queue quota for the instance |
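
For example, you can query these metrics with the Alibaba Cloud CLI, assuming it is installed and configured (the instance type is a placeholder):

    # Query queue metrics for an instance type
    aliyun ecs DescribeInstanceTypes --InstanceTypes.1 ecs.g7.large
    # Check PrimaryEniQueueNumber, SecondaryEniQueueNumber,
    # MaximumQueueNumberPerEni, and TotalEniQueueQuantity in the JSON response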

Note

Early-version public images with kernel versions earlier than 2.6 may not support multi-queue. Use the latest public images for best compatibility.

View queue configuration

View in the console

  1. Go to the ENI page in the ECS console.

  2. In the top navigation bar, select your region and resource group.

  3. Click the ENI ID to view its details.

  4. In the Basic Information section, check the Queues parameter:

    • If you've modified the queue count, the new value appears here

    • If you've never modified the queue count:

      • No value if the ENI is unbound

      • Default queue count if the ENI is bound to an instance

View using the API

Call the DescribeNetworkInterfaceAttribute API operation and check the QueueNumber parameter in the response.
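
For example, with the Alibaba Cloud CLI (the region and ENI ID are placeholders):

    aliyun ecs DescribeNetworkInterfaceAttribute --RegionId cn-hangzhou --NetworkInterfaceId eni-bp1xxxxxxxxxxxx
    # The QueueNumber field in the response is the ENI's configured queue count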

View on a Linux instance

  1. Connect to your Linux instance using Workbench.

    Note

    Queue configuration can be viewed within the OS only on Linux instances. For Windows instances, use the console or API.

  2. Run ip address show to view your network interfaces.


  3. Check if multi-queue is enabled on an ENI (example uses eth0):

    ethtool -l eth0
  4. In the output:

    • Pre-set maximums - Combined: Maximum queues supported by the ENI

    • Current hardware settings - Combined: Number of queues currently in use

    Representative output (exact layout varies by driver) showing 3 queues supported and 3 in use:
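
    Channel parameters for eth0:
    Pre-set maximums:
    RX:             0
    TX:             0
    Other:          0
    Combined:       3
    Current hardware settings:
    RX:             0
    TX:             0
    Other:          0
    Combined:       3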

Change queue configuration

After you bind an ENI to an instance, the ENI's queue count is automatically set to the default for that instance type. You can manually change this configuration if needed.

Change the maximum queues (console or API)

You can change the maximum number of queues supported by an ENI using the console or API.

Requirements:

| Requirement | Description |
| --- | --- |
| ENI state | The ENI must be in the Available state, or the instance it is bound to must be in the Stopped state |
| Queue limit | The queue count cannot exceed the MaximumQueueNumberPerEni limit for the instance type |
| Total quota | The total number of queues across all ENIs cannot exceed the TotalEniQueueQuantity quota |

Using the console:

  1. Go to the ENI page.

  2. Find your ENI and click its ID to view details.

  3. In the Basic Information section, click the edit icon next to Queues.

  4. Enter the new queue count and click OK.

Note

If the ENI is bound to an instance, the new queue count takes effect after you start the instance.

Using the API:

Call the ModifyNetworkInterfaceAttribute API operation and specify the QueueNumber parameter.
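
For example, with the Alibaba Cloud CLI (the region, ENI ID, and queue count are placeholders):

    aliyun ecs ModifyNetworkInterfaceAttribute --RegionId cn-hangzhou --NetworkInterfaceId eni-bp1xxxxxxxxxxxx --QueueNumber 4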

Change queue usage in the OS (Linux only)

You can adjust the number of queues that an ENI actively uses in the operating system. This number must be less than or equal to the maximum queues supported by the ENI.

Note

Changes made at the OS level:

  • Don't affect the queue count displayed in the console or returned by API operations.

  • Don't persist after an instance restart; the OS reverts to using the maximum available queues.

Example using Alibaba Cloud Linux 3:

  1. Connect to your Linux instance using Workbench.

  2. Check current queue configuration:

    ethtool -l eth0
  3. Change the number of queues in use (example changes to 2):

    sudo ethtool -L eth0 combined 2

    Replace 2 with your desired queue count (must not exceed the Pre-set maximums value).

  4. Verify the change:

    ethtool -l eth0

    The Current hardware settings - Combined value should now show your new queue count.
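
Because the change reverts at the next restart, you can reapply it at boot if needed. One possible approach is a standard systemd oneshot unit (a sketch, not an official Alibaba Cloud mechanism; the unit name, interface, and queue count are placeholders):

    # /etc/systemd/system/eni-queues.service (hypothetical unit name)
    [Unit]
    Description=Reapply ENI queue count for eth0
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=oneshot
    # Adjust the path if ethtool is installed elsewhere on your image
    ExecStart=/usr/sbin/ethtool -L eth0 combined 2

    [Install]
    WantedBy=multi-user.target

Enable the unit with sudo systemctl enable eni-queues.service so that it runs at every boot.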

Configure IRQ Affinity

IRQ Affinity assigns interrupts for different queues to specific CPUs, improving network performance by reducing CPU contention.

Note

IRQ Affinity is enabled by default in all images except RHEL. Configuration is only required for RHEL instances.

Requirements for RHEL instances

  • Red Hat Enterprise Linux 9.2 or later

  • Public network access to download the ecs_mq script

  • The irqbalance service disabled (conflicts with ecs_mq)

Configure IRQ Affinity on RHEL

  1. Connect to your RHEL instance using Workbench.

  2. (Optional) Disable the irqbalance service to prevent conflicts:

    systemctl stop irqbalance.service
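
    Stopping the service keeps it off only until the next reboot. To prevent it from starting again at boot (standard systemd behavior, not specific to ecs_mq), you can also disable it:

    systemctl disable irqbalance.service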
  3. Download the ecs_mq script package:

    wget https://ecs-image-tools.oss-cn-hangzhou.aliyuncs.com/ecs_mq/ecs_mq_2.0.5.tgz
  4. Extract the package:

    tar -xzf ecs_mq_2.0.5.tgz
  5. Navigate to the script directory:

    cd ecs_mq/
  6. Install the script environment (replace redhat and 9 with your OS name and major version):

    bash install.sh redhat 9
  7. Start the ecs_mq service:

    systemctl start ecs_mq

    IRQ Affinity is now enabled.
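
    To confirm that the service is running, and to have it start automatically after a reboot, you can use standard systemd commands:

    # Verify that the service is active
    systemctl status ecs_mq
    # Start the service automatically at every boot
    systemctl enable ecs_mq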

Benefits of ecs_mq 2.0.5

Version 2.0.5 of the ecs_mq script provides several improvements over earlier versions:

  • Preferentially binds interrupts to CPUs on the NUMA (Non-Uniform Memory Access) node associated with the ENI's PCIe interface

  • Optimizes logic for tuning multiple network devices

  • Binds interrupts based on the ratio of queues to CPUs

  • Optimizes binding based on CPU sibling positions

  • Resolves high latency issues during memory access across NUMA nodes

Performance improvements: Network performance tests show 5–30% improvement in most PPS (packets per second) and bps (bits per second) metrics compared to the older version.

Optimize performance

Configure queue count and IRQ Affinity based on your workload to achieve optimal network performance. To ensure load balancing:

  1. Assign an appropriate number of queues per CPU core based on your network load

  2. Configure IRQ Affinity according to your system's actual throughput and latency requirements

  3. Test various configurations to find the optimal setup for your specific workload

Monitor CPU utilization and network metrics (throughput, latency, packet loss) to evaluate the effectiveness of your configuration.
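
Standard Linux tools can help with this monitoring (a sketch; mpstat and sar are part of the sysstat package, which may need to be installed first):

    # Per-CPU utilization; watch for a single core saturated with interrupt/softirq work
    mpstat -P ALL 1
    # Per-interface packet rates and throughput
    sar -n DEV 1
    # Per-queue statistics for eth0 (availability depends on the NIC driver)
    ethtool -S eth0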
