
Container Service for Kubernetes: Use Terway to enable pods in an ACK cluster to communicate

Last Updated: Jan 24, 2024

Terway is an open source Container Network Interface (CNI) plug-in developed by Alibaba Cloud. Terway works with Virtual Private Cloud (VPC) and allows you to use standard Kubernetes network policies to regulate how containers communicate with each other. You can use Terway to enable pods in a Container Service for Kubernetes (ACK) cluster to communicate.

Limits

  • The following features are in public preview. To use these features, submit an application in the Quota Center console.

    • IPv4/IPv6 dual stack

    • Support for Trunk elastic network interface (ENI)

    • One ENI for each pod

    • Network policy configuration in the console

      Note

      You do not need to submit an application if you want to use the CLI to configure network policies.

  • You can select a network plug-in (Terway or Flannel) only when you create an ACK cluster. You cannot change the plug-in after the cluster is created.

  • Terway uses the ENIs on a node, except for eth0, to configure the pod network. Therefore, you cannot configure additional ENIs. For more information about how to manually manage ENIs, see Configure an ENI filter.

Important
  • If your ACK cluster is using Terway or Flannel, no other CNI plug-ins, such as Cilium, are needed. In most cases, an ACK cluster uses only one CNI plug-in to manage networks. For more information about the release notes of Terway, see Terway.

  • Elastic Compute Service (ECS) instances have limits on the number of ENIs and IP addresses that they can use. These limits affect the number of pods that can be hosted on each node. When you design your ACK cluster, take the type of your ECS instance into consideration. For more information about ECS instance families, see Overview of instance families.

Billing

Enabling Terway or Flannel does not incur any fees. However, resource fees and resource management fees may be charged when you use the plug-in. For more information about the billing of Alibaba Cloud services used by ACK, see Cloud service billing.

Compare Terway and Flannel

When you create an ACK cluster, you can choose Terway or Flannel.

  • Source

    • Terway: An in-house network plug-in provided by Alibaba Cloud and optimized for ACK.

    • Flannel: An open source CNI plug-in maintained by the community.

  • Network performance

    • Terway: Terway assigns ENIs to pods. Pods use IP addresses allocated from the VPC. Therefore, no NAT is required and no IP addresses are wasted. You can also assign a dedicated ENI to each pod.

    • Flannel: Flannel works with Alibaba Cloud VPC. The pod CIDR block that you specify must be different from the CIDR block of the VPC where the cluster is deployed. Therefore, NAT is required and some IP addresses may be wasted.

  • Security

    • Terway: Terway supports network policies. You can create complex network policies to regulate access between containers.

    • Flannel: Flannel does not support network policies.

  • IP address management

    • Terway: Terway assigns IP addresses on demand. You do not need to assign CIDR blocks by node, which avoids IP address waste. Terway also allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod.

    • Flannel: Flannel can only assign CIDR blocks by node. In large-scale clusters, a large number of IP addresses may be wasted.

  • Server Load Balancer (SLB)

    • Terway: SLB directly forwards network traffic to pods. You can upgrade the pods without service interruptions.

    • Flannel: SLB forwards network traffic to a NodePort Service, which then routes the traffic to pods.

We recommend that you use Terway if you need a high-performance network with support for network policies and bandwidth management, or if you want to avoid IP address waste. We recommend that you use Flannel if you only need basic container networking and do not require network policies or fine-grained access control.

For more information about the comparison between Terway and Flannel, see Overview.

Introduction to Terway

Terway is an in-house network plug-in provided by ACK. Terway enables pods to communicate by assigning ENIs to pods. Terway allows you to configure Kubernetes network policies to regulate access between containers.

In a Terway network, each pod has a separate network stack and IP address. Pods on the same ECS instance communicate by forwarding packets inside the ECS instance. Pods on different ECS instances communicate through an ENI in the VPC where the ECS instances are deployed. This improves communication efficiency because no tunneling technologies, such as Virtual Extensible Local Area Network (VXLAN), are required to encapsulate packets.

Terway provides the shared ENI and exclusive ENI modes. The following sections introduce and compare the two modes.

Shared ENI

In shared ENI mode, multiple pods share the same ENI. However, each pod is assigned a separate IP address. This mode improves the density of pod deployment on a node and reduces the demand for ENIs.

Note

Veth and IPVLAN are Linux network virtualization technologies. Veth devices are virtual Ethernet devices that can act as tunnels to connect containers and hosts. IPVLAN allows a physical network interface to use multiple IP addresses. It is suitable for high-performance networks.


Exclusive ENI

Note

The exclusive ENI mode may be limited by ECS quotas. For more information about quotas and how to increase them, see Limits.

In exclusive ENI mode, each pod is assigned a separate ENI and IP address. The network performance of pods is close to that of traditional virtual machines. This mode is suitable for scenarios that require high performance, such as high throughput and low latency.


Compare Terway modes

In Terway mode, the number of pods that can be created on a node depends on the number of ENIs provided by the ECS instance type. We recommend that you choose the latest ECS instance types or those with high specifications. Make sure that the ECS instance type you select meets the requirements of ACK. In shared ENI mode, the ENIs provided by the ECS instance type must be sufficient for at least 11 pods. In exclusive ENI mode, the ENIs provided by the ECS instance type must be sufficient for at least six pods. For more information about the resources provided by different ECS instance types, see Overview of instance families. You can also query the information in OpenAPI Explorer.

Note
  • The maximum number of pods that can be created in the node network must be equal to or greater than the maximum number of pods that can be created in the pod network.

  • Maximum number of pods on a node = Maximum number of pods in the node network + Maximum number of pods in the host network.

  • You cannot install terway-eni and terway-eniip in the same ACK cluster.

Different Terway modes use different components. The maximum number of pods that you can configure, the pod network capabilities, and the data transmission method vary based on the components that are used. The following list describes the details.

  • Shared ENI

    • How to select this mode: This is the default mode. The terway-eniip component is installed by default when you create the ACK cluster.

    • Components: terway-eniip

    • Features:

      • IP addresses provided by ENIs are assigned to pods. Multiple pods share the same ENI.

      • The density of pod deployment on a node is high.

    • Maximum number of pods supported by the node network: (Number of ENIs provided by the ECS instance type - 1) × Number of private IP addresses provided by each ENI. This value is greater than 11.

    • Maximum number of pods supported by the pod network (with static IP addresses, separate vSwitches, and separate security groups): 0

    • Data transmission methods: veth or IPVLAN

  • Shared ENI + Trunk ENI

    • How to select this mode: Select Trunk ENI when you create the ACK cluster.

    • Components: terway-eniip + terway-controlplane

    • Features:

      • The Trunk ENI feature is supported to allow ENI trunking.

      • You can configure static IP addresses, separate vSwitches, and separate security groups for pods that use trunk ENIs.

      • The density of pod deployment on a node is high and the configuration is flexible.

    • Maximum number of pods supported by the node network: (Number of ENIs provided by the ECS instance type - 1) × Number of private IP addresses provided by each ENI. This value is greater than 11.

    • Maximum number of pods supported by the pod network: Maximum number of network interfaces supported by the ECS instance type - Number of ENIs provided by the ECS instance type

    • Data transmission methods: veth or IPVLAN

  • Exclusive ENI

    • How to select this mode: Select Assign One ENI to Each Pod when you create the ACK cluster.

    • Components: terway-eni + terway-controlplane

    • Features: An ENI is assigned to each pod to ensure optimal network performance.

    • Maximum number of pods supported by the node network: Number of ENIs provided by the ECS instance type - 1. This value is greater than 6.

    • Maximum number of pods supported by the pod network: Number of ENIs provided by the ECS instance type - 1

    • Data transmission method: Exclusive ENI

You need to choose a Terway mode based on your business requirements, performance requirements, and network policies. For example, if you want to increase the number of pods hosted on a node, use the shared ENI mode. If you want to ensure the optimal network performance of each pod, use the exclusive ENI mode.

View the maximum number of pods supported by the node network

  • Method 1: When you create a node pool in the ACK console, you can view the maximum number of pods supported by an instance type in the Terway Mode (Supported Pods) column of the Instance Type section.

  • Method 2: Manually calculate the maximum number of pods supported by an instance type. Obtain the number of ENIs and the number of private IP addresses per ENI in one of the following ways, and then apply the formulas described in Compare Terway modes. A calculation sketch follows this list.

    • Search the relevant documentation to obtain the number of ENIs provided by the instance type. For more information, see Overview of instance families.

    • Query the information in OpenAPI Explorer. Specify the instance type of the node in the InstanceTypes parameter and click Initiate Call. The EniQuantity parameter that the system returns indicates the number of ENIs provided by the instance type. The EniPrivateIpAddressQuantity parameter indicates the number of private IP addresses provided by each ENI.
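
The following Python sketch applies the formulas from Compare Terway modes. It does not call any Alibaba Cloud API; the EniQuantity and EniPrivateIpAddressQuantity values below are placeholders that you replace with the values returned by OpenAPI Explorer or listed in the instance family documentation.

```python
# Minimal sketch of the calculation described above. The quota values are
# placeholders, not the quota of any specific ECS instance type.

def max_pods_shared_eni(eni_quantity: int, private_ips_per_eni: int) -> int:
    """Shared ENI mode: (number of ENIs - 1) x private IP addresses per ENI."""
    return (eni_quantity - 1) * private_ips_per_eni

def max_pods_exclusive_eni(eni_quantity: int) -> int:
    """Exclusive ENI mode: number of ENIs - 1."""
    return eni_quantity - 1

if __name__ == "__main__":
    eni_quantity = 8           # placeholder for EniQuantity
    private_ips_per_eni = 20   # placeholder for EniPrivateIpAddressQuantity

    print("Shared ENI mode:", max_pods_shared_eni(eni_quantity, private_ips_per_eni))
    print("Exclusive ENI mode:", max_pods_exclusive_eni(eni_quantity))
```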

Configure a Terway network

Step 1: Design and prepare the cluster network

To create an ACK cluster that uses Terway, you need to specify the VPC, vSwitches, pod CIDR block, and Service CIDR block. Therefore, you need to first create a VPC, and create at least three vSwitches that reside in different zones in the VPC.

For more information about how to design the network for a cluster that uses Terway, see Plan CIDR blocks for an ACK cluster.

    Note
    • IP addresses within the CIDR blocks of the vSwitches are assigned to nodes.

    • IP addresses within the CIDR block of the pod vSwitch are assigned to pods.

The following example demonstrates how to create a VPC and three vSwitches.

  1. Log on to the VPC console.

  2. In the top navigation bar, select the region where you want to create the VPC and click Create VPC.

    Important

    The VPC and ACK cluster must be deployed in the same region.

  3. On the Create VPC page, configure the parameters.

    • Name: Enter a name. Example: vpc_192_168_0_0_16.

    • IPv4 CIDR Block: We recommend that you specify an RFC 1918 private CIDR block. Example: 192.168.0.0/16.

    • IPv6 CIDR Block: The IPv6 CIDR block that is allocated to the VPC consists of global unicast addresses. After an instance in the VPC is assigned an IPv6 address, the instance can access the Internet through an IPv6 gateway. Example: Assign (Alibaba Cloud).

    • vSwitch: Create three vSwitches. A quick way to check that the vSwitch CIDR blocks fit inside the VPC and do not overlap is shown after these steps.

      Note

      To enable IPv6 for the vSwitches, you must specify IPv6 CIDR blocks.

      • The first vSwitch is named switch_192_168_0_0_20. Specify Zone and set IPv4 CIDR Block to 192.168.0.0/20.

      • The second vSwitch is named switch_192_168_16_0_20. Specify Zone and set IPv4 CIDR Block to 192.168.16.0/20.

      • The third vSwitch is named switch_192_168_32_0_20. Specify Zone and set IPv4 CIDR Block to 192.168.32.0/20.

  4. After the configuration is completed, click OK.
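
As referenced in the vSwitch description above, you can sanity-check the example CIDR plan locally. The following sketch uses only the Python standard library and the example values above; it does not call any Alibaba Cloud API.

```python
# Sanity-check the example CIDR plan: each vSwitch CIDR block must fall inside
# the VPC CIDR block, and the vSwitch CIDR blocks must not overlap one another.
from ipaddress import ip_network
from itertools import combinations

vpc = ip_network("192.168.0.0/16")
vswitches = {
    "switch_192_168_0_0_20": ip_network("192.168.0.0/20"),
    "switch_192_168_16_0_20": ip_network("192.168.16.0/20"),
    "switch_192_168_32_0_20": ip_network("192.168.32.0/20"),
}

for name, cidr in vswitches.items():
    assert cidr.subnet_of(vpc), f"{name} ({cidr}) is outside the VPC {vpc}"

for (name_a, cidr_a), (name_b, cidr_b) in combinations(vswitches.items(), 2):
    assert not cidr_a.overlaps(cidr_b), f"{name_a} and {name_b} overlap"

print("All vSwitch CIDR blocks fit inside the VPC and do not overlap.")
```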

Step 2: Configure Terway

You need to configure Terway when you create the ACK cluster. You cannot change the network plug-in after the ACK cluster is created. For more information about the parameters for creating an ACK cluster, see Create an ACK managed cluster.

  1. Log on to the ACK console.

  2. Configure the key network parameters for Terway.

    Parameter

    Description

    IPv6 Dual-stack

    Select Enable. If no IPv6 CIDR blocks are specified when you create the VPC, the IPv4/IPv6 dual-stack feature does not take effect.

    If you enable IPv4/IPv6 dual stack, a dual-stack cluster is created. This feature is in public preview. To use this feature, submit an application in the Quota Center console.

    Important
    • This feature supports only Kubernetes 1.22 and later.

    • IPv4 addresses are used by worker nodes to communicate with control planes.

    • You must select Terway as the network plug-in.

    • You must use a VPC and ECS instances that support IPv4/IPv6 dual stack.

    VPC

    Select the VPC that you created in Step 1: Design and prepare the cluster network.

    Network Plug-in

    Select Terway.

    Terway Mode

    • Specify whether to enable the Assign One ENI to Each Pod feature. To use the Assign One ENI to Each Pod feature, you need to log on to the Quota Center console and submit an application.

      • If you select the check box, a separate ENI is assigned to each pod.

        Note

        After you select Assign One ENI to Each Pod, the maximum number of pods supported by a node is reduced. Exercise caution before you enable this feature.

      • If you clear the check box, an ENI is shared among multiple pods. A secondary IP address that is provided by the ENI is assigned to each pod.

    • Specify whether to use IPVLAN.

      • This option is available only when you clear Assign One ENI to Each Pod.

      • If you select IPVLAN, IPVLAN and extended Berkeley Packet Filter (eBPF) are used for network virtualization when an ENI is shared among multiple pods. This improves network performance. Only the Alibaba Cloud Linux operating system is supported.

      • If you clear IPVLAN, policy-based routes are used for network virtualization when an ENI is shared among multiple pods. The CentOS 7 and Alibaba Cloud Linux operating systems are supported. This is the default setting.

      • You can enable or disable IPVLAN only when you create a cluster. After the cluster is created, you can no longer enable or disable IPVLAN.

      For more information about the IPVLAN mode supported by Terway, see Use scenarios.

    • Select or clear Support for NetworkPolicy.

      • The NetworkPolicy feature is available only when you clear Assign One ENI to Each Pod. By default, Assign One ENI to Each Pod is unselected.

      • If you select Support for NetworkPolicy, you can use Kubernetes network policies to control the communication among pods.

      • If you clear Support for NetworkPolicy, you cannot use Kubernetes network policies to control the communication among pods. This prevents Kubernetes network policies from overloading the Kubernetes API server.

      For more information about how to use network policies in ACK clusters and the use scenarios of network policies, see Use network policies in ACK clusters.

    • Select or clear Support for ENI Trunking. To use the Support for ENI Trunking feature, you need to log on to the Quota Center console and submit an application. The Terway Trunk elastic network interface (ENI) feature allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod. This allows you to manage and isolate user traffic, configure network policies, and manage IP addresses in a fine-grained manner. For more information, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.

    vSwitch

    Select the vSwitches that you created in Step 1: Design and prepare the cluster network.

    Service CIDR

    Keep the default setting.

    IPv6 Service CIDR

    Keep the default setting. This parameter is available after IPv4/IPv6 dual stack is enabled.

  3. After the cluster is created, you can view the network plug-in on the Networking tab of the Add-ons page.

    Different components support different features and the maximum number of pods you can configure also varies. For more information, see Compare Terway modes.
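
Besides the console, you can check which Terway component is installed from the command line. The following is a minimal sketch that assumes the official Kubernetes Python client (pip install kubernetes), a kubeconfig for the cluster, and that the Terway components run as DaemonSets in the kube-system namespace, consistent with the component names listed in Compare Terway modes.

```python
# List DaemonSets in kube-system and print the ones whose names start with
# "terway" (for example, terway-eniip or terway-eni). The namespace and naming
# convention are assumptions based on the component names above.
from kubernetes import client, config

config.load_kube_config()  # uses ~/.kube/config by default
apps = client.AppsV1Api()

for ds in apps.list_namespaced_daemon_set("kube-system").items:
    if ds.metadata.name.startswith("terway"):
        print("Found Terway component:", ds.metadata.name)
```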

Use scenarios

Terway IPVLAN

Enable the Terway IPVLAN mode when you create an ACK cluster. The Terway IPVLAN mode provides high-performance networks for pods and Services based on IPVLAN and Extended Berkeley Packet Filter (eBPF) technologies.

Limits

  • Only the Alibaba Cloud Linux operating system is supported.

  • The Sandboxed-Container runtime is not supported.

  • The implementation of network policies is different from that of the default Terway mode.

    • The CIDR selector has a lower priority than the pod selector. If the CIDR selector includes the pod CIDR block, you need to add another pod selector. For an example, see the sketch after this list.

    • The except keyword of the CIDR selector is not fully supported. We recommend that you do not use the except keyword.

    • If you use a network policy of the Egress type, you cannot access pods in the host network or the IP addresses of nodes in the cluster.

  • You may fail to access the Internet-facing SLB instance that is associated with a LoadBalancer Service from within the cluster due to loopback issues. For more information, see Why am I unable to access an SLB instance?
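
The following is a minimal sketch of the workaround mentioned above: an ingress rule that combines a CIDR (ipBlock) peer with an explicit pod selector peer. It assumes the official Kubernetes Python client and a kubeconfig for the cluster; the namespace, labels (app=web, app=client), and CIDR block are hypothetical values used only for illustration.

```python
# Ingress rule with both a CIDR (ipBlock) peer and an explicit pod selector
# peer. As noted above, when the CIDR selector covers the pod CIDR block in
# Terway IPVLAN mode, add another pod selector for traffic from pods.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="allow-clients", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    # CIDR selector that may also cover the pod CIDR block.
                    client.V1NetworkPolicyPeer(
                        ip_block=client.V1IPBlock(cidr="192.168.0.0/16")
                    ),
                    # Additional pod selector, as recommended above.
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "client"}
                        )
                    ),
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("default", policy)
```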

Introduction

Unlike the default Terway network mode, Terway IPVLAN optimizes the performance of pod networks, Service networks, and network policies.

  • Pod networks are directly implemented based on the sub-interfaces of ENIs in IPVLAN L2 mode. This significantly simplifies network forwarding on the host and reduces latency by 30% compared with the traditional mode. The performance of pod networks is almost the same as that of the host network.

  • Service networks are implemented based on the eBPF technology instead of the kube-proxy mode. Traffic forwarding no longer depends on the host iptables or IP Virtual Server (IPVS). This maintains almost the same performance in larger-scale clusters and offers better scalability. Compared with traffic forwarding based on IPVS and iptables, this new approach greatly reduces network latency in scenarios that involve a large number of new connections and port reuse.

  • The network policies of pods are implemented based on the eBPF technology instead of iptables. This way, large numbers of iptables rules are no longer generated on the host and the impact of network policies on network performance is reduced.

Use scenarios

  • Middleware and microservices

    Avoids performance degradation in large-scale deployments and reduces the network latency of microservices.

  • Gaming and live streaming applications

    Significantly reduces network latency and resource contention among multiple instances.

  • High-performance computing

    High-performance computing requires high-throughput networks. The Terway IPVLAN mode reduces CPU overhead and saves computing resources for core workloads.

Network policies

If you use the Terway network plug-in and you want to control network traffic based on IP addresses or ports, you can configure network policies to control access to specific applications. For more information, see Use network policies in ACK clusters.
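
As an illustration, the following minimal sketch restricts ingress to an application by port and by source pods. It assumes the official Kubernetes Python client and a kubeconfig for the cluster; the namespace, labels (app=nginx, access=true), and port are hypothetical values for illustration only, not values required by Terway.

```python
# Allow ingress to pods labeled app=nginx only on TCP port 80, and only from
# pods labeled access=true; all other ingress traffic to those pods is denied.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="nginx-allow-port-80", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "nginx"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"access": "true"}
                        )
                    )
                ],
                ports=[client.V1NetworkPolicyPort(port=80, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("default", policy)
```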

Trunk ENI

The Terway Trunk ENI mode allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod. This allows you to manage and isolate user traffic, configure network policies, and manage IP addresses in a fine-grained manner. For more information, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.