Container Service for Kubernetes: Work with Terway

Last Updated: Jul 26, 2023

Terway is an open source Container Network Interface (CNI) plug-in developed by Alibaba Cloud. Terway works with Virtual Private Cloud (VPC) and allows you to use standard Kubernetes network policies to regulate how containers communicate with each other. You can use Terway to enable internal communication within a Kubernetes cluster. This topic describes how to use Terway in a Container Service for Kubernetes (ACK) cluster.

Considerations

  • You can modify the Terway settings only when you create a cluster. You cannot change the settings after the cluster is created.

  • Terway uses all elastic network interfaces (ENIs) on a node except eth0 to configure the pod network. Therefore, you cannot attach additional ENIs to the node for other purposes.

  • The IPv4/IPv6 dual stack and Trunk ENI features are in public preview. To use these features, go to Quota Center and submit an application. For more information about the Trunk ENI feature, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.

Background information

Terway is a network plug-in developed by Alibaba Cloud for ACK. Terway allows you to configure networks for pods by associating Alibaba Cloud ENIs with the pods. Terway allows you to use standard Kubernetes network policies to regulate how containers communicate with each other.

In a cluster that has Terway installed, each pod has a separate network stack and is assigned a separate IP address. Pods on the same Elastic Compute Service (ECS) instance communicate with each other by forwarding packets inside the ECS instance. Pods on different ECS instances communicate with each other through ENIs in the VPC in which the ECS instances are deployed. This improves communication efficiency because no tunneling technologies, such as Virtual Extensible Local Area Network (VXLAN), are required to encapsulate packets.

Figure 1. How the Terway mode works
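Because each pod receives an IP address from the VPC, you can verify the behavior described above from any machine that can access the cluster's kubeconfig. The following is a minimal sketch that uses the official kubernetes Python client; the pod vSwitch CIDR 192.168.32.0/19 is taken from the planning example later in this topic and is an assumption about your cluster.

```python
# Sketch: confirm that pod IP addresses are allocated from the pod vSwitch CIDR.
# Assumes the official "kubernetes" Python client and a local kubeconfig for the cluster.
import ipaddress

from kubernetes import client, config

POD_VSWITCH_CIDR = ipaddress.ip_network("192.168.32.0/19")  # example CIDR from this topic

config.load_kube_config()          # load the cluster's kubeconfig
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    ip = pod.status.pod_ip
    if not ip or pod.spec.host_network:   # skip host-network pods (for example, some system pods)
        continue
    in_cidr = ipaddress.ip_address(ip) in POD_VSWITCH_CIDR
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {ip} in pod vSwitch: {in_cidr}")
```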

Terway and Flannel

When you create an ACK cluster, you can choose one of the following network plug-ins:

  • Terway: a network plug-in developed by Alibaba Cloud for ACK. Terway allows you to assign ENIs to containers and use standard Kubernetes network policies to regulate how containers communicate with each other. Terway also supports bandwidth throttling on individual containers. Terway uses flexible IP Address Management (IPAM) policies to allocate IP addresses to containers. This avoids IP address waste. If you do not want to use network policies, you can select Flannel as the network plug-in. Otherwise, we recommend that you select Terway.

  • Flannel: an open source CNI plug-in, which is simple and stable. You can use Flannel with VPC. This allows your clusters and containers to run in high-performance and stable networks. However, Flannel provides only basic features. It does not support standard Kubernetes network policies. For more information, see Flannel.

The following comparison describes each item in detail.

  • Performance
    • Terway: The IP address of each pod in a Kubernetes cluster is assigned from the CIDR block of the VPC where the cluster is deployed. Therefore, you do not need to use the NAT service to translate IP addresses. This avoids IP address waste. In addition, each pod in the cluster can use an exclusive ENI.
    • Flannel: Flannel works with VPC of Alibaba Cloud. The CIDR block of pods that you specify must be different from that of the VPC where the cluster is deployed. Therefore, the NAT service is required and some IP addresses may be wasted.

  • Security
    • Terway: Terway supports network policies. For an example, see the sketch after this comparison.
    • Flannel: Flannel does not support network policies.

  • IP address management
    • Terway: Terway allows you to assign IP addresses on demand. You do not have to assign CIDR blocks by node. This avoids IP address waste. Terway also allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod.
    • Flannel: You can only assign CIDR blocks by node. In large-scale clusters, a large number of IP addresses may be wasted.

  • SLB
    • Terway: Server Load Balancer (SLB) directly forwards network traffic to pods. You can upgrade the pods without service interruption.
    • Flannel: SLB forwards network traffic to the NodePort Service. Then, the NodePort Service routes the network traffic to pods.
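To illustrate the Security item above, the following is a minimal sketch that creates a standard Kubernetes NetworkPolicy allowing only pods labeled app=frontend to reach pods labeled app=backend. It uses the official kubernetes Python client; the namespace and labels are illustrative and not defined elsewhere in this topic. In a Terway cluster with Support for NetworkPolicy enabled, the policy is enforced; in a Flannel cluster, the API server accepts the object but it has no effect.

```python
# Sketch: a standard NetworkPolicy that Terway enforces (Flannel does not).
# Namespace and labels are illustrative.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="backend-allow-frontend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```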

Maximum number of pods supported by a node

The maximum number of pods supported by a node depends on the maximum number of ENIs supported by the instance type of the ECS instance on which the node is deployed. We recommend that you deploy nodes on ECS instances of newer instance families with higher specifications. If you want to add an ECS instance to a cluster, the number of ENIs supported by the ECS instance must meet specific requirements. For more information, see Overview of instance families. For more information about how to call API operations to query the maximum number of ENIs supported by an ECS instance type, visit OpenAPI Explorer. The following list describes the maximum number of pods that Terway supports in different scenarios.

Note

The maximum number of pods supported by the node network is equal to or larger than the maximum number of pods supported by the pod network.

  • terway-eni
    Note: You must select Assign One ENI to Each Pod when you create the cluster.
    • Maximum number of pods supported by the node network: Maximum number of ENIs supported by the ECS instance type - 1 (EniQuantity - 1)
    • Maximum number of pods supported by the pod network (pods that use static IP addresses, separate vSwitches, and separate security groups): Maximum number of ENIs supported by the ECS instance type - 1 (EniQuantity - 1)
    • Limit on instance types: The maximum number of pods supported by the node network must be greater than 6.

  • terway-eniip
    Note: By default, terway-eniip is selected.
    • Maximum number of pods supported by the node network: (Maximum number of ENIs supported by the ECS instance type - 1) × Maximum number of private IP addresses supported by each ENI ((EniQuantity - 1) × EniPrivateIpAddressQuantity)
    • Maximum number of pods supported by the pod network: 0
    • Limit on instance types: The maximum number of pods supported by the node network must be greater than 11.

  • terway-eniip+terway-controlplane
    Note: You must select Trunk ENI when you create the cluster.
    • Maximum number of pods supported by the node network: (Maximum number of ENIs supported by the ECS instance type - 1) × Maximum number of private IP addresses supported by each ENI ((EniQuantity - 1) × EniPrivateIpAddressQuantity)
    • Maximum number of pods supported by the pod network: Total number of network interfaces supported by the ECS instance type - Number of ENIs supported by the ECS instance type (EniTotalQuantity - EniQuantity)
    • Limit on instance types: The maximum number of pods supported by the node network must be greater than 11.
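The formulas in the preceding list map to the EniQuantity, EniPrivateIpAddressQuantity, and EniTotalQuantity fields returned by the ECS DescribeInstanceTypes operation (see OpenAPI Explorer). The following is a minimal sketch of that arithmetic in Python; the function names and the sample numbers are illustrative and do not describe any specific instance type.

```python
# Sketch: the max-pod formulas from the list above, as plain arithmetic.
# Field names follow the ECS DescribeInstanceTypes response (EniQuantity,
# EniPrivateIpAddressQuantity, EniTotalQuantity); the sample numbers below are
# placeholders, not the real limits of any instance type.

def max_pods_terway_eni(eni_quantity: int) -> int:
    # terway-eni: one exclusive ENI per pod; eth0 is reserved for the node.
    return eni_quantity - 1

def max_pods_terway_eniip(eni_quantity: int, eni_private_ip_quantity: int) -> int:
    # terway-eniip: each ENI other than eth0 provides secondary private IPs for pods.
    return (eni_quantity - 1) * eni_private_ip_quantity

def max_trunk_pods(eni_total_quantity: int, eni_quantity: int) -> int:
    # terway-eniip + terway-controlplane: pods that need a static IP address,
    # a separate vSwitch, or a separate security group use member interfaces.
    return eni_total_quantity - eni_quantity

if __name__ == "__main__":
    eni_quantity, eni_private_ip_quantity, eni_total_quantity = 4, 10, 10  # placeholders
    print("terway-eni:", max_pods_terway_eni(eni_quantity))
    print("terway-eniip:", max_pods_terway_eniip(eni_quantity, eni_private_ip_quantity))
    print("trunk pods:", max_trunk_pods(eni_total_quantity, eni_quantity))
```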

Step 1: Plan CIDR blocks

When you create an ACK cluster, you must specify a VPC, vSwitches, the CIDR block of pods, and the CIDR block of Services. If you want to install the Terway plug-in, you must first create a VPC and two vSwitches in the VPC. The two vSwitches must be created in the same zone. For more information about how to plan the network for a cluster that uses Terway, see Plan CIDR blocks for an ACK cluster.

You can refer to the following table to assign CIDR blocks for a cluster that uses Terway.

  • VPC CIDR Block: 192.168.0.0/16

  • vSwitch: 192.168.0.0/19

  • Pod vSwitch: 192.168.32.0/19

  • Service CIDR: 172.21.0.0/20

Note
  • IP addresses within the CIDR block of the vSwitch are assigned to nodes.

  • IP addresses within the CIDR block of the pod vSwitch are assigned to pods.
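As a quick sanity check of your own plan, the node vSwitch and the pod vSwitch must fall within the VPC CIDR block, and the Service CIDR must not overlap with the VPC CIDR block. The following sketch validates the example plan above with Python's standard ipaddress module.

```python
# Sketch: validate the example CIDR plan from this topic with the standard library.
import ipaddress

vpc = ipaddress.ip_network("192.168.0.0/16")
node_vswitch = ipaddress.ip_network("192.168.0.0/19")
pod_vswitch = ipaddress.ip_network("192.168.32.0/19")
service_cidr = ipaddress.ip_network("172.21.0.0/20")

# vSwitch CIDR blocks must be subsets of the VPC CIDR block.
assert node_vswitch.subnet_of(vpc)
assert pod_vswitch.subnet_of(vpc)

# The node and pod vSwitches must not overlap each other, and the Service CIDR
# must not overlap the VPC CIDR block.
assert not node_vswitch.overlaps(pod_vswitch)
assert not service_cidr.overlaps(vpc)

print("CIDR plan looks consistent")
```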

The following example describes how to create a VPC and two vSwitches that use the CIDR blocks in the preceding plan.

  1. Log on to the VPC console.

  2. In the top navigation bar, select the region where you want to create the VPC and click Create VPC.

    Note

    You must create the VPC in the same region as the cloud resources that you want to deploy in this VPC.

  3. On the Create VPC page, set Name to vpc_192_168_0_0_16 and enter 192.168.0.0/16 in the IPv4 CIDR Block field.

    If you want to enable IPv6, select Assign (Default) from the IPv6 CIDR Block drop-down list.

  4. In the VSwitch section, set the name to node_switch_192_168_0_0_19, select a zone for the vSwitch, and then set IPv4 CIDR Block to 192.168.0.0/19.

    To enable IPv6 for the vSwitch, you must specify an IPv6 CIDR block.

  5. Click Add, set the name to pod_switch_192_168_32_0_19, select a zone for the vSwitch, and then set IPv4 CIDR Block to 192.168.32.0/19.

    To enable IPv6 for the vSwitch, you must specify an IPv6 CIDR block.

    Important

    Make sure that the two vSwitches are created in the same zone.

  6. Click OK.
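If you prefer to script these steps, the VPC and the two vSwitches can also be created through the CreateVpc and CreateVSwitch API operations. The following is a minimal sketch that assumes the aliyunsdkcore and aliyunsdkvpc Python SDK packages; the parameter setters and the zone ID are assumptions, so verify them against the VPC API reference before you use this.

```python
# Sketch: create the VPC and the two vSwitches from this example through the VPC API.
# Assumes the aliyunsdkcore / aliyunsdkvpc Python SDK; setter names and the zone ID
# are assumptions -- verify them against the CreateVpc and CreateVSwitch references.
import json

from aliyunsdkcore.client import AcsClient
from aliyunsdkvpc.request.v20160428.CreateVpcRequest import CreateVpcRequest
from aliyunsdkvpc.request.v20160428.CreateVSwitchRequest import CreateVSwitchRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

vpc_req = CreateVpcRequest()
vpc_req.set_VpcName("vpc_192_168_0_0_16")
vpc_req.set_CidrBlock("192.168.0.0/16")
vpc_id = json.loads(client.do_action_with_exception(vpc_req))["VpcId"]

# The VPC may take a few seconds to become Available before vSwitches can be created.
# Both vSwitches must be created in the same zone.
for name, cidr in [
    ("node_switch_192_168_0_0_19", "192.168.0.0/19"),
    ("pod_switch_192_168_32_0_19", "192.168.32.0/19"),
]:
    vsw_req = CreateVSwitchRequest()
    vsw_req.set_VpcId(vpc_id)
    vsw_req.set_ZoneId("cn-hangzhou-h")  # example zone; pick a zone in your region
    vsw_req.set_VSwitchName(name)
    vsw_req.set_CidrBlock(cidr)
    print(client.do_action_with_exception(vsw_req))
```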

Step 2: Set up networks for a cluster that uses Terway

Log on to the ACK console and create a cluster that uses Terway as the network plug-in. For more information about other cluster parameters, see Create an ACK managed cluster. To install Terway in a cluster and set up networks for the cluster, set the following parameters.

Note

In this example, an ACK basic cluster that uses Terway and has IPv4/IPv6 dual stack enabled is created. For more information about how to create an ACK cluster, see Create an ACK managed cluster.

  • IPv6 Dual-stack: Select Enable.

  • VPC: Select the VPC that you created in Step 1: Plan CIDR blocks.

  • vSwitch: Select the vSwitch that you created in Step 1: Plan CIDR blocks.

  • Network Plug-in: Select Terway.

    • Specify whether to enable the Assign One ENI to Each Pod feature. To use the Assign One ENI to Each Pod feature, you need to log on to the Quota Center console and submit an application.
      • If you select the check box, a separate ENI is assigned to each pod.
        Note: After you select Assign One ENI to Each Pod, the maximum number of pods supported by a node is reduced. Exercise caution before you enable this feature.
      • If you clear the check box, an ENI is shared among multiple pods. A secondary IP address that is provided by the ENI is assigned to each pod.
    • Specify whether to use IPVLAN.
      • This option is available only when you clear Assign One ENI to Each Pod.
      • If you select IPVLAN, IPVLAN and extended Berkeley Packet Filter (eBPF) are used for network virtualization when an ENI is shared among multiple pods. This improves network performance. Only the Alibaba Cloud Linux operating system is supported.
      • If you clear IPVLAN, policy-based routes are used for network virtualization when an ENI is shared among multiple pods. The CentOS 7 and Alibaba Cloud Linux operating systems are supported. This is the default setting.

      For more information about the IPVLAN feature in Terway mode, see Terway IPVLAN.

    • Select or clear Support for NetworkPolicy.
      • The NetworkPolicy feature is available only when you clear Assign One ENI to Each Pod. By default, Assign One ENI to Each Pod is not selected.
      • If you select Support for NetworkPolicy, you can use Kubernetes network policies to control the communication among pods.
      • If you clear Support for NetworkPolicy, you cannot use Kubernetes network policies to control the communication among pods. This prevents Kubernetes network policies from overloading the Kubernetes API server.
    • Select or clear Support for ENI Trunking. To use the Support for ENI Trunking feature, you need to log on to the Quota Center console and submit an application. The Terway Trunk elastic network interface (ENI) feature allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod. This allows you to manage and isolate user traffic, configure network policies, and manage IP addresses in a fine-grained manner. For more information, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.
  • Pod vSwitch: Select the pod vSwitch that you created in Step 1: Plan CIDR blocks.

  • Service CIDR: Use the default value.

  • IPv6 Service CIDR: This parameter is available after you enable IPv4/IPv6 dual stack. Use the default value.
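If you create the cluster through the ACK API instead of the console, the preceding parameters correspond to fields of the CreateCluster request body. The following sketch shows only the Terway-related part of such a body; the field names (for example, addons, pod_vswitch_ids, and service_cidr) are assumptions based on the CreateCluster API reference, and all other required fields are omitted, so consult the API reference before you use this.

```python
# Sketch: Terway-related fields of an ACK CreateCluster request body.
# Field names are assumptions based on the CreateCluster API reference; many
# required fields (region, key pair, node configuration, ...) are omitted here.
import json

cluster_request = {
    "name": "terway-demo-cluster",
    "cluster_type": "ManagedKubernetes",
    "vpcid": "<vpc-id>",                      # VPC created in Step 1
    "vswitch_ids": ["<node-vswitch-id>"],     # vSwitch for nodes
    "pod_vswitch_ids": ["<pod-vswitch-id>"],  # pod vSwitch used by Terway
    "service_cidr": "172.21.0.0/20",          # example Service CIDR from Step 1
    "addons": [
        {"name": "terway-eniip"},             # shared-ENI mode (default Terway mode)
    ],
}

print(json.dumps(cluster_request, indent=2))
```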

Terway IPVLAN

If you select the Terway network plug-in when you create a cluster, you can enable the Terway IPVLAN mode. The Terway IPVLAN mode provides high-performance networks for pods and Services based on IPVLAN and Extended Berkeley Packet Filter (eBPF) technologies.

Compared with the default Terway mode, the Terway IPVLAN mode optimizes the performance of pod networks, Service networks, and network policies.

  • Pod networks are directly implemented based on the sub-interfaces of ENIs in IPVLAN L2 mode. This significantly simplifies network forwarding on the host and reduces the latency by 30% compared with the traditional mode. The performance of pod networks is almost the same as that of the host network.

  • Service networks are implemented based on the eBPF technology instead of the kube-proxy mode. Traffic forwarding no longer depends on the host iptables or IP Virtual Server (IPVS). This maintains almost the same performance in larger-scale clusters and offers better scalability. Compared with traffic forwarding based on IPVS and iptables, this new approach greatly reduces network latency in scenarios that involve a large number of new connections and port reuse.

  • The network policies of pods are implemented based on the eBPF technology instead of iptables. This way, large numbers of iptables rules are no longer generated on the host and the impact of network policies on network performance is reduced.

Limits on the Terway IPVLAN mode

  • Only the Alibaba Cloud Linux operating system is supported.

  • The Sandboxed-Container runtime is not supported.

  • Network policies are implemented differently than in the default Terway mode.

    • The CIDR selector has a lower priority than the pod selector. Additional pod selectors are required if the CIDR block of pods is within the CIDR range specified by the CIDR selector. For an example, see the sketch after this list.

    • The except keyword of the CIDR selector is not fully supported. We recommend that you do not use the except keyword.

    • If you use a network policy of the Egress type, you cannot access pods in the host network or the IP addresses of nodes in the cluster.

  • You may fail to access the Internet-facing SLB instance that is associated with a LoadBalancer Service from within the cluster due to loopback issues. For more information, see Why am I unable to access an SLB instance?
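To follow the recommendation about the CIDR selector above, you can pair an ipBlock peer with an explicit pod selector in the same rule. The following is a minimal sketch that uses the official kubernetes Python client; the labels, namespace, and CIDR are illustrative.

```python
# Sketch: an egress rule that pairs the CIDR selector (ipBlock) with an explicit
# pod selector, as recommended for Terway IPVLAN mode when pod IPs fall inside
# the selected CIDR range. Labels, namespace, and CIDR are illustrative.
from kubernetes import client, config

config.load_kube_config()

egress_rule = client.V1NetworkPolicyEgressRule(
    to=[
        # The CIDR selector alone may not match pods whose IPs are inside this range ...
        client.V1NetworkPolicyPeer(ip_block=client.V1IPBlock(cidr="192.168.32.0/19")),
        # ... so also select the destination pods explicitly.
        client.V1NetworkPolicyPeer(
            pod_selector=client.V1LabelSelector(match_labels={"app": "backend"})
        ),
    ]
)

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="frontend-egress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        policy_types=["Egress"],
        egress=[egress_rule],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```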

Scenarios

  • Middleware and microservices

    Avoids performance degradation in large-scale deployments and reduces the network latency of microservices.

  • Gaming and live streaming applications

    Significantly reduces network latency and resource competition among multiple instances.

  • High-performance computing

    High-performance computing requires high-throughput networks. The Terway IPVLAN mode reduces CPU overhead and saves computing resources for core workloads.