Terway is an open source Container Network Interface (CNI) plug-in developed by Alibaba Cloud. Terway works with Virtual Private Cloud (VPC) and allows you to use standard Kubernetes network policies to regulate how containers communicate with each other. You can use Terway to enable internal communication within a Kubernetes cluster. This topic describes how to use Terway in a Container Service for Kubernetes (ACK) cluster.

Background information

Terway is a network plug-in developed by Alibaba Cloud for ACK. Terway allows you to configure networks for pods by associating Alibaba Cloud elastic network interfaces (ENIs) with the pods. Terway allows you to use standard Kubernetes network policies to regulate how containers communicate with each other. In addition, Terway is compatible with Calico network policies.

In a cluster that has Terway installed, each pod has a separate network stack and is assigned a separate IP address. Pods on the same Elastic Compute Service (ECS) instance communicate with each other by forwarding packets inside the ECS instance. Pods on different ECS instances communicate with each other through ENIs in the VPC in which the ECS instances are deployed. This improves communication efficiency because no tunneling technologies, such as Virtual Extensible Local Area Network (VXLAN), are required to encapsulate packets.

Figure 1. How Terway works

Terway and Flannel

When you create an ACK cluster, you can choose one of the following network plug-ins:
  • Terway: a network plug-in developed by Alibaba Cloud for ACK. Terway allows you to assign ENIs to containers and use standard Kubernetes network policies to regulate how containers communicate with each other. Terway also supports bandwidth throttling on individual containers (see the annotation example after the following comparison). Terway uses flexible IP Address Management (IPAM) policies to allocate IP addresses to containers, which avoids IP address waste. If you do not need to use network policies, you can select Flannel as the network plug-in. Otherwise, we recommend that you select Terway.
  • Flannel: an open source CNI plug-in, which is simple and stable. You can use Flannel with VPC. This allows your clusters and containers to run in high-performance and stable networks. However, Flannel provides only basic features. It does not support standard Kubernetes network policies. For more information, see Flannel.
Comparison of Terway and Flannel:
  • Performance
    • Terway: The IP address of each pod in a Kubernetes cluster is assigned from the CIDR block of the VPC where the cluster is deployed. Therefore, you do not need to use the NAT service to translate IP addresses, which avoids IP address waste. In addition, each pod in the cluster can use an exclusive ENI.
    • Flannel: Flannel works with Alibaba Cloud VPC. The pod CIDR block that you specify must be different from the CIDR block of the VPC where the cluster is deployed. Therefore, the NAT service is required and some IP addresses may be wasted.
  • Security
    • Terway: Supports standard Kubernetes network policies.
    • Flannel: Does not support network policies.
  • IP address management
    • Terway: Allows you to assign IP addresses on demand. You do not have to assign CIDR blocks by node, which avoids IP address waste.
    • Flannel: Assigns CIDR blocks only by node. In large-scale clusters, a large number of IP addresses may be wasted.
  • SLB
    • Terway: Server Load Balancer (SLB) forwards network traffic directly to pods. You can upgrade the pods without service interruption.
    • Flannel: SLB forwards network traffic to the NodePort Service. The NodePort Service then routes the traffic to pods.
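As noted above, Terway supports bandwidth throttling on individual containers. The following sketch assumes that your Terway version honors the community-standard bandwidth annotations (kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth); the pod name, image, and bandwidth values are placeholders, and you should verify the exact annotation keys against the Terway documentation for your cluster version.

    apiVersion: v1
    kind: Pod
    metadata:
      name: bandwidth-limited-pod        # hypothetical name for illustration
      annotations:
        # Assumed annotation keys; confirm support in your Terway version.
        kubernetes.io/ingress-bandwidth: "10M"
        kubernetes.io/egress-bandwidth: "10M"
    spec:
      containers:
      - name: app
        image: nginx                     # example image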

Considerations

  • To use the Terway plug-in, we recommend that you use ECS instances with higher specifications and of newer generations, such as instance types in the g5 or later instance families that have at least 8 vCPUs. For more information, see Instance family.
  • The maximum number of pods that each node supports depends on the number of ENIs that are assigned to the node. For a worked example, see the note after this list.
    • Maximum number of pods per node in shared ENI mode = (Number of ENIs supported by the ECS instance - 1) × Number of private IP addresses provided by each ENI
    • Maximum number of pods per node in exclusive ENI mode = Number of ENIs supported by the ECS instance - 1
    Note
    • You can view the number of ENIs supported by each ECS instance in the Instance Type section when you create or expand a cluster. For more information about how to create a cluster, see Create an ACK Pro cluster.
    • If the maximum number of pods per node in shared ENI mode is less than 10 for an instance type, you cannot add ECS instances of that type to your cluster.
    • If the maximum number of pods per node in exclusive ENI mode is less than 5 for an instance type, you cannot add ECS instances of that type to your cluster.
  • This feature is in public preview. To use this feature, Submit a ticket.
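For example, assume a hypothetical instance type that supports 4 ENIs and 10 private IP addresses per ENI (these numbers are illustrative and do not describe a specific instance family). In shared ENI mode, a node of this type supports at most (4 - 1) × 10 = 30 pods. In exclusive ENI mode, it supports at most 4 - 1 = 3 pods, which is less than 5, so instances of this type could not be added to a cluster that assigns one ENI to each pod.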

Step 1: Plan CIDR blocks

When you create an ACK cluster, you must specify a VPC, vSwitches, the CIDR block of pods, and the CIDR block of Services. If you want to install the Terway plug-in, you must first create a VPC and two vSwitches in the VPC. The two vSwitches must be created in the same zone. For more information about how to plan the network for a cluster that uses Terway, see Plan CIDR blocks for an ACK cluster.

You can refer to the following example to assign CIDR blocks for a cluster that uses Terway.
  • VPC CIDR block: 192.168.0.0/16
  • vSwitch CIDR block (for nodes): 192.168.0.0/19
  • Pod vSwitch CIDR block: 192.168.32.0/19
  • Service CIDR: 172.21.0.0/20
Note
  • IP addresses within the CIDR block of the vSwitch are assigned to nodes.
  • IP addresses within the CIDR block of the pod vSwitch are assigned to pods.

The following example describes how to create a VPC and two vSwitches in the VPC console by using the preceding CIDR plan. A template-based alternative is sketched after these steps.

  1. Log on to the VPC console.
  2. In the top navigation bar, select the region where you want to create the VPC and click Create VPC.
    Note You must create the VPC in the same region as the cloud resources that you want to deploy in this VPC.
  3. On the Create VPC page, set Name to vpc_192_168_0_0_16 and enter 192.168.0.0/16 in the IPv4 CIDR Block field.

    If you want to enable IPv6, select Assign (Default) from the IPv6 CIDR Block drop-down list.

  4. In the Create vSwitch section, set the name to node_switch_192_168_0_0_19, select a zone for the vSwitch, and then set IPv4 CIDR Block to 192.168.0.0/19. You can click Add to create another vSwitch.
    To enable IPv6 for the vSwitch, you must specify an IPv6 CIDR block.
    Notice Make sure that the two vSwitches are created in the same zone.
  5. In the Create vSwitch section, configure a pod vSwitch, set the name to pod_switch_192_168_32_0_19, and then set IPv4 CIDR Block to 192.168.32.0/19.
    To enable IPv6 for the vSwitch, you must specify an IPv6 CIDR block.
  6. Click OK.
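If you prefer to create these resources from a template instead of the console, the same CIDR plan can be expressed as a Resource Orchestration Service (ROS) template. The following is a minimal sketch under the assumption that the ALIYUN::ECS::VPC and ALIYUN::ECS::VSwitch resource types and the property names shown here match your ROS version; the zone ID is a placeholder, and both vSwitches must use the same zone.

    ROSTemplateFormatVersion: '2015-09-01'
    Resources:
      Vpc:
        Type: ALIYUN::ECS::VPC
        Properties:
          VpcName: vpc_192_168_0_0_16
          CidrBlock: 192.168.0.0/16
      NodeVSwitch:
        Type: ALIYUN::ECS::VSwitch
        Properties:
          VpcId:
            Ref: Vpc
          ZoneId: cn-hangzhou-h              # placeholder zone ID
          VSwitchName: node_switch_192_168_0_0_19
          CidrBlock: 192.168.0.0/19
      PodVSwitch:
        Type: ALIYUN::ECS::VSwitch
        Properties:
          VpcId:
            Ref: Vpc
          ZoneId: cn-hangzhou-h              # same zone as the node vSwitch
          VSwitchName: pod_switch_192_168_32_0_19
          CidrBlock: 192.168.32.0/19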

Step 2: Set up networks for a cluster that uses Terway

To install Terway in a cluster and set up networks for the cluster, set the following parameters.
Note In this example, an ACK standard cluster that uses Terway and has IPv4/IPv6 dual stack enabled is created. For more information about how to create an ACK cluster, see Create an ACK managed cluster.
  • IPv6 Dual-stack: Select Enable.
  • VPC: Select the VPC that you created in Step 1: Plan CIDR blocks.
  • vSwitch: Select the vSwitch that you created in Step 1: Plan CIDR blocks.
  • Network Plug-in: Select Terway.
    If you set Network Plug-in to Terway, you must set Terway Mode.
    • Select or clear Assign One ENI to Each Pod.
      • If you select the check box, an ENI is assigned to each pod.
      • If you clear the check box, an ENI is shared among multiple pods. A secondary IP address that is provided by the ENI is assigned to each pod.
      Note To select the Assign One ENI to Each Pod check box, you must submit a ticket to apply to be added to the whitelist.
    • Select or clear IPVLAN.
      • This option is available only when you clear Assign One ENI to Each Pod.
      • If you select IPVLAN, IPVLAN and extended Berkeley Packet Filter (eBPF) are used for network virtualization when an ENI is shared among multiple pods. This improves network performance. Only the Alibaba Cloud Linux 2 operating system is supported.
      • If you clear IPVLAN, policy-based routes are used for network virtualization when an ENI is shared among multiple pods. The CentOS 7 and Alibaba Cloud Linux 2 operating systems are supported. This is the default setting.

      For more information about the IPVLAN feature in Terway mode, see Terway IPvlan.

    • Select or clear Support for NetworkPolicy.
      • The NetworkPolicy feature is available only when you clear Assign One ENI to Each Pod. By default, Assign One ENI to Each Pod is cleared.
      • If you select Support for NetworkPolicy, you can use Kubernetes network policies to control communication among pods. For a sample policy, see the example after this list.
      • If you clear Support for NetworkPolicy, you cannot use Kubernetes network policies to control communication among pods. Disabling this feature also prevents the network policy components from adding load to the Kubernetes API server.
  • Pod vSwitch: Select the pod vSwitch that you created in Step 1: Plan CIDR blocks.
  • Service CIDR: Use the default value.
  • IPv6 Service CIDR: This parameter is available after you enable IPv4/IPv6 dual stack. Use the default value.
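After the cluster is created with Support for NetworkPolicy enabled, you can apply standard Kubernetes NetworkPolicy objects. The following is a minimal sketch that allows ingress to pods labeled app: web on TCP port 80 only from pods labeled app: frontend in the same namespace; the policy name, namespace, labels, and port are placeholders for illustration.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-web        # hypothetical name
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: web                       # pods that the policy protects
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend              # only these pods may connect
        ports:
        - protocol: TCP
          port: 80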

Terway IPvlan

If you select the Terway network plug-in when you create a cluster, you can enable the Terway IPvlan mode. The Terway IPvlan mode provides high-performance networks for pods and Services based on IPvlan and Extended Berkeley Packet Filter (eBPF) technologies.

Compared with the default Terway mode, the Terway IPvlan mode optimizes the performance of pod networks, Service networks, and network policies.
  • Pod networks are directly implemented based on the sub-interfaces of elastic network interfaces (ENIs) in IPvlan L2 mode. This significantly simplifies network forwarding on the host and reduces the latency by 30% compared with the traditional mode. The performance of pod networks is almost the same as that of the host network.
  • Service networks are implemented based on the eBPF technology instead of the kube-proxy mode. Traffic forwarding no longer depends on the host iptables or IP Virtual Server (IPVS). This maintains almost the same performance in larger-scale clusters and offers better scalability. Compared with traffic forwarding based on IPVS and iptables, this new approach greatly reduces network latency in scenarios that involve a large number of new connections and port reuse.
  • The network policies of pods are implemented based on the eBPF technology instead of iptables. This way, large numbers of iptables rules are no longer generated on the host and the impact of network policies on network performance is minimized.
Limits of the Terway IPvlan mode
  • Only the Alibaba Cloud Linux 2 operating system is supported.
  • The Sandboxed-Container runtime is not supported.
  • Network policies are implemented differently from the default Terway mode.
    • The CIDR (ipBlock) selector has a lower priority than the pod selector. If pod IP addresses fall within the range specified by a CIDR selector, you must add explicit pod selectors for those pods. For an example, see the policy sketch after this list.
    • The except keyword of the CIDR selector is not fully supported. We recommend that you do not use the except keyword.
    • If you use a network policy of the Egress type, you cannot access pods in the host network or the IP addresses of nodes in the cluster.
  • You may fail to access the Internet-facing SLB instance that is associated with a LoadBalancer Service from within the cluster due to loopback issues. For more information, see Why am I unable to access an SLB instance?.
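To work within these limits, an egress policy in Terway IPvlan mode can pair an ipBlock rule with an explicit pod selector instead of relying on the except keyword. The following is a minimal sketch; the policy name, namespace, labels, and CIDR block are placeholders for illustration.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: egress-allow-example         # hypothetical name
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: client
      policyTypes:
      - Egress
      egress:
      - to:
        # Explicit pod selector for in-cluster pods, used instead of an
        # ipBlock "except" rule, which is not fully supported in IPvlan mode.
        - podSelector:
            matchLabels:
              app: backend
        # Example external CIDR block (TEST-NET-3, for illustration only).
        - ipBlock:
            cidr: 203.0.113.0/24
      # Allow DNS lookups so that Service names still resolve.
      - ports:
        - protocol: UDP
          port: 53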
Scenarios
  • Middleware and microservices

    Avoids performance degradation in large-scale deployments and reduces the network latency of microservices.

  • Gaming and live streaming applications

    Significantly reduces network latency and resource competition among multiple instances.

  • High-performance computing

    High-performance computing requires high-throughput networks. The Terway IPvlan mode reduces CPU overhead and saves computing resources for core workloads.