Terway is an open source Container Network Interface (CNI) plugin developed by Alibaba Cloud. Terway integrates with Virtual Private Cloud (VPC) and lets you use standard Kubernetes network policies to define access policies for containers.
Before you begin
We recommend that you read this topic before you use the Terway network plugin so that you understand how Terway works.
Before you proceed, make sure that you understand the basic concepts of container network plugins and have selected a suitable plugin. For more information, see Networking and Comparison between Terway and Flannel.
Before you create a cluster, you must plan its CIDR blocks. For more information, see Plan CIDR blocks for an ACK managed cluster.
Billing
Terway is free of charge. However, the pods that Terway deploys on each node consume a small amount of node resources. For more information about the billing of Alibaba Cloud services that are used by ACK, see Cloud resource fees.
Important notes
The Terway configuration file, eni-config, contains many system parameters. Modifying or deleting fields that are not explicitly allowed may cause issues, such as network interruptions or pod creation failures. For information about the parameters that you can modify, see Customize Terway configuration parameters.
The Terway component uses CustomResourceDefinitions (CRDs) to track resource status. If you incorrectly manipulate system resources, you may cause issues, such as network interruptions or pod creation failures.
| Resource name | Resource type | Can users operate on the CRD? | Can users operate on the CR? |
| --- | --- | --- | --- |
| podnetworkings.network.alibabacloud.com | User resource | No | Yes |
| podenis.network.alibabacloud.com | System resource | No | No |
| networkinterfaces.network.alibabacloud.com | System resource | No | No |
| nodes.network.alibabacloud.com | System resource | No | No |
| noderuntimes.network.alibabacloud.com | System resource | No | No |
| *.cilium.io | System resource | No | No |
| *.crd.projectcalico.org | System resource | No | No |
Method for calculating the maximum number of pods on a node
When you use the Terway network plugin, the maximum number of pods that a node can support is determined by the number of ENIs that the ECS instance type of the node supports. Terway also enforces a minimum pod capacity for each node: a node can be added to the cluster only if the maximum number of pods it supports meets this minimum. The following table provides more details.
| Terway mode | Maximum number of pods per node | Example | Number of pods per node that support static IP addresses, independent vSwitches, and independent security groups |
| --- | --- | --- | --- |
| Shared ENI mode | (Number of ENIs supported by the ECS instance type - 1) × (Number of private IP addresses supported by a single ENI), that is, (EniQuantity - 1) × EniPrivateIpAddressQuantity. Note: The maximum number of pods per node must be greater than 11 for the node to be added to the cluster. | For the ecs.g7.4xlarge instance type of the g7 general-purpose instance family, the instance type supports 8 ENIs and each ENI supports 30 private IP addresses. The maximum number of pods per node is (8 - 1) × 30 = 210. Important: The maximum number of pods that can use node ENIs is a fixed value determined by the node specifications. | 0 |
| Shared ENI mode + Trunk ENI | Maximum number of pods per node in Trunk mode: (Total number of network interface cards (NICs) supported by the ECS instance type) - (Number of ENIs supported by the ECS instance type), that is, EniTotalQuantity - EniQuantity. | | |
| Exclusive ENI mode | ECS instances: Number of ENIs supported by the ECS instance type - 1, that is, EniQuantity - 1. Lingjun instances: Number of Lingjun ENIs - 1, that is, LeniQuota - 1. For more information, see Create and manage Lingjun ENIs. Note: The maximum number of pods per node must be greater than 6 for the node to be added to the cluster. | For the ecs.g7.4xlarge instance type of the g7 general-purpose instance family, the instance type supports 8 ENIs. The maximum number of pods per node is 8 - 1 = 7. | Number of ENIs supported by the ECS instance type - 1, that is, EniQuantity - 1. Note: Lingjun instances are not supported. |
In Terway v1.11.0 and later, you can select exclusive ENI mode or shared ENI mode for a node pool. Both types of node pools can coexist in a single cluster. For more information, see Terway release notes.
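The formulas in the preceding table can be captured in a few lines of Python. This is an illustrative calculation only: the function names are ours, not an ACK API, and the sample values are the ecs.g7.4xlarge figures from the table above.

```python
def max_pods_shared(eni_quantity: int, ip_per_eni: int) -> int:
    """Shared ENI mode: (EniQuantity - 1) x EniPrivateIpAddressQuantity."""
    return (eni_quantity - 1) * ip_per_eni


def max_pods_trunk(eni_total_quantity: int, eni_quantity: int) -> int:
    """Shared ENI mode + Trunk ENI: EniTotalQuantity - EniQuantity."""
    return eni_total_quantity - eni_quantity


def max_pods_exclusive(eni_quantity: int) -> int:
    """Exclusive ENI mode (ECS instances): EniQuantity - 1."""
    return eni_quantity - 1


# ecs.g7.4xlarge (from the table above): 8 ENIs, 30 private IPs per ENI.
print(max_pods_shared(8, 30))   # 210
print(max_pods_exclusive(8))    # 7
```

Remember that a node is accepted into the cluster only if the result exceeds the minimum from the table (11 in shared ENI mode, 6 in exclusive ENI mode).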
View the maximum number of pods supported by the container network
Method 1: When you create a node pool, go to the Instance Type section, where the Terway Compatibility (Supported Pods) column shows the number of pods that each instance type supports.
Method 2: Obtain the required data as described in the following steps and then manually calculate the number of pods that the ECS instance type supports.
Query the number of ENIs that an ECS instance supports by reading the instance families documentation.
You can also query the information by calling the DescribeInstanceTypes operation in OpenAPI Explorer. To do this, specify the instance type of an existing node for the InstanceTypes parameter and click Initiate Call. In the response, EniQuantity indicates the maximum number of ENIs supported by the instance type, EniPrivateIpAddressQuantity indicates the number of private IP addresses supported by a single ENI, and EniTotalQuantity indicates the total number of network interfaces supported by the instance type.
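The three response fields map directly onto the formulas from the preceding section. The following sketch computes the per-mode limits from a hand-written dictionary that mimics the relevant part of an ECS DescribeInstanceTypes response; the EniTotalQuantity value of 12 is illustrative, not the real quota for this instance type.

```python
# Illustrative excerpt of a DescribeInstanceTypes response for one instance
# type. Field names are the ones described above; values are sample data only.
instance_type_info = {
    "InstanceTypeId": "ecs.g7.4xlarge",
    "EniQuantity": 8,                    # max ENIs per instance
    "EniPrivateIpAddressQuantity": 30,   # private IPs per ENI
    "EniTotalQuantity": 12,              # total NICs (illustrative value)
}

eni = instance_type_info["EniQuantity"]
ip_per_eni = instance_type_info["EniPrivateIpAddressQuantity"]
total = instance_type_info["EniTotalQuantity"]

print("Shared ENI mode:", (eni - 1) * ip_per_eni)  # (8 - 1) * 30 = 210
print("Trunk mode:", total - eni)                  # 12 - 8 = 4
print("Exclusive ENI mode:", eni - 1)              # 8 - 1 = 7
```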
Install the Terway network plugin when you create a cluster
You must install the Terway network plugin when you create a cluster. You cannot change the network plugin for an existing cluster.
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click Create Kubernetes Cluster.
Configure the key network parameters for the Terway network plugin. For more information about other cluster parameters, see Create an ACK managed cluster.
Configuration item
Description
IPv6 Dual-stack
Select Enable to create a dual-stack cluster that supports both IPv4 and IPv6 addresses.
Important: Only clusters that run Kubernetes 1.22 and later support this feature.
Note the following constraints:
- IPv4 addresses are used for communication between worker nodes and the control plane.
- You must select Terway as the container network plugin.
- If you use the shared ENI mode of Terway, the ECS instance type must support IPv6 addresses. To add ECS instances of the specified type to the cluster, the number of IPv4 addresses supported by the ECS instance type must be the same as the number of IPv6 addresses. For more information about ECS instance types, see Instance family overview.
- The VPC used by the cluster must support IPv4/IPv6 dual-stack.
- You must disable IPv4/IPv6 dual-stack if you want to use eRDMA in the cluster.
VPC
The VPC that the cluster uses.
Network Plug-in
Select Terway.
DataPath V2
If you select this checkbox, DataPath V2 acceleration mode is used. After you select the acceleration mode, Terway uses a traffic forwarding link that is different from the regular shared ENI mode to achieve faster network communication. For more information about the features of this mode, see Network acceleration.
Note: For new clusters that run Kubernetes 1.34 or later and have DataPath V2 selected, Terway nodes no longer run kube-proxy.
This mode supports portmap by default. You do not need to configure the portmap plugin. For more information, see Configure a custom CNI chain.
DataPath V2 is supported only on the following operating system images and requires Linux kernel 5.10 or later:
Alibaba Cloud Linux 3 (all versions)
ContainerOS
Ubuntu
When enabled, the Terway policy container is expected to consume an additional 0.5 CPU cores and 512 MB of memory on each worker node. This consumption increases as the cluster size grows. In the default Terway configuration, the CPU limit for the policy container is 1 core and the memory is unlimited.
In DataPath V2 mode, container network connection tracking (conntrack) information is stored in an eBPF map. Similar to the native conntrack mechanism in Linux, eBPF conntrack is implemented based on the Least Recently Used (LRU) algorithm. When the map capacity is reached, the oldest connection records are automatically evicted to store new connections. You must configure the relevant parameters based on your business scale to prevent the number of connections from exceeding the limit. For more information, see Optimize conntrack configurations in Terway mode.
NetworkPolicy Support
If you select this checkbox, Kubernetes-native NetworkPolicy is supported.
Note: Starting from Terway v1.9.2, NetworkPolicy for new clusters is implemented by eBPF, and the DataPath V2 feature is enabled on the data plane.
The feature that lets you manage NetworkPolicy resources in the console is in public preview. To use this feature, you must submit an application in the Quota Center console.
Trunk ENI Support
If you select this checkbox, Trunk ENI mode is enabled. This lets you configure a static IP address, an independent vSwitch, and an independent security group for each pod.
Note: You can select the Trunk ENI option for an ACK managed cluster without submitting an application. If you want to enable the Trunk ENI feature in an ACK dedicated cluster, you must submit an application in the Quota Center console.
Starting from Kubernetes 1.31, the Trunk ENI feature is automatically enabled for new ACK managed clusters. You do not need to manually select this option.
vSwitch
The vSwitch CIDR block used by the nodes in the cluster. We recommend that you select vSwitches from three or more different zones to achieve higher cluster availability.
Pod vSwitch
The vSwitch CIDR block used by pods. This can overlap with the node vSwitch CIDR block.
Service CIDR
The CIDR block used by Services. This cannot overlap with the node and pod CIDR blocks.
IPv6 Service CIDR
This can be configured after you enable IPv6 dual-stack.
Terway working modes
The following sections describe the features, comparisons, and working principles of different Terway modes.
Shared ENI mode and exclusive ENI mode
Terway provides two modes for assigning IP addresses to pods: Shared ENI mode and Exclusive ENI mode.
In Terway v1.11.0 and later, you can select shared ENI or exclusive ENI mode for a node pool. This option is no longer available during cluster creation.
The primary ENI on a node is assigned to the node operating system (OS). The remaining ENIs are managed by Terway to configure the pod network. Therefore, you must not manually configure these ENIs. If you want to manage some ENIs yourself, see Configure a whitelist for ENIs.
| Item | Shared ENI mode | Exclusive ENI mode |
| --- | --- | --- |
| ENI allocation method | Multiple pods share one ENI. | Each pod exclusively occupies one ENI on its node. |
| Pod deployment density | High pod deployment density. A single node can support hundreds of pods. | Low pod deployment density. A node of a common instance type supports only a small number of pods. |
| Network architecture | | |
| Data link | When a pod accesses other pods or is accessed as a Service backend, the traffic passes through the network protocol stack of the node. | When a pod accesses a Service, the traffic passes through the protocol stack of the node OS. However, when a pod accesses other pods or is accessed as a Service backend, the pod directly uses the attached ENI to bypass the network protocol stack of the node. This provides higher performance. |
| Scenarios | Regular Kubernetes scenarios. | Network performance similar to that of a traditional virtual machine. Suitable for scenarios with high requirements for network performance, such as applications that require high network throughput or low latency. |
| Network acceleration | Supports DataPath V2 network acceleration. For more information, see Network acceleration. | Does not support network acceleration. However, pods exclusively use ENI resources, which provides excellent network performance. |
| NetworkPolicy support | Supports native Kubernetes NetworkPolicy. | Does not support NetworkPolicy. |
| Node-level network configuration | Supported. For more information, see Node-level network configuration. | Supported. For more information, see Node-level network configuration. |
| Access control | After you enable the Trunk ENI configuration, you can configure a static IP address, an independent security group, and an independent vSwitch for a pod. For more information, see Configure a static IP address, an independent vSwitch, and an independent security group for a pod. | You can configure a static IP address, an independent security group, and an independent vSwitch for a pod. |
Network acceleration
When you use Terway in shared ENI mode, you can enable network acceleration mode. After you enable acceleration mode, Terway uses a different traffic forwarding path than the regular shared ENI mode to achieve higher performance. Terway currently supports the DataPath V2 acceleration mode. The following section describes the features of DataPath V2.
DataPath V2 is an upgraded version of the earlier IPvlan+eBPF acceleration mode. In Terway V1.8.0 and later, DataPath V2 is the only acceleration mode that you can select when you create a cluster and install the Terway plugin.
The DataPath V2 and IPvlan+eBPF acceleration modes apply only to node pools in shared ENI mode. They do not affect node pools in exclusive ENI mode.
| DataPath V2 features | Description |
| --- | --- |
| Applicable Terway version | Clusters created with Terway V1.8.0 and later. |
| Network architecture | |
| Accelerated data link | Uses a traffic forwarding link that is different from the regular shared ENI mode to achieve faster network communication. |
| Performance optimization | |
| How to use | When you create a cluster, set Network Plug-in to Terway and select the DataPath V2 option. |
| Notes | |
In clusters that were created earlier, you may have selected the IPvlan+eBPF acceleration mode instead.
Access control
Terway in shared ENI mode allows for more fine-grained management of network traffic within the cluster through its support for NetworkPolicy and the Trunk ENI option. Terway in exclusive ENI mode also supports some traffic control capabilities.
NetworkPolicy support
Node pools in Terway exclusive ENI mode do not support NetworkPolicy.
Node pools in Terway shared ENI mode support the native Kubernetes NetworkPolicy feature, which controls network traffic between pods using user-defined rules. When you create a cluster, set Network Plug-in to Terway and select the NetworkPolicy Support option to enable NetworkPolicy for the cluster. For more information, see Use network policies in ACK clusters.
Note: The feature that lets you manage NetworkPolicy resources in the console is in public preview. To use this feature, you must submit an application in the Quota Center console.
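For reference, the following is a minimal example of a native Kubernetes NetworkPolicy that Terway in shared ENI mode can enforce. All names and labels are illustrative: the policy allows pods labeled app: frontend in the same namespace to reach pods labeled app: web on TCP port 80 and blocks all other ingress to those pods.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web                  # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only these pods may connect
    ports:
    - protocol: TCP
      port: 80
```

Apply the policy with kubectl apply, as you would any other Kubernetes resource.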
Configure a static IP address, an independent vSwitch, and an independent security group for a pod
Node pools in Terway exclusive ENI mode support the configuration of a static IP address, an independent vSwitch, and an independent security group for each pod by default. This provides fine-grained traffic management, traffic isolation, network policy configuration, and IP address management capabilities.
Trunk ENI is an option for node pools in Terway shared ENI mode. After you enable Trunk ENI, you can configure a static IP address, an independent vSwitch, and a security group for each pod.
When you create a cluster, set Network Plug-in to Terway and select the Trunk ENI Support option. For more information, see Configure a static IP address, an independent vSwitch, and a security group for a pod.
Note: You can select the Trunk ENI option for an ACK managed cluster without submitting an application. If you want to enable the Trunk ENI feature in an ACK dedicated cluster, you must submit an application in the Quota Center console.
Starting from Kubernetes 1.31, the Trunk ENI feature is automatically enabled for new ACK managed clusters. You do not need to manually select this option.
After you enable Trunk ENI mode, the terway-eniip and terway-controlplane components are installed.
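Because podnetworkings.network.alibabacloud.com is the one CRD listed earlier whose custom resources users may operate on, per-pod network configuration in Trunk ENI mode is declared through a PodNetworking resource. The sketch below is an assumption of that resource's shape: the apiVersion, the field names (allocationType, selector, vSwitchOptions, securityGroupIDs), and all IDs are illustrative and must be verified against the Terway documentation for your version.

```yaml
# Assumed shape of a PodNetworking custom resource
# (CRD: podnetworkings.network.alibabacloud.com).
# Field names and values are illustrative; verify against your Terway version.
apiVersion: network.alibabacloud.com/v1beta1
kind: PodNetworking
metadata:
  name: fixed-ip-example
spec:
  allocationType:
    type: Fixed               # request a static IP address for matching pods
    releaseStrategy: TTL
  selector:
    podSelector:
      matchLabels:
        app: stateful-web     # pods that this configuration applies to
  vSwitchOptions:
  - vsw-xxxxxxxx              # independent vSwitch (placeholder ID)
  securityGroupIDs:
  - sg-xxxxxxxx               # independent security group (placeholder ID)
```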
Scale limits
Terway calls the OpenAPI operations of cloud products to manage node network interfaces and IP addresses. For more information about the limits on OpenAPI operations, see the documentation of the corresponding cloud products.
Shared ENI mode: You can allocate a maximum of 500 nodes in parallel.
Exclusive ENI/Trunk ENI mode: You can allocate a maximum of 100 pods in parallel.
The preceding quotas cannot be changed.
FAQ
How do I determine whether Terway is in exclusive ENI mode or shared ENI mode?
In Terway v1.11.0 and later, Terway uses the shared ENI mode by default. You can enable the exclusive ENI mode by configuring the exclusive ENI network mode for a node pool.
In versions earlier than Terway v1.11.0, you can select either exclusive or shared ENI mode when you create a cluster. After the cluster is created, you can identify the mode as follows:
Exclusive ENI mode: The name of the Terway DaemonSet in the kube-system namespace is terway-eni.
Shared ENI mode: The name of the Terway DaemonSet in the kube-system namespace is terway-eniip.
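The check above can be scripted. The helper below simply mirrors the two DaemonSet names from this FAQ; retrieving the actual name from your cluster (for example, with kubectl) is left to the operator.

```python
def terway_mode(daemonset_name: str) -> str:
    """Map the Terway DaemonSet name in kube-system to its ENI mode.

    Retrieve the name first, for example:
        kubectl get ds -n kube-system -o name | grep terway
    """
    modes = {
        "terway-eni": "exclusive ENI mode",
        "terway-eniip": "shared ENI mode",
    }
    return modes.get(daemonset_name, "unknown Terway DaemonSet")


print(terway_mode("terway-eniip"))  # shared ENI mode
```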
Can I switch the network plugin for an existing ACK cluster?
You can only select a network plugin (Terway or Flannel) when you create a cluster. The network plugin cannot be changed after the cluster is created. To use a different network plugin, you must create a new cluster. For more information, see Create an ACK managed cluster.