
Analysis of Alibaba Cloud Container Network Data Link (5): Terway ENI-Trunking

Part 5 of this series introduces the data plane forwarding links in Kubernetes Terway Elastic Network Interface (ENI) Trunking mode.

By Yu Kai

Co-Author: Xieshi, Alibaba Cloud Container Service

This article is the fifth part of the series. It introduces the data plane forwarding links in Kubernetes Terway Elastic Network Interface (ENI) Trunking mode. On the one hand, understanding the forwarding links of the data plane in different scenarios helps explain why customers see different access results in those scenarios and helps them further optimize their business architecture. On the other hand, an in-depth understanding of the forwarding links allows customer O&M staff and Alibaba Cloud engineers to choose the right points on the link for manual deployment and observation, so problems can be classified and located more quickly.

1. Architecture Design for Terway ENI-Trunking

Trunk ENIs are virtual network interfaces that can be bound to Elastic Compute Service (ECS) instances deployed in virtual private clouds (VPCs). Compared with ordinary Elastic Network Interfaces (ENIs), Trunk ENIs significantly improve instance resource density. After you enable the Terway Trunk ENI feature, the pods you specify use Trunk ENI resources. Custom configurations are optional for pods: by default, the Terway Trunk ENI feature is disabled, and pods use IP addresses allocated from a shared ENI. Terway can allocate IP addresses to pods from both a shared ENI and a Trunk ENI at the same time only after you declare that the pods use custom configurations. The two modes share the maximum number of pods on the node, so the total deployment density is the same as before the feature is enabled.

Industries such as finance, telecommunications, and government have strict requirements for data security. Core, important data is generally placed in a self-built data center, and there is a strict whitelist of the clients that may access it, usually restricted to specific source IP addresses. When a business architecture is migrated to the cloud, the self-built data center is usually connected to cloud resources through leased lines, VPNs, and so on. Since the pod IP in traditional containers is not fixed and Network Policies can only take effect within clusters, such whitelists pose a great challenge. With ENIs in Trunk mode, you can configure independent security groups and vSwitches for each pod, which provides more refined network configuration and a highly competitive container network solution.

[Image 1]

In the trunking namespace, you can see the relevant pod and node information. How the pods' IP addresses are allocated will be described in detail later.

[Image 2]
[Image 3]
[Image 4]

The pod has a default route that points only to eth0. This indicates that the pod uses eth0 as the unified gateway when it accesses any address range.

[Image 5]

How do pods communicate with the ECS OS? At the OS level, we can see that the calixxx network interface card is attached to eth1. The communication between nodes and pods is similar to what is described in Panoramic Analysis of Alibaba Cloud Container Network Data Link (3): Terway ENIIP, so no more details are provided here. Through Linux routing in the OS, we can see that all traffic destined for a pod IP is forwarded to that pod's corresponding calixxx virtual network interface card. With this, the ECS OS and the pod's network namespace have established a complete inbound and outbound link configuration.

[Image 6]
[Image 7]

Let's focus on ENI Trunking itself. How are the vSwitches and security groups of pods configured? Terway allows you to use a custom resource definition (CRD) named PodNetworking to describe network configurations, and you can create multiple PodNetworkings to design different network planes. After a PodNetworking is created, Terway automatically synchronizes the network configurations, and the PodNetworking takes effect on pods only after its status changes to Ready. As shown in the following figure, the type is Elastic: as long as a namespace carries the label trunking: zoneb, its pods can use the specified security groups and vSwitches.
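For illustration, a PodNetworking object of this kind can be sketched roughly as follows. The resource IDs and the label value are placeholders, and the exact field spelling may vary between Terway versions, so treat this as an assumption rather than a verbatim manifest:

```yaml
# Illustrative PodNetworking sketch (placeholder IDs, approximate schema).
apiVersion: network.alibabacloud.com/v1beta1
kind: PodNetworking
metadata:
  name: trunking-zoneb
spec:
  allocationType:
    type: Elastic            # pod IPs are allocated elastically
  selector:
    namespaceSelector:
      matchLabels:
        trunking: zoneb      # namespaces with this label get this network plane
  securityGroupIDs:
    - sg-j6ccrze8utxxxxx     # security group applied to matching pods
  vSwitchOptions:
    - vsw-j6cxxxxxxxx        # vSwitch from which pod IPs are allocated
```

After applying such an object, Terway synchronizes it, and pods in matching namespaces receive the listed security group and vSwitch once the PodNetworking status turns Ready.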

[Image 8]

When a pod is created, it is matched against PodNetworkings through labels. If a pod does not match any PodNetworking, it is assigned an IP address from a shared ENI by default. If a pod matches a PodNetworking, it is assigned an ENI based on the configurations specified in that PodNetworking. See the pod labels documentation for more information.

Terway automatically creates a CRD named PodENI for each pod that matches a PodNetworking to track the pod's resource usage. PodENIs are managed by Terway and cannot be modified. The centos-59cdc5c9c4-l5vf9 pod below, in the trunking namespace, matches the podnetworking settings and is assigned the corresponding member ENI, Trunk ENI, security group, vSwitch, and bound ECS instance. This way, the vSwitch and security group can be configured and managed at the pod level.
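What Terway records in such a PodENI can be sketched as follows. This is a hand-written approximation for illustration only; the field names and IDs are assumptions, not copied from a live cluster, so check the output of `kubectl get podeni -o yaml` for the exact schema of your Terway version:

```yaml
# Illustrative PodENI sketch (placeholder IDs, approximate schema).
apiVersion: network.alibabacloud.com/v1beta1
kind: PodENI
metadata:
  name: centos-59cdc5c9c4-l5vf9
  namespace: trunking
spec:
  allocations:
    - eni:
        id: eni-j6cxxxxxxxx          # member ENI assigned to the pod
        securityGroupIds:
          - sg-j6ccrze8utxxxxx       # pod-level security group
        vSwitchId: vsw-j6cxxxxxxxx   # pod-level vSwitch
status:
  instanceId: i-j6cxxxxxxxx          # ECS instance the member ENI is bound to
  trunkEniId: eni-j6cyyyyyyyy        # Trunk ENI carrying the member ENI
```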

[Image 9]

In the ECS console, we can clearly see the relationship between the member ENIs and the Trunk ENI, together with the corresponding security groups and vSwitches.

[Image 10]
[Image 11]

The configurations above tell us how vSwitches and security groups are configured for each pod, but these configurations only take effect if each pod's traffic automatically reaches its configured member ENI after leaving the ECS through the Trunk ENI. All of this is implemented on the host through the relevant policies. So how does the Trunk ENI forward each pod's traffic to the correct member ENI? Through VLANs: a VLAN ID can be seen at the tc level, and the VLAN tag is added or stripped at the egress or ingress stage.

[Image 12]
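The tagging itself is plain 802.1Q: a 16-bit tag control information (TCI) field carries the VLAN ID in its low 12 bits. A minimal shell sketch, where the TCI value is a made-up example chosen to encode VLAN ID 1027:

```shell
# 802.1Q TCI layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
# Example TCI encoding PCP=1, DEI=0, VLAN ID 1027 (0x403): 0x2403.
tci=0x2403
vid=$(( tci & 0x0FFF ))        # low 12 bits: the VLAN ID
pcp=$(( (tci >> 13) & 0x7 ))   # top 3 bits: the priority code point
echo "VLAN ID: $vid, PCP: $pcp"   # VLAN ID: 1027, PCP: 1
```

On the node, pushing and popping this tag for each pod's traffic is exactly what the tc egress and ingress rules do.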

Therefore, the overall Terway ENI-Trunking model can be summarized as:

  • Trunk ENIs are virtual network interface cards that can be bound to Elastic Compute Service (ECS) instances deployed in virtual private clouds (VPCs). Compared with ordinary ENIs, Trunk ENIs significantly improve instance resource density.
  • The Terway Trunk ENI allows you to specify a fixed IP address, a separate Virtual Switch (vSwitch), and a separate security group for each pod. This allows you to manage and isolate user traffic, configure network policies, and manage IP addresses in a fine-grained manner.
  • You need to purchase 5th or 6th generation ECS Bare Metal Instances with eight cores or above to use the Terway plug-in. In addition, the instances must support Trunk ENIs. Please see Overview of Instance Family for more information.
  • The maximum number of pods that each node supports depends on the number of ENIs assigned to the node: Maximum pods per node = (Number of ENIs supported by the ECS instance − 1) × Number of private IP addresses supported by each ENI.
  • Pod security group rules do not apply to traffic between pods on the same node or between a node and the pods on it. If restrictions are needed, you can use NetworkPolicy.
  • Pods and the corresponding MemberENI traffic are mapped by VLAN ID.
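The density formula in the bullets above can be sanity-checked with shell arithmetic. The instance numbers here are hypothetical and not tied to any specific ECS instance family:

```shell
# Hypothetical instance: 8 ENIs supported, 10 private IPs per ENI.
enis_per_instance=8
ips_per_eni=10
# One ENI is reserved for the ECS primary network interface.
max_pods=$(( (enis_per_instance - 1) * ips_per_eni ))
echo "$max_pods"   # 70
```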

2. Terway ENI-Trunking Mode Container Network Data Link Analysis

Since the security group and vSwitch can be set at the pod level, access over different links inevitably becomes more complicated at the macro level. The network links in Terway ENI-Trunking mode can be roughly divided into two large SOP scenarios (Pod IP and SVC), which can be further divided into ten different small SOP scenarios.

[Image 13]

The data links of these scenarios can be summarized into the following ten typical scenarios:

  • Access pods through nodes (same or different security groups)
  • Mutual access between Trunk pods on the same node and in the same security group (including access to the SVC IP, with the source and SVC backend deployed on the same node)
  • Mutual access between Trunk pods on the same node and in different security groups (including access to the SVC IP, with the source and SVC backend deployed on the same node)
  • Mutual access between Trunk pods on different nodes and in the same security group
  • Mutual access between Trunk pods on different nodes and in different security groups
  • A source in the cluster accesses the SVC IP (source and SVC backend on different nodes, same security group, including access to the ExternalIP in Local mode)
  • A source in the cluster accesses the SVC IP (source and SVC backend on different nodes, different security groups, including access to the ExternalIP in Local mode)
  • In Cluster mode, a source in the cluster accesses the SVC ExternalIP (source and SVC backend on different nodes, different security groups)
  • In Cluster mode, a source in the cluster accesses the SVC ExternalIP (same security group)
  • Access the SVC IP address outside the cluster.

2.1 Scenario 1: Use a Node to Access Pods (Same or Different Security Groups)

Environment

[Image 14]

The pod nginx-6f545cb57c-kt7r8 with IP address 10.0.4.30 exists on the cn-hongkong.10.0.4.22 node.

Kernel Routing

The IP address of nginx-6f545cb57c-kt7r8 is 10.0.4.30. The PID of the container on the host is 1734171, and the container network namespace has a default route pointing to container eth0.

[Image 15]
[Image 16]

The container eth0 is connected to the ECS's secondary ENI eth1 through an ipvlan tunnel. The secondary ENI eth1 also has an associated virtual calixxx network interface card.

[Image 17]
[Image 18]

In the ECS OS, there is a route that points to the pod's IP address with calixxx as the next hop. As shown in the preceding section, the calixxx network interface card forms a veth pair with veth1 inside each pod. Therefore, when the pod accesses the SVC CIDR, the traffic goes through veth1 instead of the default eth0 route. The calixxx network interface card thus works to:

  1. Help the node access the pod
  2. When the node or pod accesses the SVC CIDR, carry the traffic through the ECS OS kernel protocol stack to calixxx and veth1 and on to the pod
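What this looks like on the node can be sketched offline. The route entries below are fabricated samples in the shape Terway installs (one host route per pod IP pointing at its calixxx device); the device names and IPs are illustrative:

```shell
# Fabricated sample of the node routing table entries for two pods.
routes='10.0.4.30 dev cali12345abcd scope link
10.0.4.24 dev cali67890efgh scope link
default via 10.0.4.253 dev eth0'

# Which calixxx device does the node use to reach pod 10.0.4.30?
dev=$(printf '%s\n' "$routes" | awk '$1 == "10.0.4.30" {print $3}')
echo "$dev"   # cali12345abcd
```

On a real node, `ip route` shows the live equivalents of these entries.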

[Image 19]

The nginx-6f545cb57c-kt7r8 pod in the trunking namespace matches the corresponding podnetworking settings and is assigned the corresponding member ENI, Trunk ENI, security group, vSwitch, and bound ECS instance, thus realizing pod-level configuration and management of the vSwitch and security group.

[Image 20]

VLAN ID 1027 can be seen at the tc level, so the VLAN tag is added to or stripped from data traffic at the egress or ingress stage.

[Image 21]

From the security group to which the pod's member ENI belongs, we can see that only the specified IP addresses are allowed to access port 80 of the nginx pod.

[Image 22]

The traffic forwarding logic on the data plane at the OS level is similar to what is described in Panoramic Analysis of Alibaba Cloud Container Network Data Link (3): Terway ENIIP, so no more details are provided here.

Summary

Access to Target

[Image 23]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • The link does not pass through the ENI allocated to the pod; the route is matched directly in the OS network namespace and the traffic is forwarded there.
  • The entire request link is ECS1 OS → calixxx → ECS1 Pod1.
  • Since forwarding is performed through OS kernel routing and does not pass through the member ENI, the security group does not take effect. This link has nothing to do with the member ENI security group to which the pod belongs.

2.2 Scenario 2: Mutual Access between Trunk Pods on the Same Node and in the Same Security Group (Including Access to the SVC IP, with the Source and SVC Backend Deployed on the Same Node)

Environment

[Image 24]

The pods nginx-6f545cb57c-kt7r8 (10.0.4.30) and busybox-87ff8bd74-g8zs7 (10.0.4.24) exist on the cn-hongkong.10.0.4.22 node.

Kernel Routing

The IP address of nginx-6f545cb57c-kt7r8 is 10.0.4.30. The PID of the container on the host is 1734171, and the container network namespace has a default route pointing to container eth0.

[Image 25]
[Image 26]
The container eth0 is connected to the ECS's secondary ENI eth1 through an ipvlan tunnel. The secondary ENI eth1 also has an associated virtual calixxx network interface card.

[Image 27]
[Image 28]

In the ECS OS, there is a route that points to the pod's IP address with calixxx as the next hop. As shown in the preceding section, the calixxx network interface card forms a veth pair with veth1 inside each pod. Therefore, when the pod accesses the SVC CIDR, the traffic goes through veth1 instead of the default eth0 route. The calixxx network interface card thus works to:

  1. Help the node access the pod
  2. When the node or pod accesses the SVC CIDR, carry the traffic through the ECS OS kernel protocol stack to calixxx and veth1 and on to the pod

[Image 29]

The busybox-87ff8bd74-g8zs7 and nginx-6f545cb57c-kt7r8 pods in the trunking namespace match the corresponding podnetworking settings and are assigned the corresponding member ENIs, Trunk ENI, security group, vSwitch, and bound ECS instance, thus realizing pod-level configuration and management of the vSwitch and security group.

[Image 30]
[Image 31]

VLAN ID 1027 can be seen at the tc level, so the VLAN tag is added to or stripped from data traffic at the egress or ingress stage.

[Image 32]

From the security group to which the pod's member ENI belongs, we can see that only the specified IP addresses are allowed to access port 80 of the nginx pod.

[Image 33]

The traffic forwarding logic on the data plane at the OS level is similar to what is described in Panoramic Analysis of Alibaba Cloud Container Network Data Link (3): Terway ENIIP.

Summary

Access to Target

[Image 34]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • The link does not pass through the ENI allocated to the pod; the route is matched directly in the OS network namespace and the traffic is forwarded there.
  • The entire request link is ECS1 Pod1 eth0 → cali1xxx → cali2xxx → ECS1 Pod2 eth0.
  • Whether the pods belong to the same ENI or different ENIs, the link requests are the same and do not go through the ENI.
  • Since forwarding is performed through OS kernel routing and does not pass through the member ENI, the security group does not take effect. This link is independent of the member ENI security group to which the pod belongs.
  • The difference when accessing an SVC IP (ExternalIP or ClusterIP) instead of a pod IP is that the SVC traffic is captured by eth0 and the calixxx network interface card in the source pod but not in the target pod.

2.3 Scenario 3: Mutual Access between Trunk Pods on the Same Node and in Different Security Groups (Including Access to the SVC IP, with the Source and SVC Backend Deployed on the Same Node)

Environment

[Image 35]

The pods nginx-96bb9b7bb-wwrdm (10.0.5.35) and centos-648f9999bc-nxb6l (10.0.5.18) exist on the cn-hongkong.10.0.4.244 node.

Kernel Routing

The container network namespaces and routing of the related pods are not described here; see the preceding two sections for more information. You can use podeni to view the ENI, security group (sg), and vSwitch (vsw) of centos-648f9999bc-nxb6l.

[Image 36]

The security group sg-j6ccrxxxx shows that the CentOS pod can access all external addresses.

[Image 37]

Similarly, we can find that the security group sg-j6ccrze8utxxxxx of the server pod nginx-96bb9b7bb-wwrdm only allows access from 192.168.0.0/16.

[Image 38]
[Image 39]

Summary

Access to Target

[Image 40]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • The link does not pass through the ENI allocated to the pod; the route is matched directly in the OS network namespace and the traffic is forwarded there.
  • The entire request link is ECS1 Pod1 eth0 → cali1xxx → cali2xxx → ECS1 Pod2 eth0.
  • Whether the pods belong to the same ENI or different ENIs, the link requests are the same and do not go through the ENI.
  • Since forwarding is performed through OS kernel routing and does not pass through the member ENI, the security group does not take effect. This link is independent of the member ENI security group to which the pod belongs.
  • The difference when accessing an SVC IP (ExternalIP or ClusterIP) instead of a pod IP is that the SVC traffic is captured by eth0 and the calixxx network interface card in the source pod but not in the target pod.

2.4 Scenario 4: Mutual Access between Trunk Pods on Different Nodes and in the Same Security Group

Environment

[Image 41]

The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-6f545cb57c-kt7r8 with IP 10.0.4.30 exists on the cn-hongkong.10.0.4.22 node.

Kernel Routing

The container network namespace and routing of related pods are not described here. Please see the preceding two sections for more information.

You can use podeni to view the ENI, security group (sg), and vSwitch (vsw) of centos-59cdc5c9c4-l5vf9.

From the security group sg-j6cf3sxrlbuwxxxxx, we can see that the CentOS and nginx pods belong to the same security group.

[Image 42]
[Image 43]

Summary

Access depends on the security group configuration

[Image 44]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Within the source ECS, the route is matched directly in the OS network namespace and the traffic is forwarded there without passing through the ENI allocated to the pod.
  • After the traffic leaves the ECS instance, it hits the VPC routing rules or is forwarded directly at layer 2 on the vSwitch, depending on the pod being accessed and the vSwitch to which the pod's ENI belongs.
  • The entire link is ECS1 Pod1 eth0 → cali1xxx → Trunk ENI (ECS1) → Pod1 member ENI → VPC route rule (if any) → Pod2 member ENI → Trunk ENI (ECS2) → cali2xxx → ECS2 Pod2 eth0.
  • Since forwarding is performed through OS kernel routing and passes through the member ENIs, and the member ENIs belong to the same security group, the security group allows the traffic by default.

2.5 Scenario 5: Mutual Access between Trunk Pods in Different Security Groups on Different Nodes

Environment

[Image 45]

The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-96bb9b7bb-wwrdm with IP 10.0.5.35 exists on the cn-hongkong.10.0.4.244 node.

Kernel Routing

The container network namespace and routing of related pods are not described here. Please see the preceding two sections for more information.

You can use podeni to view the ENI, security group (sg), and vSwitch (vsw) of centos-59cdc5c9c4-l5vf9.

The security group sg-j6cf3sxrlbuwxxxxx shows that the CentOS pod can access all external addresses.

[Image 46]
[Image 47]

Similarly, we can find that the security group sg-j6ccrze8utxxxxx of the server pod nginx-96bb9b7bb-wwrdm only allows access from 192.168.0.0/16.

[Image 48]
[Image 49]

Summary

Access depends on the security group configuration

[Image 50]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Within the source ECS, the route is matched directly in the OS network namespace and the traffic is forwarded there without passing through the ENI allocated to the pod.
  • The entire link is ECS1 Pod1 eth0 → cali1xxx → Trunk ENI (ECS1) → Pod1 member ENI → VPC route rule (if any) → Pod2 member ENI → Trunk ENI (ECS2) → cali2xxx → ECS2 Pod2 eth0.
  • Since forwarding is performed through OS kernel routing and the traffic passes through the member ENIs, the security group configuration plays a decisive role in whether access succeeds.

2.6 Scenario 6: A Source in the Cluster Accesses the SVC IP (Different Source and SVC Backend Nodes, the Same Security Group, Including Access to the ExternalIP in Local Mode)

Environment

[Image 51]
[Image 52]

The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-6f545cb57c-kt7r8 with IP 10.0.4.30 exists on the cn-hongkong.10.0.4.22 node.

The ClusterIP of the nginx SVC is 192.168.81.92, and its ExternalIP is 8.210.162.178.

Kernel Routing

Compared with ENIIP mode, ENI-Trunking only adds the corresponding Trunk ENI and member ENIs on the VPC side; the OS-level configuration is the same.

Summary

Access depends on the security group configuration

[Image 53]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Within the source ECS, the route is matched directly in the OS network namespace and the traffic is forwarded there without passing through the ENI allocated to the pod.
  • After the traffic leaves the ECS instance, it hits the VPC routing rules or is forwarded directly at layer 2 on the vSwitch, depending on the pod being accessed and the vSwitch to which the pod's ENI belongs.
  • The entire request link is:

Go: ECS1 Pod1 eth0 → cali1xxx → ECS1 eth0 → Pod1 member ENI → VPC route rule (if any) → Pod2 member ENI → Trunk ENI (ECS2) → cali2xxx → ECS2 Pod2 eth0

Back: ECS2 Pod2 eth0 → cali2xxx → Trunk ENI (ECS2) → Pod2 member ENI → VPC route rule (if any) → Pod1 member ENI → Trunk ENI (ECS1) → cali1xxx → ECS1 Pod1 eth0

  • Through the IPVS rule and FNAT conversion, the packet leaves ECS1 eth0 with the source pod's IP address and requests the destination pod's IP address (when accessing the SVC ClusterIP, or the ExternalIP in Local mode).
  • The link passes through three network interfaces: eth0 of ECS1, the Pod1 member ENI, and the Pod2 member ENI. Therefore, the security group configuration of these three network interface cards affects the connectivity of the data link.
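The IPVS step above can be pictured with a fabricated sample of `ipvsadm -Ln` style output parsed in shell. The addresses match this scenario's environment, but the listing itself is illustrative, not captured from a live cluster:

```shell
# Fabricated IPVS listing: the nginx ClusterIP with one backend pod.
ipvs='TCP  192.168.81.92:80 rr
  -> 10.0.4.30:80    Masq    1    0    0'

# Extract the backend address the ClusterIP is translated to.
backend=$(printf '%s\n' "$ipvs" | awk '/->/ {print $2}')
echo "$backend"   # 10.0.4.30:80
```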

2.7 Scenario 7: A Source in the Cluster Accesses the SVC IP (Different Source and SVC Backend Nodes, Different Security Groups, Including Access to the ExternalIP in Local Mode)

Environment

[Image 54]
[Image 55]
The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-96bb9b7bb-wwrdm with IP 10.0.5.35 exists on the cn-hongkong.10.0.4.244 node.

The ClusterIP of the nginx SVC is 192.168.31.83, and its ExternalIP is 47.243.87.204.

Kernel Routing

Compared with ENIIP mode, ENI-Trunking only adds the corresponding Trunk ENI and member ENIs on the VPC side; the OS-level configuration is the same.

Summary

Access depends on the security group configuration

[Image 56]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Within the source ECS, the route is matched directly in the OS network namespace and the traffic is forwarded there without passing through the ENI allocated to the pod.
  • After the traffic leaves the ECS instance, it hits the VPC routing rules or is forwarded directly at layer 2 on the vSwitch, depending on the pod being accessed and the vSwitch to which the pod's ENI belongs.
  • The entire request link is:

Go: ECS1 Pod1 eth0 → cali1xxx → ECS1 eth0 → Pod1 member ENI → VPC route rule (if any) → Pod2 member ENI → Trunk ENI (ECS2) → cali2xxx → ECS2 Pod2 eth0

Back: ECS2 Pod2 eth0 → cali2xxx → Trunk ENI (ECS2) → Pod2 member ENI → VPC route rule (if any) → Pod1 member ENI → Trunk ENI (ECS1) → cali1xxx → ECS1 Pod1 eth0

  • Through the IPVS rule and FNAT conversion, the packet leaves ECS1 eth0 with the source pod's IP address and requests the destination pod's IP address (when accessing the SVC ClusterIP, or the ExternalIP in Local mode).
  • The link passes through three network interfaces: eth0 of ECS1, the Pod1 member ENI, and the Pod2 member ENI. Therefore, the security group configuration of these three network interface cards affects the connectivity of the data link. You need to ensure the security groups allow the respective IP addresses of the pods and ECS instances to communicate with each other.

2.8 Scenario 8: In Cluster Mode, the Source in the Cluster Accesses the SVC ExternalIP (Different Source and SVC Backend Nodes and Different Security Groups)

Environment

[Image 57]
[Image 58]

[Image 59]

The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-96bb9b7bb-wwrdm with IP 10.0.5.35 exists on the cn-hongkong.10.0.4.244 node.

The ClusterIP of the nginx SVC is 192.168.31.83, and its ExternalIP is 47.243.87.204. The ExternalTrafficPolicy is Cluster.

Kernel Routing

Compared with ENIIP mode, ENI-Trunking only adds the corresponding Trunk ENI and member ENIs on the VPC side; the OS-level configuration is the same.

Summary

Access depends on the security group configuration

[Image 60]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Within the source ECS, the route is matched directly in the OS network namespace and the traffic is forwarded there without passing through the ENI allocated to the pod.
  • After the traffic leaves the ECS instance, it hits the VPC routing rules or is forwarded directly at layer 2 on the vSwitch, depending on the pod being accessed and the vSwitch to which the pod's ENI belongs.
  • The entire link is ECS1 Pod1 eth0 → cali1xxx → ECS1 eth0 → VPC route rule (if any) → Pod2 member ENI → Trunk ENI (ECS2) → cali2xxx → ECS2 Pod2 eth0.
  • Through the IPVS rule and FNAT conversion, the packet leaves ECS1 eth0 with the source pod's IP address and requests the destination pod's IP address.
  • The link passes through two network interfaces: eth0 of ECS1 and the Pod2 member ENI. Therefore, the security group configuration of these two network interface cards affects the connectivity of the data link. You need to ensure the security groups allow the respective IP addresses of the pods and ECS instances to communicate with each other.

2.9 Scenario 9: In Cluster Mode, the Source in the Cluster Accesses the SVC ExternalIP (Different Source and SVC Backend Nodes and the Same Security Group)

Environment

[Image 61]
[Image 62]

[Image 63]

The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-6f545cb57c-kt7r8 with IP 10.0.4.30 exists on the cn-hongkong.10.0.4.22 node.

The ClusterIP of the nginx SVC is 192.168.81.92, and its ExternalIP is 8.210.162.178. The ExternalTrafficPolicy is Cluster.

Kernel Routing

Compared with ENIIP mode, ENI-Trunking only adds the corresponding Trunk ENI and member ENIs on the VPC side; the OS-level configuration is the same.

Summary

Access depends on the security group configuration

[Image 64]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Within the source ECS, the route is matched directly in the OS network namespace and the traffic is forwarded there without passing through the ENI allocated to the pod.
  • After the traffic leaves the ECS instance, it hits the VPC routing rules or is forwarded directly at layer 2 on the vSwitch, depending on the pod being accessed and the vSwitch to which the pod's ENI belongs.
  • The entire link is ECS1 Pod1 eth0 → cali1xxx → ECS1 eth0 → VPC route rule (if any) → Pod2 member ENI → Trunk ENI (ECS2) → cali2xxx → ECS2 Pod2 eth0.
  • Through the IPVS rule and FNAT conversion, the packet leaves ECS1 eth0 with the source pod's IP address and requests the destination pod's IP address.
  • The link passes through two network interfaces: eth0 of ECS1 and the Pod2 member ENI. Therefore, the security group configuration of these two network interface cards affects the connectivity of the data link. You need to ensure the security groups allow the respective IP addresses of the pods and ECS instances to communicate with each other.

2.10 Scenario 10: Access an SVC IP Address outside a Cluster

Environment

[Image 65]
[Image 66]
[Image 67]

The client pod centos-59cdc5c9c4-l5vf9 with IP 10.0.4.27 exists on the cn-hongkong.10.0.4.20 node.

The server pod nginx-6f545cb57c-kt7r8 with IP 10.0.4.30 exists on the cn-hongkong.10.0.4.22 node.

The ClusterIP of the nginx SVC is 192.168.81.92, and its ExternalIP is 8.210.162.178. The ExternalTrafficPolicy is Cluster.

SLB-Related Configurations

In the SLB console, we can see that the vServer group of the lb-j6cmv8aaojf7nqdai2a6a instance contains the member ENIs of the backend nginx pods: eni-j6cgrqqrtvcwhhcyuc28, eni-j6c54tyfku5855euh3db, and eni-j6cf7e4qnfx22mmvblj0.

[Image 68]
[Image 69]

Summary

Access depends on the security group configuration

[Image 70]
Data Link Forwarding Diagram

  • The traffic passes through the calico network interface card. Each non-host-network pod forms a veth pair with a calico network interface card, which is used to communicate with other pods or nodes.
  • Data link: client → SLB → pod member ENI + pod port → Trunk ENI → ECS1 Pod1 eth0.
  • Whether ExternalTrafficPolicy is Local or Cluster, SLB mounts only the member ENIs allocated to the pods in its vServer group.
  • SLB forwards requests only to the target member ENI; the traffic is sent over the VLAN to the Trunk ENI and forwarded by the Trunk ENI to the pod.

Summary

This article focuses on the data link forwarding paths of ACK in Terway ENI-Trunking mode in different SOP scenarios. Pod-level vSwitch and security group configuration is introduced to meet customer requirements for more refined management of business networks. There are ten SOP scenarios in Terway ENI-Trunking mode, and the technical implementation principles and cloud product configurations of these scenarios have been sorted out and summarized step by step. This provides preliminary guidance for dealing with link jitter, finding optimal configurations, and understanding link principles under the Terway ENI-Trunking architecture.

In Terway ENI-Trunking mode, veth pairs are used to connect the network namespaces of the host and the pods. The pod addresses come from the auxiliary IP addresses of the ENIs, and policy-based routing must be configured on the node to ensure that traffic from an auxiliary IP passes through the ENI to which it belongs. In this way, multiple pods can share an ENI, which significantly improves pod deployment density. At the same time, tc egress/ingress rules add or strip the VLAN tag as data flows leave or enter the ECS, so the traffic actually reaches the correct member ENI, realizing fine-grained management. As microservices become more popular, the sidecar mode turns each pod into a network node, realizing different network behaviors and observability for different traffic in the pod.

Stay tuned for the next part!
